Patents by Inventor Anand Janakiraman
Anand Janakiraman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10734117
Abstract: Apparatuses (including devices and systems) and methods for determining if a patient will respond to a variety of cancer drugs.
Type: Grant
Filed: March 2, 2016
Date of Patent: August 4, 2020
Assignee: STRAND LIFE SCIENCES PRIVATE LIMITED
Inventors: Vaijayanti Gupta, Manimala Sen, Satish Sankaran, Kalyanasundaram Subramanian, Ramesh Hariharan, Vamsi Veeramachaneni, Shanmukh Katragadda, Rohit Gupta, Radhakrishna Bettadapura, Anand Janakiraman, Arunabha Ghosh, Smita Agrawal, Sujaya Srinivasan, Bhupender Singh, Urvashi Bahadur, Shuba Krishna, Mahesh Nagarajan, Nimisha Gupta, Sudhir Borgonha
-
Patent number: 10482021
Abstract: In an aspect, high priority lines are stored starting at an address aligned to a cache line size (for instance, 64 bytes), and low priority lines are stored in the memory space left by the compression of high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, are managed through pointers also stored in memory. In this manner, low priority line contents can be moved to different memory locations as needed. The efficiency of higher priority compressed memory accesses is improved by removing the need for the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents. The use of pointers for low priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need movement within memory, for instance as they change in size over time.
Type: Grant
Filed: June 24, 2016
Date of Patent: November 19, 2019
Assignee: QUALCOMM Incorporated
Inventors: Andres Alejandro Oportus Valenzuela, Nieyan Geng, Christopher Edward Koob, Gurvinder Singh Chhabra, Richard Senior, Anand Janakiraman
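The placement scheme this abstract describes can be illustrated with a minimal sketch: high-priority lines get direct, alignment-computed addresses (no lookup), while low-priority lines occupy the gaps left by compression and are reached through a pointer table. All class and method names here are illustrative assumptions, not the patented implementation.

```python
CACHE_LINE = 64  # the abstract's example cache line size

class CompressedMemory:
    """Toy model: high-priority lines live at fixed 64-byte-aligned slots;
    low-priority lines are placed in the space left over after compression,
    tracked through a pointer table also kept "in memory"."""

    def __init__(self):
        self.memory = {}      # physical address -> compressed bytes
        self.low_ptr = {}     # low-priority line id -> physical address
        self.free_gaps = []   # (addr, size) gaps left by compression

    def store_high(self, line_id, compressed):
        addr = line_id * CACHE_LINE            # direct mapping: no indirection
        self.memory[addr] = compressed
        gap = CACHE_LINE - len(compressed)     # space freed by compression
        if gap:
            self.free_gaps.append((addr + len(compressed), gap))
        return addr

    def store_low(self, line_id, compressed):
        # find a gap big enough; the pointer allows relocation later
        for i, (addr, size) in enumerate(self.free_gaps):
            if size >= len(compressed):
                self.free_gaps[i] = (addr + len(compressed), size - len(compressed))
                self.memory[addr] = compressed
                self.low_ptr[line_id] = addr   # pointer stored alongside the data
                return addr
        raise MemoryError("no gap large enough")

    def load_high(self, line_id):
        return self.memory[line_id * CACHE_LINE]   # address computed, not looked up

    def load_low(self, line_id):
        return self.memory[self.low_ptr[line_id]]  # one level of indirection
```

The asymmetry is the point: a high-priority read never pays for a pointer fetch, while low-priority data keeps full placement flexibility.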
-
Patent number: 10198362
Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When the number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when the number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
Type: Grant
Filed: February 7, 2017
Date of Patent: February 5, 2019
Assignee: QUALCOMM Incorporated
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
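The threshold-and-buffer behavior in this abstract can be sketched as follows. The buffer size and the low/high thresholds are made-up values for illustration; the idea is that refills and spills move a whole buffer's worth of pointers at a time, amortizing traffic to system memory.

```python
from collections import deque

class FreeListCache:
    """Illustrative sketch of a buffered free-memory-list cache: refill an
    empty buffer when the cached pointer count drops below LOW, spill a
    full buffer when it exceeds HIGH. Constants are assumptions."""

    BUF_SIZE = 4
    LOW = 4     # refill when cached pointer count falls below this
    HIGH = 12   # spill when cached pointer count exceeds this

    def __init__(self, system_free_list, primed=8):
        self.system = deque(system_free_list)  # backing free list in system memory
        self.cache = [self.system.popleft() for _ in range(primed)]
        self.refills = 0
        self.spills = 0

    def allocate(self):
        ptr = self.cache.pop()
        if len(self.cache) < self.LOW and len(self.system) >= self.BUF_SIZE:
            # refill one buffer's worth from system memory in a single burst
            for _ in range(self.BUF_SIZE):
                self.cache.append(self.system.popleft())
            self.refills += 1
        return ptr

    def free(self, ptr):
        self.cache.append(ptr)
        if len(self.cache) > self.HIGH:
            # empty one full buffer's worth back to system memory
            for _ in range(self.BUF_SIZE):
                self.system.append(self.cache.pop())
            self.spills += 1
```

Because each refill or spill transfers BUF_SIZE pointers, the number of system-memory accesses grows with allocations divided by BUF_SIZE rather than one access per allocation.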
-
Publication number: 20190006048Abstract: Apparatuses (including devices and systems) and methods for determining if a patient will respond to a variety of cancer drugs.Type: ApplicationFiled: March 2, 2016Publication date: January 3, 2019Inventors: Vaijayanti Gupta, Manimala Sen, Satish Sankaran, Kalyanasundaram Subramanian, Ramesh Hariharan, Vamsi Veeramachaneni, Shanmukh Katragadda, Rohit Gupta, Radhakrishna Bettadapura, Anand Janakiraman, Arunabha Ghosh, Smita Agrawal, Sujaya Srinivasan, Bhupender Singh, Urvashi Bahadur, Shuba Krishna, Mahesh Nagarajan, Preveen Rammoorthy, Harsha K. Rajashimha
-
Patent number: 10169246
Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of the corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on a modulus of the virtual address and 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region.
Type: Grant
Filed: May 11, 2017
Date of Patent: January 1, 2019
Assignee: QUALCOMM Incorporated
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
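The address arithmetic described here is simple to sketch: because the region index is derived from the virtual address (vaddr mod 2^N), the N upper bits of the physical address are redundant and can be dropped from the stored metadata. The constants below (N = 2, a 16-bit toy address width) are illustrative assumptions.

```python
N = 2                  # illustrative: 2^N = 4 compressed data regions
REGIONS = 1 << N
ADDR_BITS = 16         # toy physical address width

def region_of(vaddr):
    """Select one of the 2^N regions (and its free list set) by vaddr mod 2^N."""
    return vaddr % REGIONS

def abbreviate(paddr):
    """Drop the N upper bits: the region index implied by the virtual
    address makes them redundant, shrinking the stored metadata."""
    return paddr & ((1 << (ADDR_BITS - N)) - 1)

def expand(abbrev, region):
    """Recover the full physical address from the abbreviated address
    plus the region selected from the virtual address."""
    return (region << (ADDR_BITS - N)) | abbrev
```

With N = 2 each metadata entry saves two bits; for realistic address widths and millions of entries, the aggregate metadata reduction is the payoff the abstract describes.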
-
Publication number: 20180329830
Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of the corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on a modulus of the virtual address and 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region.
Type: Application
Filed: May 11, 2017
Publication date: November 15, 2018
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
-
Patent number: 10061698
Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.
Type: Grant
Filed: January 31, 2017
Date of Patent: August 28, 2018
Assignee: QUALCOMM Incorporated
Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
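The stall-avoidance decision in this abstract reduces to a simple rule: if the metadata mapping the evicted line's virtual address to a physical slot is not yet available, write to a fresh free slot immediately rather than buffering the data behind the metadata read. The sketch below is a minimal model of that rule; all names and the flat free-slot list are assumptions.

```python
class CompressionBackend:
    """Toy model of the eviction path: reuse the mapped physical slot when
    the metadata is on hand, otherwise take a new free slot so the write
    never stalls waiting on a metadata read."""

    def __init__(self):
        self.metadata = {}                    # vaddr -> paddr (may lag behind)
        self.storage = {}                     # paddr -> data
        self.free_slots = list(range(100, 200))

    def handle_eviction(self, vaddr, data):
        paddr = self.metadata.get(vaddr)
        if paddr is None:
            # metadata not yet available: allocate a new physical slot
            # instead of buffering the evicted data until it arrives
            paddr = self.free_slots.pop(0)
        self.storage[paddr] = data
        self.metadata[vaddr] = paddr          # record the (possibly new) mapping
        return paddr
```

The cost of the shortcut is an occasional extra slot consumed (the old slot is reclaimed later); the benefit is that the write path never blocks on metadata latency.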
-
Publication number: 20180225224
Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When the number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when the number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
Type: Application
Filed: February 7, 2017
Publication date: August 9, 2018
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
-
Publication number: 20180217930
Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.
Type: Application
Filed: January 31, 2017
Publication date: August 2, 2018
Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
-
Publication number: 20180173623
Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compressed memory system to avoid stalling write operations. Metadata is included in cache entries in the uncompressed cache memory, which is used for mapping cache entries to physical addresses in the compressed memory system. When a cache entry is evicted, the compressed memory system uses the metadata associated with the evicted cache data to determine the physical address in the compressed system memory for storing the evicted cache data. In this manner, the compressed memory system does not have to incur the latency of reading the metadata for the evicted cache entry from another memory structure, which might otherwise require buffering the evicted cache data until the metadata becomes available before it can be written to the compressed system memory, thereby avoiding stalled write operations.
Type: Application
Filed: December 21, 2016
Publication date: June 21, 2018
Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
-
Publication number: 20170371792
Abstract: In an aspect, high priority lines are stored starting at an address aligned to a cache line size (for instance, 64 bytes), and low priority lines are stored in the memory space left by the compression of high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, are managed through pointers also stored in memory. In this manner, low priority line contents can be moved to different memory locations as needed. The efficiency of higher priority compressed memory accesses is improved by removing the need for the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Andres Alejandro Oportus Valenzuela, Nieyan Geng, Christopher Edward Koob, Gurvinder Singh Chhabra, Richard Senior, Anand Janakiraman
-
Publication number: 20170371797
Abstract: Some aspects of the disclosure relate to a pre-fetch mechanism for a cache line compression system that increases RAM capacity and optimizes overflow area reads. For example, a pre-fetch mechanism may allow the memory controller to pipeline the reads from an area with fixed size slots (the main compressed area) and the reads from an overflow area. The overflow area is arranged so that the cache line most likely containing the overflow data for a particular line may be calculated by a decompression engine. In this manner, the cache line decompression engine may fetch the overflow area in advance, before finding the actual location of the overflow data.
Type: Application
Filed: June 24, 2016
Publication date: December 28, 2017
Inventors: Andres Alejandro Oportus Valenzuela, Nieyan Geng, Gurvinder Singh Chhabra, Richard Senior, Anand Janakiraman
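The speculative read described here can be sketched as two reads issued together: the fixed-slot read from the main compressed area, and a read of the overflow line the decompression engine predicts. The prediction heuristic below (overflow records at a fixed average stride) is purely an assumption for illustration; the abstract does not disclose the actual calculation.

```python
SLOT = 64              # fixed-size slot in the main compressed area (assumed)
OVERFLOW_BASE = 0x8000 # assumed start of the overflow area

def likely_overflow_line(line_index, avg_overflow=16):
    """Predict the overflow-area cache line for a compressed line so it can
    be fetched alongside the main-area read. The fixed-stride heuristic is
    an illustrative assumption, not the patented formula."""
    return OVERFLOW_BASE + (line_index * avg_overflow) // SLOT * SLOT

def read_line(line_index, fetch):
    """Issue the deterministic main-area read and the speculative overflow
    read back to back, modeling the pipelined pair of accesses."""
    main = fetch(line_index * SLOT)                       # fixed-slot read
    prefetched = fetch(likely_overflow_line(line_index))  # speculative read
    return main, prefetched
```

If the prediction hits, the overflow data is already in flight when the main line's header reveals its true location; a miss costs one wasted read, the same price as not prefetching at all.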
-
Patent number: 9823854
Abstract: Aspects disclosed relate to a priority-based access of compressed memory lines in a processor-based system. In an aspect, a memory access device in the processor-based system receives a read access request for memory. If the read access request is higher priority, the memory access device uses the logical memory address of the read access request as the physical memory address to access the compressed memory line. However, if the read access request is lower priority, the memory access device translates the logical memory address of the read access request into one or more physical memory addresses in memory space left by the compression of higher priority lines. In this manner, the efficiency of higher priority compressed memory accesses is improved by removing a level of indirection otherwise required to find and access compressed memory lines.
Type: Grant
Filed: March 18, 2016
Date of Patent: November 21, 2017
Assignee: QUALCOMM Incorporated
Inventors: Andres Alejandro Oportus Valenzuela, Amin Ansari, Richard Senior, Nieyan Geng, Anand Janakiraman, Gurvinder Singh Chhabra
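The priority split in this abstract maps directly onto a two-branch lookup: high-priority reads treat the logical address as the physical address, while low-priority reads translate through a table into one or more addresses in the space freed by compression. The sketch below is a minimal model under those assumptions; the class and table names are illustrative.

```python
class PriorityMemoryAccess:
    """Sketch of the priority-split read path: high priority bypasses
    translation entirely; low priority pays one level of indirection
    through a logical-to-physical pointer table."""

    def __init__(self, memory, low_priority_map):
        self.memory = memory              # paddr -> data
        self.low_map = low_priority_map   # logical addr -> list of paddrs

    def read(self, logical_addr, high_priority):
        if high_priority:
            # no indirection: the logical address IS the physical address
            return [self.memory[logical_addr]]
        # lower priority: translate into the gap addresses, possibly several
        return [self.memory[p] for p in self.low_map[logical_addr]]
```

A low-priority line can map to more than one physical fragment (the abstract's "one or more physical memory addresses"), which is why the translated path returns a list.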
-
Publication number: 20170269851
Abstract: Aspects disclosed relate to a priority-based access of compressed memory lines in a processor-based system. In an aspect, a memory access device in the processor-based system receives a read access request for memory. If the read access request is higher priority, the memory access device uses the logical memory address of the read access request as the physical memory address to access the compressed memory line. However, if the read access request is lower priority, the memory access device translates the logical memory address of the read access request into one or more physical memory addresses in memory space left by the compression of higher priority lines. In this manner, the efficiency of higher priority compressed memory accesses is improved by removing a level of indirection otherwise required to find and access compressed memory lines.
Type: Application
Filed: March 18, 2016
Publication date: September 21, 2017
Inventors: Andres Alejandro Oportus Valenzuela, Amin Ansari, Richard Senior, Nieyan Geng, Anand Janakiraman, Gurvinder Singh Chhabra
-
Publication number: 20090215477
Abstract: Systems and methodologies are described that facilitate communication between a plurality of devices identified by mobile infospheres. The devices can be associated with a mobile infosphere based on ownership, for example, where the mobile infospheres are identified by a mobile phone number. A registry server can store information regarding devices in each mobile infosphere, and communication between the devices within a mobile infosphere or devices in other mobile infospheres can be facilitated by providing stored access parameters. In addition, data transferred among the devices can be transcoded to meet the capabilities of disparate devices with respect to memory, bandwidth, available codecs, etc. Moreover, a file system can aggregate shared files and folders from a plurality of mobile infosphere devices to provide seamless access to the available content.
Type: Application
Filed: November 13, 2008
Publication date: August 27, 2009
Applicant: QUALCOMM Incorporated
Inventors: Thien H. Lee, Anand Janakiraman, Murtuza T. Chhatriwala, Manuel E. Jaime, Mark Kelly Murphy
-
Publication number: 20040203620
Abstract: A method and apparatus are disclosed for time stamping an electronic message to accurately associate time information with the electronic message. The time information may identify the time at which the message was sent or processed. In one embodiment, the time information comprises a time stamp value, a time offset value, a time convention identifier for the time stamp, and a daylight savings time identifier. These values may be appended by a message center prior to sending the message to a message recipient. Upon receipt by a mobile communication device, processing occurs so that an accurate message sent time is provided. In one embodiment, a system time from a communication system is continually provided to the mobile communication device to thereby allow display of the message sent time value in a format corresponding to the location (i.e., time zone) of the mobile communication device at the time of receipt.
Type: Application
Filed: October 15, 2002
Publication date: October 14, 2004
Inventors: Timothy Thome, Anand Janakiraman
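The two halves of this scheme (the message center appending time fields, and the device converting the sent time into its current zone) can be sketched as below. The record layout and function names are illustrative assumptions; the abstract names the four fields but not their encoding.

```python
from datetime import datetime, timedelta, timezone

def build_stamp(ts, offset_minutes, is_dst):
    """The abstract's four fields as a simple record, appended by the
    message center before delivery. Field names are illustrative."""
    return {
        "time_stamp": ts,               # when the message was sent/processed
        "time_offset": offset_minutes,  # sender's offset from the reference time
        "convention": "UTC",            # time convention identifier
        "dst": is_dst,                  # daylight savings time identifier
    }

def message_sent_local(stamp_utc, device_utc_offset_minutes):
    """Receive-side conversion: render the UTC sent time in the receiving
    device's current zone, which the device derives from the system time
    continually provided by the network."""
    tz = timezone(timedelta(minutes=device_utc_offset_minutes))
    return stamp_utc.astimezone(tz)
```

Carrying the convention and DST identifiers alongside the raw stamp is what lets a device that has crossed time zones still display the sent time correctly for wherever it currently is.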