Patents by Inventor Gurvinder Singh Chhabra

Gurvinder Singh Chhabra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11868244
    Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high priority data region and a second cache line from a low priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion of a candidate compressed cache line and the second compressed cache line, or a portion of it, is written to a second predetermined portion of the candidate compressed cache line. The first predetermined portion is larger than the second predetermined portion. (A code sketch of this packing scheme follows the listing.)
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: January 9, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
  • Patent number: 11829292
    Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: November 28, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
  • Patent number: 11782762
    Abstract: A method of managing a stack includes detecting, by a stack manager of a processor, that a size of a frame to be allocated exceeds available space of a first stack. The first stack is used by a particular task executing at the processor. The method also includes designating a second stack for use by the particular task. The method further includes copying metadata associated with the first stack to the second stack. The metadata enables the stack manager to transition from the second stack to the first stack upon detection that the second stack is no longer in use by the particular task. The method also includes allocating the frame in the second stack. (A code sketch of this stack-switching scheme follows the listing.)
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: October 10, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Richard Senior, Sundeep Kushwaha, Harsha Gordhan Jagasia, Christopher Ahn, Gurvinder Singh Chhabra, Nieyan Geng, Maksim Krasnyanskiy, Unni Prasad
  • Publication number: 20230236961
    Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high priority data region and a second cache line from a low priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion of a candidate compressed cache line and the second compressed cache line, or a portion of it, is written to a second predetermined portion of the candidate compressed cache line. The first predetermined portion is larger than the second predetermined portion.
    Type: Application
    Filed: January 10, 2022
    Publication date: July 27, 2023
    Inventors: Norris GENG, Richard SENIOR, Gurvinder Singh CHHABRA, Kan WANG
  • Publication number: 20230236979
    Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
    Type: Application
    Filed: January 10, 2022
    Publication date: July 27, 2023
    Inventors: Norris GENG, Richard SENIOR, Gurvinder Singh CHHABRA, Kan WANG
  • Patent number: 11687461
    Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: June 27, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
  • Patent number: 11416236
    Abstract: Embodiments of the present disclosure include systems and methods for efficient over-the-air updating of firmware having compressed and uncompressed segments. The method includes receiving a first update to the firmware via a radio, wherein the first update includes a first uncompressed segment and a first compressed segment, receiving a second update to the firmware, wherein the second update corresponds to the first compressed segment, compressing the second update to generate a compressed second update, applying the first update to the firmware, and applying the compressed second update to the firmware to generate an updated firmware. (A code sketch of this update flow follows the listing.)
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: August 16, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Nieyan Geng, Gurvinder Singh Chhabra, Chenyang Liu, Chuguang He
  • Publication number: 20200272520
    Abstract: A method of managing a stack includes detecting, by a stack manager of a processor, that a size of a frame to be allocated exceeds available space of a first stack. The first stack is used by a particular task executing at the processor. The method also includes designating a second stack for use by the particular task. The method further includes copying metadata associated with the first stack to the second stack. The metadata enables the stack manager to transition from the second stack to the first stack upon detection that the second stack is no longer in use by the particular task. The method also includes allocating the frame in the second stack.
    Type: Application
    Filed: February 26, 2020
    Publication date: August 27, 2020
    Inventors: Richard SENIOR, Sundeep KUSHWAHA, Harsha Gordhan JAGASIA, Christopher AHN, Gurvinder Singh CHHABRA, Nieyan GENG, Maksim KRASNYANSKIY, Unni PRASAD
  • Patent number: 10678705
    Abstract: Various embodiments include methods and devices for implementing external paging and swapping for dynamic modules on a computing device. Embodiments may include assigning static virtual addresses to a base image and dynamic modules of a static image of firmware of the computing device from a virtual address space for the static image, decomposing the static image into the base image and the dynamic modules, loading the base image to an execution memory during a boot time from a first partition of a storage memory, reserving a swap pool in the execution memory during the boot time, and loading a dynamic module of the dynamic modules to the swap pool from a second partition of the storage memory during a run time. (A code sketch of this paging scheme follows the listing.)
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: June 9, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Nieyan Geng, Gurvinder Singh Chhabra, Caoye Shen, Samir Thakkar, Chuguang He
  • Publication number: 20200089616
    Abstract: Various embodiments include methods and devices for implementing external paging and swapping for dynamic modules on a computing device. Embodiments may include assigning static virtual addresses to a base image and dynamic modules of a static image of firmware of the computing device from a virtual address space for the static image, decomposing the static image into the base image and the dynamic modules, loading the base image to an execution memory during a boot time from a first partition of a storage memory, reserving a swap pool in the execution memory during the boot time, and loading a dynamic module of the dynamic modules to the swap pool from a second partition of the storage memory during a run time.
    Type: Application
    Filed: September 13, 2018
    Publication date: March 19, 2020
    Inventors: Nieyan GENG, Gurvinder Singh CHHABRA, Caoye SHEN, Samir THAKKAR, Chuguang HE
  • Patent number: 10482021
    Abstract: In an aspect, high priority lines are stored starting at an address aligned to a cache line size (for instance, 64 bytes), and low priority lines are stored in the memory space left by the compression of the high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, is managed through pointers that are also stored in memory. In this manner, the contents of low priority lines can be moved to different memory locations as needed. The efficiency of higher priority compressed memory accesses is improved by removing the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents. The use of pointers for low priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need to move within memory, for instance as they change in size over time. (A code sketch of this scheme follows the listing.)
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: November 19, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Andres Alejandro Oportus Valenzuela, Nieyan Geng, Christopher Edward Koob, Gurvinder Singh Chhabra, Richard Senior, Anand Janakiraman
  • Publication number: 20190303158
    Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions executable by a processor as a neural subset of branch instructions, based on information obtained from an execution trace, wherein the neural subset of branch instructions is determined to have a larger benefit from a neural branch predictor than from a non-neural branch predictor. The neural branch predictor is pre-trained for the neural subset based on the execution trace. Annotations are added to the neural subset of branch instructions, wherein the annotations are preserved across software revisions. At runtime, when branch instructions of the neural subset are encountered during any future software revision, they are detected as belonging to the neural subset of branch instructions based on the annotations, and the pre-trained neural branch predictor is used for making their branch predictions. (A code sketch of this selective prediction scheme follows the listing.)
    Type: Application
    Filed: March 29, 2018
    Publication date: October 3, 2019
    Inventors: Gurkanwal BRAR, Christopher AHN, Gurvinder Singh CHHABRA
  • Patent number: 10372459
    Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions from an execution trace of instructions executed by a processor. The identified subset of branch instructions have greater benefit from branch predictions made by a neural branch predictor than branch predictions made by a non-neural branch predictor. During runtime, the neural branch predictor is selectively used for obtaining branch predictions of the identified subset of branch instructions. For remaining branch instructions outside the identified subset of branch instructions, branch predictions are obtained from a non-neural branch predictor. Further, a weight vector matrix comprising weight vectors for the identified subset of branch instructions of the neural branch predictor is pre-trained based on the execution trace.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: August 6, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Gurkanwal Brar, Christopher Ahn, Gurvinder Singh Chhabra
  • Publication number: 20190087193
    Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions from an execution trace of instructions executed by a processor. The identified subset of branch instructions have greater benefit from branch predictions made by a neural branch predictor than branch predictions made by a non-neural branch predictor. During runtime, the neural branch predictor is selectively used for obtaining branch predictions of the identified subset of branch instructions. For remaining branch instructions outside the identified subset of branch instructions, branch predictions are obtained from a non-neural branch predictor. Further, a weight vector matrix comprising weight vectors for the identified subset of branch instructions of the neural branch predictor is pre-trained based on the execution trace.
    Type: Application
    Filed: September 21, 2017
    Publication date: March 21, 2019
    Inventors: Gurkanwal BRAR, Christopher AHN, Gurvinder Singh CHHABRA
  • Patent number: 10198362
    Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized. (A code sketch of this watermark scheme follows the listing.)
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: February 5, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
  • Publication number: 20190012164
    Abstract: Embodiments of the present disclosure include systems and methods for efficient over-the-air updating of firmware having compressed and uncompressed segments. The method includes receiving a first update to the firmware via a radio, wherein the first update includes a first uncompressed segment and a first compressed segment, receiving a second update to the firmware, wherein the second update corresponds to the first compressed segment, compressing the second update to generate a compressed second update, applying the first update to the firmware, and applying the compressed second update to the firmware to generate an updated firmware.
    Type: Application
    Filed: July 5, 2018
    Publication date: January 10, 2019
    Inventors: Nieyan GENG, Gurvinder Singh CHHABRA, Chenyang LIU, Chuguang HE
  • Patent number: 10169246
    Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on a modulus of the virtual address and 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region. (A code sketch of this addressing scheme follows the listing.)
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: January 1, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
  • Publication number: 20180329830
    Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on a modulus of the virtual address and 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region.
    Type: Application
    Filed: May 11, 2017
    Publication date: November 15, 2018
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
  • Patent number: 10061698
    Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary. (A code sketch of this eviction path follows the listing.)
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: August 28, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
  • Publication number: 20180225224
    Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
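
Illustrative code sketches

The short C sketches below illustrate several of the mechanisms described in the abstracts above. Each is a simplified, self-contained sketch under stated assumptions; none reproduces the patented implementations, and every structure name, size, and helper function is an assumption unless the abstract itself supplies it.

Patents 11868244, 11829292, and 11687461 describe packing two compressed cache lines of different priority into one fixed-size candidate compressed cache line, with the larger portion reserved for the high-priority line and the two lines written in opposite directions. The sketch below assumes a 64-byte slot split 48/16 and uses a truncating stand-in for the compressor.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define SLOT_SIZE  64   /* candidate compressed cache line, in bytes (assumed) */
    #define HI_PORTION 48   /* larger portion, reserved for the high-priority line (assumed) */
    #define LO_PORTION (SLOT_SIZE - HI_PORTION)

    /* Stand-in for a real compressor: it simply truncates to 'limit' bytes. */
    static size_t compress_stub(const uint8_t *in, size_t len, uint8_t *out, size_t limit)
    {
        size_t n = len < limit ? len : limit;
        memcpy(out, in, n);
        return n;
    }

    /* Pack the high-priority line forward from byte 0 and the low-priority line
     * backward from the end of the slot, mirroring the two "directions" in the
     * abstracts. Returns how many low-priority bytes were packed. */
    static size_t pack_slot(uint8_t slot[SLOT_SIZE],
                            const uint8_t *hi, size_t hi_len,
                            const uint8_t *lo, size_t lo_len)
    {
        uint8_t buf[SLOT_SIZE];
        size_t hi_c = compress_stub(hi, hi_len, buf, HI_PORTION);
        memcpy(slot, buf, hi_c);                    /* first direction  */
        size_t lo_c = compress_stub(lo, lo_len, buf, LO_PORTION);
        memcpy(slot + SLOT_SIZE - lo_c, buf, lo_c); /* second direction */
        return lo_c;
    }

    int main(void)
    {
        uint8_t hi[64], lo[64], slot[SLOT_SIZE] = { 0 };
        memset(hi, 0xAA, sizeof hi);
        memset(lo, 0x55, sizeof lo);
        size_t lo_fit = pack_slot(slot, hi, sizeof hi, lo, sizeof lo);
        printf("low-priority bytes packed: %zu of %zu\n", lo_fit, sizeof lo);
        return 0;
    }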
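
Patent 11782762 (and publication 20200272520) describes detecting that a frame will not fit on a task's current stack, designating a second stack, copying metadata so the stack manager can later transition back, and allocating the frame on the second stack. The sketch below models only that bookkeeping; the structures, sizes, and field names are assumptions.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical per-stack metadata the stack manager copies between stacks. */
    struct stack_meta {
        struct stack *prev;   /* stack to transition back to */
        size_t saved_sp;      /* where the previous stack left off */
    };

    struct stack {
        unsigned char *base;
        size_t size;
        size_t sp;            /* bytes currently in use */
        struct stack_meta meta;
    };

    /* Allocate 'frame' bytes for the task, designating a second stack when the
     * frame exceeds the available space of the current one, as in the abstract. */
    static struct stack *alloc_frame(struct stack *cur, size_t frame)
    {
        if (cur->size - cur->sp >= frame) {          /* frame fits: stay put */
            cur->sp += frame;
            return cur;
        }
        struct stack *next = malloc(sizeof *next);   /* designate a second stack */
        next->size = frame > 4096 ? frame : 4096;
        next->base = malloc(next->size);
        next->sp = 0;
        next->meta.prev = cur;                       /* copy metadata so the manager */
        next->meta.saved_sp = cur->sp;               /* can transition back later    */
        next->sp += frame;                           /* allocate the frame there     */
        return next;
    }

    int main(void)
    {
        struct stack first = { malloc(256), 256, 200, { NULL, 0 } };
        struct stack *active = alloc_frame(&first, 128);   /* 128 > 56 free bytes */
        printf("switched stacks: %s, frame allocated, sp=%zu\n",
               active == &first ? "no" : "yes", active->sp);
        return 0;
    }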
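
Patent 11416236 (and publication 20190012164) covers receiving a firmware update with both an uncompressed and a compressed segment, receiving a second update that corresponds to the compressed segment, compressing that second update, and applying both. The sketch treats compression as a stub and the firmware as two small in-memory segments; every name and size is an assumption.

    #include <stdio.h>
    #include <string.h>

    #define SEG_SIZE 32

    /* Firmware with one uncompressed and one compressed segment (assumed layout). */
    struct firmware {
        char uncompressed[SEG_SIZE];
        char compressed[SEG_SIZE];
    };

    /* Stand-in compressor; a real implementation would run an actual codec here. */
    static void compress_stub(const char *in, char *out)
    {
        snprintf(out, SEG_SIZE, "z(%s)", in);
    }

    int main(void)
    {
        struct firmware fw = { "v1-uncompressed", "v1-compressed" };

        /* First update (received over the air): one uncompressed segment and
         * one compressed segment. */
        const char *upd1_uncompressed = "v2-uncompressed";
        const char *upd1_compressed   = "v2-compressed";

        /* Second update: corresponds to the compressed segment, so it is
         * compressed before being applied. */
        const char *upd2 = "v2.1-patch";
        char upd2_compressed[SEG_SIZE];
        compress_stub(upd2, upd2_compressed);

        /* Apply the first update, then the compressed second update. */
        strncpy(fw.uncompressed, upd1_uncompressed, SEG_SIZE - 1);
        strncpy(fw.compressed,   upd1_compressed,   SEG_SIZE - 1);
        strncpy(fw.compressed,   upd2_compressed,   SEG_SIZE - 1);

        printf("uncompressed segment: %s\ncompressed segment:   %s\n",
               fw.uncompressed, fw.compressed);
        return 0;
    }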
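
Patent 10678705 (and publication 20200089616) describes loading a firmware base image from one storage partition at boot, reserving a swap pool in execution memory, and loading dynamic modules into that pool from a second partition at run time. The sketch simulates the partitions as in-memory arrays and models only the pool bookkeeping; all names and sizes are assumptions.

    #include <stdio.h>
    #include <string.h>

    #define EXEC_MEM  4096
    #define SWAP_POOL 1024     /* reserved at boot for dynamic modules (assumed) */
    #define MODULE_SZ  512

    static unsigned char exec_mem[EXEC_MEM];

    /* Simulated storage: partition 1 holds the base image, partition 2 the modules. */
    static unsigned char partition1[EXEC_MEM - SWAP_POOL];
    static unsigned char partition2[2][MODULE_SZ];

    /* Boot time: load the base image and reserve the swap pool after it. */
    static size_t boot_load_base(void)
    {
        memcpy(exec_mem, partition1, sizeof partition1);
        return sizeof partition1;          /* the swap pool starts here */
    }

    /* Run time: load (or replace) one dynamic module inside the swap pool. */
    static void load_module(size_t pool_base, int module)
    {
        memcpy(exec_mem + pool_base, partition2[module], MODULE_SZ);
    }

    int main(void)
    {
        memset(partition1, 0x11, sizeof partition1);
        memset(partition2[0], 0x22, MODULE_SZ);
        memset(partition2[1], 0x33, MODULE_SZ);

        size_t pool = boot_load_base();    /* boot time */
        load_module(pool, 0);              /* run time: page in module 0 */
        load_module(pool, 1);              /* later: swap module 1 into the pool */
        printf("swap pool at offset %zu now holds module 1 (first byte 0x%02x)\n",
               pool, (unsigned)exec_mem[pool]);
        return 0;
    }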
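
Patent 10482021 stores each compressed high-priority line at a cache-line-aligned address, so it can be located without indirection, and places low-priority data in the space the high-priority line left free, tracked through pointers that are themselves stored in memory. The 64-byte line size comes from the abstract's example; the rest of the sketch is an assumption.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define LINE  64             /* cache line size, per the abstract's example */
    #define LINES 4

    static uint8_t mem[LINES * LINE];   /* compressed memory region */
    static uint8_t *lo_ptr[LINES];      /* pointers to low-priority data, kept in memory */

    /* Store a compressed high-priority line at its aligned slot (found with no
     * indirection), then place low-priority data in the leftover bytes and
     * remember where it went through a pointer. */
    static void store_pair(int idx, const uint8_t *hi, size_t hi_len,
                           const uint8_t *lo, size_t lo_len)
    {
        uint8_t *slot = mem + (size_t)idx * LINE;   /* aligned: idx * 64 */
        memcpy(slot, hi, hi_len);
        if (hi_len + lo_len <= LINE) {
            lo_ptr[idx] = slot + hi_len;            /* space left by compression */
            memcpy(lo_ptr[idx], lo, lo_len);
        } else {
            lo_ptr[idx] = NULL;                     /* would be placed elsewhere */
        }
    }

    int main(void)
    {
        uint8_t hi[40], lo[16];
        memset(hi, 0xAA, sizeof hi);
        memset(lo, 0x55, sizeof lo);
        store_pair(0, hi, sizeof hi, lo, sizeof lo);
        printf("high-priority line at offset 0, low-priority data at offset %td\n",
               lo_ptr[0] - mem);
        return 0;
    }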
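
Patent 10372459 and publications 20190087193 and 20190303158 describe selecting, from an execution trace, the subset of branches that benefit most from a neural branch predictor, pre-training the neural predictor for that subset, and using a non-neural predictor for the rest. The sketch uses a toy one-weight "neural" path and a 2-bit counter as the non-neural path; the subset membership and pre-trained weights are assumed values.

    #include <stdio.h>
    #include <stdbool.h>

    #define NBRANCH 8

    /* Subset identified offline from an execution trace (assumed): branches
     * flagged true use the neural predictor, the rest use the simple one. */
    static const bool use_neural[NBRANCH] = { true, false, true, false,
                                              false, false, true, false };

    static int weight[NBRANCH] = { 2, 0, 1, 0, 0, 0, 3, 0 }; /* toy pre-trained weights */
    static int counter[NBRANCH];                             /* 2-bit counters, 0..3    */

    static bool predict(int pc)
    {
        if (use_neural[pc])
            return weight[pc] >= 0;   /* neural path: sign of the pre-trained weight */
        return counter[pc] >= 2;      /* non-neural path: bimodal counter */
    }

    static void update(int pc, bool taken)
    {
        if (use_neural[pc]) {
            weight[pc] += taken ? 1 : -1;              /* continue training online */
        } else {
            if (taken && counter[pc] < 3) counter[pc]++;
            if (!taken && counter[pc] > 0) counter[pc]--;
        }
    }

    int main(void)
    {
        /* One branch inside the neural subset (pc 0) and one outside it (pc 1). */
        bool outcomes[] = { true, true, false, true, true };
        for (int i = 0; i < 5; i++) {
            for (int pc = 0; pc < 2; pc++) {
                bool p = predict(pc);
                update(pc, outcomes[i]);
                printf("pc=%d %-7s predicted=%d actual=%d\n",
                       pc, use_neural[pc] ? "neural" : "bimodal", p, outcomes[i]);
            }
        }
        return 0;
    }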
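
Patent 10198362 (and publication 20180225224) caches free-memory-list pointers in buffers, refilling a whole buffer from system memory when the cached pointer count drops below a low threshold and emptying a whole buffer back when it exceeds a high threshold, so transfers happen a buffer at a time. The buffer size and thresholds below are assumptions.

    #include <stdio.h>

    #define BUF_SLOTS   4   /* free-list pointers per buffer (assumed) */
    #define LOW_THRESH  2   /* refill a buffer below this many cached pointers */
    #define HIGH_THRESH 7   /* empty a buffer above this many cached pointers */

    static int cached;                  /* pointers currently held in the cache */
    static int in_system_memory = 100;  /* pointers on the free list in DRAM */

    /* Refill one whole (empty) buffer from system memory. */
    static void refill(void) { cached += BUF_SLOTS; in_system_memory -= BUF_SLOTS; }

    /* Empty one whole (full) buffer back to system memory. */
    static void spill(void)  { cached -= BUF_SLOTS; in_system_memory += BUF_SLOTS; }

    /* Take a free-block pointer (allocation), maintaining the low watermark. */
    static void alloc_block(void)
    {
        if (cached == 0) refill();
        cached--;
        if (cached < LOW_THRESH) refill();
    }

    /* Return a free-block pointer (deallocation), maintaining the high watermark. */
    static void free_block(void)
    {
        cached++;
        if (cached > HIGH_THRESH) spill();
    }

    int main(void)
    {
        for (int i = 0; i < 6; i++)  alloc_block();
        for (int i = 0; i < 12; i++) free_block();
        printf("pointers cached=%d, pointers in system memory=%d\n",
               cached, in_system_memory);
        return 0;
    }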
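
Patent 10169246 (and publication 20180329830) selects one of 2^N compressed data regions from the virtual address modulo 2^N, which lets the metadata store abbreviated physical addresses with the upper N bits omitted, since the selected region implies them. The sketch reconstructs a full physical address that way; N = 2, the 32-bit address width, and the table layout are assumptions.

    #include <stdio.h>
    #include <stdint.h>

    #define N           2                  /* 2^N = 4 compressed data regions (assumed) */
    #define REGIONS     (1u << N)
    #define ADDR_BITS   32                 /* full physical address width (assumed) */
    #define ABBREV_BITS (ADDR_BITS - N)    /* metadata stores only these low bits */

    /* Metadata: virtual block index -> abbreviated physical address (upper N bits omitted). */
    static uint32_t metadata[16];

    /* The region is implied by the virtual address, so it need not be stored. */
    static uint32_t region_of(uint32_t vaddr) { return vaddr % REGIONS; }

    /* Rebuild the full physical address: the region index supplies the omitted upper N bits. */
    static uint32_t full_paddr(uint32_t vaddr)
    {
        return (region_of(vaddr) << ABBREV_BITS) | metadata[vaddr];
    }

    int main(void)
    {
        uint32_t vaddr = 5;               /* selects region 5 % 4 = 1 */
        metadata[vaddr] = 0x1234u;        /* abbreviated physical address from metadata */
        printf("vaddr %u -> region %u -> full physical address 0x%08x\n",
               (unsigned)vaddr, (unsigned)region_of(vaddr), (unsigned)full_paddr(vaddr));
        return 0;
    }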
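
Patent 10061698 avoids buffering evicted cache data when writes stall: the compression memory system looks up the metadata for the evicted line's virtual address and, if that metadata is not yet available, writes the data to a new, available physical address instead of waiting. The free-block allocator, table, and sizes in the sketch are assumptions.

    #include <stdio.h>

    #define BLOCKS 8

    /* metadata[v] holds the physical block mapped to virtual line v, or -1 when
     * the mapping is not yet available at the time the eviction arrives. */
    static int  metadata[BLOCKS] = { 3, -1, -1, -1, -1, -1, -1, -1 };
    static int  next_free = 4;            /* trivial free-block allocator (assumed) */
    static char memory[BLOCKS][16];

    /* Handle an evicted line: use the mapped block when metadata is available;
     * otherwise store at a new physical block immediately, without buffering
     * the data while waiting for the metadata read. */
    static int evict(int vline, const char *data)
    {
        int pblock = metadata[vline];
        if (pblock < 0) {
            pblock = next_free++;         /* new, available physical address */
            metadata[vline] = pblock;     /* record the mapping for later reads */
        }
        snprintf(memory[pblock], sizeof memory[pblock], "%s", data);
        return pblock;
    }

    int main(void)
    {
        printf("vline 0 stored at block %d (metadata available)\n", evict(0, "lineA"));
        printf("vline 1 stored at block %d (metadata missing, new block)\n", evict(1, "lineB"));
        return 0;
    }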