Patents by Inventor Richard Senior
Richard Senior has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240403411
Abstract: Various embodiments include methods and devices for maintaining control flow integrity in computing devices. Embodiments may include identifying, by a compiler, indirect function call candidate functions in source code; replacing, by the compiler, an indirect function call in the source code with a call to a wrapper function; and collocating, by a linker, the indirect function call candidate functions in at least one range of memory addresses. The wrapper function may be configured to determine whether an address to be passed to the indirect function call is within the at least one range of memory addresses.
Type: Application
Filed: November 17, 2022
Publication date: December 5, 2024
Inventors: Sundeep Kushwaha, Arvind Krishnaswamy, Sergei Larin, Can Acar, Tianshuo Su, Awanish Pandey, Richard Senior
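The mechanism is a compile-time substitution paired with a link-time range check. The sketch below is a minimal illustration of that check, assuming hypothetical linker-provided symbols `__icall_start` and `__icall_end` bounding the collocated candidate functions; the names and the abort-on-violation policy are illustrative, not taken from the patent.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical bounds a linker script would emit around the
   collocated indirect-call candidate functions. */
extern char __icall_start[];
extern char __icall_end[];

typedef void (*icall_t)(void);

/* Wrapper the compiler substitutes for a raw indirect call. */
static void icall_wrapper(icall_t target) {
    uintptr_t p = (uintptr_t)target;
    if (p < (uintptr_t)__icall_start || p >= (uintptr_t)__icall_end)
        abort();   /* target lies outside the vetted range: CFI violation */
    target();      /* address is inside the collocated range, safe to call */
}
```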
-
Patent number: 11868244
Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high-priority data region and a second cache line from a low-priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion of the candidate compressed cache line, and the second compressed cache line, or a portion of it, is written to a second predetermined portion. The first predetermined portion is larger than the second predetermined portion.
Type: Grant
Filed: January 10, 2022
Date of Patent: January 9, 2024
Assignee: QUALCOMM Incorporated
Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
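As a rough model of the packing step, the sketch below writes a compressed high-priority line into a fixed, larger share of a 64-byte candidate line and fits as much of the low-priority line as remains. The 48/16 split and the 64-byte line size are assumptions for illustration, not figures from the patent.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64                      /* assumed candidate line size */
#define HI_PORTION 48                      /* assumed larger share for high priority */
#define LO_PORTION (LINE_BYTES - HI_PORTION)

/* Pack a compressed high-priority line and (part of) a compressed
   low-priority line into one candidate compressed cache line. */
static void pack_line(uint8_t out[LINE_BYTES],
                      const uint8_t *hi, size_t hi_len,
                      const uint8_t *lo, size_t lo_len) {
    assert(hi_len <= HI_PORTION);          /* compressed high-priority line must fit */
    memcpy(out, hi, hi_len);               /* first, larger predetermined portion */
    size_t lo_fit = lo_len < LO_PORTION ? lo_len : LO_PORTION;
    memcpy(out + HI_PORTION, lo, lo_fit);  /* second, smaller predetermined portion */
}
```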
-
Patent number: 11829292
Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
Type: Grant
Filed: January 10, 2022
Date of Patent: November 28, 2023
Assignee: QUALCOMM Incorporated
Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
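The distinguishing detail here is that the two sets of data bits grow in opposite directions, so the two compressed lines can expand toward the middle without a fixed boundary between them. A minimal sketch of that layout, with an assumed 64-byte line:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64   /* assumed candidate line size */

/* Pack two compressed lines into one candidate line: the first fills
   forward from byte 0, the second fills backward from the last byte. */
static void pack_bidirectional(uint8_t out[LINE_BYTES],
                               const uint8_t *hi, size_t hi_len,
                               const uint8_t *lo, size_t lo_len) {
    assert(hi_len + lo_len <= LINE_BYTES);   /* both must share the one line */
    memcpy(out, hi, hi_len);                 /* first set of bits, forward */
    for (size_t i = 0; i < lo_len; i++)
        out[LINE_BYTES - 1 - i] = lo[i];     /* second set, backward */
}
```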
-
Patent number: 11782762
Abstract: A method of managing a stack includes detecting, by a stack manager of a processor, that the size of a frame to be allocated exceeds the available space of a first stack. The first stack is used by a particular task executing at the processor. The method also includes designating a second stack for use by the particular task. The method further includes copying metadata associated with the first stack to the second stack. The metadata enables the stack manager to transition from the second stack to the first stack upon detection that the second stack is no longer in use by the particular task. The method also includes allocating the frame in the second stack.
Type: Grant
Filed: February 26, 2020
Date of Patent: October 10, 2023
Assignee: QUALCOMM Incorporated
Inventors: Richard Senior, Sundeep Kushwaha, Harsha Gordhan Jagasia, Christopher Ahn, Gurvinder Singh Chhabra, Nieyan Geng, Maksim Krasnyanskiy, Unni Prasad
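A conceptual sketch of the switch-and-allocate path follows. The segment structure, the back-pointer serving as the copied metadata, and the doubling size policy are all invented for illustration; the patent only requires that the metadata let the manager transition back to the first stack.

```c
#include <stddef.h>
#include <stdlib.h>

typedef struct stack_seg {
    struct stack_seg *prev;   /* metadata: how to transition back to the first stack */
    size_t size;              /* capacity of this segment */
    size_t used;              /* bytes currently allocated */
    unsigned char base[];     /* frame storage */
} stack_seg;

/* Allocate a frame, designating a second stack if it will not fit. */
static stack_seg *alloc_frame(stack_seg *cur, size_t frame_size) {
    if (cur->size - cur->used >= frame_size) {
        cur->used += frame_size;               /* frame fits in the first stack */
        return cur;
    }
    size_t nsz = frame_size * 2;               /* assumed sizing policy */
    stack_seg *next = malloc(sizeof *next + nsz);
    if (next == NULL)
        return NULL;
    next->prev = cur;                          /* copy the metadata over */
    next->size = nsz;
    next->used = frame_size;                   /* allocate the frame in the second stack */
    return next;
}
```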
-
Publication number: 20230236979
Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
Type: Application
Filed: January 10, 2022
Publication date: July 27, 2023
Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
-
Publication number: 20230236961
Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high-priority data region and a second cache line from a low-priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion of the candidate compressed cache line, and the second compressed cache line, or a portion of it, is written to a second predetermined portion. The first predetermined portion is larger than the second predetermined portion.
Type: Application
Filed: January 10, 2022
Publication date: July 27, 2023
Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
-
Patent number: 11687461
Abstract: A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
Type: Grant
Filed: January 10, 2022
Date of Patent: June 27, 2023
Assignee: QUALCOMM Incorporated
Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
-
Patent number: 11386012
Abstract: Various embodiments include methods and devices for generating a memory map configured to map virtual addresses of pages to physical addresses, in which pages of the same size are grouped into regions. The embodiments may include adding a first entry for a first additional page to a first region in the memory map, shifting the virtual addresses of the first region to accommodate a shift of the virtual addresses of the first region allocated for code by a sub-page granular shift amount, mapping the shifted virtual addresses of the first entry for the first additional page to the physical address mapped to the first (lowest) shifted virtually addressed page of the first region, and shifting the virtual addresses of the first region allocated for code by the sub-page granular shift amount, in which the virtual addresses of the first region allocated for code partially shift into the first entry for the first additional page.
Type: Grant
Filed: March 15, 2021
Date of Patent: July 12, 2022
Assignee: QUALCOMM Incorporated
Inventors: Arvind Krishnaswamy, Richard Senior, Sundeep Kushwaha, Can Acar
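The core arithmetic is that a code region is displaced by less than a page, with one additional page entry absorbing the overflow. The toy function below only illustrates that address arithmetic (undo the shift, then resolve a page-granular entry plus a byte offset); the patent's actual rule for aliasing the additional page entry is more involved. The 4 KiB page size is an assumption.

```c
#include <stdint.h>

#define PAGE 4096u   /* assumed page size */

/* Given a virtual offset into a region whose code was shifted by a
   sub-page amount, recover the backing physical address. */
static uint64_t pa_of_shifted(uint64_t va_off, uint64_t pa_base,
                              uint64_t shift /* < PAGE */) {
    uint64_t orig = va_off - shift;   /* undo the sub-page granular shift */
    return pa_base
         + (orig / PAGE) * PAGE       /* page-granular map entry */
         + (orig % PAGE);             /* byte offset inside the page */
}
```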
-
Publication number: 20200272520
Abstract: A method of managing a stack includes detecting, by a stack manager of a processor, that the size of a frame to be allocated exceeds the available space of a first stack. The first stack is used by a particular task executing at the processor. The method also includes designating a second stack for use by the particular task. The method further includes copying metadata associated with the first stack to the second stack. The metadata enables the stack manager to transition from the second stack to the first stack upon detection that the second stack is no longer in use by the particular task. The method also includes allocating the frame in the second stack.
Type: Application
Filed: February 26, 2020
Publication date: August 27, 2020
Inventors: Richard Senior, Sundeep Kushwaha, Harsha Gordhan Jagasia, Christopher Ahn, Gurvinder Singh Chhabra, Nieyan Geng, Maksim Krasnyanskiy, Unni Prasad
-
Patent number: 10482021
Abstract: In an aspect, high-priority lines are stored starting at addresses aligned to the cache line size (for instance, 64 bytes), and low-priority lines are stored in the memory space left by the compression of high-priority lines. The space left by the high-priority lines, and hence the low-priority lines themselves, are managed through pointers that are also stored in memory. In this manner, low-priority line contents can be moved to different memory locations as needed. The efficiency of high-priority compressed memory accesses is improved by removing the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents. The use of pointers for low-priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need to move within memory, for instance as they change in size over time.
Type: Grant
Filed: June 24, 2016
Date of Patent: November 19, 2019
Assignee: QUALCOMM Incorporated
Inventors: Andres Alejandro Oportus Valenzuela, Nieyan Geng, Christopher Edward Koob, Gurvinder Singh Chhabra, Richard Senior, Anand Janakiraman
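In this scheme only the low-priority lines pay for indirection. A minimal sketch of the two access paths, with an assumed 64-byte line size and an invented pointer-table layout:

```c
#include <stdint.h>

#define LINE 64u   /* assumed cache line size */

/* High-priority line i starts at a line-aligned address, so it is
   located by arithmetic alone, with no indirection. */
static inline uint64_t hi_addr(uint64_t region_base, uint64_t line_idx) {
    return region_base + line_idx * LINE;
}

/* Low-priority lines live in the space compression left behind; each
   is reached through a pointer stored in memory, which is simply
   updated whenever the line must move or grow. */
typedef struct { uint64_t ptr[1024]; } lo_table;   /* assumed table size */

static inline uint64_t lo_addr(const lo_table *t, uint64_t line_idx) {
    return t->ptr[line_idx];                       /* one level of indirection */
}
```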
-
Publication number: 20190073223
Abstract: Systems and methods for branch prediction include detecting a subset of branch instructions which are not fixed-direction branch instructions and, for this subset, utilizing complex branch prediction mechanisms such as a neural branch predictor. Detecting the subset of branch instructions includes using a state machine to determine the branch instructions whose outcomes change between a taken direction and a not-taken direction in separate instances of their execution. For the remaining branch instructions, which are fixed-direction branch instructions, the complex branch prediction techniques are avoided.
Type: Application
Filed: September 5, 2017
Publication date: March 7, 2019
Inventors: Richard Senior, Raghuveer Raghavendra, Gurkanwal Brar, Arvind Govindaraj
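The filter reduces to a small per-branch state machine: once a branch has been seen to go both ways, it is routed to the expensive predictor; until then, a cheap static guess suffices. The sketch below assumes one plausible encoding of that state machine, not the one in the publication.

```c
#include <stdbool.h>

typedef enum { ALWAYS_TAKEN, ALWAYS_NOT_TAKEN, MIXED } branch_class;

typedef struct {
    branch_class cls;
    bool seen;        /* has this branch executed at least once? */
    bool first_dir;   /* direction observed on first execution */
} branch_state;

/* Update per-branch state with an observed outcome. */
static void observe(branch_state *s, bool taken) {
    if (!s->seen) {
        s->seen = true;
        s->first_dir = taken;
        s->cls = taken ? ALWAYS_TAKEN : ALWAYS_NOT_TAKEN;
    } else if (taken != s->first_dir) {
        s->cls = MIXED;   /* outcome changed: not a fixed-direction branch */
    }
}

/* Only non-fixed-direction branches use the complex predictor. */
static bool use_neural_predictor(const branch_state *s) {
    return s->cls == MIXED;
}
```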
-
Patent number: 10198362
Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When the number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from system memory. In some aspects, when the number of pointers in the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
Type: Grant
Filed: February 7, 2017
Date of Patent: February 5, 2019
Assignee: QUALCOMM Incorporated
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
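The maintenance policy is plain hysteresis: refill on a low watermark, spill on a high one, always a whole buffer at a time so the memory traffic is bursty rather than per-pointer. A sketch with invented sizes and stubbed memory operations:

```c
#define BUF_PTRS 32   /* assumed pointers per buffer */
#define LOW_WM   32   /* refill below this count */
#define HIGH_WM  96   /* spill above this count */

typedef struct {
    unsigned count;   /* free-memory-list pointers currently cached */
    /* ... the buffers of pointers themselves ... */
} free_list_cache;

/* Stubs standing in for one whole-buffer burst transfer each. */
static void refill_from_memory(free_list_cache *c) { c->count += BUF_PTRS; }
static void spill_to_memory(free_list_cache *c)    { c->count -= BUF_PTRS; }

/* Run after any operation that consumes or returns a pointer. */
static void maintain(free_list_cache *c) {
    if (c->count < LOW_WM)
        refill_from_memory(c);   /* refill an empty buffer from system memory */
    else if (c->count > HIGH_WM)
        spill_to_memory(c);      /* empty a full buffer back to system memory */
}
```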
-
Patent number: 10169246
Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of the corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on the virtual address modulo 2^N. The compression circuit retrieves the abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on the memory block associated with the abbreviated physical address in the selected compressed data region.
Type: Grant
Filed: May 11, 2017
Date of Patent: January 1, 2019
Assignee: QUALCOMM Incorporated
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
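The address split is easiest to see in code. Below, a block's region is chosen by its virtual block address modulo 2^N, and since each region owns a distinct 2^N-th of the physical space, the N upper physical address bits are implied by the region and can be dropped from the stored metadata. N = 2 and the address width are assumptions.

```c
#include <stdint.h>

#define N        2u    /* assumed: 2^N = 4 regions */
#define REGIONS  (1u << N)
#define PA_BITS  36u   /* assumed full physical address width */

/* Region (and free-memory-list set) serving a virtual block address. */
static inline unsigned region_of(uint64_t vblock) {
    return (unsigned)(vblock % REGIONS);   /* virtual address modulo 2^N */
}

/* Abbreviate: drop the N upper bits, which the region implies. */
static inline uint64_t abbreviate(uint64_t full_pa) {
    return full_pa & ((1ull << (PA_BITS - N)) - 1);
}

/* Expand: reattach the region as the N upper bits. */
static inline uint64_t expand(uint64_t abbr_pa, unsigned region) {
    return abbr_pa | ((uint64_t)region << (PA_BITS - N));
}
```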
-
Publication number: 20180329830
Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit the N upper bits of the corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on the virtual address modulo 2^N. The compression circuit retrieves the abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on the memory block associated with the abbreviated physical address in the selected compressed data region.
Type: Application
Filed: May 11, 2017
Publication date: November 15, 2018
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
-
Patent number: 10120581
Abstract: Aspects for generating compressed data streams with lookback pre-fetch instructions are disclosed. A data compression system is provided and configured to receive and compress an uncompressed data stream as part of a lookback-based compression scheme. The data compression system determines if a current data block was previously compressed. If so, the data compression system is configured to insert a lookback instruction corresponding to the current data block into the compressed data stream. Each lookback instruction includes a lookback buffer index that points to an entry in a lookback buffer where decompressed data corresponding to the data block will be stored during a separate decompression scheme. Once the data blocks have been compressed, the data compression system is configured to move a lookback buffer index of each lookback instruction in the compressed data stream into a lookback pre-fetch instruction located earlier than the corresponding lookback instruction in the compressed data stream.
Type: Grant
Filed: March 30, 2016
Date of Patent: November 6, 2018
Assignee: QUALCOMM Incorporated
Inventors: Richard Senior, Amin Ansari, Vito Remo Bica, Jinxia Bai
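The final pass described above can be modeled as rewriting a token stream so that each lookback's buffer index also appears in a pre-fetch token a fixed distance earlier, giving the decompressor lead time to read the lookback buffer. The token layout and the distance below are assumptions.

```c
#include <stddef.h>

typedef enum { OP_LITERAL, OP_LOOKBACK, OP_LOOKBACK_PREFETCH } op_t;
typedef struct { op_t op; unsigned idx; } token;   /* idx: lookback buffer index */

#define PREFETCH_DISTANCE 4   /* assumed tokens of lead time */

/* Copy the stream, emitting a pre-fetch token PREFETCH_DISTANCE slots
   ahead of each upcoming lookback. 'out' must have room for up to 2*n
   tokens; returns the new stream length. */
static size_t hoist_prefetches(const token *in, size_t n, token *out) {
    size_t m = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n && in[i + PREFETCH_DISTANCE].op == OP_LOOKBACK)
            out[m++] = (token){ OP_LOOKBACK_PREFETCH, in[i + PREFETCH_DISTANCE].idx };
        out[m++] = in[i];   /* original token follows its hoisted pre-fetch */
    }
    return m;
}
```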
-
Patent number: 10061698
Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, the cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.
Type: Grant
Filed: January 31, 2017
Date of Patent: August 28, 2018
Assignee: QUALCOMM Incorporated
Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
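The decision point is a single branch on metadata availability. Below is a stubbed sketch of the eviction write path; the lookup, free-list, and write primitives are invented stand-ins for the hardware described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stubs standing in for the metadata circuit and free-block list. */
static bool metadata_lookup(uint64_t va, uint64_t *pa) { (void)va; (void)pa; return false; }
static uint64_t alloc_free_block(void)                 { return 0x1000; } /* placeholder PA */
static void write_block(uint64_t pa, const void *d)    { (void)pa; (void)d; }
static void metadata_update(uint64_t va, uint64_t pa)  { (void)va; (void)pa; }

/* Evicted data never waits in a buffer: if the existing mapping is not
   available, the line goes to a fresh block and the mapping is patched
   afterwards, so the write is not stalled. */
static void on_evict(uint64_t va, const void *data) {
    uint64_t pa;
    if (!metadata_lookup(va, &pa))   /* metadata not available yet */
        pa = alloc_free_block();     /* store at a new, available physical address */
    write_block(pa, data);
    metadata_update(va, pa);         /* record where the line now lives */
}
```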
-
Publication number: 20180225224
Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When the number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from system memory. In some aspects, when the number of pointers in the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
Type: Application
Filed: February 7, 2017
Publication date: August 9, 2018
Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
-
Publication number: 20180217930
Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, the cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.
Type: Application
Filed: January 31, 2017
Publication date: August 2, 2018
Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
-
Publication number: 20180173623
Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compressed memory system, to avoid stalling write operations. Metadata is included in the cache entries of the uncompressed cache memory and is used for mapping cache entries to physical addresses in the compressed memory system. When a cache entry is evicted, the compressed memory system uses the metadata associated with the evicted cache data to determine the physical address in the compressed system memory at which to store the evicted cache data. In this manner, the compressed memory system avoids the latency of reading the metadata for the evicted cache entry from another memory structure, which would otherwise require buffering the evicted cache data until the metadata became available before it could be written to the compressed system memory without stalling write operations.
Type: Application
Filed: December 21, 2016
Publication date: June 21, 2018
Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
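This variant removes the metadata read from the eviction path entirely by carrying the mapping inside the cache entry. A minimal sketch; the field layout and the write primitive are assumptions.

```c
#include <stdint.h>

typedef struct {
    uint64_t tag;             /* address tag */
    uint64_t compressed_pa;   /* mapping metadata cached with the entry itself */
    uint8_t  data[64];        /* assumed 64-byte line */
} cache_entry;

static void write_block(uint64_t pa, const void *d) { (void)pa; (void)d; } /* stub */

/* The physical address travels with the data, so eviction needs no
   separate metadata read and no buffering of the evicted line. */
static void evict(const cache_entry *e) {
    write_block(e->compressed_pa, e->data);
}
```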
-
Patent number: 9998143
Abstract: A system for data decompression may include a processor coupled to a remote memory having a remote dictionary stored thereon and coupled to a decompression logic having a local memory with a local dictionary. The processor may decompress data during execution by accessing the local dictionary and, if necessary, the remote dictionary.
Type: Grant
Filed: August 5, 2016
Date of Patent: June 12, 2018
Assignee: QUALCOMM Incorporated
Inventors: Richard Senior, Amin Ansari, Jinxia Bai, Vito Bica
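The two-level dictionary maps naturally to a guarded lookup: frequent entries come from the decompressor's local memory, and only misses reach out to the remote dictionary. The sizes below are assumptions, and the index is assumed to be within the remote dictionary's range.

```c
#include <stdint.h>

#define LOCAL_ENTRIES  256u     /* assumed local dictionary size */
#define REMOTE_ENTRIES 65536u   /* assumed remote dictionary size */
#define ENTRY_BYTES    8u

static uint8_t local_dict[LOCAL_ENTRIES][ENTRY_BYTES];    /* in the decompression logic */
static uint8_t remote_dict[REMOTE_ENTRIES][ENTRY_BYTES];  /* in remote memory */

/* Resolve a dictionary index during decompression: the local copy
   serves the common case; the remote dictionary is the fallback.
   idx is assumed valid (< REMOTE_ENTRIES). */
static const uint8_t *dict_lookup(uint32_t idx) {
    if (idx < LOCAL_ENTRIES)
        return local_dict[idx];    /* local hit: no remote access needed */
    return remote_dict[idx];       /* if necessary, go to the remote dictionary */
}
```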