Patents by Inventor Christopher Edward Koob

Christopher Edward Koob has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10482021
    Abstract: In an aspect, high priority lines are stored starting at an address aligned to a cache line size, for instance 64 bytes, and low priority lines are stored in the memory space left by the compression of high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, are managed through pointers also stored in memory. In this manner, low priority line contents can be moved to different memory locations as needed. The efficiency of higher priority compressed memory accesses is improved by removing the need for the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents. The use of pointers for low priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need movement within memory, for instance as they change in size over time.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: November 19, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Andres Alejandro Oportus Valenzuela, Nieyan Geng, Christopher Edward Koob, Gurvinder Singh Chhabra, Richard Senior, Anand Janakiraman
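    A minimal sketch in C of the placement scheme described in this entry. All names and the 64-byte line size are illustrative assumptions; the point is that a high priority line's address is computed directly from its index, while a low priority line is reached through a pointer table that can be rewritten to relocate the line.
    ```c
    #include <stdint.h>
    #include <stddef.h>

    #define LINE_SIZE 64u  /* cache line size used as the example in the abstract */

    /* High priority line i always starts at i * LINE_SIZE, so its address
     * is computed rather than looked up: no indirection on access. */
    static inline uint8_t *high_priority_line(uint8_t *region, size_t i) {
        return region + i * LINE_SIZE;
    }

    /* Low priority lines sit in the space compression leaves free, so each
     * is found through a pointer (stored in memory). Rewriting the entry is
     * all it takes to relocate a mutable line that changed size. */
    typedef struct {
        uint8_t *ptr;   /* current location of the compressed line */
        uint16_t len;   /* compressed size in bytes */
    } low_prio_entry;

    static inline uint8_t *low_priority_line(low_prio_entry *table, size_t i) {
        return table[i].ptr;  /* one extra memory reference (indirection) */
    }

    static inline void move_low_priority(low_prio_entry *table, size_t i,
                                         uint8_t *new_loc, uint16_t new_len) {
        table[i].ptr = new_loc;
        table[i].len = new_len;
    }
    ```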
  • Patent number: 10198362
    Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: February 5, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
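    A sketch in C of the threshold-driven maintenance this entry describes. The watermark values, buffer count, and transfer stubs are hypothetical; the patent specifies buffers and thresholds, not these numbers.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define BUF_SLOTS 16u   /* pointers per buffer (assumed) */
    #define LOW_WM    16u   /* below this, refill a buffer from memory */
    #define HIGH_WM   48u   /* above this, spill a buffer to memory */

    typedef struct {
        uint32_t slots[4][BUF_SLOTS]; /* the "plurality of buffers" */
        unsigned count;               /* pointers currently cached */
    } free_list_cache;

    /* Stand-ins for burst transfers against the free list in system memory. */
    static void refill_buffer_from_memory(free_list_cache *c) {
        c->count += BUF_SLOTS;  /* one burst read fills a whole empty buffer */
        puts("refill: one buffer read from system memory");
    }
    static void spill_buffer_to_memory(free_list_cache *c) {
        c->count -= BUF_SLOTS;  /* one burst write empties a whole full buffer */
        puts("spill: one buffer written to system memory");
    }

    /* Maintenance works at buffer granularity, so system-memory traffic
     * happens in bursts rather than once per allocated or freed pointer. */
    void maintain(free_list_cache *c) {
        if (c->count < LOW_WM)
            refill_buffer_from_memory(c);
        else if (c->count > HIGH_WM)
            spill_buffer_to_memory(c);
    }
    ```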
  • Patent number: 10169246
    Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit N upper bits of corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on a modulus of the virtual address and 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: January 1, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
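    A sketch in C of the address abbreviation, assuming the N upper bits omitted from the stored physical address coincide with the region index selected by the modulus; the abstract implies but does not state this outright.
    ```c
    #include <stdint.h>

    #define N 2u                     /* number of omitted upper bits (assumed) */
    #define REGIONS (1u << N)        /* 2^N compressed data regions */

    /* The virtual address modulo 2^N picks both the compressed data region
     * and the matching set of free memory lists. */
    uint32_t region_of(uint32_t vaddr) {
        return vaddr % REGIONS;
    }

    /* Metadata stores only the abbreviated address; the omitted N upper
     * bits are reattached from the region the virtual address selects. */
    uint64_t full_physical_address(uint32_t vaddr, uint64_t abbreviated,
                                   unsigned addr_bits) {
        uint64_t upper = region_of(vaddr);
        return (upper << (addr_bits - N)) | abbreviated;
    }
    ```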
  • Publication number: 20180329830
    Abstract: Reducing metadata size in compressed memory systems of processor-based systems is disclosed. In one aspect, a compressed memory system provides 2^N compressed data regions, corresponding 2^N sets of free memory lists, and a metadata circuit. The metadata circuit associates virtual addresses with abbreviated physical addresses, which omit N upper bits of corresponding full physical addresses, of memory blocks of the 2^N compressed data regions. A compression circuit of the compressed memory system receives a memory access request including a virtual address, and selects one of the 2^N compressed data regions and one of the 2^N sets of free memory lists based on a modulus of the virtual address and 2^N. The compression circuit retrieves an abbreviated physical address corresponding to the virtual address from the metadata circuit, and performs a memory access operation on a memory block associated with the abbreviated physical address in the selected compressed data region.
    Type: Application
    Filed: May 11, 2017
    Publication date: November 15, 2018
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
  • Patent number: 10114756
    Abstract: A method includes reading, by a processor, one or more configuration values from a storage device or a memory management unit. The method also includes loading the one or more configuration values into one or more registers of the processor. The one or more registers are useable by the processor to perform address translation.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: October 30, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Erich James Plondke, Piyush Patel, Thomas Andrew Sartorius, Lucian Codrescu
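    A sketch in C of the configuration load this entry describes. The register file and field meanings are hypothetical stand-ins for the processor's address-translation registers.
    ```c
    #include <stdint.h>

    enum { NUM_XLATE_REGS = 8 };          /* register count is assumed */
    static uint64_t xlate_regs[NUM_XLATE_REGS];

    /* Read configuration values from a storage device (or a memory
     * management unit) and load them into the registers the processor
     * subsequently uses to perform address translation. */
    void load_translation_config(const uint64_t *stored_config) {
        for (int i = 0; i < NUM_XLATE_REGS; i++)
            xlate_regs[i] = stored_config[i];  /* e.g. base, limit, attributes */
    }
    ```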
  • Patent number: 10102031
    Abstract: Systems and methods relate to managing shared resources in a multithreaded processor comprising two or more processing threads. Danger levels for the two or more threads are determined, wherein the danger level of a thread is based on a potential failure of the thread to meet a deadline due to unavailability of a shared resource. Priority levels associated with the two or more threads are also determined, wherein the priority level is higher for a thread whose failure to meet a deadline is unacceptable and the priority level is lower for a thread whose failure to meet a deadline is acceptable. The two or more threads are scheduled based at least on the determined danger levels for the two or more threads and priority levels associated with the two or more threads.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: October 16, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Serag Monier Gadelrab, Christopher Edward Koob, Simon Booth, Aris Balatsos, Johnny Jone Wai Kuan, Myil Ramkumar, Bhupinder Singh Pabla, Sean David Sweeney, George Patsilaras
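    One plausible way in C to combine the two signals; the patent claims scheduling "based at least on" danger and priority, not this particular formula.
    ```c
    #include <stddef.h>

    typedef struct {
        int danger;    /* risk of missing a deadline due to a busy shared resource */
        int priority;  /* high: a missed deadline is unacceptable; low: tolerable */
    } thread_info;

    /* Rank threads so that a high-priority thread in danger is served
     * first; the product is a hypothetical scoring choice. */
    size_t pick_next(const thread_info *t, size_t n) {
        size_t best = 0;
        for (size_t i = 1; i < n; i++) {
            if (t[i].priority * t[i].danger >
                t[best].priority * t[best].danger)
                best = i;
        }
        return best;
    }
    ```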
  • Patent number: 10061698
    Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: August 28, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
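    A sketch in C of the eviction path this entry describes, with stand-in allocator and write helpers. The essential branch: when metadata is unavailable, the write proceeds immediately to a newly allocated physical address instead of buffering the evicted data.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { bool valid; uint64_t paddr; } metadata_entry;

    /* Stand-ins for the compression memory system's allocator and writer. */
    static uint64_t alloc_new_block(void) { static uint64_t next; return next += 64; }
    static void write_block(uint64_t paddr, const void *data) { (void)paddr; (void)data; }
    static void update_metadata(uint64_t vaddr, uint64_t paddr) { (void)vaddr; (void)paddr; }

    void on_evict(uint64_t vaddr, const void *data, const metadata_entry *md) {
        if (md && md->valid) {
            write_block(md->paddr, data);   /* mapped physical address known */
        } else {
            uint64_t p = alloc_new_block(); /* don't wait for the metadata read */
            write_block(p, data);
            update_metadata(vaddr, p);      /* record the new mapping afterwards */
        }
    }
    ```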
  • Publication number: 20180225224
    Abstract: Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
    Type: Application
    Filed: February 7, 2017
    Publication date: August 9, 2018
    Inventors: Richard Senior, Christopher Edward Koob, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
  • Publication number: 20180217930
    Abstract: Aspects disclosed involve reducing or avoiding buffering of evicted cache data from an uncompressed cache memory in a compression memory system when stalled write operations occur. A processor-based system is provided that includes a cache memory and a compression memory system. When a cache entry is evicted from the cache memory, cache data and a virtual address associated with the evicted cache entry are provided to the compression memory system. The compression memory system reads metadata associated with the virtual address of the evicted cache entry to determine the physical address in the compression memory system mapped to the evicted cache entry. If the metadata is not available, the compression memory system stores the evicted cache data at a new, available physical address in the compression memory system without waiting for the metadata. Thus, buffering of the evicted cache data to avoid or reduce stalling write operations is not necessary.
    Type: Application
    Filed: January 31, 2017
    Publication date: August 2, 2018
    Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
  • Patent number: 10025711
    Abstract: Embodiments disclosed in the detailed description include hybrid write-through/write-back cache policy managers, and related systems and methods. A cache write policy manager is configured to determine whether at least two caches among a plurality of parallel caches are active. If none of the one or more other caches is active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-back cache policy. In this manner, the cache write policy manager may conserve power and/or increase performance of a singly active processor core. If any of the one or more other caches are active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-through cache policy. In this manner, the cache write policy manager facilitates data coherency among the parallel caches when multiple processor cores are active.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: July 17, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Peter G. Sassone, Christopher Edward Koob, Dana M. Vantrease, Suresh K. Venkumahanti, Lucian Codrescu
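    A sketch in C of the policy decision this entry describes; the active-cache bookkeeping is a hypothetical simplification.
    ```c
    #include <stdbool.h>

    typedef enum { WRITE_THROUGH, WRITE_BACK } write_policy;

    /* With a single active cache there is no peer to keep coherent, so
     * write-back saves write traffic and power; with two or more active
     * caches, write-through keeps peers seeing every store. */
    write_policy choose_policy(const bool *cache_active, int n, int self) {
        for (int i = 0; i < n; i++)
            if (i != self && cache_active[i])
                return WRITE_THROUGH;   /* another parallel cache is active */
        return WRITE_BACK;              /* singly active core */
    }
    ```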
  • Publication number: 20180173623
    Abstract: Aspects disclosed involve reducing or avoiding buffering evicted cache data from an uncompressed cache memory in a compressed memory system to avoid stalling write operations. Metadata is included in cache entries in the uncompressed cache memory, which is used for mapping cache entries to physical addresses in the compressed memory system. When a cache entry is evicted, the compressed memory system uses the metadata associated with the evicted cache data to determine the physical address in the compressed system memory for storing the evicted cache data. In this manner, the compressed memory system avoids the latency of reading the metadata for the evicted cache entry from another memory structure, which would otherwise require buffering the evicted cache data until the metadata became available before it could be written to the compressed system memory, stalling write operations.
    Type: Application
    Filed: December 21, 2016
    Publication date: June 21, 2018
    Inventors: Christopher Edward Koob, Richard Senior, Gurvinder Singh Chhabra, Andres Alejandro Oportus Valenzuela, Nieyan Geng, Raghuveer Raghavendra, Christopher Porter, Anand Janakiraman
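    A sketch in C of the entry layout this publication implies, with illustrative field names; the point is that the mapping metadata travels with the cache entry, so eviction needs no separate metadata read.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t vaddr;
        uint8_t  data[64];
        bool     md_valid;
        uint64_t md_paddr;   /* compressed-memory mapping cached in the entry */
    } cache_entry;

    /* On eviction the target physical address is already in hand, so the
     * evicted data can be written out without buffering for a metadata fetch. */
    uint64_t evict_target(const cache_entry *e) {
        return e->md_paddr;
    }
    ```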
  • Patent number: 9858201
    Abstract: A translation lookaside buffer (TLB) stores translation entries. The translation entries include a virtual address, a physical address, and a memory local/not-local flag. When a processor is in a low power/local memory mode, a virtual address is received and matched to a translation entry. If the local/not-local flag of the matching entry indicates that its physical address is outside the local memory, an out-of-access-range memory access exception is generated.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: January 2, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Erich James Plondke, Jiajin Tu
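    A sketch in C of the flagged TLB lookup this entry describes; the structure is illustrative and exception delivery is stubbed.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t vaddr;
        uint64_t paddr;
        bool     is_local;   /* the memory local/not-local flag */
    } tlb_entry;

    static void raise_out_of_range_exception(void) { /* trap in hardware */ }

    /* In low power/local memory mode, a translation that resolves outside
     * local memory raises an out-of-access-range exception rather than
     * touching the powered-down memory path. */
    uint64_t translate(const tlb_entry *hit, bool low_power_local_mode) {
        if (low_power_local_mode && !hit->is_local)
            raise_out_of_range_exception();
        return hit->paddr;
    }
    ```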
  • Publication number: 20170371792
    Abstract: In an aspect, high priority lines are stored starting at an address aligned to a cache line size, for instance 64 bytes, and low priority lines are stored in the memory space left by the compression of high priority lines. The space left by the high priority lines, and hence the low priority lines themselves, are managed through pointers also stored in memory. In this manner, low priority line contents can be moved to different memory locations as needed. The efficiency of higher priority compressed memory accesses is improved by removing the need for the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents.
    Type: Application
    Filed: June 24, 2016
    Publication date: December 28, 2017
    Inventors: Andres Alejandro OPORTUS VALENZUELA, Nieyan GENG, Christopher Edward KOOB, Gurvinder Singh CHHABRA, Richard SENIOR, Anand JANAKIRAMAN
  • Patent number: 9824013
    Abstract: Systems and methods for allocation of cache lines in a shared partitioned cache of a multi-threaded processor. A memory management unit is configured to determine attributes associated with an address for a cache entry associated with a processing thread to be allocated in the cache. A configuration register is configured to store cache allocation information based on the determined attributes. A partitioning register is configured to store partitioning information for partitioning the cache into two or more portions. The cache entry is allocated into one of the portions of the cache based on the configuration register and the partitioning register.
    Type: Grant
    Filed: May 8, 2012
    Date of Patent: November 21, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Ajay Anant Ingle, Lucian Codrescu, Suresh K. Venkumahanti
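    A sketch in C of the two control registers this entry describes, with illustrative (not patented) encodings; two portions stand in for the "two or more" the abstract allows.
    ```c
    #include <stdint.h>

    typedef struct {
        uint32_t config;     /* one-bit portion id per MMU attribute class (assumed) */
        uint32_t partition;  /* way mask splitting the shared cache (assumed) */
    } cache_ctrl_regs;

    /* Pick the cache portion for a new line from the attributes the memory
     * management unit reports for its address. */
    unsigned portion_for(const cache_ctrl_regs *r, unsigned attr_class) {
        return (r->config >> (attr_class & 0x1Fu)) & 0x1u;
    }

    /* Ways eligible for allocation within a given portion. */
    uint32_t ways_in_portion(const cache_ctrl_regs *r, unsigned portion) {
        return portion ? r->partition : (uint32_t)~r->partition;
    }
    ```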
  • Patent number: 9785211
    Abstract: The feature size of semiconductor devices continues to decrease in each new generation. Smaller channel lengths lead to increased leakage currents. To reduce leakage current, some power domains within a device may be powered off (e.g., power collapsed) during periods of inactivity. However, when power is returned to the collapsed domains, circuitry in other power domains may experience significant processing overhead associated with reconfiguring communication channels to the newly powered domains. Provided in the present disclosure are exemplary techniques for isolating power domains to promote flexible power collapse while better managing the processing overhead associated with reestablishing data connections.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: October 10, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Xufeng Chen, Robert Allan Lester, Manojkumar Pyla, Peixin Zhong
  • Patent number: 9767025
    Abstract: Systems and methods for maintaining cache coherency in a multiprocessor system with shared memory, including a write-data-invalid (WDI) state configured to reduce stalls during write operations. The WDI state is a dataless state with guaranteed write permissions. When a first processor of the multiprocessor system makes a write request for a first cache entry of a first cache, the WDI state associated with the first cache entry includes write permissions for the write to directly proceed to one or more higher levels of memory in the shared memory, such that delays associated with obtaining write permissions are reduced at the first cache. The WDI state is treated as an invalid state for a read request to the first cache entry by the first processor.
    Type: Grant
    Filed: April 18, 2012
    Date of Patent: September 19, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Dana M. Vantrease
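    A sketch in C of how the write-data-invalid (WDI) state differs from conventional states; the state set is a simplification of a real coherence protocol.
    ```c
    typedef enum { INVALID, SHARED, MODIFIED, WRITE_DATA_INVALID } line_state;

    /* WDI guarantees write permission but holds no data: a write can
     * proceed straight to higher levels of the shared memory hierarchy
     * without a permission stall... */
    int may_write_without_stall(line_state s) {
        return s == MODIFIED || s == WRITE_DATA_INVALID;
    }

    /* ...while a read treats the line as invalid and must miss. */
    int read_hits(line_state s) {
        return s == MODIFIED || s == SHARED;
    }
    ```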
  • Patent number: 9678758
    Abstract: Systems and methods for implementing certain load instructions, such as vector load instructions by cooperation of a main processor and a coprocessor. The load instructions which are identified by the main processor for offloading to the coprocessor are committed in the main processor without receiving corresponding load data. Post-commit, the load instructions are processed in the coprocessor, such that latencies incurred in fetching the load data are hidden from the main processor. By implementing an out-of-order load data buffer associated with an in-order instruction buffer, the coprocessor is also configured to avoid stalls due to long latencies which may be involved in fetching the load data from levels of memory hierarchy, such as L2, L3, L4 caches, main memory, etc.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: June 13, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Lucian Codrescu, Christopher Edward Koob, Eric Wayne Mahurin, Suresh Kumar Venkumahanti
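    A sketch in C of the decoupling this entry describes: an in-order instruction buffer paired with an out-of-order load data buffer. The structure and sizes are assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    #define Q 8u

    typedef struct {
        uint32_t insn[Q];     /* in-order instruction buffer */
        uint64_t data[Q];     /* out-of-order load data buffer */
        bool     ready[Q];    /* set when data for a slot has arrived */
        unsigned head, tail;  /* monotonically increasing counters */
    } copro_load_queue;

    /* Load data may return for any slot, in any order, so a slow fetch
     * from L2/L3/L4 or main memory does not block the others. */
    void data_arrived(copro_load_queue *q, unsigned slot, uint64_t value) {
        q->data[slot % Q] = value;
        q->ready[slot % Q] = true;
    }

    /* Instructions still retire strictly from the head, keeping the
     * coprocessor's completion order in-order. */
    bool try_retire(copro_load_queue *q) {
        if (q->head == q->tail || !q->ready[q->head % Q])
            return false;
        q->ready[q->head % Q] = false;
        q->head++;
        return true;
    }
    ```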
  • Patent number: 9658793
    Abstract: Processor access of memory is monitored. The monitoring includes identifying whether the accesses are to a local memory or a non-local memory. Based on the monitoring, the processor is switched from a non-local memory access mode to a local memory access mode.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: May 23, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Edward Koob, Erich James Plondke, Jiajin Tu
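    A sketch in C of the monitoring and mode switch this entry describes; the counters and threshold are hypothetical.
    ```c
    #include <stdint.h>

    typedef struct {
        uint64_t local_hits;
        uint64_t nonlocal_hits;
    } access_monitor;

    void record_access(access_monitor *m, int is_local) {
        if (is_local) m->local_hits++;
        else          m->nonlocal_hits++;
    }

    /* Switch to local memory access mode once recent traffic has been
     * entirely local for long enough (threshold assumed). */
    int should_enter_local_mode(const access_monitor *m) {
        return m->nonlocal_hits == 0 && m->local_hits > 1000;
    }
    ```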
  • Patent number: 9606818
    Abstract: An apparatus includes a primary hypervisor that is executable on a first set of processors and a secondary hypervisor that is executable on a second set of processors. The primary hypervisor may define settings of a resource and the secondary hypervisor may use the resource based on the settings defined by the primary hypervisor. For example, the primary hypervisor may program memory address translation mappings for the secondary hypervisor. The primary hypervisor and the secondary hypervisor may include their own schedulers.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: March 28, 2017
    Assignee: Qualcomm Incorporated
    Inventors: Erich James Plondke, Lucian Codrescu, Christopher Edward Koob, Piyush Patel, Thomas Andrew Sartorius
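    A sketch in C of the primary/secondary split this entry describes, using address-translation mappings (the example the abstract itself gives) as the shared resource; the structures are hypothetical.
    ```c
    #include <stdint.h>

    typedef struct { uint64_t va_base, pa_base, len; } xlate_mapping;

    typedef struct {
        xlate_mapping mappings[16];  /* written by the primary hypervisor only */
        unsigned      count;
    } shared_resource_settings;

    /* Primary hypervisor: defines the settings of the resource. */
    void primary_program(shared_resource_settings *s, xlate_mapping m) {
        if (s->count < 16)
            s->mappings[s->count++] = m;
    }

    /* Secondary hypervisor: uses, but never redefines, those settings. */
    uint64_t secondary_translate(const shared_resource_settings *s, uint64_t va) {
        for (unsigned i = 0; i < s->count; i++) {
            const xlate_mapping *m = &s->mappings[i];
            if (va >= m->va_base && va < m->va_base + m->len)
                return m->pa_base + (va - m->va_base);
        }
        return 0;  /* unmapped: a real system would fault */
    }
    ```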
  • Publication number: 20160350152
    Abstract: Systems and methods relate to managing shared resources in a multithreaded processor comprising two or more processing threads. Danger levels for the two or more threads are determined, wherein the danger level of a thread is based on a potential failure of the thread to meet a deadline due to unavailability of a shared resource. Priority levels associated with the two or more threads are also determined, wherein the priority level is higher for a thread whose failure to meet a deadline is unacceptable and the priority level is lower for a thread whose failure to meet a deadline is acceptable. The two or more threads are scheduled based at least on the determined danger levels for the two or more threads and priority levels associated with the two or more threads.
    Type: Application
    Filed: September 25, 2015
    Publication date: December 1, 2016
    Inventors: Serag Monier GADELRAB, Christopher Edward KOOB, Simon BOOTH, Aris BALATSOS, Johnny Jone Wai KUAN, Myil RAMKUMAR, Bhupinder Singh PABLA, Sean David SWEENEY, George PATSILARAS