Patents by Inventor Mattheus Cornelis Antonius Adrianus Heddes

Mattheus Cornelis Antonius Adrianus Heddes has filed for patents on the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180074949
    Abstract: Providing memory bandwidth compression using adaptive compression in central processing unit (CPU)-based systems is disclosed. In one aspect, a compressed memory controller (CMC) is configured to implement two compression mechanisms: a first compression mechanism for compressing small amounts of data (e.g., a single memory line), and a second compression mechanism for compressing large amounts of data (e.g., multiple associated memory lines). When performing a memory write operation using write data that includes multiple associated memory lines, the CMC compresses each of the memory lines separately using the first compression mechanism, and also compresses the memory lines together using the second compression mechanism. If the result of the second compression is smaller than the result of the first compression, the CMC stores the second compression result in the system memory. Otherwise, the first compression result is stored.
    Type: Application
    Filed: September 15, 2016
    Publication date: March 15, 2018
    Inventors: Colin Beaton Verrilli, Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes
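    Illustrative sketch: a minimal Python model of the adaptive choice described in this abstract, with zlib standing in for both compression mechanisms and a 64-byte memory line assumed; the result occupying less space is the one that would be stored in system memory.
        # A behavioral sketch of the adaptive-compression choice; zlib stands in for
        # both mechanisms, whereas real hardware would use dedicated line-level and
        # multi-line compressors.
        import zlib

        LINE_SIZE = 64  # bytes per memory line (assumed for illustration)

        def compress_lines_separately(lines):
            """First mechanism: compress each memory line on its own."""
            return [zlib.compress(line) for line in lines]

        def compress_lines_together(lines):
            """Second mechanism: compress all associated lines as one block."""
            return zlib.compress(b"".join(lines))

        def choose_compression(lines):
            """Pick the representation that occupies less space in system memory."""
            separate = compress_lines_separately(lines)
            together = compress_lines_together(lines)
            if len(together) < sum(len(c) for c in separate):
                return ("together", together)
            return ("separate", separate)

        if __name__ == "__main__":
            # Four associated 64-byte lines with similar content compress better together.
            lines = [bytes([i]) * LINE_SIZE for i in range(4)]
            mode, result = choose_compression(lines)
            print("chosen mechanism:", mode)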
  • Publication number: 20180074893
    Abstract: Providing memory bandwidth compression in chipkill-correct memory architectures is disclosed. In this regard, a compressed memory controller (CMC) introduces a specified error pattern into chipkill-correct error correcting code (ECC) bits to indicate compressed data. To encode data, the CMC applies a compression algorithm to an uncompressed data block to generate a compressed data block. The CMC then generates ECC data for the compressed data block (i.e., an “inner” ECC segment), appends the inner ECC segment to the compressed data block, and generates ECC data for the compressed data block and the inner ECC segment (i.e., an “outer” ECC segment). The CMC then intentionally inverts a specified plurality of bytes of the outer ECC segment (e.g., in portions of the outer ECC segment stored in different physical memory chips by a chipkill-correct ECC mechanism). The outer ECC segment is then appended to the compressed data block and the inner ECC segment.
    Type: Application
    Filed: September 15, 2016
    Publication date: March 15, 2018
    Inventors: Natarajan Vaidhyanathan, Luther James Blackwood, Mattheus Cornelis Antonius Adrianus Heddes, Michael Raymond Trombley, Colin Beaton Verrilli
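    Illustrative sketch: a minimal Python model of the encode path, with simple XOR checksums standing in for the chipkill-correct ECC and the inverted byte positions chosen arbitrarily; only the ordering (compress, inner ECC, outer ECC, intentional inversion) follows the abstract.
        # Encode path sketch: inner ECC over the compressed block, outer ECC over
        # (compressed block + inner ECC), with selected outer-ECC bytes intentionally
        # inverted to signal "compressed". The checksum and byte positions are
        # illustrative assumptions, not the patent's codes.
        import zlib

        MARK_POSITIONS = (0, 2)  # outer-ECC bytes to invert as the compression marker (assumed)

        def toy_ecc(data: bytes, width: int = 4) -> bytes:
            """Stand-in ECC: a rotating XOR checksum; real hardware uses chipkill-correct codes."""
            out = bytearray(width)
            for i, b in enumerate(data):
                out[i % width] ^= b
            return bytes(out)

        def encode(uncompressed: bytes) -> bytes:
            compressed = zlib.compress(uncompressed)
            inner = toy_ecc(compressed)                     # "inner" ECC segment
            outer = bytearray(toy_ecc(compressed + inner))  # "outer" ECC segment
            for pos in MARK_POSITIONS:                      # intentional inversion marks compression
                outer[pos] ^= 0xFF
            return compressed + inner + bytes(outer)

        if __name__ == "__main__":
            block = encode(b"A" * 64)
            print("stored block length:", len(block))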
  • Publication number: 20180018122
    Abstract: Providing memory bandwidth compression using compression indicator (CI) hint directories in a central processing unit (CPU)-based system is disclosed. In this regard, a compressed memory controller provides multiple CI hint directory entries, each providing a plurality of CI hints. The compressed memory controller receives a memory write request comprising write data, determines a compression pattern for the write data, and generates a CI for the write data based on the compression pattern. The compressed memory controller writes the write data to the memory line, and writes the generated CI into one or more ECC bits of the memory line. In parallel, the compressed memory controller determines whether the physical address corresponds to a CI hint directory entry, and, if so, a CI hint of the CI hint directory entry corresponding to the physical address is updated based on the generated CI.
    Type: Application
    Filed: September 28, 2017
    Publication date: January 18, 2018
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan
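    Illustrative sketch: a minimal Python model of the write path, with an assumed directory geometry and an assumed CI rule (whether the line compresses to half its size); the directory update that hardware performs in parallel is modeled sequentially here.
        # Write-path sketch: derive a compression indicator (CI) from the write data,
        # store it alongside the line (modeling the ECC-bit storage), and refresh the
        # matching CI hint directory entry if one covers the physical address.
        import zlib

        LINE_SIZE = 64
        HINTS_PER_ENTRY = 32          # CI hints per directory entry (assumed)

        memory = {}                    # physical address -> (data, ci); ci models the ECC bits
        ci_hint_directory = {}         # entry index -> list of CI hints

        def generate_ci(write_data: bytes) -> int:
            """CI = 1 if the line compresses to half a line or less, else 0 (assumed pattern)."""
            return 1 if len(zlib.compress(write_data)) <= LINE_SIZE // 2 else 0

        def handle_write(phys_addr: int, write_data: bytes) -> None:
            ci = generate_ci(write_data)
            memory[phys_addr] = (write_data, ci)           # data written; CI lands in ECC bits
            line_index = phys_addr // LINE_SIZE
            entry_index = line_index // HINTS_PER_ENTRY
            entry = ci_hint_directory.get(entry_index)     # done in parallel in hardware
            if entry is not None:
                entry[line_index % HINTS_PER_ENTRY] = ci   # refresh the CI hint

        if __name__ == "__main__":
            ci_hint_directory[0] = [0] * HINTS_PER_ENTRY
            handle_write(0x40, b"\x00" * LINE_SIZE)
            print(ci_hint_directory[0][1], memory[0x40][1])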
  • Publication number: 20180018268
    Abstract: Providing memory bandwidth compression using multiple last-level cache (LLC) lines in a central processing unit (CPU)-based system is disclosed. In some aspects, a compressed memory controller (CMC) provides an LLC comprising multiple LLC lines, each providing a plurality of sub-lines the same size as a system cache line. The contents of the system cache line(s) stored within a single LLC line are compressed and stored in system memory within the memory sub-line region corresponding to the LLC line. A master table stores information indicating how the compressed data for an LLC line is stored in system memory by storing an offset value and a length value for each sub-line within each LLC line. By compressing multiple system cache lines together and storing compressed data in a space normally allocated to multiple uncompressed system lines, the CMC enables compression sizes to be smaller than the memory read/write granularity of the system memory.
    Type: Application
    Filed: September 28, 2017
    Publication date: January 18, 2018
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Mark Anthony Rinaldi, Natarajan Vaidhyanathan
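    Illustrative sketch: a minimal Python model of the master-table bookkeeping, assuming four 64-byte sub-lines per LLC line and zlib as the compressor; each sub-line's compressed bytes are packed into the LLC line's memory region and located by the recorded (offset, length) pair.
        # Master-table sketch: compressed data for the sub-lines of one LLC line is
        # packed into the region that would normally hold those sub-lines uncompressed,
        # and the table records an (offset, length) pair per sub-line.
        import zlib
        from dataclasses import dataclass, field

        SUBLINE_SIZE = 64              # system cache line size (assumed)
        SUBLINES_PER_LLC_LINE = 4      # sub-lines per LLC line (assumed)

        @dataclass
        class MasterTableEntry:
            # (offset, length) of each compressed sub-line within the LLC line's region.
            sublines: list = field(default_factory=list)

        def pack_llc_line(sublines):
            """Compress each sub-line and pack it contiguously, recording offsets/lengths."""
            assert len(sublines) == SUBLINES_PER_LLC_LINE
            region = bytearray()
            entry = MasterTableEntry()
            for data in sublines:
                compressed = zlib.compress(data)
                entry.sublines.append((len(region), len(compressed)))
                region.extend(compressed)
            # The packed region must still fit where the uncompressed sub-lines would live.
            assert len(region) <= SUBLINE_SIZE * SUBLINES_PER_LLC_LINE
            return entry, bytes(region)

        def read_subline(entry, region, index):
            """Use the master table to locate and decompress one sub-line."""
            offset, length = entry.sublines[index]
            return zlib.decompress(region[offset:offset + length])

        if __name__ == "__main__":
            lines = [bytes([i]) * SUBLINE_SIZE for i in range(SUBLINES_PER_LLC_LINE)]
            entry, region = pack_llc_line(lines)
            assert read_subline(entry, region, 2) == lines[2]
            print("packed size:", len(region), "of", SUBLINE_SIZE * SUBLINES_PER_LLC_LINE)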
  • Publication number: 20170286001
    Abstract: Providing memory bandwidth compression using compression indicator (CI) hint directories in a central processing unit (CPU)-based system is disclosed. In this regard, a compressed memory controller provides a CI hint directory comprising a plurality of CI hint directory entries, each providing a plurality of CI hints. The compressed memory controller is configured to receive a memory read request comprising a physical address of a memory line, and initiate a memory read transaction comprising a requested read length value. The compressed memory controller is further configured to, in parallel with initiating the memory read transaction, determine whether the physical address corresponds to a CI hint directory entry in the CI hint directory. If so, the compressed memory controller reads a CI hint from the CI hint directory entry of the CI hint directory, and modifies the requested read length value of the memory read transaction based on the CI hint.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan
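    Illustrative sketch: a minimal Python model of the read path, assuming a 2:1 compressed size and a simple directory geometry; on a CI hint directory hit the requested read length is trimmed, otherwise the default length stands.
        # Read-path sketch: the memory read is launched with a default length and, in
        # parallel, the CI hint directory is consulted; a hit adjusts the requested
        # read length according to the hint.
        LINE_SIZE = 64
        HINTS_PER_ENTRY = 32           # CI hints per directory entry (assumed)

        ci_hint_directory = {0: [1] * HINTS_PER_ENTRY}   # 1 = "line is stored compressed"

        def requested_read_length(phys_addr: int, default_length: int = LINE_SIZE) -> int:
            """Return the read length to use for the memory read transaction."""
            line_index = phys_addr // LINE_SIZE
            entry = ci_hint_directory.get(line_index // HINTS_PER_ENTRY)
            if entry is None:
                return default_length                     # no hint: keep the default
            ci_hint = entry[line_index % HINTS_PER_ENTRY]
            # Hint says the line is compressed, so only half a line needs to be
            # fetched (the 2:1 ratio is an illustrative assumption).
            return LINE_SIZE // 2 if ci_hint else default_length

        if __name__ == "__main__":
            print(requested_read_length(0x80))     # covered by the directory entry -> 32
            print(requested_read_length(0x80000))  # directory miss -> 64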
  • Publication number: 20170286214
    Abstract: Providing space-efficient storage for dynamic random access memory (DRAM) cache tags is provided. In one aspect, a DRAM cache management circuit provides a plurality of cache entries, each of which contains a tag storage region, a data storage region, and an error protection region. The DRAM cache management circuit is configured to store data to be cached in the data storage region of each cache entry. The DRAM cache management circuit is also configured to use an error detection code (EDC) instead of an error correcting code (ECC), and to store a tag and the EDC for each cache entry in the error protection region of the cache entry. In this manner, the capacity of a DRAM cache can be increased by avoiding the need for the tag storage region for each cache entry, while still providing error detection for the cache entry.
    Type: Application
    Filed: March 30, 2016
    Publication date: October 5, 2017
    Inventors: Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
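    Illustrative sketch: a minimal Python model of the entry layout, with a CRC standing in for the error detection code and 4-byte tag and EDC fields assumed; the tag rides in the error protection region, so no separate tag storage region is needed.
        # Entry-layout sketch: the per-entry error-protection region carries the tag
        # plus an error-detection code (a CRC here) instead of a full ECC.
        import zlib
        from dataclasses import dataclass

        @dataclass
        class DramCacheEntry:
            data: bytes          # data storage region
            protection: bytes    # error protection region: packed tag + EDC

        def make_entry(tag: int, data: bytes) -> DramCacheEntry:
            edc = zlib.crc32(data + tag.to_bytes(4, "little"))   # EDC over data and tag
            protection = tag.to_bytes(4, "little") + edc.to_bytes(4, "little")
            return DramCacheEntry(data=data, protection=protection)

        def lookup(entry: DramCacheEntry, tag: int):
            """Return the cached data on a tag match with a clean EDC check, else None."""
            stored_tag = int.from_bytes(entry.protection[:4], "little")
            stored_edc = int.from_bytes(entry.protection[4:8], "little")
            if stored_tag != tag:
                return None                                       # cache miss
            if zlib.crc32(entry.data + entry.protection[:4]) != stored_edc:
                return None                                       # detected (uncorrectable) error
            return entry.data

        if __name__ == "__main__":
            e = make_entry(tag=0x1A2B, data=b"\xAB" * 64)
            print(lookup(e, 0x1A2B) is not None, lookup(e, 0xFFFF) is None)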
  • Publication number: 20170286308
    Abstract: Providing memory bandwidth compression using multiple last-level cache (LLC) lines in a central processing unit (CPU)-based system is disclosed. In some aspects, a compressed memory controller (CMC) provides an LLC comprising multiple LLC lines, each providing a plurality of sub-lines the same size as a system cache line. The contents of the system cache line(s) stored within a single LLC line are compressed and stored in system memory within the memory sub-line region corresponding to the LLC line. A master table stores information indicating how the compressed data for an LLC line is stored in system memory by storing an offset value and a length value for each sub-line within each LLC line. By compressing multiple system cache lines together and storing compressed data in a space normally allocated to multiple uncompressed system lines, the CMC enables compression sizes to be smaller than the memory read/write granularity of the system memory.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Mark Anthony Rinaldi, Natarajan Vaidhyanathan
  • Publication number: 20170242793
    Abstract: Providing scalable dynamic random access memory (DRAM) cache management using DRAM cache indicator caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in high-bandwidth memory. The DRAM cache management circuit comprises a DRAM cache indicator cache, which stores master table entries that are read from a master table in a system memory DRAM and that contain DRAM cache indicators. The DRAM cache indicators enable the DRAM cache management circuit to determine whether a memory line in the system memory DRAM is cached in the DRAM cache of high-bandwidth memory, and, if so, in which way of the DRAM cache the memory line is stored.
    Type: Application
    Filed: August 4, 2016
    Publication date: August 24, 2017
    Inventors: Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
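    Illustrative sketch: a minimal Python model of the indicator lookup, with an assumed master-table geometry and a plain integer used as the DRAM cache indicator (a way number, or -1 for "not cached").
        # Lookup sketch: a small DRAM-cache-indicator (DCI) cache holds master-table
        # entries fetched from the system-memory DRAM; each indicator says whether a
        # memory line is resident in the high-bandwidth-memory DRAM cache and in which way.
        LINE_SIZE = 64
        LINES_PER_ENTRY = 16            # memory lines covered by one master-table entry (assumed)

        # Master table in system-memory DRAM: entry index -> list of indicators.
        master_table = {0: [3, -1] + [-1] * (LINES_PER_ENTRY - 2)}

        dci_cache = {}                  # DCI cache: entry index -> master-table entry

        def lookup_indicator(phys_addr: int) -> int:
            """Return the DRAM cache way for the line, or -1 if it is not cached."""
            line_index = phys_addr // LINE_SIZE
            entry_index = line_index // LINES_PER_ENTRY
            entry = dci_cache.get(entry_index)
            if entry is None:                           # DCI-cache miss: read the master table
                entry = master_table[entry_index]       # one extra DRAM access, then cache it
                dci_cache[entry_index] = entry
            return entry[line_index % LINES_PER_ENTRY]

        if __name__ == "__main__":
            print(lookup_indicator(0x00))   # cached in way 3
            print(lookup_indicator(0x40))   # not cached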
  • Patent number: 9740621
    Abstract: Memory controllers employing memory capacity and/or bandwidth compression with next read address prefetching, and related processor-based systems and methods are disclosed. In certain aspects, memory controllers are employed that can provide memory capacity compression. In certain aspects disclosed herein, a next read address prefetching scheme can be used by a memory controller to speculatively prefetch data from system memory at another address beyond the currently accessed address. Thus, when memory data is addressed in the compressed memory, if the next read address is stored in metadata associated with the memory block at the accessed address, the memory data at the next read address can be prefetched by the memory controller so that it is already available if a subsequent read operation issued by a central processing unit (CPU) requests the data at that next read address.
    Type: Grant
    Filed: May 19, 2015
    Date of Patent: August 22, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan, Colin Beaton Verrilli
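    Illustrative sketch: a minimal Python model of the prefetch decision, with the next-read-address metadata and the prefetch buffer represented as plain dictionaries; how the metadata is learned is outside the sketch.
        # Prefetch sketch: each memory block's metadata may record the address read next
        # the last time this block was accessed; on a read, that address is speculatively
        # fetched into a prefetch buffer so a later CPU read can be served from it.
        memory = {0x000: b"block A", 0x040: b"block B", 0x080: b"block C"}
        metadata = {0x000: {"next_read_addr": 0x080}}     # learned from a previous access pattern
        prefetch_buffer = {}

        def read(phys_addr: int) -> bytes:
            # Serve from the prefetch buffer if a speculative fetch already brought the data in.
            if phys_addr in prefetch_buffer:
                return prefetch_buffer.pop(phys_addr)
            data = memory[phys_addr]
            next_addr = metadata.get(phys_addr, {}).get("next_read_addr")
            if next_addr is not None:                     # speculative prefetch of the next address
                prefetch_buffer[next_addr] = memory[next_addr]
            return data

        if __name__ == "__main__":
            read(0x000)                                   # triggers prefetch of 0x080
            print(0x080 in prefetch_buffer)               # True: a later read of 0x080 avoids DRAM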
  • Publication number: 20170212840
    Abstract: Providing scalable dynamic random access memory (DRAM) cache management using tag directory caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in a high-bandwidth memory. The DRAM cache management circuit comprises a tag directory cache and a tag directory cache directory. The tag directory cache stores tags of frequently accessed cache lines in the DRAM cache, while the tag directory cache directory stores tags for the tag directory cache. The DRAM cache management circuit uses the tag directory cache and the tag directory cache directory to determine whether data associated with a memory address is cached in the DRAM cache of the high-bandwidth memory. Based on the tag directory cache and the tag directory cache directory, the DRAM cache management circuit may determine whether a memory operation can be performed using the DRAM cache and/or a system memory DRAM.
    Type: Application
    Filed: June 24, 2016
    Publication date: July 27, 2017
    Inventors: Hien Minh Le, Thuong Quang Truong, Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
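    Illustrative sketch: a minimal Python model of the two-level lookup, with an assumed grouping of cache-line tags into tag directory cache entries; the tag directory cache directory records which groups the tag directory cache currently holds.
        # Two-level lookup sketch: the tag directory cache (TDC) holds tags of frequently
        # accessed DRAM-cache lines, and the TDC directory says which tag groups are
        # currently resident in the TDC.
        LINE_SIZE = 64
        LINES_PER_GROUP = 8             # cache lines whose tags share one TDC group (assumed)

        tdc_directory = {5}             # tag groups currently resident in the TDC
        tag_directory_cache = {         # group -> {line index: tag} for resident groups
            5: {40: 0x12, 41: 0x34},
        }

        def probe(phys_addr: int, tag: int) -> str:
            """Decide where the access can be served, using only on-chip state when possible."""
            line_index = phys_addr // LINE_SIZE
            group = line_index // LINES_PER_GROUP
            if group not in tdc_directory:
                return "consult full tag directory in DRAM"   # TDC cannot answer
            if tag_directory_cache[group].get(line_index) == tag:
                return "hit in high-bandwidth-memory DRAM cache"
            return "miss: read system memory DRAM"

        if __name__ == "__main__":
            print(probe(40 * LINE_SIZE, 0x12))   # hit
            print(probe(41 * LINE_SIZE, 0x99))   # tag mismatch -> miss
            print(probe(100 * LINE_SIZE, 0x12))  # group not tracked -> fall back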
  • Publication number: 20160224241
    Abstract: Providing memory bandwidth compression using back-to-back read operations by compressed memory controllers (CMCs) in a central processing unit (CPU)-based system is disclosed. In this regard, in some aspects, a CMC is configured to receive a memory read request to a physical address in a system memory, and read a compression indicator (CI) for the physical address from error correcting code (ECC) bits of a first memory block in a memory line associated with the physical address. Based on the CI, the CMC determines whether the first memory block comprises compressed data. If not, the CMC performs a back-to-back read of one or more additional memory blocks of the memory line in parallel with returning the first memory block. Some aspects may further improve memory access latency by writing compressed data to each of a plurality of memory blocks of the memory line, rather than only to the first memory block.
    Type: Application
    Filed: September 3, 2015
    Publication date: August 4, 2016
    Inventors: Colin Beaton Verrilli, Mattheus Cornelis Antonius Adrianus Heddes, Brian Joel Schuh, Michael Raymond Trombley, Natarajan Vaidhyanathan
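    Illustrative sketch: a minimal Python model of the read flow, assuming a two-block memory line and a one-bit CI carried with the first block; a set CI ends the access after one block, otherwise the remaining block is fetched back-to-back.
        # Read-flow sketch: the first block's ECC bits carry a compression indicator (CI);
        # if the block holds the whole line compressed, the access finishes early,
        # otherwise the remaining blocks are read back-to-back while the first block is
        # already being returned in real hardware.
        from dataclasses import dataclass

        BLOCK_SIZE = 32
        BLOCKS_PER_LINE = 2

        @dataclass
        class MemoryBlock:
            data: bytes
            ci: int          # compression indicator carried in the block's ECC bits

        def read_line(blocks):
            """Return the blocks needed to reconstruct the memory line, plus the access count."""
            first = blocks[0]
            if first.ci == 1:
                # Whole line fits compressed in the first block: one DRAM access suffices.
                return [first], 1
            # Uncompressed: fetch the remaining blocks back-to-back.
            rest = blocks[1:BLOCKS_PER_LINE]
            return [first] + rest, BLOCKS_PER_LINE

        if __name__ == "__main__":
            compressed_line = [MemoryBlock(b"\x11" * BLOCK_SIZE, ci=1),
                               MemoryBlock(b"\x00" * BLOCK_SIZE, ci=0)]
            plain_line = [MemoryBlock(b"\x22" * BLOCK_SIZE, ci=0),
                          MemoryBlock(b"\x33" * BLOCK_SIZE, ci=0)]
            print(read_line(compressed_line)[1], read_line(plain_line)[1])   # 1 2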
  • Publication number: 20150339237
    Abstract: Memory controllers employing memory capacity and/or bandwidth compression with next read address prefetching, and related processor-based systems and methods are disclosed. In certain aspects, memory controllers are employed that can provide memory capacity compression. In certain aspects disclosed herein, a next read address prefetching scheme can be used by a memory controller to speculatively prefetch data from system memory at another address beyond the currently accessed address. Thus, when memory data is addressed in the compressed memory, if the next read address is stored in metadata associated with the memory block at the accessed address, the memory data at the next read address can be prefetched by the memory controller so that it is already available if a subsequent read operation issued by a central processing unit (CPU) requests the data at that next read address.
    Type: Application
    Filed: May 19, 2015
    Publication date: November 26, 2015
    Inventors: Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan, Colin Beaton Verrilli
  • Publication number: 20150339228
    Abstract: Aspects disclosed herein include memory controllers employing memory capacity compression, and related processor-based systems and methods. In certain aspects, compressed memory controllers are employed that can provide memory capacity compression. In some aspects, a line-based memory capacity compression scheme can be employed where additional translation of a physical address (PA) to a physical buffer address is performed to allow compressed data to be stored in a system memory at the physical buffer address for efficient compressed data storage. A translation lookaside buffer (TLB) may also be employed to store TLB entries comprising PA tags corresponding to a physical buffer address in the system memory to more efficiently perform the translation of the PA to the physical buffer address in the system memory. In certain aspects, a line-based memory capacity compression scheme, a page-based memory capacity compression scheme, or a hybrid line-page-based memory capacity compression scheme can be employed.
    Type: Application
    Filed: May 19, 2015
    Publication date: November 26, 2015
    Inventors: Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan, Colin Beaton Verrilli
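    Illustrative sketch: a minimal Python model of the extra PA-to-physical-buffer-address translation with a small LRU TLB in front of the full table; table organization and sizes are assumptions, and the compression itself is omitted.
        # Translation sketch: a physical address (PA) is translated to a physical buffer
        # address (PBA) where the compressed line lives, with a small TLB of recent
        # PA-tagged entries consulted before the full translation table in memory.
        from collections import OrderedDict

        LINE_SIZE = 64

        translation_table = {0x000: 0x1000, 0x040: 0x1010}   # PA -> PBA (compressed buffer)
        tlb = OrderedDict()                                   # small PA-tagged cache of entries
        TLB_CAPACITY = 2

        def pa_to_pba(pa: int) -> int:
            """Translate a PA to the physical buffer address of its compressed data."""
            line_pa = pa - (pa % LINE_SIZE)
            if line_pa in tlb:
                tlb.move_to_end(line_pa)                      # TLB hit: refresh LRU position
                return tlb[line_pa] + (pa % LINE_SIZE)
            pba = translation_table[line_pa]                  # TLB miss: walk the table in memory
            tlb[line_pa] = pba
            if len(tlb) > TLB_CAPACITY:
                tlb.popitem(last=False)                       # evict the least recently used entry
            return pba + (pa % LINE_SIZE)

        if __name__ == "__main__":
            print(hex(pa_to_pba(0x004)))   # miss, then cached
            print(hex(pa_to_pba(0x004)))   # hit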
  • Publication number: 20150339239
    Abstract: Providing memory bandwidth compression using compressed memory controllers (CMCs) in a central processing unit (CPU)-based system is disclosed. In this regard, in some aspects, a CMC is configured to receive a memory read request to a physical address in a system memory, and read a compression indicator (CI) for the physical address from a master directory and/or from error correcting code (ECC) bits of the physical address. Based on the CI, the CMC determines a number of memory blocks to be read for the memory read request, and reads the determined number of memory blocks. In some aspects, a CMC is configured to receive a memory write request to a physical address in the system memory, and generate a CI for write data based on a compression pattern of the write data. The CMC updates the master directory and/or the ECC bits of the physical address with the generated CI.
    Type: Application
    Filed: May 20, 2015
    Publication date: November 26, 2015
    Inventors: Mattheus Cornelis Antonius Adrianus Heddes, Natarajan Vaidhyanathan, Colin Beaton Verrilli
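    Illustrative sketch: a minimal Python model of both paths, assuming a two-block line and a CI rule of "fits in one block"; the master directory here stands in for the ECC-bit copy of the CI as well.
        # Sketch of both paths: on a write the CI is derived from how well the data
        # compresses and recorded in the master directory; on a read the CI determines
        # how many memory blocks actually need to be fetched.
        import zlib

        BLOCK_SIZE = 32
        BLOCKS_PER_LINE = 2

        memory_blocks = {}        # physical address -> list of blocks for the line
        master_directory = {}     # physical address -> CI

        def write_line(phys_addr: int, data: bytes) -> None:
            compressed = zlib.compress(data)
            if len(compressed) <= BLOCK_SIZE:                      # compression pattern: fits one block
                master_directory[phys_addr] = 1
                memory_blocks[phys_addr] = [compressed.ljust(BLOCK_SIZE, b"\x00")]
            else:
                master_directory[phys_addr] = 0
                memory_blocks[phys_addr] = [data[i:i + BLOCK_SIZE]
                                            for i in range(0, len(data), BLOCK_SIZE)]

        def read_line(phys_addr: int):
            """Read only as many memory blocks as the CI says are needed."""
            ci = master_directory[phys_addr]
            blocks_needed = 1 if ci == 1 else BLOCKS_PER_LINE
            return memory_blocks[phys_addr][:blocks_needed], blocks_needed

        if __name__ == "__main__":
            write_line(0x0, b"\x00" * (BLOCK_SIZE * BLOCKS_PER_LINE))   # compresses to one block
            write_line(0x40, bytes(range(64)))                          # does not compress well
            print(read_line(0x0)[1], read_line(0x40)[1])                # 1 2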