Patents by Inventor Christopher Haywood
Christopher Haywood has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240119001
Abstract: Disclosed are techniques for storing data decompressed from the compressed pages of a memory block when servicing a data access request from a host device of a memory system to the compressed page data, where the memory block has been compressed into multiple compressed pages. A cache buffer may store the decompressed data for a few compressed pages to save decompression memory space. The memory system may keep track of the number of accesses to the decompressed data in the cache and the number of compressed pages that have been decompressed into the cache to calculate a metric associated with the frequency of access to the compressed pages within the memory block. If the metric does not exceed a threshold, additional compressed pages are decompressed into the cache. Otherwise, all the compressed pages within the memory block are decompressed into a separately allocated memory space to reduce data access latency.
Type: Application
Filed: October 6, 2023
Publication date: April 11, 2024
Inventors: Taeksang Song, Christopher Haywood, Evan Lawrence Erickson
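The threshold-driven policy described in this abstract can be sketched roughly as follows. The class name, the metric definition (accesses per decompressed page), and the decision strings are illustrative assumptions, not details taken from the filing:

```python
# Hypothetical sketch of the access-frequency policy from the abstract.
# The metric and threshold semantics are assumptions for illustration.
class CompressedBlockCache:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.cache_accesses = 0      # accesses served against the decompressed cache
        self.pages_decompressed = 0  # compressed pages decompressed into the cache

    def metric(self) -> float:
        # One plausible metric: average accesses per decompressed page.
        if self.pages_decompressed == 0:
            return 0.0
        return self.cache_accesses / self.pages_decompressed

    def on_access(self, page_in_cache: bool) -> str:
        self.cache_accesses += 1
        if page_in_cache:
            return "hit"
        if self.metric() <= self.threshold:
            # Infrequently accessed block: decompress only this page into cache.
            self.pages_decompressed += 1
            return "decompress-to-cache"
        # Frequently accessed block: decompress the whole block into a
        # separately allocated memory space to cut access latency.
        return "decompress-whole-block"
```

The cache-first path saves decompression memory for cold blocks, while hot blocks pay one bulk decompression to avoid repeated per-page work.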
-
Publication number: 20240111449
Abstract: A cascaded memory system includes a first memory module having a primary interface coupled to a memory controller via a first communication channel and a secondary interface coupled to a second memory module via a second communication channel. The first memory module buffers and repeats signals received on the primary and secondary interfaces to enable communications between the memory controller and the second memory module.
Type: Application
Filed: September 13, 2023
Publication date: April 4, 2024
Inventors: Christopher Haywood, Frederick A. Ware
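A minimal behavioral sketch of the cascade, assuming a simple address split: requests below the first module's capacity are served locally, and everything else is repeated on the secondary channel toward the downstream module. The class, capacities, and address re-basing are illustrative assumptions:

```python
# Hypothetical model of cascaded buffering: the first module forwards
# (buffers and repeats) traffic between its controller-facing primary
# interface and its downstream secondary interface.
class CascadedModule:
    def __init__(self, local_capacity: int, downstream=None):
        self.local = {}                  # addresses served by this module
        self.local_capacity = local_capacity
        self.downstream = downstream     # next module on the secondary channel

    def write(self, addr: int, data: bytes) -> None:
        if addr < self.local_capacity:
            self.local[addr] = data
        elif self.downstream is not None:
            # Repeat the request on the secondary interface, re-based to the
            # downstream module's local address space.
            self.downstream.write(addr - self.local_capacity, data)
        else:
            raise IndexError(f"address {addr} beyond cascaded capacity")

    def read(self, addr: int) -> bytes:
        if addr < self.local_capacity:
            return self.local[addr]
        if self.downstream is not None:
            return self.downstream.read(addr - self.local_capacity)
        raise IndexError(f"address {addr} beyond cascaded capacity")
```

From the memory controller's point of view, the two modules present one larger address space behind a single channel.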
-
Publication number: 20240096434
Abstract: Many error correction schemes fail to correct double-bit errors, so a module must be replaced when these double-bit errors occur repeatedly at the same address; replacement prevents data corruption. In an embodiment, the addresses for one of the memory devices exhibiting a single-bit error (but not the other device also exhibiting a single-bit error) are transformed before the internal memory arrays are accessed. This moves one of the error-prone memory cells to a different external (to the module) address, so that only one error-prone bit is accessed by the previously double-bit-error-prone address. Thus, a double-bit error at the original address is remapped into two correctable single-bit errors at different addresses.
Type: Application
Filed: September 27, 2023
Publication date: March 21, 2024
Inventor: Christopher Haywood
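The remapping effect can be illustrated with a toy model. Suppose two devices each have a bad cell at the same internal address; XOR-transforming the address used inside one device (the mask and the transform are assumptions for illustration, not the patent's mechanism) moves its bad cell to a different external address, so no single external address touches both bad bits:

```python
# Toy model: XOR address transform splits a double-bit error into two
# single-bit errors at different external addresses. Mask is illustrative.
XFORM_MASK = 0b0001  # flips one address bit inside the transformed device

def internal_addr(device: int, external_addr: int) -> int:
    # Device 1's addresses are transformed; device 0's are not.
    return external_addr ^ XFORM_MASK if device == 1 else external_addr

def errors_at(external_addr: int, bad_internal_addr: int) -> int:
    # Count devices that hit their bad cell for this external address,
    # assuming both devices' bad cell sits at the same internal address.
    return sum(
        1 for dev in (0, 1)
        if internal_addr(dev, external_addr) == bad_internal_addr
    )
```

Without the transform, the external address equal to the bad internal address would hit both bad cells at once (an uncorrectable double-bit error); with it, each external address sees at most one bad bit, which standard single-bit ECC can correct.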
-
Patent number: 11914863
Abstract: A serial data buffer integrated circuit comprises unidirectional host-side input and output ports, and unidirectional memory-side input and output ports. Scheduling logic generates memory device commands for writing to and reading from a memory device based on a set of host-side input packets received from a memory controller. A unidirectional serial host-side input port receives host-side input packets from the memory controller. A unidirectional serial memory-side output port transmits the memory device commands and the write data to the memory device based on the scheduled timing. A unidirectional serial memory-side input port receives read data from the memory device in response to a read command, and a unidirectional serial host-side output port transmits the read data to the memory controller within the timing constraints of the memory device.
Type: Grant
Filed: July 8, 2022
Date of Patent: February 27, 2024
Assignee: Rambus Inc.
Inventor: Christopher Haywood
-
Publication number: 20240036726
Abstract: A buffer/interface device of a memory node may read and compress fixed-size blocks of data (e.g., pages). The size of each of the resulting compressed blocks of data depends on the data patterns in the original blocks of data. Fixed-size blocks of data are divided into fixed-size sub-blocks (a.k.a. slots) for storing the resulting compressed blocks of data with sub-block granularity. Pointers to the start of compressed pages are maintained at the final level of the memory node page tables in order to allow access to compressed pages. Upon receiving an access to a location within a compressed page, only the slots containing the compressed page need to be read and decompressed. The memory node page table entries may also include a content indicator (e.g., a flag) that indicates whether any page within the block of memory associated with that page table entry is compressed.
Type: Application
Filed: July 10, 2023
Publication date: February 1, 2024
Inventors: Evan Lawrence Erickson, Christopher Haywood
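Slot-granular storage can be sketched as below: a compressed page occupies only as many fixed-size slots as its length requires, and a page-table-like map records its first slot and compressed length so that only those slots are read back. Slot size, names, and the flat table are assumptions for illustration:

```python
# Hypothetical sketch of sub-block (slot) storage for compressed pages.
SLOT_SIZE = 1024  # bytes per slot (assumed)

def slots_needed(compressed_len: int) -> int:
    # Round up to whole slots.
    return (compressed_len + SLOT_SIZE - 1) // SLOT_SIZE

class SlotStore:
    def __init__(self):
        self.slots = []          # backing storage, one entry per slot
        self.page_table = {}     # page id -> (first_slot, compressed_len)

    def store(self, page_id: int, compressed: bytes) -> None:
        first = len(self.slots)
        for i in range(slots_needed(len(compressed))):
            self.slots.append(compressed[i * SLOT_SIZE:(i + 1) * SLOT_SIZE])
        self.page_table[page_id] = (first, len(compressed))

    def load(self, page_id: int) -> bytes:
        # Only the slots holding this page are touched on an access.
        first, length = self.page_table[page_id]
        data = b"".join(self.slots[first:first + slots_needed(length)])
        return data[:length]
```

A well-compressing page thus frees the slots it does not need for other pages, which is the point of sub-block granularity.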
-
Publication number: 20240012565
Abstract: A buffer integrated circuit (IC) chip is disclosed. The buffer IC chip includes host interface circuitry to receive a request from at least one host. The request includes at least one command to perform a memory compression operation on first uncompressed data that is stored in a first memory region. Compression circuitry, in response to the at least one command, compresses the first uncompressed data to first compressed data. The first compressed data is transferred to a second memory region.
Type: Application
Filed: July 6, 2023
Publication date: January 11, 2024
Inventors: Evan Lawrence Erickson, Christopher Haywood, Craig E. Hampel
-
Patent number: 11854658
Abstract: A method for operating a DRAM device. The method includes receiving, in a memory buffer of a first memory module hosted by a computing system, a request from a host controller of the computing system for data stored in RAM of the first memory module. The method includes receiving, with the memory buffer, the data from the RAM in response to the request, and formatting, with the memory buffer, the data into scrambled data using a pseudo-random process. The method includes initiating, with the memory buffer, transfer of the scrambled data to an interface device.
Type: Grant
Filed: March 16, 2022
Date of Patent: December 26, 2023
Assignee: Rambus Inc.
Inventors: Christopher Haywood, David Wang
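One common way to realize pseudo-random scrambling, shown here purely as an assumed illustration (the patent does not specify the generator), is to XOR the data with a keystream from a seeded generator; applying the same transform again restores the original, which is how a receiving interface could descramble:

```python
# Illustrative XOR scrambling with a seeded pseudo-random keystream.
# The generator choice and seed handling are assumptions, not the
# patented mechanism.
import random

def scramble(data: bytes, seed: int) -> bytes:
    rng = random.Random(seed)                      # deterministic keystream
    return bytes(b ^ rng.randrange(256) for b in data)
```

Because XOR is its own inverse, `scramble(scramble(d, s), s) == d` for any seed `s`, so scrambler and descrambler only need to agree on the seed.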
-
Patent number: 11841793
Abstract: Computing devices, methods, and systems for switch-based free memory tracking in data center environments are disclosed. An exemplary switch integrated circuit (IC), which is used in a switched fabric or a network, can include a processing device and a tracking structure that is distributed with at least a second switch IC. The tracking structure tracks free memory units that are accessible in a first set of nodes by the second switch IC. The processing device receives a request for a number of free memory units. It then forwards the request to a node in the first set of nodes that has at least the number of free memory units, forwards the request to the second switch IC that has at least the number of free memory units, or responds with an indication that the request could not be fulfilled.
Type: Grant
Filed: January 20, 2022
Date of Patent: December 12, 2023
Assignee: Rambus Inc.
Inventors: Steven C. Woo, Christopher Haywood, Evan Lawrence Erickson
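The three-way forwarding decision can be sketched as follows, assuming each switch simply knows the free-unit counts of its directly attached nodes; the data structures, the local-first preference, and the tuple return values are illustrative assumptions:

```python
# Hypothetical sketch of the switch's forwarding decision for a
# free-memory-unit request: try a local node, then a peer switch,
# otherwise report failure.
class Switch:
    def __init__(self, local_free: dict, peer=None):
        self.local_free = local_free  # node id -> free memory units
        self.peer = peer              # optional second switch IC

    def total_free(self) -> int:
        return sum(self.local_free.values())

    def request(self, units: int):
        # Prefer a directly attached node that can satisfy the whole request.
        for node, free in self.local_free.items():
            if free >= units:
                self.local_free[node] -= units
                return ("granted", node)
        # Otherwise forward to the peer switch if it plausibly has enough.
        if self.peer is not None and self.peer.total_free() >= units:
            return self.peer.request(units)
        return ("unfulfilled", None)
```

Distributing the tracking structure this way keeps each switch's state small while still letting requests find free memory anywhere in the fabric.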
-
Publication number: 20230376412
Abstract: An integrated circuit device includes a first memory to support address translation between local addresses and fabric addresses, and a processing circuit operatively coupled to the first memory. The processing circuit allocates, on a dynamic basis as a donor, a portion of first local memory of a local server as first far memory for access by a first remote server, or as a requester receives an allocation of second far memory from the first remote server or a second remote server for access by the local server. The processing circuit bridges the access by the first remote server to the allocated portion of first local memory as the first far memory, through the fabric addresses and the address translation supported by the first memory, or bridges the access by the local server to the second far memory, through the address translation supported by the first memory and the fabric addresses.
Type: Application
Filed: October 11, 2021
Publication date: November 23, 2023
Inventors: Evan Lawrence Erickson, Christopher Haywood
-
Publication number: 20230359559
Abstract: A hybrid volatile/non-volatile memory module employs a relatively fast, durable, and expensive dynamic random-access memory (DRAM) cache to store a subset of data from a larger amount of relatively slow and inexpensive nonvolatile memory (NVM). A module controller prioritizes accesses to the DRAM cache for improved speed performance and to minimize programming cycles to the NVM. Data is first written to the DRAM cache, where it can be accessed (written to and read from) without the aid of the NVM. Data is only written to the NVM when that data is evicted from the DRAM cache to make room for additional data. Mapping tables relating NVM addresses to physical addresses are distributed throughout the DRAM cache using cache-line bits that are not used for data.
Type: Application
Filed: May 30, 2023
Publication date: November 9, 2023
Inventors: Frederick A. Ware, John Eric Linstadt, Christopher Haywood
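The cache-first write policy can be sketched as below: writes land only in the DRAM cache, and the NVM is programmed solely on eviction. A simple FIFO eviction stands in for whatever policy the module controller actually uses; all names and the counter are illustrative assumptions:

```python
# Hypothetical model of the hybrid module's write path: DRAM cache absorbs
# writes; NVM program cycles happen only on eviction.
from collections import OrderedDict

class HybridModule:
    def __init__(self, cache_lines: int):
        self.cache = OrderedDict()   # DRAM cache: address -> data
        self.nvm = {}                # backing non-volatile memory
        self.cache_lines = cache_lines
        self.nvm_writes = 0          # programming cycles spent on the NVM

    def write(self, addr: int, data: bytes) -> None:
        self.cache[addr] = data      # no NVM program cycle on the write path
        self.cache.move_to_end(addr)
        if len(self.cache) > self.cache_lines:
            victim, vdata = self.cache.popitem(last=False)  # FIFO victim
            self.nvm[victim] = vdata  # NVM written only on eviction
            self.nvm_writes += 1

    def read(self, addr: int) -> bytes:
        return self.cache[addr] if addr in self.cache else self.nvm[addr]
```

Deferring NVM programming to eviction time both speeds up writes and limits wear on the endurance-constrained NVM.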
-
Publication number: 20230359554
Abstract: A buffer/interface device of a memory node reads a block of data (e.g., a page). As each unit of data (e.g., cache-line sized) of the block is read, it is compared against one or more predefined patterns (e.g., all 0's, all 1's, etc.). If the block (page) is only storing one of the predefined patterns, a flag in the page table entry for the block is set to indicate that the block is only storing one of the predefined patterns. The physical memory the block was occupying may then be deallocated so other data may be stored at those physical memory addresses.
Type: Application
Filed: April 27, 2023
Publication date: November 9, 2023
Inventors: Evan Lawrence Erickson, Christopher Haywood
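A minimal sketch of that scan-and-flag step, assuming a 64-byte unit and two predefined patterns; the unit size, pattern set, and page-table-entry fields are illustrative assumptions:

```python
# Hypothetical pattern-elision sketch: scan a page unit by unit; if the
# whole page is one predefined pattern, record a flag and drop the frame.
UNIT = 64  # bytes, roughly a cache line (assumed)
PATTERNS = (b"\x00" * UNIT, b"\xff" * UNIT)

def detect_pattern(page: bytes):
    # Returns the index of the matching predefined pattern, or None.
    units = [page[i:i + UNIT] for i in range(0, len(page), UNIT)]
    for idx, pat in enumerate(PATTERNS):
        if all(u == pat for u in units):
            return idx
    return None

def scrub_page(page_table_entry: dict, page: bytes) -> None:
    idx = detect_pattern(page)
    if idx is not None:
        # Flag the pattern in the page-table entry; the physical frame can
        # then be deallocated and its addresses reused for other data.
        page_table_entry["pattern"] = idx
        page_table_entry["frame"] = None
```

Reads of a flagged page can then be synthesized from the pattern without touching physical memory at all.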
-
Patent number: 11804277
Abstract: Many error correction schemes fail to correct double-bit errors, so a module must be replaced when these double-bit errors occur repeatedly at the same address; replacement prevents data corruption. In an embodiment, the addresses for one of the memory devices exhibiting a single-bit error (but not the other device also exhibiting a single-bit error) are transformed before the internal memory arrays are accessed. This moves one of the error-prone memory cells to a different external (to the module) address, so that only one error-prone bit is accessed by the previously double-bit-error-prone address. Thus, a double-bit error at the original address is remapped into two correctable single-bit errors at different addresses.
Type: Grant
Filed: April 18, 2022
Date of Patent: October 31, 2023
Assignee: Rambus Inc.
Inventor: Christopher Haywood
-
Patent number: 11803323
Abstract: A cascaded memory system includes a first memory module having a primary interface coupled to a memory controller via a first communication channel and a secondary interface coupled to a second memory module via a second communication channel. The first memory module buffers and repeats signals received on the primary and secondary interfaces to enable communications between the memory controller and the second memory module.
Type: Grant
Filed: April 28, 2020
Date of Patent: October 31, 2023
Assignee: Rambus Inc.
Inventors: Christopher Haywood, Frederick A. Ware
-
Publication number: 20230333989
Abstract: Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring, and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation and yet-lower access-demand pages are optionally moved to a yet-higher-latency memory installation.
Type: Application
Filed: April 25, 2023
Publication date: October 19, 2023
Inventors: Evan Lawrence Erickson, Christopher Haywood, Mark D. Kellam
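The three-way tiering decision enabled by high-resolution access counts reduces to a pair of thresholds; the threshold values and tier names below are illustrative assumptions only:

```python
# Hypothetical tiering decision for a relocated page, based on its
# monitored access count. Thresholds are illustrative.
def place_page(access_count: int, hot_threshold: int = 100,
               cold_threshold: int = 5) -> str:
    if access_count >= hot_threshold:
        return "restore-to-local"        # background-restore to fast memory
    if access_count <= cold_threshold:
        return "demote-to-slower-tier"   # optionally move yet further out
    return "keep-in-monitored-tier"      # stay where monitoring is cheap
```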
-
Publication number: 20230315656
Abstract: In a modular memory system, a memory control component, first and second memory sockets, and data buffer components are all mounted to a printed circuit board. The first and second memory sockets have electrical contacts to electrically engage counterpart electrical contacts of memory modules to be inserted therein, and each of the data buffer components includes a primary data interface electrically coupled to the memory control component, and first and second secondary data interfaces electrically coupled to subsets of the electrical contacts within the first and second memory sockets, respectively.
Type: Application
Filed: March 1, 2023
Publication date: October 5, 2023
Inventors: Frederick A. Ware, Christopher Haywood
-
Publication number: 20230305891
Abstract: A memory allocation device for deployment within a host server computer includes control circuitry, a first interface to a local processing unit and local operating memory disposed within the host computer, and a second interface to a remote computer. The control circuitry allocates a first portion of the local memory to a first process executed by the local processing unit and transmits, to the remote computer via the second interface, a request to allocate, to a second process executed by the local processing unit, a first portion of a remote memory disposed within the remote computer. The control circuitry further receives instructions via the first interface to store data at a memory address within the first portion of the remote memory and transmits those instructions to the remote computer via the second interface.
Type: Application
Filed: January 9, 2023
Publication date: September 28, 2023
Inventors: Christopher Haywood, Evan Lawrence Erickson
-
Patent number: 11714752
Abstract: A hybrid volatile/non-volatile memory module employs a relatively fast, durable, and expensive dynamic random-access memory (DRAM) cache to store a subset of data from a larger amount of relatively slow and inexpensive nonvolatile memory (NVM). A module controller prioritizes accesses to the DRAM cache for improved speed performance and to minimize programming cycles to the NVM. Data is first written to the DRAM cache, where it can be accessed (written to and read from) without the aid of the NVM. Data is only written to the NVM when that data is evicted from the DRAM cache to make room for additional data. Mapping tables relating NVM addresses to physical addresses are distributed throughout the DRAM cache using cache-line bits that are not used for data.
Type: Grant
Filed: March 23, 2022
Date of Patent: August 1, 2023
Assignee: Rambus Inc.
Inventors: Frederick A. Ware, John Eric Linstadt, Christopher Haywood
-
Patent number: 11671522
Abstract: Embodiments of the present invention are directed to memories used in server applications. More specifically, embodiments of the present invention provide a server that has a memory management module connected to the processor using one or more DDR channels. The memory management module is configured to provide the processor local access and network access to memories on a network. There are other embodiments as well.
Type: Grant
Filed: August 27, 2021
Date of Patent: June 6, 2023
Assignee: Rambus Inc.
Inventor: Christopher Haywood
-
Patent number: 11663138
Abstract: Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring, and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation and yet-lower access-demand pages are optionally moved to a yet-higher-latency memory installation.
Type: Grant
Filed: December 6, 2021
Date of Patent: May 30, 2023
Assignee: Rambus Inc.
Inventors: Evan Lawrence Erickson, Christopher Haywood, Mark D. Kellam
-
Patent number: 11657007
Abstract: A multi-path fabric-interconnected system has many nodes and many communication paths from a given source node to a given destination node. A memory allocation device on an originating node (local node) requests an allocation of memory from a remote node (i.e., requests a remote allocation). The memory allocation device on the local node selects the remote node based on one or more performance indicators. The local memory allocation device may select the remote node to provide a remote allocation of memory based on one or more of: latency, availability, multi-path bandwidth, data access patterns (both local and remote), fabric congestion, allowed bandwidth limits, maximum latency limits, and available memory on the remote node.
Type: Grant
Filed: May 28, 2021
Date of Patent: May 23, 2023
Assignee: Rambus Inc.
Inventors: Christopher Haywood, Evan Lawrence Erickson
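Selection over performance indicators can be sketched as a filter-then-rank step over candidate nodes. The indicator subset (latency, bandwidth, free memory), the dict shape, and the ranking order are illustrative assumptions, not the patented selection method:

```python
# Hypothetical remote-node selection: filter candidates that can satisfy
# the allocation within a latency limit, then rank by latency and
# multi-path bandwidth.
def select_remote_node(candidates, size, latency_limit):
    # candidates: list of dicts with 'node', 'latency', 'bandwidth',
    # and 'free_memory' fields (an assumed indicator subset).
    eligible = [c for c in candidates
                if c["free_memory"] >= size and c["latency"] <= latency_limit]
    if not eligible:
        return None  # no remote node can provide this allocation
    # Prefer low latency, then high multi-path bandwidth.
    best = min(eligible, key=lambda c: (c["latency"], -c["bandwidth"]))
    return best["node"]
```

Other indicators from the list above (fabric congestion, access patterns, bandwidth limits) would enter the same filter or ranking in a fuller implementation.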