Patents by Inventor Hongzhong Zheng

Hongzhong Zheng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143173
    Abstract: A memory module includes a memory array, an interface, and a controller. The memory array includes an array of memory cells and is configured as a dual in-line memory module (DIMM). The DIMM includes a plurality of connections that have been repurposed from a standard DIMM pin-out configuration to convey the operational status of the memory module to a host device. The interface is coupled to the memory array and the plurality of connections of the DIMM to interface the memory array to the host device. The controller is coupled to the memory array and the interface, controls at least one of a refresh operation, an error-correction operation, a memory scrubbing operation, and a wear-leveling operation of the memory array, and interfaces with the host device.
    Type: Application
    Filed: January 9, 2024
    Publication date: May 2, 2024
    Inventors: Mu-Tien CHANG, Dimin NIU, Hongzhong ZHENG, Sun Young LIM, Indong KIM, Jangseok CHOI
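
The status-reporting scheme above lends itself to a small host-side decoder. Below is a minimal Python sketch of decoding a status word sampled from such repurposed pins; the bit assignments and names are hypothetical illustrations, not taken from the publication.

```python
# Hypothetical status bits; the publication does not specify an encoding.
REFRESH_ACTIVE = 1 << 0   # module is performing a refresh cycle
ECC_EVENT      = 1 << 1   # an error-correction event occurred
SCRUB_ACTIVE   = 1 << 2   # background memory scrubbing in progress
WEAR_LEVELING  = 1 << 3   # wear-level management in progress

def decode_status(pins: int) -> list[str]:
    """Translate a status word sampled from the repurposed DIMM pins
    into human-readable flags for the host."""
    names = {
        REFRESH_ACTIVE: "refresh",
        ECC_EVENT: "ecc-event",
        SCRUB_ACTIVE: "scrub",
        WEAR_LEVELING: "wear-leveling",
    }
    return [name for bit, name in names.items() if pins & bit]

print(decode_status(0b0101))  # ['refresh', 'scrub']
```
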
  • Publication number: 20240118835
    Abstract: An SSD includes an MRAM, a NAND memory, and an SSD controller. The SSD controller is configured to receive first data from a host machine, save the first data to an SSD data buffer, fetch the first data from the SSD data buffer and write the first data to the MRAM via an MRAM controller, determine, by a data allocation circuit based on a characteristic of the first data, whether to save the first data to the MRAM or the NAND memory, and, in response to determining to save the first data to the NAND memory, read the first data from the MRAM, write the first data to the NAND memory, and erase the first data from the MRAM.
    Type: Application
    Filed: April 30, 2023
    Publication date: April 11, 2024
    Inventors: Fei XUE, Wentao WU, Jiajing JIN, Xiang GAO, Jifeng WANG, Yuming XU, Jiu HENG, Hongzhong ZHENG
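
The staged write path above (data buffer, then MRAM, then optionally NAND) can be sketched in a few lines of Python. This is illustrative only: the class, the hot/cold hint standing in for the "characteristic of the first data", and the method names are assumptions, not the publication's design.

```python
# Illustrative model of the staged write path: buffer -> MRAM -> NAND.
class StagedSSD:
    def __init__(self):
        self.buffer = {}   # SSD data buffer (host-facing)
        self.mram = {}     # fast, byte-addressable staging tier
        self.nand = {}     # dense bulk storage

    def host_write(self, lba: int, data: bytes, hot: bool) -> None:
        self.buffer[lba] = data                  # 1. land in the data buffer
        self.mram[lba] = self.buffer.pop(lba)    # 2. commit to MRAM
        # 3. allocation decision; a hot/cold hint stands in for the
        #    data allocation circuit's analysis of the data
        if not hot:
            self.nand[lba] = self.mram.pop(lba)  # migrate cold data to NAND

ssd = StagedSSD()
ssd.host_write(0, b"cold log page", hot=False)
ssd.host_write(1, b"hot metadata", hot=True)
print(sorted(ssd.nand), sorted(ssd.mram))  # [0] [1]
```
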
  • Patent number: 11947961
    Abstract: According to some example embodiments of the present disclosure, in a method for a memory lookup mechanism in a high-bandwidth memory system, the method includes: using a memory die to conduct a multiplication operation using a lookup table (LUT) methodology by accessing a LUT stored on the memory die, which includes floating point operation results; sending, by the memory die, a result of the multiplication operation to a logic die including a processor and a buffer; and conducting, by the logic die, a matrix multiplication operation using computation units.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: April 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Peng Gu, Krishna T. Malladi, Hongzhong Zheng
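
The LUT methodology above replaces on-die arithmetic with table lookups. A minimal Python sketch follows, using small integer operands as a stand-in for the floating-point result tables the abstract describes; the 4-bit operand width and all names are illustrative choices.

```python
# Precompute products of all small operand pairs once, as a memory die
# might store them, then answer multiply requests by lookup.
WIDTH = 4
LUT = {(a, b): a * b for a in range(2**WIDTH) for b in range(2**WIDTH)}

def lut_multiply(a: int, b: int) -> int:
    """Return a*b by table lookup rather than arithmetic."""
    return LUT[(a, b)]

# The logic die would accumulate such partial products into a matrix
# multiplication; a single dot product stands in for that here.
row, col = [1, 2, 3], [4, 5, 6]
print(sum(lut_multiply(x, y) for x, y in zip(row, col)))  # 32
```
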
  • Patent number: 11947471
    Abstract: A memory module includes at least two memory devices. Each of the memory devices performs verify operations after attempted writes to its respective memory core. When a write is unsuccessful, each memory device stores information about the unsuccessful write in an internal write-retry buffer. A write operation may have been unsuccessful for only one memory device and not for the other memory devices on the memory module. When the memory module is instructed, both memory devices on the memory module can retry the unsuccessful memory write operations concurrently, even though the unsuccessful write operations were to different addresses.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: April 2, 2024
    Assignee: Rambus Inc.
    Inventors: Hongzhong Zheng, Brent Haukness
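
Below is a toy Python model of the per-device write-retry buffers and the module-wide concurrent retry described above. The transient-failure simulation and all class names are hypothetical.

```python
class MemoryDevice:
    def __init__(self, fail_once_addrs):
        self.core = {}
        self._fail_once = set(fail_once_addrs)  # simulated transient failures
        self.retry_buffer = []                  # unsuccessful writes land here

    def write(self, addr, data):
        if addr in self._fail_once:             # write-verify failed
            self._fail_once.discard(addr)
            self.retry_buffer.append((addr, data))
        else:
            self.core[addr] = data

class MemoryModule:
    def __init__(self, devices):
        self.devices = devices

    def retry_all(self):
        # One instruction: every device drains its own buffer, even
        # though the failed addresses differ between devices.
        for dev in self.devices:
            pending, dev.retry_buffer = dev.retry_buffer, []
            for addr, data in pending:
                dev.write(addr, data)

mod = MemoryModule([MemoryDevice({0x10}), MemoryDevice({0x20})])
mod.devices[0].write(0x10, b"a")   # fails on device 0 only
mod.devices[1].write(0x20, b"b")   # fails on device 1 only
mod.retry_all()
print(mod.devices[0].core, mod.devices[1].core)  # {16: b'a'} {32: b'b'}
```
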
  • Publication number: 20240104360
    Abstract: Near-memory processing systems for graph neural network processing can include a central core coupled to one or more memory units. The memory units can include one or more controllers and a plurality of memory devices. The system can be configured to offload aggregation, combination, and similar operations from the central core to the controllers of the one or more memory units. The central core can sample the graph neural network and schedule memory accesses for execution by the one or more memory units. The central core can also schedule aggregation, combination, or similar operations associated with one or more memory accesses for execution by the controller. The controller can access data in accordance with the data access requests from the central core. One or more computation units of the controller can also execute the aggregation, combination, or similar operations associated with the one or more memory accesses.
    Type: Application
    Filed: December 2, 2020
    Publication date: March 28, 2024
    Inventors: Tianchan GUAN, Dimin NIU, Hongzhong ZHENG, Shuangchen LI
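
The offload pattern above (the central core samples and schedules; the memory-side controller gathers and reduces near the data) can be illustrated with a mean-aggregation example. Everything below, including the feature table and the choice of mean as the reduction, is an assumption for illustration.

```python
features = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 6.0]}  # node features
adjacency = {0: [1, 2]}  # node 0's sampled neighbors

def controller_aggregate(node_ids):
    """Runs on the memory-side controller: fetch feature rows locally
    and reduce them, returning one vector instead of many rows."""
    rows = [features[n] for n in node_ids]                 # local accesses
    return [sum(col) / len(rows) for col in zip(*rows)]    # mean-aggregate

# Central core: schedule one aggregation instead of pulling raw rows.
print(controller_aggregate(adjacency[0]))  # [4.0, 5.0]
```
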
  • Patent number: 11940922
    Abstract: A method of processing in-memory commands in a high-bandwidth memory (HBM) system includes sending a function-in-HBM (FIM) instruction to the HBM by an HBM memory controller of a GPU. A logic component of the HBM receives the FIM instruction and coordinates the instruction's execution using the controller, an ALU, and an SRAM located on the logic component.
    Type: Grant
    Filed: December 14, 2022
    Date of Patent: March 26, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mu-Tien Chang, Krishna T. Malladi, Dimin Niu, Hongzhong Zheng
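
A rough Python sketch of a function-in-HBM dispatch loop follows: the host-side controller issues an opcode with operand addresses, and the logic die applies its ALU to local SRAM. The opcodes and memory layout are hypothetical, not the patent's instruction set.

```python
sram = {0x0: 7, 0x4: 5}   # scratch memory on the logic component

def execute_fim(opcode: str, dst: int, src0: int, src1: int) -> None:
    """Logic-die side: decode one FIM instruction and apply the ALU."""
    alu = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}
    sram[dst] = alu[opcode](sram[src0], sram[src1])

# Issued by the GPU's HBM memory controller:
execute_fim("ADD", dst=0x8, src0=0x0, src1=0x4)
print(sram[0x8])  # 12
```
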
  • Publication number: 20240095179
    Abstract: A memory management method of a data processing system is provided. The memory management method includes: creating a first memory zone and a second memory zone related to a first node of a first server, wherein the first server is located in the data processing system, and the first node includes a processor and a first memory; mapping the first memory zone to the first memory; and mapping the second memory zone to a second memory of a second server, wherein the second server is located in the data processing system, and the processor is configured to access the second memory of the second server through an interface circuit of the first server and through an interface circuit of the second server.
    Type: Application
    Filed: December 13, 2022
    Publication date: March 21, 2024
    Inventors: DIMIN NIU, YIJIN GUAN, TIANCHAN GUAN, SHUANGCHEN LI, HONGZHONG ZHENG
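
The two-zone mapping above can be mimicked with one locally backed zone and one backed by a remote server reached through a stand-in for the paired interface circuits. The Python below is a sketch under those assumptions; all names are illustrative.

```python
class InterfaceCircuit:
    """Stands in for the interface-circuit pair between the servers."""
    def __init__(self, peer_mem):
        self.peer_mem = peer_mem

    def remote_read(self, addr):
        return self.peer_mem[addr]

class Node:
    def __init__(self, local_mem, remote_iface):
        self.zones = {
            "zone1": ("local", local_mem),      # maps to the first memory
            "zone2": ("remote", remote_iface),  # maps to the second server
        }

    def read(self, zone, addr):
        kind, backing = self.zones[zone]
        if kind == "local":
            return backing[addr]
        return backing.remote_read(addr)        # cross-server access

server2_mem = {0x0: "remote-data"}
node = Node(local_mem={0x0: "local-data"},
            remote_iface=InterfaceCircuit(server2_mem))
print(node.read("zone1", 0x0), node.read("zone2", 0x0))
```
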
  • Publication number: 20240094922
    Abstract: A data processing system includes a first server and a second server. The first server includes a first processor group, a first memory space and a first interface circuit. The second server includes a second processor group, a second memory space and a second interface circuit. The first memory space and the second memory space are allocated to the first processor group. The first processor group is configured to perform memory error detection to generate an error log corresponding to a memory error. When the memory error occurs in the second memory space, the first interface circuit is configured to send the error log to the second interface circuit, and the second processor group is configured to log the memory error according to the error log received by the second interface circuit. The data processing system is thereby capable of realizing a memory-reliability architecture that supports operations across different servers.
    Type: Application
    Filed: December 12, 2022
    Publication date: March 21, 2024
    Inventors: DIMIN NIU, TIANCHAN GUAN, YIJIN GUAN, SHUANGCHEN LI, HONGZHONG ZHENG
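
The cross-server error-logging flow above can be sketched as follows: the detecting processor group logs errors in its own memory space locally and forwards error logs for remote memory over the interface-circuit link. The address-ownership rule and names are illustrative assumptions.

```python
class Server:
    def __init__(self, name, owned_range):
        self.name = name
        self.owned = owned_range   # addresses whose errors this server logs
        self.error_log = []
        self.peer = None           # the linked peer server's interface

    def detect_error(self, addr, entry):
        if addr in self.owned:
            self.error_log.append(entry)       # local memory: log here
        else:
            self.peer.error_log.append(entry)  # remote memory: forward log

s1 = Server("s1", range(0, 100))
s2 = Server("s2", range(100, 200))
s1.peer, s2.peer = s2, s1
s1.detect_error(150, "ECC error at 150")  # error in s2's memory space
print(s2.error_log)                       # ['ECC error at 150']
```
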
  • Patent number: 11934669
    Abstract: A processor includes a plurality of memory units, each of the memory units including a plurality of memory cells, wherein each of the memory units is configurable to operate as memory, as a computation unit, or as a hybrid memory-computation unit.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: March 19, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dimin Niu, Shuangchen Li, Bob Brennan, Krishna T. Malladi, Hongzhong Zheng
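
A minimal sketch of one mode-configurable unit, assuming the three modes named in the abstract (memory, computation, hybrid); the in-unit computation is reduced to a sum purely for illustration.

```python
from enum import Enum

class Mode(Enum):
    MEMORY = "memory"
    COMPUTE = "compute"
    HYBRID = "hybrid"

class MemoryUnit:
    def __init__(self, cells=8):
        self.mode = Mode.MEMORY
        self.cells = [0] * cells

    def configure(self, mode: Mode) -> None:
        self.mode = mode

    def store(self, i: int, v: int) -> None:
        assert self.mode in (Mode.MEMORY, Mode.HYBRID)
        self.cells[i] = v

    def compute_sum(self) -> int:
        assert self.mode in (Mode.COMPUTE, Mode.HYBRID)
        return sum(self.cells)   # stand-in for in-unit computation

u = MemoryUnit()
u.configure(Mode.HYBRID)   # serve as a hybrid memory-computation unit
u.store(0, 3); u.store(1, 4)
print(u.compute_sum())     # 7
```
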
  • Publication number: 20240078036
    Abstract: A memory module can include a hybrid media controller coupled to a volatile memory, a non-volatile memory, a non-volatile memory buffer, and a set of memory-mapped input/output (MMIO) registers. The hybrid media controller can be configured for reading and writing data to a volatile memory of a memory-mapped space of a memory module. The hybrid media controller can also be configured for reading and writing bulk data to a non-volatile memory of the memory-mapped space. The hybrid media controller can also be configured for reading and writing data of a random-access granularity to the non-volatile memory of the memory-mapped space. The hybrid media controller can also be configured for self-indexed movement of data between the non-volatile memory and the volatile memory of the memory module.
    Type: Application
    Filed: December 24, 2020
    Publication date: March 7, 2024
    Inventors: Dimin NIU, Tianchan GUAN, Hongzhong ZHENG, Shuangchen LI
  • Patent number: 11921656
    Abstract: An apparatus may include a heterogeneous computing environment that may be controlled, at least in part, by a task scheduler, in which the heterogeneous computing environment may include: a processing unit having fixed logical circuits configured to execute instructions; a reprogrammable processing unit having reprogrammable logical circuits configured to execute instructions that include instructions to control processing-in-memory functionality; and a stack of high-bandwidth memory dies, each of which may be configured to store data and to provide processing-in-memory functionality controllable by the reprogrammable processing unit, such that the reprogrammable processing unit is at least partially stacked with the high-bandwidth memory dies. The task scheduler may be configured to schedule computational tasks between the processing unit and the reprogrammable processing unit.
    Type: Grant
    Filed: January 17, 2022
    Date of Patent: March 5, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Krishna T. Malladi, Hongzhong Zheng
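
A toy version of the scheduling decision above: tasks that benefit from processing-in-memory go to the reprogrammable unit stacked with the HBM dies, and the rest run on fixed logic. The task attributes and dispatch rule are illustrative assumptions, not the patent's scheduling policy.

```python
# Hypothetical task descriptors; "needs_pim" stands in for whatever
# criteria the task scheduler actually uses.
tasks = [
    {"name": "dense-gemm", "needs_pim": False},
    {"name": "scatter-update", "needs_pim": True},
]

def schedule(task) -> str:
    """Dispatch a task to the fixed-logic processing unit or to the
    reprogrammable, PIM-capable processing unit."""
    return "reprogrammable-pu" if task["needs_pim"] else "fixed-pu"

for t in tasks:
    print(t["name"], "->", schedule(t))
```
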
  • Patent number: 11921642
    Abstract: A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: March 5, 2024
    Assignee: RAMBUS INC.
    Inventors: Trung Diep, Hongzhong Zheng
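
The address split above and the tag-indexed tables can be modeled directly; in this sketch the first, second, and third portions play the roles of tag, line index, and byte offset. The field widths are arbitrary choices, not the patent's.

```python
OFFSET_BITS, INDEX_BITS = 6, 10   # widths of the third and second portions

def split(pa: int):
    offset = pa & ((1 << OFFSET_BITS) - 1)                  # third portion
    index = (pa >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)   # second portion
    tag = pa >> (OFFSET_BITS + INDEX_BITS)                  # first portion
    return tag, index, offset

# One table: entries indexed by the first portion record which second
# portions (cache-line indices) hold stored information for that tag.
table: dict[int, set[int]] = {}

def record_fill(pa: int) -> None:
    tag, index, _ = split(pa)
    table.setdefault(tag, set()).add(index)

record_fill(0x12345678)
print(split(0x12345678), table)
```
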
  • Patent number: 11921638
    Abstract: According to some embodiments of the present invention, there is provided a hybrid cache memory for a processing device having a host processor, the hybrid cache memory comprising: a high bandwidth memory (HBM) configured to store host data; a non-volatile memory (NVM) physically integrated with the HBM in a same package and configured to store a copy of the host data at the HBM; and a cache controller configured to be in bi-directional communication with the host processor, and to manage data transfer between the HBM and NVM and, in response to a command received from the host processor, to manage data transfer between the hybrid cache memory and the host processor.
    Type: Grant
    Filed: June 6, 2022
    Date of Patent: March 5, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Krishna T. Malladi, Hongzhong Zheng
  • Publication number: 20240069954
    Abstract: The present application discloses a computing system and a memory sharing method for a computing system. The computing system includes a first host, a second host, a first memory extension device and a second memory extension device. The first memory extension device is coupled to the first host. The second memory extension device is coupled to the second host and the first memory extension device. The first host accesses a memory space of a memory of the second host through the first memory extension device and the second memory extension device according to a first physical address of the first host.
    Type: Application
    Filed: May 24, 2023
    Publication date: February 29, 2024
    Inventors: TIANCHAN GUAN, DIMIN NIU, YIJIN GUAN, HONGZHONG ZHENG
  • Publication number: 20240069754
    Abstract: The present application discloses a computing system and an associated method. The computing system includes a first host, a second host, a first memory extension device and a second memory extension device. The first host includes a first memory, and the second host includes a second memory. The first host has a plurality of first memory addresses corresponding to a plurality of memory spaces of the first memory, and a plurality of second memory addresses corresponding to a plurality of memory spaces of the second memory. The first memory extension device is coupled to the first host. The second memory extension device is coupled to the second host and the first memory extension device. The first host accesses the plurality of memory spaces of the second memory through the first memory extension device and the second memory extension device.
    Type: Application
    Filed: December 12, 2022
    Publication date: February 29, 2024
    Inventors: TIANCHAN GUAN, YIJIN GUAN, DIMIN NIU, HONGZHONG ZHENG
  • Publication number: 20240069755
    Abstract: The present application provides a computer system, a memory expansion device and a method for use in the computer system. The computer system includes multiple hosts and multiple memory expansion devices; the memory expansion devices correspond to the hosts in a one-to-one manner. Each host includes a CPU and a memory; each memory expansion device includes a first interface and multiple second interfaces. The first interface is configured to allow each memory expansion device to communicate with the corresponding CPU via a first coherence interconnection protocol, and the second interface is configured to allow each memory expansion device to communicate with a portion of memory expansion devices via a second coherence interconnection protocol. Any two memory expansion devices communicate with each other via at least two different paths, and the number of memory expansion devices that at least one of the two paths passes through is not more than one.
    Type: Application
    Filed: December 12, 2022
    Publication date: February 29, 2024
    Inventors: YIJIN GUAN, TIANCHAN GUAN, DIMIN NIU, HONGZHONG ZHENG
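
The connectivity requirement above can be checked on a candidate topology: every pair of expansion devices should be reachable by a path passing through no more than one intermediate device (the second, redundant path may be longer). The 4-device ring below and the check itself are illustrative, not the patent's topology.

```python
import itertools

# Adjacency between memory expansion devices; a ring is just one
# illustrative candidate topology.
links = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}

def short_paths(a: int, b: int):
    """Paths from a to b through at most one intermediate device."""
    direct = [[a, b]] if b in links[a] else []
    one_hop = [[a, m, b] for m in links[a] if b in links[m]]
    return direct + one_hop

# Check the short-path condition for every device pair.
for a, b in itertools.combinations(links, 2):
    assert short_paths(a, b), f"no short path between {a} and {b}"
print({(a, b): short_paths(a, b)
       for a, b in itertools.combinations(links, 2)})
```
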
  • Publication number: 20240063200
    Abstract: The present disclosure relates to a hybrid bonding based integrated circuit (HBIC) device and its manufacturing method. In some embodiments, an exemplary HBIC device includes: a first die stack comprising one or more dies; and a second die stack integrated above the first die stack. The second die stack includes at least two memory dies communicatively connected to the first die stack by wire bonding.
    Type: Application
    Filed: February 6, 2020
    Publication date: February 22, 2024
    Inventors: Dimin NIU, Shuangchen LI, Tianchan GUAN, Hongzhong ZHENG
  • Publication number: 20240061780
    Abstract: A computer-implemented method for allocating memory bandwidth of multiple CPU cores in a server includes: receiving an access request to a last level cache (LLC) shared by the multiple CPU cores in the server, the access request being sent from a core with a private cache holding copies of frequently accessed data from a memory; determining whether the access request is an LLC hit or an LLC miss; and controlling a memory bandwidth controller based on the determination. The memory bandwidth controller performs a memory bandwidth throttling to control a request rate between the private cache and the last level cache. The LLC hit of the access request causes the memory bandwidth throttling initiated by the memory bandwidth controller to be disabled and the LLC miss of the access request causes the memory bandwidth throttling initiated by the memory bandwidth controller to be enabled.
    Type: Application
    Filed: August 16, 2023
    Publication date: February 22, 2024
    Inventors: Lide DUAN, Bowen HUANG, Qichen ZHANG, Shengcheng WANG, Yen-Kuang CHEN, Hongzhong ZHENG
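
The policy above is simple to state in code: an LLC hit disables memory-bandwidth throttling for the requesting core, and an LLC miss enables it, limiting the request rate between the private cache and the LLC. The Python below is a schematic model; the controller object and the resident-address set are assumptions.

```python
class BandwidthController:
    def __init__(self):
        self.throttled = {}   # core id -> is throttling enabled?

    def on_llc_access(self, core: int, hit: bool) -> None:
        # Hit -> disable throttling; miss -> enable throttling.
        self.throttled[core] = not hit

llc_resident = {0x40}         # addresses currently held in the LLC
ctrl = BandwidthController()

def access(core: int, addr: int) -> None:
    ctrl.on_llc_access(core, hit=addr in llc_resident)

access(core=0, addr=0x40)     # hit: core 0 runs unthrottled
access(core=1, addr=0x80)     # miss: core 1 gets throttled
print(ctrl.throttled)         # {0: False, 1: True}
```
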
  • Publication number: 20240054096
    Abstract: The present disclosure discloses a processor. The processor is used to perform parallel computation and includes a logic die and a memory die. The logic die includes a plurality of processor cores and a plurality of networks on chip, wherein each processor core is programmable. The plurality of networks on chip are correspondingly connected to the plurality of processor cores, so that the plurality of processor cores form a two-dimensional mesh network. The memory die and the processor core are stacked vertically, wherein the memory die includes a plurality of memory tiles, and when the processor performs the parallel computation, the plurality of memory tiles do not have cache coherency; wherein, the plurality of memory tiles correspond to the plurality of processor cores in a one-to-one or one-to-many manner.
    Type: Application
    Filed: December 12, 2022
    Publication date: February 15, 2024
    Inventors: SHUANGCHEN LI, ZHE ZHANG, DIMIN NIU, HONGZHONG ZHENG
  • Publication number: 20240054082
    Abstract: A method of transferring data between a memory controller and at least one memory module via a primary data bus having a primary data bus width is disclosed. The method includes accessing a first one of a group of memory devices via a corresponding data bus path in response to a threaded memory request from the memory controller. The accessing results in data groups collectively forming a first data thread transferred across a corresponding secondary data bus path. Transfer of the first data thread across the primary data bus width is carried out over a first time interval, while using less than the continuous data-transfer throughput of the primary data bus during that first time interval. During the first time interval, at least one data group from a second data thread is transferred on the primary data bus.
    Type: Application
    Filed: August 29, 2023
    Publication date: February 15, 2024
    Inventors: Hongzhong Zheng, Frederick A. Ware
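
The interleaving above can be illustrated with a fixed-slot model: a first thread that needs only half of the primary bus's continuous throughput leaves alternate beats free during its interval, and the controller fills those beats with a second thread's data groups. The slot count and alternation rule are illustrative choices, not the patent's scheduling.

```python
SLOTS = 8                             # bus beats in the first time interval
thread_a = ["A0", "A1", "A2", "A3"]   # first data thread, half throughput
thread_b = ["B0", "B1", "B2", "B3"]   # second data thread fills the gaps

bus = []
for i in range(SLOTS):
    # Thread A occupies every other beat; the controller slots thread
    # B's data groups into the beats A leaves unused.
    src = thread_a if i % 2 == 0 else thread_b
    bus.append(src[i // 2])

print(bus)  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2', 'A3', 'B3']
```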