Patents by Inventor Lide Duan

Lide Duan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11360906
    Abstract: The devices within an inter-device processing system maintain data coherency in the last level caches of the system as a cache line of data is shared between the devices by utilizing a directory in one of the devices that tracks the coherency protocol states of the memory addresses in the last level caches of the system.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: June 14, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Lide Duan, Hongyu Liu, Hongzhong Zheng, Yen-Kuang Chen
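
The directory scheme summarized in the abstract of patent 11360906 above lends itself to a small behavioral model. The sketch below is a rough illustration rather than the patented design: the names (Directory, a MESI-like State enum, integer device IDs) are assumptions, showing how a directory kept on one device might track which devices' last-level caches hold a line and in what coherency state.

```python
from enum import Enum

class State(Enum):
    INVALID = "I"
    SHARED = "S"
    MODIFIED = "M"

class Directory:
    """Hypothetical directory on one device: maps a cache-line address
    to its coherency state and the set of devices caching the line."""
    def __init__(self):
        self.entries = {}  # addr -> (State, set of device ids)

    def read(self, addr, device):
        # A read leaves the line shared among all devices that requested it.
        state, sharers = self.entries.get(addr, (State.INVALID, set()))
        self.entries[addr] = (State.SHARED, sharers | {device})
        return self.entries[addr]

    def write(self, addr, device):
        # A write invalidates other copies and marks the line modified
        # in the writer's last-level cache.
        self.entries[addr] = (State.MODIFIED, {device})
        return self.entries[addr]

if __name__ == "__main__":
    d = Directory()
    print(d.read(0x1000, device=0))   # (State.SHARED, {0})
    print(d.read(0x1000, device=1))   # (State.SHARED, {0, 1})
    print(d.write(0x1000, device=1))  # (State.MODIFIED, {1})
```
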
  • Patent number: 11355163
    Abstract: The systems and methods are configured to efficiently and effectively include processing capabilities in memory. In one embodiment, a processing in memory (PIM) chip includes a memory array, logic components, and an interconnection network. The memory array is configured to store information. In one exemplary implementation, the memory array includes storage cells and array periphery components. The logic components can be configured to process information stored in the memory array. The interconnection network is configured to communicatively couple the logic components. The interconnection network can include interconnect wires, and a portion of the interconnect wires are located in a metal layer area that is located above the memory array.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: June 7, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Wei Han, Shuangchen Li, Lide Duan, Hongzhong Zheng, Dimin Niu, Yuhao Wang, Xiaoxin Fan
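
The abstract of patent 11355163 above is mainly about physical organization (interconnect wires routed in the metal layer above the memory array), which code cannot capture, but the functional split it describes can be sketched. The model below is an assumption-laden illustration: MemoryArray, LogicComponent, and interconnect_gather are invented names standing in for the memory array, per-bank logic components, and the interconnection network that couples them.

```python
class MemoryArray:
    """Toy memory array organized as banks of words."""
    def __init__(self, banks, words_per_bank):
        self.banks = [[0] * words_per_bank for _ in range(banks)]

    def store(self, bank, offset, value):
        self.banks[bank][offset] = value

class LogicComponent:
    """Processes the data held in one bank of the memory array."""
    def __init__(self, array, bank):
        self.array, self.bank = array, bank

    def reduce_sum(self):
        return sum(self.array.banks[self.bank])

def interconnect_gather(logic_components):
    """Stand-in for the interconnection network coupling the logic
    components: here it simply collects and combines their outputs."""
    return sum(lc.reduce_sum() for lc in logic_components)

if __name__ == "__main__":
    array = MemoryArray(banks=4, words_per_bank=8)
    array.store(0, 0, 5)
    array.store(3, 7, 7)
    logic = [LogicComponent(array, b) for b in range(4)]
    print(interconnect_gather(logic))  # 12
```
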
  • Publication number: 20220121586
    Abstract: A dual-mode memory interface of a computing system is provided, configurable to present memory interfaces having differently-graded bandwidth capacity to different processors of the computing system. A mode switch controller of the memory interface controller, based on at least an arbitration rule written to a configuration register, switches the memory interface controller between a narrow-band mode and a wide-band mode. In each mode, the memory interface controller disables either a plurality of narrow-band memory interfaces of the memory interface controller according to a first bus standard, or a wide-band memory interface of the memory interface controller according to a second bus standard. The memory interface controller virtualizes a plurality of system memory units of the computing system as a virtual wide-band memory unit according to the second bus standard, or virtualizes a system memory unit of the computing system as a virtual narrow-band memory unit according to the first bus standard.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Applicant: Alibaba Group Holding Limited
    Inventors: Yuhao Wang, Wei Han, Dimin Niu, Lide Duan, Shuangchen Li, Fei Xue, Hongzhong Zheng
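
A behavioral sketch of the mode switch described in publication 20220121586 above: an arbitration rule written to a configuration register flips the controller between a narrow-band and a wide-band mode, disabling the interfaces of the other mode. The class and field names (MemoryInterfaceController, Mode, config_register) are assumptions, and the actual bus standards are left abstract.

```python
from enum import Enum

class Mode(Enum):
    NARROW_BAND = 0   # several narrower channels (first bus standard)
    WIDE_BAND = 1     # one wide channel (second bus standard)

class MemoryInterfaceController:
    """Hypothetical model: an arbitration rule in a configuration register
    selects the mode; the controller disables one interface type and
    presents the system memory units through the other."""
    def __init__(self, num_narrow_ifaces=4):
        self.config_register = {"arbitration_rule": Mode.NARROW_BAND}
        self.num_narrow_ifaces = num_narrow_ifaces
        self.narrow_enabled = [True] * num_narrow_ifaces
        self.wide_enabled = False

    def switch_mode(self, rule):
        self.config_register["arbitration_rule"] = rule
        if rule is Mode.WIDE_BAND:
            # Disable the narrow-band interfaces; expose the memory units
            # as one virtual wide-band unit.
            self.narrow_enabled = [False] * self.num_narrow_ifaces
            self.wide_enabled = True
        else:
            # Disable the wide-band interface; expose a memory unit as a
            # virtual narrow-band unit.
            self.narrow_enabled = [True] * self.num_narrow_ifaces
            self.wide_enabled = False

if __name__ == "__main__":
    mic = MemoryInterfaceController()
    mic.switch_mode(Mode.WIDE_BAND)
    print(mic.wide_enabled, mic.narrow_enabled)  # True [False, False, False, False]
```
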
  • Publication number: 20220101887
    Abstract: The systems and methods are configured to efficiently and effectively include processing capabilities in memory. In one embodiment, a processing in memory (PIM) chip includes a memory array, logic components, and an interconnection network. The memory array is configured to store information. In one exemplary implementation, the memory array includes storage cells and array periphery components. The logic components can be configured to process information stored in the memory array. The interconnection network is configured to communicatively couple the logic components. The interconnection network can include interconnect wires, and a portion of the interconnect wires are located in a metal layer area that is located above the memory array.
    Type: Application
    Filed: September 29, 2020
    Publication date: March 31, 2022
    Inventors: Wei HAN, Shuangchen LI, Lide DUAN, Hongzhong ZHENG, Dimin NIU, Yuhao WANG, Xiaoxin FAN
  • Publication number: 20220058150
    Abstract: A system-in-package architecture in accordance with aspects includes a logic die and one or more memory dice coupled together in a three-dimensional stack. The logic die can include one or more global building blocks and a plurality of local building blocks. The number of local building blocks can be scalable. The local building blocks can include a plurality of engines and memory controllers. The memory controllers can be configured to directly couple one or more of the engines to the one or more memory dice. The number and type of local building blocks, and the number and types of engines and memory controllers can be scalable.
    Type: Application
    Filed: August 20, 2020
    Publication date: February 24, 2022
    Inventors: Lide DUAN, Wei HAN, Yuhao WANG, Fei XUE, Yuanwei FANG, Hongzhong ZHENG
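
The scalability point in publication 20220058150 above, that the number and type of local building blocks, engines, and memory controllers can vary, can be expressed as a simple configuration model. The dataclasses below are illustrative assumptions, not the patented architecture.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LocalBuildingBlock:
    """Hypothetical local building block on the logic die: a bundle of
    engines and memory controllers coupling engines to the memory dice."""
    engines: List[str] = field(default_factory=lambda: ["compute"])
    memory_controllers: int = 1

@dataclass
class LogicDie:
    global_blocks: int = 1
    local_blocks: List[LocalBuildingBlock] = field(default_factory=list)

def scale_out(die, count, engines):
    """Scaling here just means instantiating more local building blocks."""
    die.local_blocks.extend(
        LocalBuildingBlock(engines=list(engines)) for _ in range(count)
    )
    return die

if __name__ == "__main__":
    die = LogicDie()
    scale_out(die, count=4, engines=["vector", "dma"])
    print(len(die.local_blocks))  # 4
```
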
  • Publication number: 20220050786
    Abstract: The devices within an inter-device processing system maintain data coherency in the last level caches of the system as a cache line of data is shared between the devices by utilizing a directory in one of the devices that tracks the coherency protocol states of the memory addresses in the last level caches of the system.
    Type: Application
    Filed: August 14, 2020
    Publication date: February 17, 2022
    Inventors: Lide DUAN, Hongyu LIU, Hongzhong ZHENG, Yen-Kuang CHEN
  • Publication number: 20220051086
    Abstract: The present disclosure provides an accelerator for processing a vector or matrix operation. The accelerator comprises a vector processing unit comprising a plurality of computation units having circuitry configured to process a vector operation in parallel; a matrix multiplication unit comprising a first matrix multiplication operator, a second matrix multiplication operator, and an accumulator, the first matrix multiplication operator and the second matrix multiplication operator having circuitry configured to process a matrix operation and the accumulator having circuitry configured to accumulate output results of the first matrix multiplication operator and the second matrix multiplication operator; and a memory storing input data for the vector operation or the matrix operation and being configured to communicate with the vector processing unit and the matrix multiplication unit.
    Type: Application
    Filed: July 22, 2021
    Publication date: February 17, 2022
    Inventors: Fei XUE, Wei HAN, Yuhao WANG, Fei SUN, Lide DUAN, Shuangchen LI, Dimin NIU, Tianchan GUAN, Linyong HUANG, Zhaoyang DU, Hongzhong ZHENG
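
Publication 20220051086 above describes two matrix multiplication operators feeding an accumulator alongside a vector processing unit. The sketch below is a purely functional stand-in under those assumptions: matmul, accumulate, and vector_unit are invented names that mimic the data flow, not the hardware.

```python
def matmul(a, b):
    """Simple matrix multiply standing in for one matrix multiplication operator."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def accumulate(x, y):
    """Accumulator: element-wise sum of the two operators' outputs."""
    return [[xi + yi for xi, yi in zip(rx, ry)] for rx, ry in zip(x, y)]

def vector_unit(op, *vectors):
    """Stand-in for the vector processing unit: applies an element-wise
    operation across its lanes (computation units) in one pass."""
    return [op(*lane) for lane in zip(*vectors)]

if __name__ == "__main__":
    a1 = [[1, 2], [3, 4]]; b1 = [[1, 0], [0, 1]]
    a2 = [[5, 6], [7, 8]]; b2 = [[2, 0], [0, 2]]
    # Two matrix multiplication operators feeding one accumulator.
    print(accumulate(matmul(a1, b1), matmul(a2, b2)))  # [[11, 14], [17, 20]]
    # A vector operation processed lane by lane.
    print(vector_unit(lambda x, y: x + y, [1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
```
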
  • Patent number: 11188471
    Abstract: A cache coherency mode includes: in response to a read request from a device in the host-device system for an instance of the shared data, sending the instance of the shared data from the host device to that device; and, in response to a write request from a device, storing data associated with the write request in the cache of the host device. Shared data is pinned in the cache of the host device, and is not cached in any of the other devices in the host-device system. Because there is only one cached copy of the shared data in the host-device system, the devices in that system are cache coherent.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: November 30, 2021
    Assignee: Alibaba Group Holding Limited
    Inventors: Lide Duan, Dimin Niu, Hongyu Liu, Shuangchen Li, Hongzhong Zheng
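
The protocol in the abstract of patent 11188471 above is simple enough to model directly: shared data is pinned in the host's cache, reads are served from it, and writes land in it, so no other device ever holds a copy. The Host class below is a hypothetical sketch of that behavior, not the patented implementation.

```python
class Host:
    """Hypothetical host device that keeps the only cached copy of the
    shared data, pinned in its own cache."""
    def __init__(self, shared_data):
        self.cache = dict(shared_data)  # pinned: never evicted or copied out

    def handle_read(self, addr, device_id):
        # Serve the read from the host cache; the requesting device does
        # not cache the line, so no invalidation traffic is needed.
        return self.cache[addr]

    def handle_write(self, addr, value, device_id):
        # Writes from any device land directly in the host cache.
        self.cache[addr] = value

if __name__ == "__main__":
    host = Host({0x40: "hello"})
    print(host.handle_read(0x40, device_id=2))   # hello
    host.handle_write(0x40, "world", device_id=3)
    print(host.handle_read(0x40, device_id=1))   # world
```
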
  • Publication number: 20210311878
    Abstract: A cache coherency mode includes: in response to a read request from a device in the host-device system for an instance of the shared data, sending the instance of the shared data from the host device to that device; and, in response to a write request from a device, storing data associated with the write request in the cache of the host device. Shared data is pinned in the cache of the host device, and is not cached in any of the other devices in the host-device system. Because there is only one cached copy of the shared data in the host-device system, the devices in that system are cache coherent.
    Type: Application
    Filed: April 3, 2020
    Publication date: October 7, 2021
    Inventors: Lide DUAN, Dimin NIU, Hongyu LIU, Shuangchen LI, Hongzhong ZHENG
  • Patent number: 11068200
    Abstract: Methods and systems are provided for improving memory control. A memory architecture includes a plurality of memory units and an interface. A respective memory unit of the plurality of memory units is configured with a Processing-In-Memory (PIM) architecture. The interface includes a plurality of lines. The interface is coupled between the plurality of memory units and a host. The interface is configured to receive one or more signals from a host via the plurality of lines. The respective memory unit of the plurality of memory units is coupled with a respective line of the plurality of lines, and the respective memory unit is further configured to receive a respective signal of the one or more signals via the interface so as to be individually selected by the host.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: July 20, 2021
    Assignee: Alibaba Group Holding Limited
    Inventors: Dimin Niu, Lide Duan, Yuhao Wang, Xiaoxin Fan, Zhibin Xiao
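
Patent 11068200 above turns on each memory unit being wired to its own line of the interface so the host can select units individually. A minimal sketch of that selection mechanism follows; the class names (PIMMemoryUnit, Host) and the one-line-per-unit mapping are assumptions made for illustration.

```python
class PIMMemoryUnit:
    """Hypothetical memory unit configured with a PIM architecture."""
    def __init__(self, unit_id):
        self.unit_id = unit_id
        self.selected = False

    def on_select_line(self, asserted):
        # The unit reacts only to the signal on its own line.
        self.selected = asserted

class Host:
    """Drives one dedicated line per memory unit, so each unit can be
    individually selected."""
    def __init__(self, units):
        self.units = units  # unit index == line index in this sketch

    def select(self, line_index):
        for i, unit in enumerate(self.units):
            unit.on_select_line(i == line_index)

if __name__ == "__main__":
    units = [PIMMemoryUnit(i) for i in range(4)]
    Host(units).select(2)
    print([u.selected for u in units])  # [False, False, True, False]
```
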
  • Publication number: 20210173784
    Abstract: Memory control methods and systems are provided. A memory architecture includes one or more accelerators, a controller, and a transactional interface. A respective accelerator of the one or more accelerators includes a respective storage area configured to store data and a respective computation unit configured to perform computation. The respective storage area and the respective computation unit are configured to interact with each other. The controller is coupled with the one or more accelerators. The controller is configured to control the one or more accelerators, receive a command from a host, and perform an operation in response to receiving the command. The transactional interface is coupled between the controller and the host and includes a command and address signal channel, which is configured to transfer command and address signals from the host to the controller.
    Type: Application
    Filed: December 6, 2019
    Publication date: June 10, 2021
    Inventors: Dimin Niu, Lide Duan, Hongzhong Zheng
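
Publication 20210173784 above pairs a storage area with a computation unit inside each accelerator and routes host commands through a controller over a command-and-address channel. The sketch below models that dispatch path; the command names (WRITE, COMPUTE) and classes are assumptions for illustration, not the disclosed interface.

```python
class Accelerator:
    """Hypothetical accelerator pairing a storage area with a computation
    unit that operates on the stored data."""
    def __init__(self):
        self.storage = {}

    def compute(self, addr):
        # Example computation: double the value held in local storage.
        return self.storage.get(addr, 0) * 2

class Controller:
    """Receives commands from the host over a command-and-address channel
    and drives the attached accelerators."""
    def __init__(self, accelerators):
        self.accelerators = accelerators

    def on_command(self, command, accel_id, addr, data=None):
        accel = self.accelerators[accel_id]
        if command == "WRITE":
            accel.storage[addr] = data
        elif command == "COMPUTE":
            return accel.compute(addr)

if __name__ == "__main__":
    ctrl = Controller([Accelerator(), Accelerator()])
    ctrl.on_command("WRITE", accel_id=1, addr=0x10, data=21)
    print(ctrl.on_command("COMPUTE", accel_id=1, addr=0x10))  # 42
```
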
  • Publication number: 20210157516
    Abstract: Methods and systems are provided for improving memory control. A memory architecture includes a plurality of memory units and an interface. A respective memory unit of the plurality of memory units is configured with a Processing-In-Memory (PIM) architecture. The interface includes a plurality of lines. The interface is coupled between the plurality of memory units and a host. The interface is configured to receive one or more signals from a host via the plurality of lines. The respective memory unit of the plurality of memory units is coupled with a respective line of the plurality of lines, and the respective memory unit is further configured to receive a respective signal of the one or more signals via the interface so as to be individually selected by the host.
    Type: Application
    Filed: November 27, 2019
    Publication date: May 27, 2021
    Inventors: Dimin Niu, Lide Duan, Yuhao Wang, Xiaoxin Fan, Zhibin Xiao