Patents by Inventor Lingming Yang

Lingming Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250231889
    Abstract: Systems and methods are provided for using a 2.5D PHY and a 3D PHY for communications between a memory device and a host device. The memory device includes a 2.5D PHY for communications with the host device through a predefined communication interface and a 3D PHY for communications with the host device through a customized communication interface. The 2.5D PHY of the memory device is used for communications through the predefined communication interface when the customized communication interface is not available or undesired (e.g., during testing of the memory device).
    Type: Application
    Filed: November 18, 2024
    Publication date: July 17, 2025
    Inventors: Sujeet Ayyapureddi, Lingming Yang, Raymond Wei-tang Chang, Raghukiran Sreeramaneni, Dong Uk Lee, Nevil N. Gajera
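A minimal Python sketch of the fallback behavior described in the abstract above: the memory device communicates over its 3D PHY and customized interface when that interface is usable, and otherwise falls back to the 2.5D PHY and the predefined interface (for example, during testing). The `Phy` class and `select_phy` function are hypothetical illustrations, not Micron's implementation.

```python
from dataclasses import dataclass

@dataclass
class Phy:
    name: str          # e.g. "2.5D" or "3D"
    interface: str     # "predefined" or "customized"

def select_phy(customized_available: bool, test_mode: bool,
               phy_2_5d: Phy, phy_3d: Phy) -> Phy:
    """Pick the PHY used for host communication.

    Falls back to the 2.5D PHY over the predefined interface whenever the
    customized 3D interface is unavailable or undesired (e.g., test mode).
    """
    if customized_available and not test_mode:
        return phy_3d
    return phy_2_5d

if __name__ == "__main__":
    p25, p3 = Phy("2.5D", "predefined"), Phy("3D", "customized")
    print(select_phy(customized_available=True, test_mode=True, phy_2_5d=p25, phy_3d=p3).name)   # 2.5D
    print(select_phy(customized_available=True, test_mode=False, phy_2_5d=p25, phy_3d=p3).name)  # 3D
```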
  • Publication number: 20250231877
    Abstract: System-in-packages (SiPs) having hybrid high bandwidth memory (HBM) devices, and associated systems and methods, are disclosed herein. In some embodiments, the SiP includes a base substrate, as well as a processing device and a hybrid high-bandwidth memory (HBM) device each carried by the base substrate. The processing device includes a processing unit and a first cache memory associated with a first level of a cache hierarchy. The hybrid HBM device is electrically coupled to the processing unit through a SiP bus in the base substrate. Further, the hybrid HBM device includes an interface die, one or more memory dies carried by the interface die, and a shared bus electrically coupled to the interface die and each of the memory dies. The hybrid HBM device also includes a second cache memory formed on the interface die that is associated with a second level of the cache hierarchy.
    Type: Application
    Filed: December 2, 2024
    Publication date: July 17, 2025
    Inventors: Raghukiran Sreeramaneni, Lingming Yang, Nevil N. Gajera
  • Publication number: 20250231895
    Abstract: System-in-package (SiP) devices, and associated systems and methods, are disclosed herein. In some embodiments, the SiP devices include a base substrate, as well as a processing unit, a high bandwidth memory (HBM) device, and a high bandwidth storage (HBS) device each carried by the base substrate. The HBM device is electrically coupled to the processing unit through a SiP bus and includes a first interface die, one or more volatile memory dies, and an HBM bus electrically coupled to the first interface die and each of the one or more volatile memory dies. The HBS device is electrically coupled to the HBM device through the SiP bus and includes a second interface die, one or more non-volatile memory dies, and an HBS bus electrically coupled to the second interface die and each of the one or more non-volatile memory dies.
    Type: Application
    Filed: December 31, 2024
    Publication date: July 17, 2025
    Inventors: Lingming Yang, Nevil N. Gajera
  • Publication number: 20250157533
    Abstract: System-in-packages (SiPs) having functional high bandwidth memory (HBM) devices, and associated systems and methods, are disclosed herein. In some embodiments, the functional HBM devices can include a controller die, one or more volatile memory dies, a flash memory die, and an HBM bus communicably coupled to each of the controller, volatile memory, and flash memory dies. The flash memory die can include one or more word lines that each have multiple programmable memory cells, as well as multiple bit lines. Each of the bit lines is coupled to a corresponding programmable memory cell from each of the one or more word lines. During operation, the controller die is configured to control the volatile memory dies and the flash memory die, through a shared bus therebetween, to implement one or more neural network computing operations within the functional HBM device.
    Type: Application
    Filed: October 8, 2024
    Publication date: May 15, 2025
    Inventors: Lingming Yang, Raghukiran Sreeramaneni, Nevil N. Gajera
  • Publication number: 20250123976
    Abstract: An apparatus including a high bandwidth memory circuit and associated systems and methods are disclosed herein. The high bandwidth memory circuit can include two or more physical layer circuits to communicate with neighboring devices. The high bandwidth memory circuit can broadcast a status to the neighboring devices. The neighboring devices can be configured according to the operating demands of the high bandwidth memory circuit.
    Type: Application
    Filed: July 30, 2024
    Publication date: April 17, 2025
    Inventors: Lingming Yang, Raghukiran Sreeramaneni, Nevil N. Gajera
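A hedged sketch of the broadcast idea in the abstract above: the high bandwidth memory circuit publishes a status to its neighboring devices, which then configure themselves according to its operating demands. The `HbmCircuit` and `NeighborDevice` classes, the status fields, and the 0.5 demand threshold are all assumptions made for illustration.

```python
class NeighborDevice:
    """Hypothetical neighboring device that reconfigures itself from a broadcast status."""
    def __init__(self, name: str):
        self.name = name
        self.mode = "idle"

    def on_status(self, status: dict) -> None:
        # Configure according to the HBM circuit's operating demands.
        self.mode = "high_throughput" if status.get("bandwidth_demand", 0.0) > 0.5 else "low_power"

class HbmCircuit:
    """Hypothetical HBM circuit with links (physical layer circuits) to neighboring devices."""
    def __init__(self, neighbors):
        self.neighbors = list(neighbors)   # stands in for the two or more PHY circuits

    def broadcast_status(self, bandwidth_demand: float, temperature_c: float) -> None:
        status = {"bandwidth_demand": bandwidth_demand, "temperature_c": temperature_c}
        for neighbor in self.neighbors:
            neighbor.on_status(status)

if __name__ == "__main__":
    left, right = NeighborDevice("gpu"), NeighborDevice("hbm_peer")
    HbmCircuit([left, right]).broadcast_status(bandwidth_demand=0.8, temperature_c=71.0)
    print(left.mode, right.mode)   # high_throughput high_throughput
```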
  • Publication number: 20250118366
    Abstract: Systems, methods and apparatus to read target memory cells having an associated reference memory cell configured to be representative of drift or changes in the threshold voltages of the target memory cells. The reference cell is programmed to a predetermined threshold level when the target cells are programmed to store data. In response to a command to read the target memory cells, estimation of a drift of the threshold voltage of the reference cell is performed in parallel with applying an initial voltage pulse to read the target cells. Based on a result of the drift estimation, voltage pulses used to read the target cells can be modified and/or added to account for the drift estimated using the reference cell.
    Type: Application
    Filed: December 17, 2024
    Publication date: April 10, 2025
    Inventors: Karthik Sarpatwari, Nevil N. Gajera, Lingming Yang, John F. Schreck
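The read flow in the abstract above can be illustrated with a toy model: the reference cell's threshold drift is estimated alongside the initial read pulse, and the follow-up read voltage is shifted by that drift only when it exceeds a tolerance. Every constant and function name here (`REFERENCE_PROGRAMMED_VT`, `estimate_drift`, and so on) is an assumption for illustration rather than the patented circuit.

```python
REFERENCE_PROGRAMMED_VT = 1.00   # volts the reference cell was programmed to (illustrative)
INITIAL_READ_VOLTAGE = 1.20
DRIFT_TOLERANCE = 0.05

def estimate_drift(measured_reference_vt: float) -> float:
    """Drift = how far the reference cell's threshold moved from its programmed level."""
    return measured_reference_vt - REFERENCE_PROGRAMMED_VT

def read_targets(target_vts, measured_reference_vt):
    """Return the bit read from each target cell (True if Vt is below the read voltage)."""
    # In hardware the initial pulse and the drift estimation happen in parallel;
    # here they are simply two steps.
    initial_bits = [vt < INITIAL_READ_VOLTAGE for vt in target_vts]
    drift = estimate_drift(measured_reference_vt)
    if abs(drift) <= DRIFT_TOLERANCE:
        return initial_bits                         # the initial pulse was sufficient
    adjusted_voltage = INITIAL_READ_VOLTAGE + drift # modify/add pulses to track the drift
    return [vt < adjusted_voltage for vt in target_vts]

if __name__ == "__main__":
    print(read_targets([0.9, 1.3, 1.25], measured_reference_vt=1.12))
```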
  • Publication number: 20250087537
    Abstract: High-bandwidth memory (HBM) devices and associated systems and methods are disclosed herein. In some embodiments, the HBM devices include a first die (e.g., an interface die) and a plurality of second dies (e.g., memory dies) carried by the first die and communicably coupled to the first die through a plurality of HBM bus through substrate vias (TSVs). The HBM devices also include an HBM testing component carried at least partially by an upper surface of an uppermost second die. The HBM testing component provides access to the first and second dies through an uppermost surface of the HBM device to test the HBM device during manufacturing. For example, the HBM testing component allows the first and second dies to be tested after the HBM device is mounted to a base substrate of a system-in-package without requiring any footprint for access pins on the base substrate.
    Type: Application
    Filed: July 30, 2024
    Publication date: March 13, 2025
    Inventors: Lingming Yang, Raghukiran Sreeramaneni, Nevil N. Gajera
  • Publication number: 20250061960
    Abstract: A stacked memory device (e.g., a high-bandwidth memory (HBM) device) having a storage component is disclosed. The stacked memory device can include a first logic die, one or more memory dies, a second logic die, and one or more storage dies. The first logic die is coupled with the one or more memory dies and the second logic die through TSVs. The second logic die is coupled with the one or more storage dies through additional TSVs. The first logic die can issue commands to the one or more memory dies that cause the one or more memory dies to perform operations (e.g., read/write operations). The first logic die can also issue commands to the second logic die that cause the second logic die to issue commands to the one or more storage dies to perform operations.
    Type: Application
    Filed: July 26, 2024
    Publication date: February 20, 2025
    Inventors: Lingming Yang, Raghukiran Sreeramaneni, Nevil N. Gajera
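A rough sketch of the command path in the abstract above: the first logic die talks to the memory dies directly and reaches the storage dies only by issuing commands to the second logic die, which forwards them. The class and method names are hypothetical; the sketch shows only the two-hop command forwarding, not the TSV signaling itself.

```python
class StorageDie:
    """Hypothetical non-volatile storage die."""
    def __init__(self):
        self.cells = {}
    def execute(self, cmd, addr, data=None):
        if cmd == "write":
            self.cells[addr] = data
        elif cmd == "read":
            return self.cells.get(addr)

class SecondLogicDie:
    """Forwards commands from the first logic die to the storage dies (over additional TSVs)."""
    def __init__(self, storage_dies):
        self.storage_dies = storage_dies
    def issue(self, die_index, cmd, addr, data=None):
        return self.storage_dies[die_index].execute(cmd, addr, data)

class FirstLogicDie:
    """Issues commands to the memory dies directly, and to the storage dies via the second logic die."""
    def __init__(self, memory_dies, second_logic):
        self.memory_dies = memory_dies
        self.second_logic = second_logic
    def memory_write(self, die_index, addr, data):
        self.memory_dies[die_index][addr] = data
    def storage_write(self, die_index, addr, data):
        self.second_logic.issue(die_index, "write", addr, data)
    def storage_read(self, die_index, addr):
        return self.second_logic.issue(die_index, "read", addr)

if __name__ == "__main__":
    first = FirstLogicDie([{} for _ in range(2)], SecondLogicDie([StorageDie() for _ in range(2)]))
    first.memory_write(0, 0x00, b"dram line")
    first.storage_write(0, 0x10, b"persisted")
    print(first.storage_read(0, 0x10))
```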
  • Patent number: 12210413
    Abstract: Methods, systems, and devices for data correction schemes with reduced device overhead are described. A memory system may include a set of memory devices storing data and check codes associated with the data. The memory system may additionally include a single parity device storing parity information associated with the data. During a read operation of a set of data, a controller of the memory system may detect an error in data associated with a first check code, the data including two or more subsets of the set of data received from two or more corresponding memory devices. The controller may generate candidate data corresponding to one of the two or more subsets using the parity information and remaining subsets of the set of data. Then the controller may determine whether the candidate data is correct by comparing the first check code with a check value generated using the candidate data.
    Type: Grant
    Filed: June 20, 2023
    Date of Patent: January 28, 2025
    Assignee: Micron Technology, Inc.
    Inventors: Joseph M. McCrate, Marco Sforzin, Paolo Amato, Lingming Yang, Nevil N. Gajera
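A small sketch of the correction flow in the abstract above (and in the related application 20240004756 listed further down): one suspect subset is rebuilt from the single parity device and the remaining subsets, and the candidate is accepted only if the stored check code matches a check value computed over the candidate data. CRC-32 and byte-wise XOR are stand-ins chosen here for illustration; the patent does not prescribe particular codes.

```python
import zlib

def xor_bytes(chunks):
    """Byte-wise XOR across equal-length chunks (stand-in for RAID-style parity)."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def recover_subset(subsets, parity, check_code, suspect_index):
    """Rebuild one suspect subset from the parity and the remaining subsets,
    then accept it only if the stored check code matches the candidate data."""
    remaining = [s for i, s in enumerate(subsets) if i != suspect_index]
    candidate = xor_bytes(remaining + [parity])
    repaired = list(subsets)
    repaired[suspect_index] = candidate
    if zlib.crc32(b"".join(repaired)) == check_code:
        return candidate
    return None   # candidate did not verify; the error lies elsewhere

if __name__ == "__main__":
    subsets = [b"abcd", b"efgh", b"ijkl"]           # data striped across memory devices
    parity = xor_bytes(subsets)                     # stored on the single parity device
    check_code = zlib.crc32(b"".join(subsets))      # check code stored with the data
    corrupted = [b"abcd", b"XXXX", b"ijkl"]         # one device returned bad data
    print(recover_subset(corrupted, parity, check_code, suspect_index=1))  # b'efgh'
```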
  • Publication number: 20250031386
    Abstract: High-bandwidth memory (HBM) devices and associated systems and methods are disclosed herein. In some embodiments, the HBM devices include a first die, a plurality of second dies carried by a signal routing region of the first die, and active through substrate vias (TSVs) positioned within a footprint of the signal routing region. The active TSVs extend from a first metallization layer in the first die to a second metallization layer in an uppermost memory die. The HBM devices also include a cooling network configured to transport heat away from the first die. For example, the cooling network can include a thermally conductive layer carried by a thermal region of the first die and cooling TSVs in contact with the thermally conductive layer. The cooling TSVs extend from the thermally conductive layer to an elevation at or above a top surface of the uppermost memory die.
    Type: Application
    Filed: July 19, 2024
    Publication date: January 23, 2025
    Inventors: Lingming Yang, Raghukiran Sreeramaneni, Nevil N. Gajera
  • Publication number: 20250022849
    Abstract: System-in-packages (SiPs) having combined high bandwidth memory (HBM) devices, and associated systems and methods, are disclosed herein. In some embodiments, the SiP includes a base substrate (e.g., a silicon interposer), a processing unit carried by the base substrate, and a combined HBM device carried by the base substrate. The combined HBM device can be electrically coupled to the processing unit through one or more traces. Further, the combined HBM device can include an interface die, one or more volatile memory dies carried by the interface die (e.g., a volatile, main memory component), and one or more non-volatile memory dies carried by the one or more volatile memory dies. The combined HBM device can also include a shared bus that is electrically coupled to the interface die, the volatile memory dies, and the non-volatile memory dies to establish communication paths therebetween.
    Type: Application
    Filed: June 20, 2024
    Publication date: January 16, 2025
    Inventors: Lingming Yang, Raghukiran Sreeramaneni, Nevil N. Gajera
  • Patent number: 12176029
    Abstract: Systems, methods and apparatus to read target memory cells having an associated reference memory cell configured to be representative of drift or changes in the threshold voltages of the target memory cells. The reference cell is programmed to a predetermined threshold level when the target cells are programmed to store data. In response to a command to read the target memory cells, estimation of a drift of the threshold voltage of the reference cell is performed in parallel with applying an initial voltage pulse to read the target cells. Based on a result of the drift estimation, voltage pulses used to read the target cells can be modified and/or added to account for the drift estimated using the reference cell.
    Type: Grant
    Filed: November 3, 2022
    Date of Patent: December 24, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Karthik Sarpatwari, Nevil N. Gajera, Lingming Yang, John F. Schreck
  • Publication number: 20240371410
    Abstract: An apparatus including a high bandwidth memory circuit and associated systems and methods are disclosed herein. The apparatus may include multiple HBM cubes connected to a processor, such as a GPU. The HBM cubes may be connected in series or in parallel. One or more of the HBM cubes can include a secondary communication circuit configured to facilitate the expanded connection between the multiple cubes.
    Type: Application
    Filed: April 22, 2024
    Publication date: November 7, 2024
    Inventors: Lingming Yang, Raghukiran Sreeramaneni, Nevil N. Gajera
  • Publication number: 20240281390
    Abstract: A memory device includes a stack of eight memory dies having an 8N architecture and a stack of four memory dies having a 4N architecture. A first half and a second half of the stack of eight memory dies can each include 32 channels divided equally across the first half of dies and across the second half of dies. Banks of each of the 32 channels on the first half of dies can be associated with respective first pseudo channels. Banks of each of the 32 channels on the second half of dies can be associated with respective second pseudo channels. The stack of four memory dies can include the 32 channels divided equally amongst the dies, and the banks of each of the 32 channels on the stack of four memory dies can be divided equally across the respective first and second pseudo channels.
    Type: Application
    Filed: January 11, 2024
    Publication date: August 22, 2024
    Inventors: Dong Uk Lee, Sujeet Ayyapureddi, Lingming Yang, Tyler J. Gomm
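A toy address-mapping sketch consistent with the abstract above, assuming one plausible layout: in the 8N stack each half of the eight dies carries all 32 channels and serves one pseudo channel, while in the 4N stack the 32 channels are spread across four dies and each die serves both pseudo channels. The modular arithmetic here is illustrative only, not the actual channel assignment.

```python
CHANNELS = 32

def die_for_access(arch: str, channel: int, pseudo_channel: int) -> int:
    """Map a (channel, pseudo channel) pair to a die index under the assumed layout."""
    if not (0 <= channel < CHANNELS and pseudo_channel in (0, 1)):
        raise ValueError("channel or pseudo channel out of range")
    if arch == "8N":
        half_offset = 0 if pseudo_channel == 0 else 4   # first vs second half of the 8-die stack
        return half_offset + channel % 4                # 32 channels spread over 4 dies per half
    if arch == "4N":
        return channel % 4                              # 32 channels over 4 dies; each die serves both PCs
    raise ValueError(f"unknown architecture: {arch}")

if __name__ == "__main__":
    print(die_for_access("8N", channel=5, pseudo_channel=1))  # a die in the second half
    print(die_for_access("4N", channel=5, pseudo_channel=1))  # same die serves both pseudo channels
```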
  • Patent number: 12032443
    Abstract: Systems, apparatuses, and methods can include a multi-stage cache for providing high reliability, availability, and serviceability (RAS). The multi-stage cache memory comprises a shadow DRAM, which is provided on a volatile main memory module, coupled to a memory controller cache, which is provided on a memory controller. During a first write operation, the memory controller writes data with a strong error correcting code (ECC) from the memory controller cache to the shadow DRAM without writing a RAID (Redundant Arrays of Inexpensive Disks) parity data. During a second write operation, the memory controller writes the data with the strong ECC and writes the RAID parity data from the shadow DRAM to a memory device provided on the volatile main memory module.
    Type: Grant
    Filed: January 18, 2023
    Date of Patent: July 9, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Sandeep Krishna Thirumala, Lingming Yang, Amitava Majumdar, Nevil Gajera
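The two write stages in the abstract above can be sketched as follows: the first write pushes ECC-protected data from the controller cache to the shadow DRAM without touching RAID parity, and the second write later moves the data, its ECC, and the parity to the main memory device. The checksum and XOR placeholders stand in for the real ECC and RAID parity and are not the patented encodings.

```python
class MultiStageCache:
    """Toy model of the two-stage write: shadow DRAM first (no RAID parity),
    then main memory with parity. Names and encodings are illustrative only."""

    def __init__(self):
        self.controller_cache = {}
        self.shadow_dram = {}       # holds (data, strong_ecc); no RAID parity yet
        self.main_memory = {}       # holds (data, strong_ecc, raid_parity)

    @staticmethod
    def strong_ecc(data: bytes) -> int:
        return sum(data) & 0xFFFF   # placeholder for a real strong ECC

    @staticmethod
    def raid_parity(data: bytes) -> int:
        parity = 0
        for b in data:
            parity ^= b             # placeholder for RAID parity across devices
        return parity

    def first_write(self, addr: int, data: bytes) -> None:
        self.controller_cache[addr] = data
        self.shadow_dram[addr] = (data, self.strong_ecc(data))   # fast path, parity untouched

    def second_write(self, addr: int) -> None:
        data, ecc = self.shadow_dram.pop(addr)
        self.main_memory[addr] = (data, ecc, self.raid_parity(data))

if __name__ == "__main__":
    cache = MultiStageCache()
    cache.first_write(0x40, b"hot line")
    cache.second_write(0x40)
    print(cache.main_memory[0x40])
```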
  • Patent number: 12019516
    Abstract: Provided is a memory system comprising a plurality of memory channels each having a parity bit, a redundant array of independent devices (RAID) parity channel, and a controller. The controller is configured to receive a block of data for storage in the memory channels and determine at least one of (i) when a data traffic demand on the memory channels is high and (ii) when a data traffic demand on the memory channels is low. Upon determining that the data traffic demand is low, the controller writes the block of data for storage in the memory channels and concurrently updates the parity bits and the RAID parity channel for the stored block of data. Upon determining that the data traffic demand is high, the controller only writes the data for storage in the memory channels.
    Type: Grant
    Filed: August 24, 2022
    Date of Patent: June 25, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Lingming Yang, Amitava Majumdar, Sandeep Krishna Thirumala, Nevil Gajera
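A hedged sketch of the write policy in this entry and the closely related patent 12013756 below: when channel traffic is low, the parity information is updated together with the write; when traffic is high, only the data is written and the parity update is deferred, as the related patent describes. The utilization threshold, striping, and XOR parity are illustrative assumptions.

```python
class RaidMemorySystem:
    """Toy model of traffic-aware parity updates; layout and threshold are illustrative."""

    HIGH_TRAFFIC_THRESHOLD = 0.75   # assumed utilization cut-off

    def __init__(self, num_channels: int = 8):
        self.channels = [bytearray() for _ in range(num_channels)]
        self.parity_channel = bytearray()
        self.pending_parity = []     # writes whose parity update was deferred

    def write_block(self, block: bytes, traffic_utilization: float) -> None:
        stripe = len(block) // len(self.channels)
        for i, channel in enumerate(self.channels):
            channel.extend(block[i * stripe:(i + 1) * stripe])
        if traffic_utilization < self.HIGH_TRAFFIC_THRESHOLD:
            self._update_parity(block)            # low demand: update parity concurrently
        else:
            self.pending_parity.append(block)     # high demand: only write the data

    def _update_parity(self, block: bytes) -> None:
        stripe = len(block) // len(self.channels)
        parity = bytearray(stripe)
        for i in range(len(self.channels)):
            for j in range(stripe):
                parity[j] ^= block[i * stripe + j]
        self.parity_channel.extend(parity)

if __name__ == "__main__":
    system = RaidMemorySystem()
    system.write_block(bytes(range(64)), traffic_utilization=0.9)   # parity deferred
    system.write_block(bytes(range(64)), traffic_utilization=0.2)   # parity updated in-line
    print(len(system.pending_parity), len(system.parity_channel))
```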
  • Patent number: 12013756
    Abstract: Provided is a memory system including a plurality of memory submodules and a controller. Each submodule comprises a plurality of memory channels, each channel having a parity bit, and a redundant array of independent devices (RAID) parity channel. The controller is configured to receive a block of data for storage in the plurality of memory submodules and determine whether a level of data traffic demand for a first of the plurality of submodules is high or low. When the data traffic demand is low, the controller (i) writes a portion of the block of data in the first of the plurality of submodules and (ii) concurrently updates the parity bit and the RAID parity channel associated with the block of data. When the data traffic demand is high, the controller (i) only writes the portion of the block of data in the first of the plurality of submodules and (ii) defers updating of the parity bits and the RAID parity channel associated with the block of data.
    Type: Grant
    Filed: August 24, 2022
    Date of Patent: June 18, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Lingming Yang, Amitava Majumdar, Sandeep Krishna Thirumala, Nevil Gajera
  • Patent number: 11942139
    Abstract: The present disclosure includes apparatuses, methods, and systems for performing refresh operations on memory cells. An embodiment includes a memory having a group of memory cells and one or more additional memory cells whose data state is indicative of whether to refresh the group of memory cells, and circuitry configured to apply a first voltage pulse to the group of memory cells to sense a data state of the memory cells of the group, apply, while the first voltage pulse is applied to the group of memory cells, a second voltage pulse having a greater magnitude than the first voltage pulse to the one or more additional memory cells to sense a data state of the one or more additional memory cells, and determine whether to perform a refresh operation on the group of memory cells based on the sensed data state of the one or more additional memory cells.
    Type: Grant
    Filed: December 6, 2022
    Date of Patent: March 26, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Karthik Sarpatwari, Lingming Yang, Nevil N. Gajera, John Christopher M. Sancon
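A toy model of the refresh decision in the abstract above: the group of cells is sensed with a first pulse while the additional indicator cells are sensed with a larger second pulse, and a refresh is triggered only when an indicator cell no longer reads back as expected. The voltages and the threshold-based `sense` model are assumptions made for illustration.

```python
FIRST_PULSE_V = 1.0
SECOND_PULSE_V = 1.4          # greater magnitude than the first pulse (illustrative)

def sense(threshold_v: float, pulse_v: float) -> bool:
    """A cell reads as 'set' when the applied pulse exceeds its threshold voltage."""
    return pulse_v > threshold_v

def read_group_and_check(group_thresholds, indicator_thresholds):
    """Read the group with the first pulse and, in the same pass, sense the additional
    cells with the larger pulse to decide whether the group needs a refresh."""
    data = [sense(vt, FIRST_PULSE_V) for vt in group_thresholds]
    indicators = [sense(vt, SECOND_PULSE_V) for vt in indicator_thresholds]
    # The additional cells were written so they should still sense as 'set' under the
    # larger pulse; if any does not, the group's state has drifted and needs a refresh.
    needs_refresh = not all(indicators)
    return data, needs_refresh

if __name__ == "__main__":
    print(read_group_and_check([0.7, 0.8], indicator_thresholds=[1.2]))  # (..., False): no refresh
    print(read_group_and_check([0.7, 0.8], indicator_thresholds=[1.5]))  # (..., True): refresh the group
```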
  • Publication number: 20240004756
    Abstract: Methods, systems, and devices for data correction schemes with reduced device overhead are described. A memory system may include a set of memory devices storing data and check codes associated with the data. The memory system may additionally include a single parity device storing parity information associated with the data. During a read operation of a set of data, a controller of the memory system may detect an error in data associated with a first check code, the data including two or more subsets of the set of data received from two or more corresponding memory devices. The controller may generate candidate data corresponding to one of the two or more subsets using the parity information and remaining subsets of the set of data. Then the controller may determine whether the candidate data is correct by comparing the first check code with a check value generated using the candidate data.
    Type: Application
    Filed: June 20, 2023
    Publication date: January 4, 2024
    Inventors: Joseph M. McCrate, Marco Sforzin, Paolo Amato, Lingming Yang, Nevil N. Gajera
  • Patent number: 11782830
    Abstract: This document describes apparatuses and techniques for cache memory with randomized eviction. In various aspects, a cache memory randomly selects a cache line for eviction and/or replacement. The cache memory may also support multi-occupancy whereby the cache memory enters data reused from another cache line to replace the data of the randomly evicted cache line. By so doing, the cache memory may operate in a nondeterministic fashion, which may increase a probability of data remaining in the cache memory for subsequent requests.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: October 10, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Amitava Majumdar, Sandeep Krishna Thirumala, Lingming Yang, Karthik Sarpatwari, Nevil N. Gajera
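A short sketch of the randomized-eviction idea in the abstract above: the victim line is chosen at random rather than by a deterministic policy such as LRU, and on a hit the reused entry may also be copied into a randomly chosen slot so the same data occupies more than one line (multi-occupancy). The slot-array structure and policy details are illustrative, not the patented design.

```python
import random

class RandomEvictionCache:
    """Toy cache with randomized eviction and multi-occupancy of reused entries."""

    def __init__(self, num_lines: int):
        self.lines = [None] * num_lines      # each slot holds a (tag, data) pair or None

    def lookup(self, tag, load_data):
        for entry in self.lines:
            if entry is not None and entry[0] == tag:
                victim = random.randrange(len(self.lines))   # randomized eviction of a line
                self.lines[victim] = entry                   # multi-occupancy: re-enter reused data
                return entry[1]
        data = load_data(tag)
        victim = random.randrange(len(self.lines))           # random replacement on a miss too
        self.lines[victim] = (tag, data)
        return data

if __name__ == "__main__":
    random.seed(0)
    cache = RandomEvictionCache(num_lines=4)
    for addr in [1, 2, 3, 1, 4, 1]:
        cache.lookup(addr, load_data=lambda a: f"block-{a}")
    print(cache.lines)
```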