Patents Examined by Kaushikkumar M Patel
  • Patent number: 12379842
    Abstract: A data processing system includes a host including a host memory storing file systems, each file system including file pages, and storage devices. Each storage device includes a memory device including memory blocks, and a memory controller dividing the memory blocks into superblocks, and controlling a memory operation of the memory device based on a result of dividing the memory blocks. The host requests a first storage device of the storage devices to write a first file page of a first file system of the file systems to a first memory block of the first storage device, and requests a second storage device of the storage devices to write a second file page of the first file system of the file systems to a first memory block of the second storage device. The first file system is configured in a RAID manner using at least some of the storage devices.
    Type: Grant
    Filed: May 26, 2023
    Date of Patent: August 5, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Joo Young Hwang
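    Illustrative sketch: a minimal Python model of the page placement described in the abstract above, where consecutive file pages of one file system are striped across multiple storage devices in a RAID-like manner. The class and function names are assumptions for illustration, not taken from the patent.
      # Hypothetical sketch: stripe the pages of one file system across storage
      # devices so that consecutive file pages land on different devices.
      class StorageDevice:
          def __init__(self, name):
              self.name = name
              self.blocks = {}          # memory block index -> list of file pages

          def write_file_page(self, block_index, page):
              self.blocks.setdefault(block_index, []).append(page)

      def write_file_system(devices, file_pages, block_index=0):
          # The host sends page i of the file system to device i % len(devices),
          # mirroring the abstract's "first page to the first device, second page
          # to the second device" placement.
          for i, page in enumerate(file_pages):
              devices[i % len(devices)].write_file_page(block_index, page)

      devices = [StorageDevice("ssd0"), StorageDevice("ssd1")]
      write_file_system(devices, ["page0", "page1", "page2", "page3"])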
  • Patent number: 12367146
    Abstract: A memory device includes a memory array with first and second memory regions, multiple communication ports and coherency control circuitry. The communication ports couple the memory device to host computers, enabling a first host to write a data block to the second region, write a message, including a data descriptor of the data block, to the first or second region, and write message metadata, associated with the message, to the first region, and also to enable a second host to read the message metadata, the data descriptor and the associated data block. The coherency control circuitry controls coherency of data in the first region, including sending an invalidation request to the second host to invalidate a copy of the message metadata stored in a local cache of the second host. The invalidation request is sent in response to the first host writing the message metadata to the first region.
    Type: Grant
    Filed: March 14, 2024
    Date of Patent: July 22, 2025
    Assignee: Arm Limited
    Inventors: David Alan Boles, David Joseph Hawkins, Sandipkumar Ladhani
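    Illustrative sketch: a rough Python model of the message flow the abstract describes. One host writes a data block, a message carrying the block's descriptor, and message metadata; the coherency logic then invalidates the other host's cached copy of the metadata so it re-reads it from the first region. All names (SharedMemory, Host, invalidate) are assumptions, not the hardware protocol.
      # Illustrative model of the write/notify sequence, not the hardware design.
      class Host:
          def __init__(self):
              self.cache = {}

          def invalidate(self, key):
              self.cache.pop(key, None)   # drop stale copy; next read goes to memory

      class SharedMemory:
          def __init__(self, hosts):
              self.region1 = {}    # coherently tracked region (metadata, messages)
              self.region2 = {}    # bulk data region
              self.hosts = hosts   # hosts that may cache region-1 contents

          def write_message(self, writer, key, data_block):
              self.region2[key] = data_block                      # 1. data block
              self.region1[key + ".msg"] = {"descriptor": key}    # 2. message with descriptor
              self.region1[key + ".meta"] = {"ready": True}       # 3. message metadata
              for host in self.hosts:                             # 4. invalidate cached metadata
                  if host is not writer:
                      host.invalidate(key + ".meta")

      producer, consumer = Host(), Host()
      mem = SharedMemory([producer, consumer])
      consumer.cache["blk0.meta"] = {"ready": False}   # stale cached metadata
      mem.write_message(producer, "blk0", b"payload")
      assert "blk0.meta" not in consumer.cache         # consumer must re-read the metadata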
  • Patent number: 12360901
    Abstract: Systems and methods are disclosed including a processing device operatively coupled to a memory device. The processing device performs operations comprising receiving, from a memory sub-system controller, a first read command and a second read command; determining that the memory device is in a suspended state; and responsive to determining that a first address range specified by the first read command does not overlap with a second address range specified by the second read command, issuing, to the memory device, the first read command and the second read command collectively.
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: July 15, 2025
    Assignee: Micron Technology, Inc.
    Inventors: Sundararajan N. Sankaranarayanan, Eric Lee
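    Illustrative sketch: the decision in the abstract reduces to an address-range overlap test. While the memory device is in a suspended state, two queued read commands are issued together only if their address ranges do not overlap. A minimal Python version under assumed field names:
      # Each read command is assumed to cover the range [start, start + length).
      def ranges_overlap(cmd_a, cmd_b):
          a_start, a_end = cmd_a["start"], cmd_a["start"] + cmd_a["length"]
          b_start, b_end = cmd_b["start"], cmd_b["start"] + cmd_b["length"]
          return a_start < b_end and b_start < a_end

      def issue_reads(device_suspended, first_read, second_read):
          if device_suspended and not ranges_overlap(first_read, second_read):
              return [first_read, second_read]   # issue the two reads collectively
          return [first_read]                    # otherwise issue one at a time

      issued = issue_reads(True, {"start": 0, "length": 8}, {"start": 16, "length": 8})
      assert len(issued) == 2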
  • Patent number: 12339788
    Abstract: Embodiments are provided for protecting boot block space in a memory device. Such a memory device may include a memory array having a protected portion and a serial interface controller. The memory device may have a register that enables or disables access to the portion when data indicating whether to enable or disable access to the portion is written into the register via a serial data in (SI) input.
    Type: Grant
    Filed: January 8, 2024
    Date of Patent: June 24, 2025
    Inventor: Theodore T. Pekny
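    Illustrative sketch: a minimal Python model of the gating behavior described above, where a register written over the serial interface enables or disables access to the protected boot block portion of the array. Names and sizes are assumptions.
      # Hypothetical protection register gating writes to a boot block region.
      class ProtectedMemory:
          def __init__(self, size, protected_range):
              self.array = bytearray(size)
              self.protected_range = protected_range   # (start, end) of the boot block
              self.boot_block_unlocked = False         # register bit, locked by default

          def write_register_via_si(self, unlock_bit):
              # Data shifted in on the serial data in (SI) input sets the register.
              self.boot_block_unlocked = bool(unlock_bit)

          def write(self, address, value):
              start, end = self.protected_range
              if start <= address < end and not self.boot_block_unlocked:
                  raise PermissionError("boot block is protected")
              self.array[address] = value

      mem = ProtectedMemory(1024, (0, 128))
      mem.write_register_via_si(1)   # enable access to the protected portion
      mem.write(0, 0xAB)             # now permitted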
  • Patent number: 12332792
    Abstract: A system may include multiple coherent agents, where a given coherent agent includes one or more caches configured to cache data. Memory controller circuitry may control one or more memory circuits from which the one or more caches are configured to cache data and maintain a directory that tracks which of the multiple coherent agent circuits is caching copies of a plurality of cache blocks and states of the cached copies in the multiple coherent agent circuits. A first agent may transmit a first request for a first cache block. The first agent may store, in request buffer circuitry, information corresponding to the first request, then detect a second snoop from a second agent circuit to the first cache block. The first agent may absorb the second snoop, including storing information corresponding to the second snoop with the information corresponding to the first request in the request buffer circuitry.
    Type: Grant
    Filed: February 20, 2024
    Date of Patent: June 17, 2025
    Assignee: Apple Inc.
    Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
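    Illustrative sketch: a simplified Python picture of the "snoop absorption" idea in the abstract. A pending request's buffer entry also records a later snoop that targets the same cache block, so the snoop can be serviced once the request completes. This illustrates the bookkeeping only, not the coherence protocol itself; names are assumptions.
      # Request buffer that absorbs a snoop aimed at an outstanding cache block.
      class RequestBuffer:
          def __init__(self):
              self.entries = {}   # cache block address -> buffered request info

          def record_request(self, block_addr, request):
              self.entries[block_addr] = {"request": request, "absorbed_snoop": None}

          def on_snoop(self, block_addr, snoop):
              entry = self.entries.get(block_addr)
              if entry is not None:
                  entry["absorbed_snoop"] = snoop   # absorb: defer until the fill returns
                  return "absorbed"
              return "process_now"                  # no outstanding request for this block

      buf = RequestBuffer()
      buf.record_request(0x80, {"type": "read"})
      assert buf.on_snoop(0x80, {"from": "agent2"}) == "absorbed"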
  • Patent number: 12332797
    Abstract: A method of operating a storage module, the method including setting a characteristic value based on information received from a host, the information including information related to a size of write data in units of cache lines, and successively receiving the write data in units of the cache lines based on a single write command received from the host.
    Type: Grant
    Filed: April 27, 2023
    Date of Patent: June 17, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Doohwan Oh, Wonjae Shin, Eunbyeol Ko
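    Illustrative sketch: a small Python model of the transfer pattern above. The storage module is configured with a characteristic value derived from the host's cache-line-size information, then receives the write data for a single write command successively in cache-line-sized units. Names are assumptions.
      # Hypothetical storage module receiving one write as cache-line-sized chunks.
      class StorageModule:
          def __init__(self):
              self.cache_line_size = None
              self.buffer = bytearray()

          def set_characteristic(self, host_info):
              # Characteristic value set from host-provided size information.
              self.cache_line_size = host_info["cache_line_size"]

          def handle_write_command(self, write_data):
              # Successively receive the write data in units of cache lines.
              for offset in range(0, len(write_data), self.cache_line_size):
                  self.buffer += write_data[offset:offset + self.cache_line_size]

      module = StorageModule()
      module.set_characteristic({"cache_line_size": 64})
      module.handle_write_command(bytes(256))   # delivered as four 64-byte units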
  • Patent number: 12327021
    Abstract: A storage device including a memory interface chip. In some embodiments, the storage device includes: a controller integrated circuit; a first memory die; and a first converter integrated circuit, the first converter integrated circuit having a first external interface and a second external interface, the first external interface being a serial interface, the first external interface being connected to the controller integrated circuit, and the second external interface being a memory interface connecting the first converter integrated circuit to the first memory die.
    Type: Grant
    Filed: August 26, 2022
    Date of Patent: June 10, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Young deok Kim, Pyeongwoo Lee, Vipin Kumar Agrawal
  • Patent number: 12326811
    Abstract: In part, the disclosure relates to a fault tolerant system. The system may include one or more shared memory complexes, each memory complex comprising a group of M computer-readable memory storage devices; one or more cache coherent switches comprising two or more host ports and one or more downstream device ports, the cache coherent switch in electrical communication with the one or more shared memory storage devices; a first management processor in electrical communication with the cache coherent switch; a first compute node comprising a first processor and a first cache, the first compute node in electrical communication with the one or more cache coherent switches and the one or more shared memory complexes; a second compute node comprising a second processor and a second cache, the second compute node in electrical communication with the one or more cache coherent switches and the one or more shared memory complexes.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: June 10, 2025
    Assignee: STRATUS TECHNOLOGIES IRELAND LTD.
    Inventors: Andrew Alden, Chester Pawlowski, Christopher Cotton, John Chaves
  • Patent number: 12321283
    Abstract: According to one embodiment, when a read request received from a host includes a first identifier indicative of a first region, a memory system obtains a logical address from the received read request, obtains a physical address corresponding to the obtained logical address from a logical-to-physical address translation table which manages mapping between logical addresses and physical addresses of the first region, and reads data from the first region, based on the obtained physical address. When the received read request includes a second identifier indicative of a second region, the memory system obtains physical address information from the read request, and reads data from the second region, based on the obtained physical address information.
    Type: Grant
    Filed: March 1, 2024
    Date of Patent: June 3, 2025
    Assignee: Kioxia Corporation
    Inventors: Hideki Yoshida, Shinichi Kanno
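    Illustrative sketch: the read path in this abstract branches on a region identifier; reads to the first region are translated through a logical-to-physical table, while reads to the second region use physical address information carried in the request itself. A minimal Python version with assumed names:
      # Dispatch a read request based on the region identifier it carries.
      def handle_read(request, l2p_table, region1, region2):
          if request["region"] == 1:
              # Translate the logical address via the L2P table, then read region 1.
              physical = l2p_table[request["logical_address"]]
              return region1[physical]
          if request["region"] == 2:
              # The request itself supplies the physical address information.
              return region2[request["physical_address"]]
          raise ValueError("unknown region identifier")

      l2p = {0x10: 0x3}
      region1 = {0x3: b"translated read"}
      region2 = {0x7: b"direct read"}
      assert handle_read({"region": 1, "logical_address": 0x10}, l2p, region1, region2) == b"translated read"
      assert handle_read({"region": 2, "physical_address": 0x7}, l2p, region1, region2) == b"direct read"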
  • Patent number: 12299299
    Abstract: A memory system includes a computing device and a memory device. The memory device includes: first connection ports; a second connection port; a compute express link switch connected to the first connection ports; and a processor connected to the second connection port and the compute express link switch. The processor is configured to obtain a current configuration of the first connection ports from the computing device through the second connection port to update an original configuration stored by the processor, wherein the current configuration indicates an electronic device connected to each of the first connection ports.
    Type: Grant
    Filed: March 11, 2024
    Date of Patent: May 13, 2025
    Assignees: INVENTEC (PUDONG) TECHNOLOGY CORPORATION, INVENTEC CORPORATION
    Inventors: Kuo-Shu Chiu, Liang-Hsi Chien, Jhih-Ting Chen, Chain Wu Lee
  • Patent number: 12299285
    Abstract: A buffer integrated circuit (IC) chip is disclosed. The buffer IC chip includes host interface circuitry to receive a request from at least one host. The request includes at least one command to perform a memory compression operation on first uncompressed data that is stored in a first memory region. Compression circuitry, in response to the at least one command, compresses the first uncompressed data to first compressed data. The first compressed data is transferred to a second memory region.
    Type: Grant
    Filed: July 6, 2023
    Date of Patent: May 13, 2025
    Assignee: Rambus Inc.
    Inventors: Evan Lawrence Erickson, Christopher Haywood, Craig E. Hampel
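    Illustrative sketch: a compact Python stand-in for the operation described above. On a host command, data in a first memory region is compressed and the result is placed in a second region; zlib is used here purely as a placeholder for the on-chip compression circuitry.
      import zlib

      # Software stand-in for the buffer chip's memory compression command.
      def handle_compress_command(memory_regions, src_region, dst_region):
          uncompressed = memory_regions[src_region]
          memory_regions[dst_region] = zlib.compress(uncompressed)
          return len(memory_regions[dst_region])

      regions = {"region_a": b"x" * 4096, "region_b": b""}
      compressed_size = handle_compress_command(regions, "region_a", "region_b")
      assert compressed_size < len(regions["region_a"])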
  • Patent number: 12292821
    Abstract: A video memory management method is provided. The method includes: determining priorities of a plurality of machine learning tasks executed by a graphics processing unit; if video memory resources are to be allocated for a higher-priority task, and an amount of allocatable video memory resources is smaller than an amount of video memory resources required by the higher-priority task, releasing at least a part of video memory resources occupied by a lower-priority task; and allocating video memory resources to the higher-priority task, wherein the higher-priority task is executed at least according to tensor data in a video memory space.
    Type: Grant
    Filed: April 25, 2023
    Date of Patent: May 6, 2025
    Assignee: Alibaba Group Holding Limited
    Inventors: Wengcong Xiao, Shiru Ren, Yong Li
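    Illustrative sketch: a short Python version of the allocation policy in the abstract. When a higher-priority machine learning task needs more video memory than is free, memory held by lower-priority tasks is released until the request fits. Task names and bookkeeping are invented for illustration.
      # Priority-based video memory allocator (higher number = higher priority).
      def allocate(tasks, free_memory, task_name, needed):
          requester_priority = tasks[task_name]["priority"]
          if needed > free_memory:
              # Release memory held by lower-priority tasks, lowest priority first.
              for info in sorted(tasks.values(), key=lambda t: t["priority"]):
                  if info["priority"] < requester_priority and needed > free_memory:
                      free_memory += info["allocated"]
                      info["allocated"] = 0
          if needed > free_memory:
              raise MemoryError("not enough video memory even after releasing")
          tasks[task_name]["allocated"] += needed
          return free_memory - needed

      tasks = {"train": {"priority": 2, "allocated": 0},
               "batch_infer": {"priority": 1, "allocated": 6}}
      remaining = allocate(tasks, free_memory=2, task_name="train", needed=5)
      assert tasks["batch_infer"]["allocated"] == 0 and remaining == 3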
  • Patent number: 12287739
    Abstract: Address translation is performed to translate a virtual address targeted by a memory request (e.g., a load or memory request for data or an instruction) to a physical address. This translation is performed using an address translation buffer, e.g., a translation lookaside buffer (TLB). One or more actions are taken to reduce data access latencies for memory requests in the event of a TLB miss where the virtual address to physical address translation is not in the TLB. Examples of actions that are performed in various implementations in response to a TLB miss include bypassing level 1 (L1) and level 2 (L2) caches in the memory system, and speculatively sending the memory request to the L2 cache while checking whether the memory request is satisfied by the L1 cache.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: April 29, 2025
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jagadish B Kotra, John Kalamatianos
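    Illustrative sketch: a schematic Python rendering of the two responses to a TLB miss named in the abstract, bypassing the L1/L2 caches, or speculatively sending the request to the L2 cache while the L1 lookup proceeds. It illustrates the control flow only; structure names are assumptions.
      # Control flow for a memory request after the address translation lookup.
      def handle_request(tlb, l1_cache, virtual_addr, policy="speculate_l2"):
          if virtual_addr in tlb:
              return ("tlb_hit", tlb[virtual_addr])
          # TLB miss: take one of the latency-hiding actions from the abstract.
          if policy == "bypass_caches":
              return ("tlb_miss", ["bypass_l1_and_l2", "send_request_toward_memory"])
          # Speculatively send the request to L2 while checking whether L1 has it.
          actions = ["send_to_l2_speculatively"]
          if virtual_addr in l1_cache:
              actions.append("cancel_speculative_l2_request_on_l1_hit")
          return ("tlb_miss", actions)

      print(handle_request(tlb={}, l1_cache={0x1000}, virtual_addr=0x1000))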
  • Patent number: 12277069
    Abstract: Compressing memory addresses within an execution trace via reference to a translation lookaside buffer (TLB) entry. A microprocessor identifies a TLB entry within a TLB slot, the TLB entry mapping a virtual memory page to a physical memory page. The microprocessor initiates logging of the TLB entry by initiating logging of at least a virtual address of the virtual memory page, and an identifier that uniquely identifies the TLB entry from among a plurality of live TLB entries. Subsequently, the microprocessor identifies a cache entry within a memory cache slot, the cache entry comprising a physical memory address corresponding to a cache line. The microprocessor initiates logging of the cache entry by matching a physical memory page identification portion of the physical memory address with the TLB entry, and then initiates logging of at least the identifier for the TLB entry and an offset portion.
    Type: Grant
    Filed: April 3, 2024
    Date of Patent: April 15, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jordi Mola
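    Illustrative sketch: a small Python model of the compression scheme the abstract outlines. A TLB entry is logged once with an identifier that uniquely names it, and later cache entries are logged as that identifier plus a page offset instead of a full physical address. Record formats and names are assumptions.
      # Trace logger that references logged TLB entries instead of full addresses.
      PAGE_SIZE = 4096

      class TraceLogger:
          def __init__(self):
              self.log = []
              self.tlb_ids = {}   # physical page number -> logged TLB-entry identifier

          def log_tlb_entry(self, entry_id, virtual_page, physical_page):
              self.tlb_ids[physical_page] = entry_id
              self.log.append(("tlb_entry", entry_id, virtual_page))

          def log_cache_entry(self, physical_address):
              page, offset = divmod(physical_address, PAGE_SIZE)
              entry_id = self.tlb_ids[page]   # match the page to a live, logged TLB entry
              self.log.append(("cache_entry", entry_id, offset))   # identifier + offset only

      logger = TraceLogger()
      logger.log_tlb_entry(entry_id=7, virtual_page=0x42, physical_page=0x9A)
      logger.log_cache_entry(0x9A * PAGE_SIZE + 0x140)
      assert logger.log[-1] == ("cache_entry", 7, 0x140)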
  • Patent number: 12265457
    Abstract: Methods, computer program products, computer systems, and the like are disclosed that provide for scalable deduplication in an efficient and effective manner. For example, such methods, computer program products, and computer systems can include determining whether a source data store and a replicated data store are unsynchronized and, in response to a determination that the source data store and the replicated data store are unsynchronized, performing a resynchronization operation. The source data stored in the source data store is replicated to replicated data in the replicated data store. The resynchronization operation resynchronizes the source data and the replicated data.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: April 1, 2025
    Assignee: Cohesity Inc.
    Inventors: Rushikesh Patil, Sunil Hasbe
  • Patent number: 12260085
    Abstract: A write pattern of a host device is used to dynamically determine when to initiate a garbage collection process on a data storage device. The write pattern of the host device is based on a number of I/O commands received from the host device and on a number of available memory blocks in the data storage device. If the write pattern of the host device indicates that fewer than a threshold number of memory blocks will be available after a predetermined number of additional I/O commands are received, the garbage collection process is initiated. An amount of valid data that is transferred from one memory location to another memory location during the garbage collection process is also dynamically determined. Thus, a garbage collection process may be tailored to a specific host device.
    Type: Grant
    Filed: July 28, 2023
    Date of Patent: March 25, 2025
    Assignee: Sandisk Technologies, Inc.
    Inventors: Anamika Choudhary, Disha Sharma
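    Illustrative sketch: the trigger condition above projects how many memory blocks will remain free after a predetermined number of further host I/O commands and starts garbage collection if that projection falls below a threshold. A minimal Python version; all parameter values are invented.
      # Garbage-collection trigger driven by the host's observed write pattern.
      def should_start_gc(free_blocks, blocks_per_command, upcoming_commands, threshold):
          projected_free = free_blocks - blocks_per_command * upcoming_commands
          return projected_free < threshold

      # 40 free blocks, host consumes ~2 blocks per command, look 16 commands ahead.
      if should_start_gc(free_blocks=40, blocks_per_command=2,
                         upcoming_commands=16, threshold=10):
          print("start garbage collection now")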
  • Patent number: 12253949
    Abstract: A data storage device implements a Zoned Namespace (ZNS) storage architecture. The data storage device delays the execution of write commands that are received out of sequence instead of rejecting the write commands. The write commands that are received out of sequence are reordered according to a logical block address (LBA) associated with each write command. The data storage device also checks for deadlock conditions that may arise due to the execution of the write commands being delayed and/or due to the write commands being reordered.
    Type: Grant
    Filed: July 26, 2023
    Date of Patent: March 18, 2025
    Assignee: Sandisk Technologies, Inc.
    Inventors: Rotem Sela, Amir Segev
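    Illustrative sketch: a simplified Python model of the queueing behavior described above for a single zone. Write commands that arrive out of sequence are held and reordered by LBA rather than rejected, and execution proceeds only when the next expected LBA is present; a permanently missing LBA is the kind of stall the deadlock check must catch. Names are assumptions.
      import heapq

      # Per-zone write queue that reorders out-of-sequence writes by LBA.
      class ZoneWriteQueue:
          def __init__(self, write_pointer):
              self.write_pointer = write_pointer   # next LBA the zone expects
              self.pending = []                    # min-heap of (lba, data)

          def submit(self, lba, data):
              heapq.heappush(self.pending, (lba, data))   # delay instead of rejecting
              return self.drain()

          def drain(self):
              executed = []
              # Execute writes only while the lowest pending LBA matches the pointer.
              while self.pending and self.pending[0][0] == self.write_pointer:
                  lba, _data = heapq.heappop(self.pending)
                  executed.append(lba)
                  self.write_pointer += 1
              return executed

      zone = ZoneWriteQueue(write_pointer=100)
      assert zone.submit(101, b"b") == []          # out of sequence: held, not rejected
      assert zone.submit(100, b"a") == [100, 101]  # gap filled: both execute in LBA order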
  • Patent number: 12254208
    Abstract: An apparatus comprises a processing device configured to monitor a health status of storage devices that are part of a virtual disk and to identify a first subset of the storage devices that have a first health status and a second subset of the storage devices that have a second health status. The processing device is also configured, responsive to determining that there is sufficient available storage capacity on the second subset of the storage devices to copy data from used storage capacity on the first subset of the storage devices, to resize the virtual disk to a storage capacity determined as a function of storage capacities of the second subset of the storage devices allocated to the virtual disk and to copy data from the used storage capacity on the first subset of the storage devices to the available storage capacity on the second subset of the storage devices.
    Type: Grant
    Filed: February 6, 2023
    Date of Patent: March 18, 2025
    Assignee: Dell Products L.P.
    Inventors: Parminder Singh Sethi, Suren Kumar, Akshita Das
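    Illustrative sketch: the decision in the abstract checks whether the healthy subset of a virtual disk's drives has enough free capacity to hold the data currently on the unhealthy subset; if so, the virtual disk is resized to the healthy drives' capacity and the data is copied over. A compact Python version with assumed field names:
      # Health-based resize and migration check for a virtual disk.
      def plan_resize(drives):
          healthy = [d for d in drives if d["health"] == "ok"]
          failing = [d for d in drives if d["health"] != "ok"]
          used_on_failing = sum(d["used"] for d in failing)
          free_on_healthy = sum(d["capacity"] - d["used"] for d in healthy)
          if used_on_failing <= free_on_healthy:
              new_capacity = sum(d["capacity"] for d in healthy)
              return {"resize_to": new_capacity, "migrate": used_on_failing}
          return None   # not enough room to shrink onto the healthy subset

      drives = [{"health": "ok", "capacity": 500, "used": 200},
                {"health": "ok", "capacity": 500, "used": 100},
                {"health": "predicted_failure", "capacity": 500, "used": 150}]
      print(plan_resize(drives))   # {'resize_to': 1000, 'migrate': 150}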
  • Patent number: 12236090
    Abstract: Methods, systems, and apparatuses include receiving a current free space value and a historic delta value. A delta value is calculated using the current free space value, a target free space value, and the historic delta value. A delta region is determined using the delta value. A new host rate is calculated using the determined delta region, the calculated delta value, and the historic delta value. The new host rate is sent to a host device causing the host device to change a current host rate to the new host rate.
    Type: Grant
    Filed: April 8, 2024
    Date of Patent: February 25, 2025
    Assignee: MICRON TECHNOLOGY, INC.
    Inventor: Donghua Zhou
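    Illustrative sketch: a schematic Python rendering of the feedback loop in the abstract. A delta is computed from the current free space, the target free space, and the historic delta; the delta selects a delta region; and a new host rate is derived and sent back to the host. The blending weight, region thresholds, and scaling factors below are invented purely for illustration.
      # Rate-control loop; the specific thresholds and factors are made up.
      def compute_new_host_rate(current_free, target_free, historic_delta, current_rate):
          delta = (current_free - target_free) + 0.5 * historic_delta   # blend with history
          if delta > 100:        # comfortably above the free-space target
              region, factor = "high", 1.2
          elif delta > -100:     # near the target
              region, factor = "nominal", 1.0
          else:                  # free space falling below the target
              region, factor = "low", 0.8
          return region, delta, current_rate * factor

      region, delta, new_rate = compute_new_host_rate(
          current_free=900, target_free=1000, historic_delta=-50, current_rate=400)
      print(region, new_rate)   # low 320.0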
  • Patent number: 12236109
    Abstract: A cloud computing system includes cloud orchestrator circuitry and fabric manager circuitry. The cloud orchestrator circuitry receives an input application and determines a task graph, a data graph, and a function popularity heap parameter for the input application. The task graph comprises an indication of function interdependency of functions of the input application, the data graph comprises an indication of data interdependency of the functions, and the function popularity heap parameter corresponds to a re-usability index for the functions. The fabric manager circuitry allocates a first programmable integrated circuit (IC) device to perform a first function of the input application based on the task graph, the data graph, and the function popularity heap parameter.
    Type: Grant
    Filed: May 24, 2023
    Date of Patent: February 25, 2025
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Pratik Mishra, Sergey Blagodurov, Atul Kumar Sujayendra Sandur