Patents Examined by Jae U Yu
  • Patent number: 12366957
    Abstract: A host system includes an interface for coupling the host system to a data storage device. The host system also includes one or more processors, and memory storing one or more programs for execution by the one or more processors. The one or more programs include instructions for: determining if a retrim is needed for the data storage device; and in accordance with a determination that the retrim is needed: identifying a time to initiate a new trim on the data storage device; and causing the new trim on the data storage device at the time identified.
    Type: Grant
    Filed: July 12, 2023
    Date of Patent: July 22, 2025
    Assignee: Sandisk Technologies, Inc.
    Inventors: Eran Erez, Joseph R. Meza, Dylan B. Fairchild
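    Illustrative sketch (not part of the patent): a minimal Python rendering of the host-side retrim decision described above; the interval/threshold policy, the idle-window timing, and the device.trim() call are all assumptions for illustration.
    ```python
    import time

    RETRIM_INTERVAL_S = 24 * 3600        # assumed policy: retrim at most once a day
    DELETED_BYTES_THRESHOLD = 1 << 30    # assumed policy: retrim after ~1 GiB of deletions

    def retrim_needed(last_trim_ts, deleted_bytes_since_trim, now=None):
        """Host-side check of whether a new trim pass is warranted."""
        now = time.time() if now is None else now
        return (now - last_trim_ts > RETRIM_INTERVAL_S
                or deleted_bytes_since_trim > DELETED_BYTES_THRESHOLD)

    def maybe_schedule_retrim(device, last_trim_ts, deleted_bytes_since_trim, idle_ts):
        """If a retrim is needed, identify a time (an idle window) and issue the trim then."""
        if retrim_needed(last_trim_ts, deleted_bytes_since_trim):
            time.sleep(max(0.0, idle_ts - time.time()))   # wait for the identified time
            device.trim()                                  # placeholder for the real trim call

    print(retrim_needed(last_trim_ts=time.time() - 2 * 24 * 3600, deleted_bytes_since_trim=0))  # True
    ```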
  • Patent number: 12360900
    Abstract: A processor includes a processing core configured to process each of a plurality of requests by accessing a corresponding one of a first memory and a second memory, a latency monitor configured to generate first latency information and second latency information, the first latency information comprising a first access latency to the first memory, and the second latency information comprising a second access latency to the second memory, a plurality of cache ways divided into a first partition and a second partition, and a decision engine configured to allocate each of the plurality of cache ways to one of the first partition and the second partition, based on the first latency information and the second latency information.
    Type: Grant
    Filed: February 28, 2024
    Date of Patent: July 15, 2025
    Assignees: SAMSUNG ELECTRONICS CO., LTD., Daegu Gyeongbuk Institute Of Science And Technology
    Inventors: Jin Jung, Daehoon Kim, Hwanjun Lee, Jonggeon Lee, Jinin So
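    Illustrative sketch (not part of the patent): one possible decision-engine policy, splitting cache ways between the two partitions in proportion to the monitored latencies; the proportional rule and numbers are assumptions.
    ```python
    def allocate_ways(num_ways, latency_mem1_ns, latency_mem2_ns):
        """Give the slower memory more ways, keeping at least one way per partition."""
        total = latency_mem1_ns + latency_mem2_ns
        ways_for_mem1 = max(1, min(num_ways - 1, round(num_ways * latency_mem1_ns / total)))
        ways_for_mem2 = num_ways - ways_for_mem1
        return {"partition_mem1": ways_for_mem1, "partition_mem2": ways_for_mem2}

    print(allocate_ways(16, latency_mem1_ns=80, latency_mem2_ns=300))
    # -> {'partition_mem1': 3, 'partition_mem2': 13}
    ```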
  • Patent number: 12346252
    Abstract: A first NIC monitors a key-value cache associated with an LLM executed by a compute node that includes the first NIC and an accelerator. The key-value cache is stored in a memory associated with the accelerator. Responsive to detecting that the key-value cache is updated by the accelerator, the first NIC transfers a copy of the key-value cache update to a remote storage node. The key-value cache is deleted from the memory after the corresponding user query is inferred. Responsive to receiving a follow-up user query, the first NIC determines a storage location on the remote storage node that stores the key-value cache corresponding to the user query and sends a KV-cache-transfer request to a second NIC on the remote storage node, the KV-cache-transfer request specifying the storage location, thereby enabling the second NIC to transfer the key-value cache corresponding to the user query from the specified storage location to the memory.
    Type: Grant
    Filed: April 3, 2024
    Date of Patent: July 1, 2025
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Aditya Dhakal, Pedro H. R. Bruel, Gourav Rattihalli, Sai Rahul Chalamalasetti, Dejan S. Milojicic
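    Illustrative sketch (not part of the patent): the KV-cache offload/restore flow modelled with plain Python objects standing in for the NICs and the remote storage node; class names and the location scheme are invented.
    ```python
    class RemoteStorageNode:
        def __init__(self):
            self.blobs = {}                       # storage_location -> KV cache bytes

    class FirstNIC:
        def __init__(self, remote):
            self.remote = remote
            self.locations = {}                   # session id -> storage location

        def on_kv_cache_updated(self, session_id, kv_cache):
            """Copy the updated KV cache out of accelerator memory to remote storage."""
            location = f"kv/{session_id}"
            self.remote.blobs[location] = bytes(kv_cache)
            self.locations[session_id] = location

        def on_follow_up_query(self, session_id):
            """Look up the storage location and have the remote side ship the cache back."""
            location = self.locations[session_id]
            return self.remote.blobs[location]    # restored into accelerator memory

    remote = RemoteStorageNode()
    nic = FirstNIC(remote)
    nic.on_kv_cache_updated("user-1", b"\x00\x01\x02")
    assert nic.on_follow_up_query("user-1") == b"\x00\x01\x02"
    ```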
  • Patent number: 12332793
    Abstract: A microprocessor includes a cache memory, a store queue, and a load/store unit. Each entry of the store queue holds store data associated with a store instruction. The load/store unit, during execution of a load instruction, makes a determination that an entry of the store queue holds store data that includes some but not all bytes of load data requested by the load instruction, cancels execution of the load instruction in response to the determination, and writes to an entry of a structure from which the load instruction is subsequently issuable for re-execution an identifier of a store instruction that is older in program order than the load instruction and an indication that the load instruction is not eligible to re-execute until the identified older store instruction updates the cache memory with store data.
    Type: Grant
    Filed: May 20, 2024
    Date of Patent: June 17, 2025
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
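    Illustrative sketch (not part of the patent): a simplified model of the partial-overlap check, where a load that gets only some of its bytes from a queued store is cancelled and must wait for that older store.
    ```python
    def bytes_covered(addr, size):
        return set(range(addr, addr + size))

    def check_forwarding(load_addr, load_size, store_addr, store_size):
        load_bytes = bytes_covered(load_addr, load_size)
        store_bytes = bytes_covered(store_addr, store_size)
        overlap = load_bytes & store_bytes
        if not overlap:
            return "no_overlap"            # load can read the cache directly
        if load_bytes <= store_bytes:
            return "forward_from_store"    # store supplies all requested bytes
        return "cancel_and_wait"           # partial overlap: re-issue after the store commits

    print(check_forwarding(load_addr=0x100, load_size=8, store_addr=0x104, store_size=8))
    # -> cancel_and_wait
    ```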
  • Patent number: 12333177
    Abstract: Devices and techniques are disclosed wherein a data storage device (DSD) generates ranking information corresponding to user data stored at a non-volatile memory of the DSD. The ranking information can be used by the DSD to form a frequently used files list, which can be read by a host system upon initialization of the DSD with the host system and displayed to a user at the host system.
    Type: Grant
    Filed: July 28, 2023
    Date of Patent: June 17, 2025
    Assignee: Sandisk Technologies, Inc.
    Inventors: Rohith Radhakrishnan, Alvin Gomez
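    Illustrative sketch (not part of the patent): a toy ranking structure for the frequently-used-files list; the counting policy and names are assumptions.
    ```python
    from collections import Counter

    class UsageRanker:
        def __init__(self, top_n=10):
            self.counts = Counter()
            self.top_n = top_n

        def record_access(self, path):
            self.counts[path] += 1

        def frequently_used_files(self):
            """Ranking information the host could read at initialization and display."""
            return [path for path, _ in self.counts.most_common(self.top_n)]

    r = UsageRanker(top_n=3)
    for p in ["a.doc", "b.mp4", "a.doc", "c.txt", "a.doc", "b.mp4"]:
        r.record_access(p)
    print(r.frequently_used_files())   # -> ['a.doc', 'b.mp4', 'c.txt']
    ```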
  • Patent number: 12333159
    Abstract: Implementations described herein relate to abrupt power loss management. In some implementations, a memory device may receive a peripheral component interconnect express reset (PERST) signal. The memory device may perform a write protect operation based on receiving the PERST signal. The memory device may initiate a reduced power consumption state of the memory device based on a completion of the write protect operation.
    Type: Grant
    Filed: November 16, 2023
    Date of Patent: June 17, 2025
    Assignee: Micron Technology, Inc.
    Inventor: Marco Redaelli
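    Illustrative sketch (not part of the patent): the sequencing only, write protect first and power reduction only after it completes; the state names and handler are invented.
    ```python
    class MemoryDevice:
        def __init__(self):
            self.write_protected = False
            self.power_state = "active"

        def write_protect(self):
            # e.g. flush in-flight writes and block new ones
            self.write_protected = True

        def on_perst(self):
            self.write_protect()                     # step 1: protect data first
            self.power_state = "reduced_power"       # step 2: only after completion

    dev = MemoryDevice()
    dev.on_perst()
    print(dev.write_protected, dev.power_state)      # -> True reduced_power
    ```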
  • Patent number: 12332785
    Abstract: Methods, systems, and computer program products for implementing a precomputed availability cache that is a database of precomputed availabilities for availability searches. An availability request is received. Segmentation data that includes one or more segments is obtained. An associated segmentation entry within a precomputed availability cache is determined for each of the one or more segments. A validity check of the availability data that indicates that at least one of the availabilities for the segments is invalid is performed. An availability for the at least one of the availabilities for the segments that is invalid is determined by accessing an inventory database replication and processing an availability computation for the at least one of the availabilities. The cache is updated with the determined availability for the at least one of the availabilities. Availability results that include availabilities for each of the one or more segments are provided.
    Type: Grant
    Filed: May 4, 2022
    Date of Patent: June 17, 2025
    Assignee: Amadeus S.A.S.
    Inventors: Cyril Colombel, Francesco Fronte, Giovanni Giorgi, Camille Brugel, Pierre Feuillet
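    Illustrative sketch (not part of the patent): a cache-with-validity-check lookup in which stale entries are recomputed from an inventory replica and written back; the age-based validity rule and the replica stub are assumptions.
    ```python
    import time

    class InventoryReplica:
        def compute_availability(self, segment):
            return f"9 seats on {segment}"           # stand-in for the real computation

    def get_availabilities(segments, cache, inventory, max_age_s=3600):
        """cache maps segment -> (availability, computed_at); invalid entries are recomputed."""
        results, now = {}, time.time()
        for seg in segments:
            entry = cache.get(seg)
            if entry is not None and now - entry[1] <= max_age_s:
                results[seg] = entry[0]              # validity check passed: serve from cache
            else:
                fresh = inventory.compute_availability(seg)   # fall back to the replication
                cache[seg] = (fresh, now)            # update the precomputed cache
                results[seg] = fresh
        return results

    cache = {}
    print(get_availabilities(["NCE-PAR"], cache, InventoryReplica()))
    ```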
  • Patent number: 12321285
    Abstract: A caching system including a first sub-cache, a second sub-cache, coupled in parallel with the first sub-cache, for storing cache data evicted from the first sub-cache and write-memory commands that are not cached in the first sub-cache, and a cache controller configured to receive two or more cache commands, determine a conflict exists between the received two or more cache commands, determine a conflict resolution between the received two or more cache commands, and send the two or more cache commands to the first sub-cache and the second sub-cache.
    Type: Grant
    Filed: May 9, 2024
    Date of Patent: June 3, 2025
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
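    Illustrative sketch (not part of the patent): one way a controller might resolve a conflict between two commands arriving together, serializing commands that touch the same line and dispatching the rest in parallel; the write-first ordering is an assumed policy.
    ```python
    def resolve(cmd_a, cmd_b):
        """Each command is a dict like {'op': 'read' or 'write', 'line': int}."""
        if cmd_a["line"] == cmd_b["line"]:
            # Conflict: let the write go first so the read observes the newest data.
            ordered = sorted([cmd_a, cmd_b], key=lambda c: c["op"] != "write")
            return {"conflict": True, "order": ordered}
        return {"conflict": False, "order": [cmd_a, cmd_b]}   # send to sub-caches in parallel

    print(resolve({"op": "read", "line": 42}, {"op": "write", "line": 42}))
    ```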
  • Patent number: 12314611
    Abstract: Some embodiments provide a method for, at a network interface controller (NIC) of a computer, accessing data in a network. From the computer, the method receives a request to access data stored at a logical memory address. The method translates the logical memory address into a memory address of a particular network device storing the requested data. The method sends a data message to the particular network device to retrieve the requested data.
    Type: Grant
    Filed: February 2, 2024
    Date of Patent: May 27, 2025
    Assignee: VMWare LLC
    Inventors: Alex Markuze, Shay Vargaftik, Igor Golikov, Yaniv Ben-Itzhak, Avishay Yanai
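    Illustrative sketch (not part of the patent): a toy model of the NIC-side translation step, mapping a logical address to a network device and offset before sending the fetch message; the device table and span are invented.
    ```python
    DEVICE_SPAN = 1 << 30                        # assume each device exposes 1 GiB
    DEVICE_TABLE = ["10.0.0.11", "10.0.0.12"]    # hypothetical network device addresses

    def translate(logical_addr):
        device = DEVICE_TABLE[logical_addr // DEVICE_SPAN]
        offset = logical_addr % DEVICE_SPAN
        return device, offset

    def read_remote(logical_addr, length):
        device, offset = translate(logical_addr)
        # A real NIC would emit a data message to the device; here we just describe it.
        return {"to": device, "offset": offset, "length": length}

    print(read_remote(logical_addr=(1 << 30) + 4096, length=64))
    # -> {'to': '10.0.0.12', 'offset': 4096, 'length': 64}
    ```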
  • Patent number: 12293113
    Abstract: A method for storing and reading cached data and a device are provided. The method for storing and reading cached data includes: in response to receiving to-be-cached data, segmenting the to-be-cached data sequentially into at least two pieces of first fragmented data; and writing the first fragmented data sequentially into first storage particles of at least two storage blocks in a time division multiplexing manner, ensuring that the first fragmented data written into the respective first storage particles are different from each other. The fragmented data are stored and read in the time division multiplexing manner, and the fragmented data corresponding to one complete piece of data are stored in different storage blocks, so multiple pieces of data can be stored and read within a single complete storage-and-read process, thereby reducing read and write time overhead when executing a large number of buffered data storage and read tasks.
    Type: Grant
    Filed: February 22, 2024
    Date of Patent: May 6, 2025
    Assignee: YUSUR Technology Co., Ltd.
    Inventors: Jian Jin, Shuanglin Zhang
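    Illustrative sketch (not part of the patent): the striping idea in Python lists, fragments of one piece of data written round-robin so no storage block holds two fragments of the same item.
    ```python
    def split(data, num_blocks):
        """Cut data into num_blocks roughly equal fragments."""
        step = -(-len(data) // num_blocks)           # ceiling division
        return [data[i:i + step] for i in range(0, len(data), step)]

    def write_striped(data, blocks):
        """blocks: one list per storage block; each gets at most one fragment per item."""
        for block, fragment in zip(blocks, split(data, len(blocks))):
            block.append(fragment)                   # one fragment per block, in turn

    blocks = [[], [], []]
    write_striped(b"ABCDEFGHI", blocks)
    print(blocks)   # -> [[b'ABC'], [b'DEF'], [b'GHI']]
    ```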
  • Patent number: 12287729
    Abstract: A processing device comprises processors, a first memory shared by the processors, and a cache comprising a second memory comprising a plurality of memory units, each of the plurality of memory units in the second memory being associated with a respective one of a plurality of request identifiers. The cache receives a memory read request including a request identifier and a memory address from at least one of the processors, identifies an allocated memory address identifier for the memory address, accesses the first memory to read data of the memory address, obtains one or more request identifiers which requested data of the memory address from the second memory based on the allocated memory address identifier, and transmits the data of the memory address to one or more processors which requested data of the memory address based on the one or more request identifiers.
    Type: Grant
    Filed: March 7, 2024
    Date of Patent: April 29, 2025
    Assignee: Rebellions Inc.
    Inventors: Sungpill Choi, Jae-Sung Yoon
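    Illustrative sketch (not part of the patent): the request-identifier bookkeeping reduced to a dictionary, where all requesters of one address are answered from a single access to the shared memory.
    ```python
    class CoalescingCache:
        def __init__(self, backing_memory):
            self.memory = backing_memory             # address -> data (the shared first memory)
            self.waiters = {}                        # address -> list of request ids

        def read(self, request_id, address):
            self.waiters.setdefault(address, []).append(request_id)
            data = self.memory[address]              # single access to the shared memory
            requesters = self.waiters.pop(address)
            # Return the data tagged for every processor that asked for this address.
            return [(rid, data) for rid in requesters]

    cache = CoalescingCache({0x40: "payload"})
    cache.waiters[0x40] = [1, 2]                     # two earlier requests already waiting
    print(cache.read(3, 0x40))                       # -> [(1, 'payload'), (2, 'payload'), (3, 'payload')]
    ```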
  • Patent number: 12277062
    Abstract: In asynchronous remote replication, write IOs are accumulated in capture cycles and sent to a remote storage system in transmit cycles. In order to cause metadata cache hits at the remote storage system, write IO data and associated metadata hints such as logical block addresses being updated are sent in successive cycles. The metadata hints, which are received at the remote storage system before the corresponding write IO data, are used to prefetch metadata associated with the logical block addresses being updated to replicate the write IO.
    Type: Grant
    Filed: December 7, 2023
    Date of Patent: April 15, 2025
    Assignee: Dell Products L.P.
    Inventors: Sandeep Chandrashekhara, Ramesh Doddaiah, Mohammed Asher, Aamir Mohammed
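    Illustrative sketch (not part of the patent): the cycle structure modelled as function calls, with the LBA hints from one cycle used to prefetch metadata for the write data arriving in the next; the data layout is invented.
    ```python
    class RemoteReplica:
        def __init__(self):
            self.metadata_cache = set()              # LBAs whose metadata has been prefetched

        def receive_cycle(self, hint_lbas, write_data):
            # Writes arriving now should hit metadata prefetched from the previous cycle's hints.
            for lba, data in write_data:
                assert lba in self.metadata_cache, "metadata cache miss"
                # ... apply the replicated write for lba ...
            # Prefetch metadata for the LBAs named in this cycle's hints (next cycle's writes).
            self.metadata_cache |= set(hint_lbas)

    replica = RemoteReplica()
    replica.receive_cycle(hint_lbas=[10, 11], write_data=[])                     # cycle N
    replica.receive_cycle(hint_lbas=[12], write_data=[(10, b"x"), (11, b"y")])   # cycle N+1
    ```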
  • Patent number: 12277059
    Abstract: The present application discloses a method and apparatus for reducing a mirror data transmission amount by a dual layer cache, and a device and a medium. The method includes: after receiving an input/output (IO) request, writing, by a first node, the IO request into a first upper-layer cache space; writing, by the first node, first cached data corresponding to the IO request into a first lower-layer cache space according to the IO request, and generating, by the first node, first index information for the first cached data; writing mirror data of the IO request into a second upper-layer cache space of a second node; and writing mirror data of the first index information into a second lower-layer cache space of the second node.
    Type: Grant
    Filed: December 20, 2024
    Date of Patent: April 15, 2025
    Assignee: Suzhou MetaBrain Intelligent Technology Co., Ltd.
    Inventors: Xiangfei Kong, Yonggang Wang
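    Illustrative sketch (not part of the patent): the mirroring asymmetry in miniature, the peer node receiving the small IO request and the index information while the cached data stays on the first node; the index format is invented.
    ```python
    class Node:
        def __init__(self):
            self.upper_cache = []        # IO requests (or mirrored IO requests)
            self.lower_cache = {}        # index -> cached data, or mirrored index info only

    def handle_io(first, second, io_request, data):
        first.upper_cache.append(io_request)          # write the IO into the upper-layer cache
        index = ("vol0", io_request["lba"])           # first node generates index information
        first.lower_cache[index] = data               # cached data stays in the first node
        second.upper_cache.append(io_request)         # mirror the (small) IO request
        second.lower_cache[index] = None              # mirror index info only, not the data

    n1, n2 = Node(), Node()
    handle_io(n1, n2, {"lba": 2048, "len": 8}, b"cached payload")
    print(list(n2.lower_cache), n2.lower_cache[("vol0", 2048)])   # index mirrored, data is None
    ```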
  • Patent number: 12271309
    Abstract: Systems and techniques are disclosed for relative age tracking for entries in a buffer. For example, some techniques may include pre-computing age matrix entries of an age matrix corresponding to invalid entries of a data buffer based on a validity indication (e.g., a valid bit mask), wherein the validity indication identifies valid entries in the data buffer and the age matrix tracks relative ages of the entries in the data buffer; responsive to data being received for storage in the data buffer, selecting an entry corresponding to an index value in the data buffer from among a set of invalid entries of the data buffer; storing the data in the entry corresponding to the index value; and updating the validity indication to indicate that the entry corresponding to the index value is valid.
    Type: Grant
    Filed: December 1, 2023
    Date of Patent: April 8, 2025
    Assignee: SiFive, Inc.
    Inventor: Wesley Waylon Terpstra
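    Illustrative sketch (not part of the patent): a toy age matrix where rows for invalid entries are pre-computed, so allocating an entry only has to flip its valid bit; sizes and names are invented.
    ```python
    NUM_ENTRIES = 4
    valid = [True, False, True, False]
    age = [[False] * NUM_ENTRIES for _ in range(NUM_ENTRIES)]   # age[i][j]: i older than j

    def precompute_invalid_rows():
        """Mark every invalid entry as younger than everything else, ahead of time."""
        for i in range(NUM_ENTRIES):
            if not valid[i]:
                for j in range(NUM_ENTRIES):
                    age[i][j] = False          # i is not older than anyone
                    age[j][i] = j != i         # everyone else counts as older than i

    def allocate(data, entries):
        """On data arrival, pick an invalid slot; its age row is already correct."""
        i = valid.index(False)
        entries[i] = data
        valid[i] = True                        # only the validity indication changes now
        return i

    precompute_invalid_rows()
    entries = [None] * NUM_ENTRIES
    print(allocate("pkt-A", entries))          # -> 1
    ```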
  • Patent number: 12271588
    Abstract: The disclosed device includes a memory-semantic fabric comprising memory components accessible by multiple processors and a controller for the memory-semantic fabric. The controller receives, from multiple processes, memory requests for a memory-semantic fabric. The controller also identifies, within the processes, a source process for each of the memory requests and prioritizes forwarding the memory requests to the memory-semantic fabric based on the source processes. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: April 8, 2025
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Atul Kumar Sujayendra Sandur, Sergey Blagodurov, Nathaniel Morris
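    Illustrative sketch (not part of the patent): a source-process priority policy for forwarding order; the weight table is invented.
    ```python
    # Lower number = forwarded sooner; weights are invented for the sketch.
    PROCESS_PRIORITY = {"latency_critical_db": 0, "batch_analytics": 2}

    def forward_order(requests):
        """requests: list of (source_process, payload); stable sort keeps FIFO order per process."""
        return sorted(requests, key=lambda r: PROCESS_PRIORITY.get(r[0], 1))

    print(forward_order([("batch_analytics", "load A"), ("latency_critical_db", "load B")]))
    # -> [('latency_critical_db', 'load B'), ('batch_analytics', 'load A')]
    ```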
  • Patent number: 12259788
    Abstract: Techniques for UNDO and REDO operations in a computer-user interface are disclosed. The techniques enable users to configure entities for UNDO and REDO operations. The techniques also enable users to roll back an individual entity to its immediately previous state in one UNDO operation and subsequently to earlier previous states. Other entities are not affected by UNDO operations on that entity.
    Type: Grant
    Filed: May 2, 2024
    Date of Patent: March 25, 2025
    Assignee: Oracle International Corporation
    Inventors: Satish Chandra Oruganti, Ganesh Kumar Gupta, Michael Patrick Rodgers
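    Illustrative sketch (not part of the patent): per-entity UNDO/REDO stacks, so undoing one entity never disturbs another; class and method names are invented.
    ```python
    class EntityHistory:
        def __init__(self, initial):
            self.state = initial
            self.undo_stack, self.redo_stack = [], []

        def apply(self, new_state):
            self.undo_stack.append(self.state)
            self.state = new_state
            self.redo_stack.clear()

        def undo(self):
            if self.undo_stack:
                self.redo_stack.append(self.state)
                self.state = self.undo_stack.pop()   # roll back to the immediately previous state

        def redo(self):
            if self.redo_stack:
                self.undo_stack.append(self.state)
                self.state = self.redo_stack.pop()

    title, body = EntityHistory("Draft"), EntityHistory("...")
    title.apply("Final"); title.undo()
    print(title.state, body.state)                   # -> Draft ...   (body untouched)
    ```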
  • Patent number: 12260127
    Abstract: Techniques for storage and processing for distributed file systems are disclosed. In the illustrative embodiment, padding is placed between data elements in a file to be stored on a distributed file system. The file is to be split into several objects in order to be stored in the distributed file system, and the padding is used to prevent a data element from being split across two different objects. The objects are stored on data nodes, which analyze the objects to determine which data elements are present in the object as well as the location of those objects. The location of the objects is saved on the data storage device, and those locations can be used to perform queries on the data elements in the object on the data storage device itself. Such an approach can reduce transfer of data elements from data storage to local memory of the data node.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: March 25, 2025
    Assignee: Intel Corporation
    Inventors: John S. Keys, Daniel R. McLeran, Ian F. Adams, Michael P. Mesnier, Nilesh N. Shah
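    Illustrative sketch (not part of the patent): the padding rule in miniature, padding to the next object boundary whenever a data element would otherwise straddle two objects; the object size is invented.
    ```python
    OBJECT_SIZE = 16          # illustrative object size in bytes

    def pack_with_padding(elements):
        out = bytearray()
        for elem in elements:
            used = len(out) % OBJECT_SIZE
            if used and used + len(elem) > OBJECT_SIZE:
                out.extend(b"\x00" * (OBJECT_SIZE - used))   # pad to the next object boundary
            out.extend(elem)
        return bytes(out)

    packed = pack_with_padding([b"A" * 10, b"B" * 10])
    print(len(packed), packed[16:26])   # second element starts at offset 16 -> b'BBBBBBBBBB'
    ```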
  • Patent number: 12248710
    Abstract: A plurality of computing devices are communicatively coupled to each other via a network, and each of the plurality of computing devices is operably coupled to one or more of a plurality of storage devices. The computing devices may use local caches in a coherent manner when accessing the plurality of storage devices.
    Type: Grant
    Filed: March 8, 2024
    Date of Patent: March 11, 2025
    Assignee: Weka.IO Ltd.
    Inventors: Maor Ben Dayan, Omri Palmon, Liran Zvibel, Kanael Arditti, Artemy Voikhansky, Alex Goltman
  • Patent number: 12236997
    Abstract: A semiconductor memory device includes a memory cell array including a plurality of memory cell rows, a row hammer management circuit and a control logic circuit. The row hammer management circuit stores counted values in count cells of each of the plurality of memory cell rows as count data based on an active command applied to the control logic circuit at a first time point, and performs an internal read-update-write operation to read the count data from the count cells of a target memory cell row from among the plurality of memory cell rows, to update the count data that was read to obtain updated count data, and to write the updated count data in the count cells of the target memory cell row in response to a precharge command applied at a second time point after a first command that is applied to the control logic circuit.
    Type: Grant
    Filed: July 24, 2023
    Date of Patent: February 25, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kiheung Kim, Taeyoung Oh, Jongcheol Kim, Kyungho Lee, Hyongryol Hwang
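    Illustrative sketch (not part of the patent): a behavioral model of the count-cell flow, where the activate latches the row's count and the precharge triggers the read-update-write of that count; the threshold and flag are assumptions.
    ```python
    class CountedRow:
        def __init__(self):
            self.count_cells = 0          # activation count stored in the row's count cells
            self.latched = 0
            self.neighbor_refresh_needed = False

        def on_activate(self):
            self.latched = self.count_cells            # count data read out with the row

        def on_precharge(self, threshold=1000):
            self.latched += 1                          # update the count that was read
            self.count_cells = self.latched            # write it back into the count cells
            self.neighbor_refresh_needed = self.latched >= threshold

    row = CountedRow()
    row.on_activate(); row.on_precharge()
    print(row.count_cells, row.neighbor_refresh_needed)   # -> 1 False
    ```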
  • Patent number: 12235762
    Abstract: Disclosed are a data access method and apparatus, a device, and a computer-readable storage medium, the method including: creating, on a host side, a cache pool matching the memory capacity of an accelerator card, the cache pool containing cache blocks divided according to a set capacity unit; upon acquiring a read instruction for target data, calling from the cache pool a target cache block matching the capacity of the target data; storing the target data into the target cache block, recording meta information about the target cache block, and setting write protection for the target cache block; and executing a data access operation according to state information corresponding to the cache blocks, and adjusting the state information of the cache blocks after executing the data access operation.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: February 25, 2025
    Assignee: IEIT SYSTEMS CO., LTD.
    Inventor: Ke Liu
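    Illustrative sketch (not part of the patent): a host-side cache pool of fixed-size blocks sized to the accelerator card's memory, with per-block state and a write-protect flag set once data lands; field names and sizes are invented.
    ```python
    class CachePool:
        def __init__(self, card_memory_bytes, block_bytes):
            self.block_bytes = block_bytes
            self.blocks = [{"data": None, "meta": None, "state": "free", "wp": False}
                           for _ in range(card_memory_bytes // block_bytes)]

        def cache_read(self, key, data):
            block = next(b for b in self.blocks if b["state"] == "free")
            block.update(data=data, meta={"key": key, "len": len(data)},
                         state="cached", wp=True)            # write-protect after filling
            return block

    pool = CachePool(card_memory_bytes=4096, block_bytes=1024)
    blk = pool.cache_read("tensor-0", b"\x01" * 512)
    print(blk["state"], blk["wp"], blk["meta"])              # -> cached True {'key': 'tensor-0', 'len': 512}
    ```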