Patents Examined by Alan Otto
  • Patent number: 12333183
    Abstract: A data storage device receives a speculative read command from a host identifying logical block addresses. The speculative read command is not required to be executed within a certain amount of time, or even at all. The data storage device at least partially executes the speculative read command in response to determining that such execution will not reduce performance of the data storage device. At least partially executing the speculative read command causes data associated with at least some of the logical block addresses to be read from the non-volatile memory and stored in at least one buffer. Other embodiments are possible, and each of the embodiments can be used alone or together in combination.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: June 17, 2025
    Assignee: Sandisk Technologies, Inc.
    Inventors: Abhinandan Venugopal, Amit Sharma, Anindita Chakrabarty
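    A minimal Python sketch of the behavior described in the abstract above: the device services a speculative read only when it judges that doing so will not hurt performance, and any prefetched data lands in a buffer. The idle-queue heuristic, the read_lba callback, and the buffer layout are illustrative assumptions, not the patented design.

    ```python
    # Hedged sketch: execute a speculative read only when it should not degrade
    # performance; "should not degrade" is approximated here by a nearly idle
    # host command queue. read_lba() and max_outstanding are stand-in assumptions.

    class SpeculativeReadHandler:
        def __init__(self, read_lba, max_outstanding=1):
            self.read_lba = read_lba                 # function: lba -> data from NVM
            self.max_outstanding = max_outstanding
            self.buffer = {}                         # lba -> prefetched data

        def handle_speculative_read(self, lbas, outstanding_host_cmds):
            # The command carries no deadline; it may be executed partially or not at all.
            if outstanding_host_cmds > self.max_outstanding:
                return 0                             # device is busy: silently skip
            done = 0
            for lba in lbas:
                self.buffer[lba] = self.read_lba(lba)
                done += 1
            return done                              # LBAs actually prefetched
    ```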
  • Patent number: 12248696
    Abstract: Example compute-in-memory (CIM) or processor-in-memory (PIM) techniques use repurposed or dedicated static random access memory (SRAM) rows of an SRAM sub-array to store look-up-table (LUT) entries for use in a multiply and accumulate (MAC) operation.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: March 11, 2025
    Assignee: Intel Corporation
    Inventors: Saurabh Jain, Srivatsa Rangachar Srinivasa, Akshay Krishna Ramanathan, Gurpreet Singh Kalsi, Kamlesh R. Pillai, Sreenivas Subramoney
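    A small Python sketch of the look-up-table MAC idea: products of a weight with every possible activation value are precomputed into a table, so each multiply becomes a table read followed by an add. In the patent the table entries live in SRAM sub-array rows; the list below is only a software stand-in, and the 4-bit activation width is an assumption.

    ```python
    # Hedged sketch of a LUT-based multiply-accumulate: multiplications are replaced
    # by reads from a precomputed table, mirroring (in software) LUT entries stored
    # in SRAM rows. Bit width and data types are illustrative assumptions.

    def build_lut(weight, bits=4):
        """Precompute weight * x for every possible unsigned activation value."""
        return [weight * x for x in range(1 << bits)]

    def lut_mac(weights, activations, bits=4):
        """Accumulate sum(w_i * a_i) using table lookups instead of multipliers."""
        acc = 0
        for w, a in zip(weights, activations):
            lut = build_lut(w, bits)   # in hardware these entries sit in SRAM rows
            acc += lut[a]              # one MAC step becomes a row read plus an add
        return acc

    assert lut_mac([3, -2, 5], [1, 7, 4]) == 3 * 1 + (-2) * 7 + 5 * 4
    ```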
  • Patent number: 12204800
    Abstract: Techniques are provided for implementing a garbage collection process and a prediction read ahead mechanism to prefetch keys into memory to improve the efficiency and speed of the garbage collection process. A log structured merge tree is used to store keys of key-value pairs within a key-value store. If a key is no longer referenced by any worker nodes of a distributed storage architecture, then the key can be freed to store other data. Accordingly, garbage collection is performed to identify and free unused keys. The speed and efficiency of garbage collection is improved by dynamically adjusting the amount and rate at which keys are prefetched from disk and cached into faster memory for processing by the garbage collection process.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: January 21, 2025
    Assignee: NetApp, Inc.
    Inventors: Anil Paul Thoppil, Wei Sun, Meera Odugoudar, Szu-Wen Kuo, Santhosh Selvaraj
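    A rough Python sketch of the read-ahead idea in the abstract above: the garbage collector consumes keys from an in-memory queue that is refilled from disk, and the prefetch batch size grows when the collector starves and shrinks when the queue refills early. The thresholds and the fetch_keys/process_key callbacks are assumptions made for illustration.

    ```python
    # Hedged sketch: garbage collection fed by an adaptive prefetcher. The
    # adjustment heuristic below is an assumption, not the patented policy.
    from collections import deque

    def gc_with_adaptive_prefetch(fetch_keys, process_key, total_keys,
                                  batch=64, min_batch=16, max_batch=4096):
        """fetch_keys(offset, count) reads up to `count` keys from disk in order;
        process_key frees one key if nothing references it."""
        queue, offset = deque(), 0
        while offset < total_keys or queue:
            if len(queue) < batch // 2 and offset < total_keys:
                if not queue:
                    batch = min(batch * 2, max_batch)   # GC starved: prefetch more next time
                elif batch > min_batch:
                    batch = max(batch // 2, min_batch)  # refilled before draining: prefetch less
                keys = fetch_keys(offset, batch)        # read keys ahead from disk into memory
                if not keys:
                    offset = total_keys                 # nothing left on disk
                else:
                    queue.extend(keys)
                    offset += len(keys)
            if queue:
                process_key(queue.popleft())
    ```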
  • Patent number: 12111760
    Abstract: According to one embodiment, a memory device includes a nonvolatile memory, address translation unit, generation unit, and reception unit. The nonvolatile memory includes erase unit areas. Each of the erase unit areas includes write unit areas. The address translation unit generates address translation information relating a logical address of write data written to the nonvolatile memory to a physical address indicative of a write position of the write data in the nonvolatile memory. The generation unit generates valid/invalid information indicating whether data written to the erase unit areas is valid data or invalid data. The reception unit receives deletion information including a logical address indicative of data to be deleted in the erase unit area.
    Type: Grant
    Filed: June 27, 2023
    Date of Patent: October 8, 2024
    Assignee: Kioxia Corporation
    Inventor: Shinichi Kanno
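    A minimal Python sketch of the bookkeeping the abstract describes: a logical-to-physical map (the address translation information) plus per-page valid/invalid flags that are cleared when deletion information arrives for a logical address. The block and page granularity and all names are illustrative assumptions.

    ```python
    # Hedged sketch: logical-to-physical translation plus valid/invalid tracking,
    # updated on writes and on deletion information from the host.
    # The structures below are illustrative, not the device's actual layout.

    class SimpleTranslationLayer:
        def __init__(self, num_blocks, pages_per_block):
            self.l2p = {}                                   # logical addr -> (block, page)
            self.valid = [[False] * pages_per_block for _ in range(num_blocks)]

        def record_write(self, lba, block, page):
            old = self.l2p.get(lba)
            if old is not None:                             # overwrite invalidates the old page
                self.valid[old[0]][old[1]] = False
            self.l2p[lba] = (block, page)                   # address translation information
            self.valid[block][page] = True

        def record_deletion(self, lba):
            loc = self.l2p.pop(lba, None)                   # deletion information from the host
            if loc is not None:
                self.valid[loc[0]][loc[1]] = False          # now invalid data for erase/GC
    ```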
  • Patent number: 12086445
    Abstract: A data storage system stores a plurality of partitions for a volume and at least one parity partition for the volume. The parity partition includes erasure encoded data that enables any one of the partitions to be reconstructed using the erasure encoded data of the parity partition. Additionally, the data storage system is configured to generate parity data updates in response to modifications to the volume and store updated parity data in the parity partition, such that a current state of any of the partitions of the volume can be re-created in response to a loss of one of the partitions.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: September 10, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Kun Tang, Hon Ping Shea, Michael Scott Ryan
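    A tiny Python sketch of the parity-update property the abstract relies on: with a single XOR-based parity partition, changing one data partition only requires folding the old and new data into the parity, and any one lost partition can be rebuilt by XOR-ing the survivors with the parity. Byte-wise XOR stands in for the system's actual erasure encoding.

    ```python
    # Hedged sketch: single-parity erasure coding via XOR. The real system's
    # encoding and partition layout may differ; this only shows the update rule.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def update_parity(parity, old_data, new_data):
        """Fold a modification of one partition into the parity partition."""
        return xor_bytes(parity, xor_bytes(old_data, new_data))

    def rebuild_lost(surviving_partitions, parity):
        """Re-create the one missing partition from parity and the survivors."""
        rebuilt = parity
        for p in surviving_partitions:
            rebuilt = xor_bytes(rebuilt, p)
        return rebuilt

    p1, p2 = b"\x01\x02", b"\x0f\x0f"
    parity = xor_bytes(p1, p2)
    new_p1 = b"\xaa\xbb"
    parity = update_parity(parity, p1, new_p1)
    assert rebuild_lost([p2], parity) == new_p1   # current state of p1 recovered
    ```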
  • Patent number: 12079470
    Abstract: Disclosed embodiments relate to one or more techniques to control access by a requestor of a computing system to a shared memory resource. In one embodiment, a technique includes determining a number (N) of pending requests to be sent to the memory by the requestor, determining a number (M) of requests that the requestor is limited to sending based on an amount of buffering resources available, and comparing M to N. When N is both greater than zero and less than or equal to M, the requestor sends the N pending requests to the memory. When N is both greater than zero and greater than M, M is compared to a hysteresis value (R) and, when M is less than R, the requestor sends R of the N pending requests to the memory.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: September 3, 2024
    Assignee: Texas Instruments Incorporated
    Inventor: Matthew Pierson
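    The dispatch rule in the abstract translates almost directly into code. A hedged Python sketch follows; the abstract only specifies the branches it lists, so the final branch (sending M requests when M is at least R) is an assumption.

    ```python
    # Hedged sketch of the request-throttling rule: N pending requests, M allowed
    # by available buffering, hysteresis value R. Returns how many requests to send.

    def requests_to_send(n_pending, m_allowed, r_hysteresis):
        if n_pending <= 0:
            return 0
        if n_pending <= m_allowed:
            return n_pending              # buffering can absorb everything pending
        if m_allowed < r_hysteresis:
            return r_hysteresis           # hysteresis branch spelled out in the abstract
        return m_allowed                  # assumption: otherwise send as many as fit

    assert requests_to_send(3, 8, 2) == 3
    assert requests_to_send(10, 1, 4) == 4   # N > M and M < R, so R requests go out
    ```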
  • Patent number: 12050774
    Abstract: A method for updates in a storage system is provided. The method includes writing identifiers, associated with data to be stored, to storage units of the storage system and writing, to the storage units, trim records indicative of identifiers that are allowed to not exist in the storage system. The method includes determining whether stored data corresponding to records of identifiers is valid based on the records of the identifiers and the trim records.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: July 30, 2024
    Assignee: PURE STORAGE, INC.
    Inventors: Brian Gold, John Hayes, Robert Lee
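    A brief Python sketch of the validity check described above: stored data for an identifier counts as valid only if the identifier was recorded as written and no trim record says it is allowed to be absent. Modelling trim records as inclusive identifier ranges is an assumption for illustration.

    ```python
    # Hedged sketch: validity from identifier records plus trim records.
    # The range representation of trim records is an illustrative assumption.

    def is_valid(identifier, written_ids, trim_ranges):
        if identifier not in written_ids:
            return False
        return not any(lo <= identifier <= hi for lo, hi in trim_ranges)

    written = {10, 11, 12, 40}
    trims = [(11, 20)]                  # identifiers 11..20 may legitimately be absent
    assert is_valid(10, written, trims) and not is_valid(11, written, trims)
    ```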
  • Patent number: 12045504
    Abstract: A memory sub-system, such as a solid state drive (SSD), having a host interface configured to receive at least read commands and write commands from an external host system. The SSD has memory cells formed on at least one integrated circuit die, and a processing device configured to control executions of the read commands to retrieve data from the memory cells and executions of the write commands to store data into the memory cells. During a burn-in operation of the memory sub-system in a manufacturing facility, the memory sub-system is configured to perform read/write operations for the generation of a proof of space plot. After the burn-in operation, the memory sub-system is provided as a product of the manufacturing facility; and the proof of space plot stored in the memory sub-system is provided as a by-product of the burn-in operation.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: July 23, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Joseph Harold Steinmetz, Luca Bert
  • Patent number: 12039181
    Abstract: Systems and methods for replicating data from storage. Snapshots are taken of the volumes in physical storage. The snapshot volumes are exposed to a virtual replication system. Using the snapshots, differential or changed data can be identified. The identified data is then replicated by the virtual replication system to a remote virtual replication system.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: July 16, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jehuda Shemer, Arieh Don, Meir Pinhasov, Saar Cohen
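    A short Python sketch of the replication flow: the current snapshot of a volume is compared with the previous one block by block, and only the changed blocks are shipped to the remote virtual replication system. The fixed block size and the send_block callback are illustrative assumptions.

    ```python
    # Hedged sketch: identify differential data between two snapshots and replicate
    # only the changed blocks. Block size and transport are illustrative assumptions.

    BLOCK_SIZE = 4096

    def replicate_changed_blocks(prev_snapshot, curr_snapshot, send_block):
        """Snapshots are equal-length byte strings exposed to the replication system."""
        for offset in range(0, len(curr_snapshot), BLOCK_SIZE):
            old = prev_snapshot[offset:offset + BLOCK_SIZE]
            new = curr_snapshot[offset:offset + BLOCK_SIZE]
            if old != new:                     # differential data identified via snapshots
                send_block(offset, new)        # replicate to the remote system
    ```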
  • Patent number: 12026370
    Abstract: A method for oversubscribing a host memory of a host running a virtual machine monitor (VMM), comprising: examining a virtual machine (VM) memory for a VM for metadata associated with the VM memory, the metadata maintained by a guest OS running on the VM; collecting the metadata for the VM memory; and managing the VM memory using the metadata for oversubscribing a host memory.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: July 2, 2024
    Assignee: Google LLC
    Inventors: Horacio Andres Lagar Cavilla, Adin Matthew Scannell, Timothy James Smith, Peter Feiner, Mushfiq Mahmood, David Richard Scannell, Jing Chih Su
  • Patent number: 11994974
    Abstract: Recording a trace of code execution using reference bits in a processor cache. A computing device comprises processing units and a shared cache. The shared cache includes a plurality of cache lines, each associated with a plurality of accounting bits that include a reference bits portion. Stored control logic uses these reference bits to log a second read operation by a second processing unit in reference to an already logged first read operation by a first processing unit.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: May 28, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jordi Mola
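    A loose Python sketch of the logging idea: the first logged read of a cache line records the data itself and stamps the line's reference bits with that log position, so a later read of the same line by another processing unit can be logged as a compact reference to the earlier entry instead of repeating the data. The structures below are assumptions, not the processor's actual accounting-bit format.

    ```python
    # Hedged sketch: per-line "reference bits" let a second read be logged as a
    # reference to an already logged first read. Formats here are assumptions.

    class TraceCache:
        def __init__(self):
            self.lines = {}      # address -> {"data": ..., "ref": (unit, log index) or None}
            self.logs = {}       # per-processing-unit trace logs

        def read(self, unit, address, load_from_memory):
            log = self.logs.setdefault(unit, [])
            line = self.lines.get(address)
            if line is None:
                line = {"data": load_from_memory(address), "ref": None}
                self.lines[address] = line
            if line["ref"] is None:
                log.append(("data", address, line["data"]))   # first logged read: full value
                line["ref"] = (unit, len(log) - 1)            # reference bits point at it
            else:
                log.append(("ref", address, line["ref"]))     # later reads: reference only
            return line["data"]
    ```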
  • Patent number: 11886361
    Abstract: A memory controller having an improved operating speed controls a memory device in response to a request from a host. The memory controller includes: a processor for driving firmware for controlling communication between the host and the memory device; a map data receiver for receiving, from the memory device under the control of the processor, map data including a plurality of mapping entries that include physical block addresses for operations to be performed on the memory device; and a map data controller for checking a mapping entry corresponding to the request, which is received from the map data receiver, snooping the detected mapping entry, and outputting the detected mapping entry to the processor.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: January 30, 2024
    Assignee: SK hynix Inc.
    Inventors: Young Jo Kim, Sung Yeob Cho
  • Patent number: 11847052
    Abstract: A method of memory allocation in a host computer includes: allocating one or more regions of physical working memory for use by an application, the regions individually including contiguous physical memory segments, but the regions not necessarily being contiguous between themselves; generating a segment address table having at least as many entries as the total number of physical memory segments allocated to the application; populating entries of the segment address table sequentially and contiguously with the physical addresses of the physical memory segments across the or each region in order; presenting to the application a contiguous virtual addressable space having at least as many virtual memory segments as the total number of physical memory segments allocated to the application; and mapping from virtual memory addresses to physical memory addresses by reference to the segment address table.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: December 19, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Paul Bowen-Huggett
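    The final mapping step in the abstract reduces to a divide, a table lookup, and an add. A hedged Python sketch follows; the 64 KiB segment size is an illustrative assumption.

    ```python
    # Hedged sketch: a virtual address splits into a segment index and an offset;
    # the segment address table (filled sequentially with the physical segments
    # allocated to the application) supplies the physical base, and the offset is
    # added back. Segment size is assumed for illustration.

    SEGMENT_SIZE = 64 * 1024

    def virtual_to_physical(vaddr, segment_table):
        index, offset = divmod(vaddr, SEGMENT_SIZE)
        return segment_table[index] + offset

    # e.g. two non-contiguous physical segments presented back-to-back virtually:
    table = [0x40000, 0xA0000]
    assert virtual_to_physical(SEGMENT_SIZE + 0x10, table) == 0xA0010
    ```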
  • Patent number: 11836090
    Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjointed range partitions; a first subset of which is designated as cached; a second subset is designated as uncached. The collection is partitioned into a subset of uncached data and cached data and placed into respective partitions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, all of the disjointed range partitions are deleted. A first new cached partition range that contains the data value is created; it excludes at least one value that had been cached. The remaining values are placed in uncached range partitions; contents of the cache are updated to reflect the new range partition.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: December 5, 2023
    Assignee: Kinaxis Inc.
    Inventor: Angela Lin
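    This patent and the two related Kinaxis entries below (11734185 and 11714758) share the same setup, sketched in Python here: the key range is split into disjoint partitions, each flagged cached or uncached, and a value is served from the cache only when its partition is cached. The boundary values, flags, and loader callback are illustrative assumptions, and the eviction policies that distinguish the three patents are not modelled.

    ```python
    # Hedged sketch of a range-partitioned cache lookup. Boundaries are sorted
    # upper bounds of each partition and are assumed to cover the whole key range.

    import bisect

    class RangePartitionedCache:
        def __init__(self, boundaries, cached_flags):
            self.boundaries = boundaries         # per-partition upper bounds
            self.cached_flags = cached_flags     # True if that partition is cached
            self.cache = {}                      # values held for cached partitions

        def partition_of(self, value):
            return bisect.bisect_left(self.boundaries, value)

        def is_cached_range(self, value):
            return self.cached_flags[self.partition_of(value)]

        def get(self, value, load_from_tier1):
            if self.is_cached_range(value) and value in self.cache:
                return self.cache[value]
            data = load_from_tier1(value)        # fall back to the first storage tier
            if self.is_cached_range(value):
                self.cache[value] = data
            return data

    cache = RangePartitionedCache([100, 200, 300], [True, False, True])
    assert cache.is_cached_range(150) is False and cache.is_cached_range(250) is True
    ```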
  • Patent number: 11762581
    Abstract: A method, device, and system for controlling a data read/write command in an NVMe over fabric architecture. In the method provided in the embodiments of the present disclosure, a data processing unit receives a control command sent by a control device, the data processing unit divides a storage space of a buffer unit into at least two storage spaces according to the control command sent by the control device, and establishes a correspondence between the at least two storage spaces and command queues, and after receiving a first data read/write command that is in a first command queue and that is sent by the control device, the data processing unit buffers, in a storage space that is of the buffer unit and that is corresponding to the first command queue, data to be transmitted according to the first data read/write command.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: September 19, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Victor Gissin, Xin Qiu, Pei Wu, Huichun Qu, Jinbin Zhang
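    A compact Python sketch of the buffer arrangement described above: the buffer unit's storage space is divided among the command queues, and data for a read/write command is staged in the space that corresponds to its queue. Equal-sized slices and the dict-based bookkeeping are assumptions for illustration.

    ```python
    # Hedged sketch: divide a buffer's storage space among command queues and stage
    # each command's data in its queue's slice. Layout details are assumptions.

    class QueuePartitionedBuffer:
        def __init__(self, buffer_size, queue_ids):
            slice_size = buffer_size // len(queue_ids)
            self.regions = {qid: (i * slice_size, slice_size)     # (offset, length)
                            for i, qid in enumerate(queue_ids)}
            self.staged = {}

        def stage(self, queue_id, data):
            offset, length = self.regions[queue_id]               # correspondence set up
            if len(data) > length:                                # by the control command
                raise ValueError("data exceeds this queue's buffer space")
            self.staged[queue_id] = data
            return offset
    ```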
  • Patent number: 11733932
    Abstract: Example implementations relate to managing data on a memory module. Data may be transferred between a first NVM and a second NVM on a memory module. The second NVM may have a higher memory capacity and a longer access latency than the first NVM. A mapping between a first address and a second address may be stored in an NVM on the memory module. The first address may refer to a location at which data is stored in the first NVM. The second address may refer to a location, in the second NVM, from which the data was copied.
    Type: Grant
    Filed: September 27, 2013
    Date of Patent: August 22, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Gregg B Lesartre, Andrew R Wheeler
  • Patent number: 11734185
    Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjointed range partitions; a first subset of which is designated as cached; a second subset is designated as uncached. The collection is partitioned into a subset of uncached data and cached data and placed into respective partitions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, the range of the target range partition is reduced until either: the data value is excluded (if the data value is an end point of the partition range); or elements within the target range are evicted to make space for the data value.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: August 22, 2023
    Assignee: Kinaxis Inc.
    Inventor: Angela Lin
  • Patent number: 11726906
    Abstract: According to one embodiment, a memory device includes a nonvolatile memory, address translation unit, generation unit, and reception unit. The nonvolatile memory includes erase unit areas. Each of the erase unit areas includes write unit areas. The address translation unit generates address translation information relating a logical address of write data written to the nonvolatile memory to a physical address indicative of a write position of the write data in the nonvolatile memory. The generation unit generates valid/invalid information indicating whether data written to the erase unit areas is valid data or invalid data. The reception unit receives deletion information including a logical address indicative of data to be deleted in the erase unit area.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: August 15, 2023
    Assignee: Kioxia Corporation
    Inventor: Shinichi Kanno
  • Patent number: 11714758
    Abstract: A method to store a data value onto a cache of a storage hierarchy. A range of a collection of values that resides on a first tier of the hierarchy is initialized. The range is partitioned into disjointed range partitions; a first subset of which is designated as cached; a second subset is designated as uncached. The collection is partitioned into a subset of uncached data and cached data and placed into respective portions. The range partition to which the data value belongs (i.e. the target range partition) is identified as being cached. If the cache is full, the target range partition is divided into two partitions, the partition that excludes the data value is designated as uncached; the values therein are evicted. If the cache has space, the data value is copied onto the cache; otherwise the division/eviction are repeated until the cache has space.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: August 1, 2023
    Assignee: Kinaxis Inc.
    Inventor: Angela Lin
  • Patent number: 11681443
    Abstract: A data storage system includes a head node and mass storage devices. The head node is configured to store volume data and flush volume data to the mass storage devices. Additionally, the head node is configured to determine a quantity of data partitions and/or parity partitions to store for a chunk of volume data being flushed to the mass storage devices in order to satisfy a durability guarantee. For chunks of data for which complete copies are also stored in an additional data storage system, the head node is configured to reduce the quantity of data partitions and/or parity partitions stored such that required storage space is reduced while still ensuring that the durability guarantee is satisfied.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: June 20, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Sriram Venugopal, Kun Tang, Norbert Paul Kusters, Jianhua Fan