Patents Examined by Gurtej Bansal
  • Patent number: 11650924
    Abstract: Provided herein is a memory controller for controlling a memory device. The memory controller includes a workload detector configured to determine a change in workload based on reception of a changed request from a host or a change in a clock received from an external device, a device performance controller configured to determine, when the workload is determined to have changed, read performance based on a ratio of the size of data output to the host to the size of data requested by the host in each preset period, and to output a read-look-ahead (RLA) command to the memory device based on the determined read performance, a buffer memory configured to store data read from the memory device in response to the RLA command, and a memory size controller configured to control a size of the buffer memory. The RLA command instructs the memory device to output data that is frequently requested by the host.
    Type: Grant
    Filed: July 27, 2021
    Date of Patent: May 16, 2023
    Assignee: SK hynix Inc.
    Inventors: Na Young Lee, Ku Ik Kwon, Kyeong Seok Kim, Byong Woo Ryu
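    Illustrative sketch (not from the patent): a minimal C model of the ratio-based read-look-ahead decision the abstract describes. The threshold, structure names, and the per-period counters are assumptions chosen only to make the idea concrete.
```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative threshold: issue RLA when less than 90% of requested
 * data was actually delivered within the period (assumed value). */
#define RLA_RATIO_THRESHOLD 0.90

/* Per-period counters maintained by a hypothetical performance controller. */
struct perf_window {
    uint64_t bytes_requested; /* size of data requested by the host */
    uint64_t bytes_output;    /* size of data actually output to the host */
};

/* Decide whether to send an RLA (read-look-ahead) command this period. */
static bool should_issue_rla(const struct perf_window *w)
{
    if (w->bytes_requested == 0)
        return false; /* idle period: nothing to prefetch for */
    double read_perf = (double)w->bytes_output / (double)w->bytes_requested;
    return read_perf < RLA_RATIO_THRESHOLD;
}

int main(void)
{
    struct perf_window w = { .bytes_requested = 4096 * 100, .bytes_output = 4096 * 80 };
    if (should_issue_rla(&w))
        printf("RLA: prefetch frequently requested data into the buffer memory\n");
    return 0;
}
```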
  • Patent number: 11644986
    Abstract: A Data Storage Device (DSD) includes at least one Non-Volatile Memory (NVM) configured to store data and a Non-Volatile Cache (NVC). Write data is stored in a volatile memory in preparation for writing the write data in the at least one NVM. In response to a power loss of the DSD, at least a portion of the data stored in the volatile memory is transferred from the volatile memory to the NVC, and one or more parameters are determined for deriving a margin representing an additional amount of data that can be transferred from the volatile memory to the NVC using the power remaining after a power loss. A size of the NVC is adjusted based at least in part on the derived margin.
    Type: Grant
    Filed: November 20, 2021
    Date of Patent: May 9, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Bernd Lamberts, Shad H. Thorstenson, Andrew Larson, Mark Bergquist
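    Illustrative sketch (not from the patent): one simple way a flush margin could be derived from remaining hold-up energy and then used to size the non-volatile cache. The units, field names, and numbers are assumptions for the sketch.
```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative parameters for deriving a flush margin after power loss. */
struct power_model {
    uint64_t remaining_energy_uj;   /* energy left in hold-up capacitors      */
    uint64_t energy_per_mib_uj;     /* cost to move 1 MiB from DRAM to the NVC */
    uint64_t baseline_flush_mib;    /* data already planned for transfer       */
};

/* Margin: additional MiB that the remaining energy can still cover. */
static uint64_t derive_margin_mib(const struct power_model *pm)
{
    uint64_t affordable = pm->remaining_energy_uj / pm->energy_per_mib_uj;
    return affordable > pm->baseline_flush_mib
               ? affordable - pm->baseline_flush_mib
               : 0;
}

int main(void)
{
    struct power_model pm = {
        .remaining_energy_uj = 500000,
        .energy_per_mib_uj = 2000,
        .baseline_flush_mib = 200,
    };
    uint64_t margin = derive_margin_mib(&pm);
    /* The non-volatile cache would then be sized to cover baseline + margin. */
    printf("NVC size: %llu MiB\n",
           (unsigned long long)(pm.baseline_flush_mib + margin));
    return 0;
}
```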
  • Patent number: 11635899
    Abstract: Disclosed in some examples are memory devices which feature customizable Single Level Cell (SLC) and Multiple Level Cell (MLC) configurations. The SLC memory cells serve as a high-speed cache providing SLC-level performance with the storage capacity of a memory device with MLC memory cells. The proportion of cells configured as MLC storage versus the proportion configured as SLC storage may be configurable, and in some examples the proportion may change during usage according to configurable rules based upon memory device metrics. In some examples, when the device activity is below an activity threshold, the memory device may skip the SLC cache and place the data directly into the MLC storage to reduce power consumption.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: April 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Kulachet Tanpairoj, Sebastien Andre Jean, Kishore Kumar Muchherla, Ashutosh Malshe, Jianmin Huang
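    Illustrative sketch (not from the patent): a tiny routing rule in C for bypassing the SLC cache under low activity, as the abstract describes. The metric names and the threshold value are assumptions.
```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative device metrics and rule; names and numbers are assumptions. */
struct device_metrics {
    uint32_t write_ops_per_sec;  /* recent host activity */
    uint32_t free_slc_blocks;
    uint32_t free_mlc_blocks;
};

#define LOW_ACTIVITY_THRESHOLD 100u  /* assumed ops/s */

/* Below the activity threshold, skip the SLC cache and write MLC directly,
 * saving the later SLC-to-MLC folding pass (and its power). */
static bool route_write_to_slc_cache(const struct device_metrics *m)
{
    if (m->write_ops_per_sec < LOW_ACTIVITY_THRESHOLD)
        return false;                 /* low activity: go straight to MLC */
    return m->free_slc_blocks > 0;    /* otherwise use SLC cache if available */
}

int main(void)
{
    struct device_metrics m = { .write_ops_per_sec = 40,
                                .free_slc_blocks = 32,
                                .free_mlc_blocks = 900 };
    printf("write goes to %s\n",
           route_write_to_slc_cache(&m) ? "SLC cache" : "MLC storage");
    return 0;
}
```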
  • Patent number: 11604737
    Abstract: A processing device determines a scope indicating at least a portion of the processing system and target data of an atomic memory operation to be performed. Based on the scope, the processing device determines one or more hardware parameters for at least a portion of the processing system. The processing device then compares the hardware parameters to the scope and target data to determine one or more corrections. The processing device then provides the scope, target data, hardware parameters, and corrections to a plurality of hardware lookup tables. The hardware lookup tables are configured to receive the scope, target data, hardware parameters, and corrections as inputs and output values indicating one or more coherency actions and one or more orderings. The processing device then executes one or more of the indicated coherency actions and the atomic memory operation based on the indicated ordering.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: March 14, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Joseph L. Greathouse, Steven Tony Tye, Mark Fowler, Milind N. Nemlekar
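    Illustrative sketch (not from the patent): a software stand-in for a lookup table that maps an atomic operation's scope to a coherency action and an ordering. The scopes, actions, and table contents are assumptions; the real design feeds several inputs into hardware tables.
```c
#include <stdio.h>

/* Illustrative scopes and actions only. */
enum scope { SCOPE_WORKGROUP, SCOPE_DEVICE, SCOPE_SYSTEM, SCOPE_COUNT };

enum coherency_action { ACT_NONE, ACT_FLUSH_L1, ACT_FLUSH_L2 };
enum ordering { ORD_RELAXED, ORD_RELEASE, ORD_SEQ_CST };

/* A table indexed by scope, standing in for the hardware lookup tables. */
static const struct { enum coherency_action act; enum ordering ord; }
coherency_table[SCOPE_COUNT] = {
    [SCOPE_WORKGROUP] = { ACT_NONE,     ORD_RELAXED },
    [SCOPE_DEVICE]    = { ACT_FLUSH_L1, ORD_RELEASE },
    [SCOPE_SYSTEM]    = { ACT_FLUSH_L2, ORD_SEQ_CST },
};

int main(void)
{
    enum scope s = SCOPE_DEVICE;  /* scope named by the atomic operation */
    printf("atomic at scope %d -> action %d, ordering %d\n",
           (int)s, (int)coherency_table[s].act, (int)coherency_table[s].ord);
    return 0;
}
```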
  • Patent number: 11593259
    Abstract: The present disclosure includes apparatuses and methods for directed sanitization of memory. One example method comprises, responsive to receiving a sanitization command, performing a deterministic garbage collection operation on a memory, wherein performing the deterministic garbage collection operation results in physical erasure of all invalid data stored on the memory without losing valid data stored on the memory.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: February 28, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Jeffrey L. McVay, Daniel J. Hubbard, Robert W. Strong, Michael B. Danielson, Jonathan Tanguy
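    Illustrative sketch (not from the patent): a toy C model of deterministic garbage collection for sanitization, where every block holding stale data is compacted and physically erased while valid pages are preserved. Block geometry and the relocation step are assumptions.
```c
#include <stdbool.h>
#include <stdio.h>

/* Toy flash model: sizes and layout are assumptions. */
#define PAGES_PER_BLOCK 4
#define NUM_BLOCKS 3

struct block {
    bool valid[PAGES_PER_BLOCK];   /* page holds current (valid) data */
    bool written[PAGES_PER_BLOCK]; /* page has ever been written      */
};

/* Deterministic sanitize pass: any block with invalid (stale) data is
 * compacted and physically erased, so no invalid data remains recoverable,
 * while valid data is relocated first and therefore survives. */
static void sanitize(struct block blk[], int n)
{
    for (int b = 0; b < n; b++) {
        bool has_invalid = false;
        for (int p = 0; p < PAGES_PER_BLOCK; p++)
            if (blk[b].written[p] && !blk[b].valid[p])
                has_invalid = true;
        if (!has_invalid)
            continue;
        for (int p = 0; p < PAGES_PER_BLOCK; p++)
            if (blk[b].valid[p])
                printf("relocate valid page %d of block %d\n", p, b);
        printf("erase block %d\n", b);  /* physical erasure of stale data */
        for (int p = 0; p < PAGES_PER_BLOCK; p++)
            blk[b].valid[p] = blk[b].written[p] = false;
    }
}

int main(void)
{
    struct block blk[NUM_BLOCKS] = {
        { .valid = { true, false, true, false },
          .written = { true, true, true, true } },
    };
    sanitize(blk, NUM_BLOCKS);
    return 0;
}
```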
  • Patent number: 11586543
    Abstract: A device connected to a host processor via a bus includes: an accelerator circuit configured to operate based on a message received from the host processor; and a controller configured to control an access to a memory connected to the device, wherein the controller is further configured to, in response to a read request received from the accelerator circuit, provide a first message requesting resolution of coherence to the host processor and prefetch first data from the memory.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: February 21, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jeongho Lee, Heehyun Nam, Jaeho Shin, Hyodeok Shin, Younggeon Yoo, Younho Jeon, Wonseb Jeong, Ipoom Jeong, Hyeokjun Choe
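    Illustrative sketch (not from the patent): the basic overlap the abstract describes, in which a coherence-resolution request to the host is issued alongside a speculative prefetch from device-attached memory. Function names and the hand-off rule are assumptions.
```c
#include <stdint.h>
#include <stdio.h>

static void send_resolve_coherence(uint64_t addr)
{
    printf("to host: resolve coherence for 0x%llx\n",
           (unsigned long long)addr);
}

static void prefetch_from_device_memory(uint64_t addr)
{
    printf("prefetch 0x%llx from device-attached memory into a buffer\n",
           (unsigned long long)addr);
}

/* On a read from the accelerator circuit, do both before the data is
 * consumed: the prefetched data is only handed over once the host has
 * confirmed that no newer copy exists elsewhere. */
static void handle_accelerator_read(uint64_t addr)
{
    send_resolve_coherence(addr);        /* first message to the host */
    prefetch_from_device_memory(addr);   /* overlap the memory latency */
}

int main(void)
{
    handle_accelerator_read(0x1000);
    return 0;
}
```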
  • Patent number: 11586547
    Abstract: Methods, systems, and devices for an enhanced instruction caching scheme are described. A memory controller may include a first closely-coupled memory component that is associated with storing data and control information and a second closely-coupled memory component that is associated with storing control information. The memory controller may be configured to retrieve data from the first closely-coupled memory component and control information from the second closely-coupled memory component. Control information may be stored in the first closely-coupled memory component, and the memory controller may access that control information by transferring it from the first closely-coupled memory component into the second closely-coupled memory component.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: February 21, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Crescenzo Attanasio, Massimo Iaculo, Pasquale Cimmino, Nicola Cavaliere, Francesco Falanga
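    Illustrative sketch (not from the patent): staging control information from a combined data-and-control memory into a smaller control-only memory before it is used. The buffer sizes and names are assumptions.
```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the two closely-coupled memories: the first
 * holds both data and control information, the second only control info. */
static uint8_t tcm_data_and_ctrl[4096];  /* first closely-coupled memory  */
static uint8_t tcm_ctrl_only[512];       /* second closely-coupled memory */

/* Before the controller uses control information that currently lives in
 * the first memory, it is staged into the second, control-only memory. */
static void stage_control_info(size_t src_off, size_t len)
{
    memcpy(tcm_ctrl_only, tcm_data_and_ctrl + src_off, len);
}

int main(void)
{
    memset(tcm_data_and_ctrl, 0xA5, sizeof tcm_data_and_ctrl);
    stage_control_info(256, 64);
    printf("first staged control byte: 0x%02X\n", tcm_ctrl_only[0]);
    return 0;
}
```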
  • Patent number: 11579773
    Abstract: According to one embodiment, a memory system includes a non-volatile semiconductor memory, a block management unit, and a transcription unit. The semiconductor memory includes a plurality of blocks to which data can be written in both a first mode and a second mode. The block management unit manages a block that stores no valid data as a free block. When the number of free blocks managed by the block management unit is smaller than or equal to a predetermined threshold value, the transcription unit selects one or more used blocks that store valid data as transcription source blocks and transcribes the valid data stored in the transcription source blocks to free blocks in the second mode.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: February 14, 2023
    Assignee: Toshiba Memory Corporation
    Inventors: Hiroshi Yao, Shinichi Kanno, Kazuhiro Fukutomi
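    Illustrative sketch (not from the patent): threshold-triggered transcription in C. The threshold, block counts, and the assumed 2:1 compaction ratio (valid data from two source blocks fits into one denser second-mode block) are all illustrative.
```c
#include <stdio.h>

#define FREE_BLOCK_THRESHOLD 4   /* assumed value */

struct block_pool {
    int free_blocks;   /* blocks holding no valid data   */
    int used_blocks;   /* blocks holding some valid data */
};

/* When free blocks run low, valid data from source blocks is rewritten
 * ("transcribed") into a free block in the denser second mode; each pass
 * consumes one destination block but frees two sources, netting one. */
static void maybe_transcribe(struct block_pool *p)
{
    while (p->free_blocks <= FREE_BLOCK_THRESHOLD && p->used_blocks >= 2) {
        printf("transcribe two source blocks into one second-mode block\n");
        p->used_blocks -= 2;  /* two sources emptied...           */
        p->used_blocks += 1;  /* ...one denser destination in use */
        p->free_blocks += 1;  /* net gain of one free block       */
    }
}

int main(void)
{
    struct block_pool p = { .free_blocks = 2, .used_blocks = 10 };
    maybe_transcribe(&p);
    printf("free=%d used=%d\n", p.free_blocks, p.used_blocks);
    return 0;
}
```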
  • Patent number: 11580033
    Abstract: Provided herein may be a storage device and a method of operating the same. The method of operating a storage device including a replay protected memory block (RPMB) may include receiving a write request for the RPMB from an external host, selectively storing data in the RPMB based on an authentication operation, receiving a read request from the external host, and providing result data to the external host in response to the read request, wherein the read request includes a message indicating that a read command to be subsequently received from the external host is a command related to the result data.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: February 14, 2023
    Assignee: SK hynix Inc.
    Inventor: Kwang Su Kim
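    Illustrative sketch (not from the patent): a simplified RPMB-style flow in which a write is accepted only after authentication and a later read returns the stored result data. The structures are assumptions, and the authentication check is a placeholder for the real MAC and write-counter verification.
```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct rpmb_write_req {
    uint8_t  data[16];
    uint8_t  mac[32];       /* authentication code computed by the host */
    uint32_t write_counter; /* must match the device's counter          */
};

static uint32_t device_write_counter;
static uint8_t  rpmb_area[16];
static uint8_t  result_data[4];   /* status returned on the result read */

static bool authenticate(const struct rpmb_write_req *req)
{
    /* Placeholder: a real device recomputes the MAC with its secret key
     * and checks the write counter for replay protection. */
    return req->write_counter == device_write_counter;
}

static void handle_write(const struct rpmb_write_req *req)
{
    if (authenticate(req)) {
        memcpy(rpmb_area, req->data, sizeof rpmb_area);
        device_write_counter++;
        memcpy(result_data, "OK\0", 4);
    } else {
        memcpy(result_data, "ERR", 4);
    }
}

/* The host's later read request announces that it wants the result data. */
static void handle_result_read(void)
{
    printf("result: %s\n", (const char *)result_data);
}

int main(void)
{
    struct rpmb_write_req req = { .data = "hello", .write_counter = 0 };
    handle_write(&req);
    handle_result_read();
    return 0;
}
```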
  • Patent number: 11579980
    Abstract: Systems and techniques are described for performing snapshot and backup copy operations for individual virtual machines in shared storage. The system can include one or more shared physical computer storage devices communicatively coupled to a hypervisor to store a plurality of virtual machines. A plurality of storage volumes can be provided in the one or more shared physical computer storage devices, with each storage volume uniquely corresponding to one of the virtual machines. The system can issue a command to the hypervisor to perform a snapshot or backup copy operation in accordance with a particular information management policy.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: February 14, 2023
    Assignee: Commvault Systems, Inc.
    Inventor: Ashwin Gautamchand Sancheti
  • Patent number: 11567663
    Abstract: A storage device communicably connected to a host includes a nonvolatile memory configured to store calibration data of the host, and a calibration circuit configured to receive, from the host, a descriptor including setting information and to update the calibration data with the received setting information.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: January 31, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jeong Hur, Jae-Gyu Lee, Young-Moon Kim
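    Illustrative sketch (not from the patent): applying a host-supplied descriptor of calibration settings and persisting it, in the spirit of the abstract. The descriptor fields and the persistence stand-in are assumptions.
```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative descriptor carrying interface-calibration settings. */
struct calib_descriptor {
    uint8_t drive_strength;
    uint8_t termination;
    uint8_t tx_delay;
};

static struct calib_descriptor nv_calibration;  /* stands in for the NVM copy */

/* The calibration circuit applies the received settings and persists them
 * so they survive power cycles. */
static void update_calibration(const struct calib_descriptor *desc)
{
    memcpy(&nv_calibration, desc, sizeof nv_calibration);
    printf("calibration updated: drive=%u term=%u delay=%u\n",
           nv_calibration.drive_strength, nv_calibration.termination,
           nv_calibration.tx_delay);
}

int main(void)
{
    struct calib_descriptor from_host = { 3, 1, 7 };
    update_calibration(&from_host);
    return 0;
}
```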
  • Patent number: 11567780
    Abstract: The present application relates generally to a parallel processing device. The parallel processing device can include a plurality of processing elements, a memory subsystem, and an interconnect system. The memory subsystem can include a plurality of memory slices, at least one of which is associated with one of the plurality of processing elements and comprises a plurality of random access memory (RAM) tiles, each tile having individual read and write ports. The interconnect system is configured to couple the plurality of processing elements and the memory subsystem. The interconnect system includes a local interconnect and a global interconnect.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: January 31, 2023
    Assignee: Movidius Limited
    Inventors: David Moloney, Cormac Brick, Ovidiu Andrei Vesa, Brendan Barry
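    Illustrative sketch (not from the patent): a data-structure view of the memory subsystem the abstract describes, with one slice per processing element built from RAM tiles that each have their own read and write ports. The counts and sizes are assumptions, and the interconnects are not modeled.
```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PROCESSING_ELEMENTS 8
#define TILES_PER_SLICE 4
#define TILE_WORDS 2048

/* A RAM tile with independent read and write ports, modeled as state. */
struct ram_tile {
    uint32_t words[TILE_WORDS];
    uint32_t read_port_addr;
    uint32_t write_port_addr;
};

/* One memory slice per processing element, built from several tiles,
 * reachable via a local interconnect (global interconnect not modeled). */
struct memory_slice {
    struct ram_tile tiles[TILES_PER_SLICE];
};

static struct memory_slice slices[NUM_PROCESSING_ELEMENTS];

int main(void)
{
    printf("subsystem models %d slices x %d tiles (%zu bytes total)\n",
           NUM_PROCESSING_ELEMENTS, TILES_PER_SLICE, sizeof slices);
    return 0;
}
```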
  • Patent number: 11556478
    Abstract: A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: January 17, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Frank R. Dropps
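    Illustrative sketch (not from the patent): a toy C version of the dirty-line counter and write-back ordering, including removal of an evicted dirty line from the ordering. The FIFO ordering, capacity, and line identifiers are assumptions.
```c
#include <stdio.h>

#define MAX_TRACKED 8   /* assumed capacity */

static unsigned dirty_count;              /* dirty cache line counter     */
static int order[MAX_TRACKED];            /* line IDs in write-back order */
static int order_len;

static void line_became_dirty(int line_id)
{
    dirty_count++;
    if (order_len < MAX_TRACKED)
        order[order_len++] = line_id;     /* append to the ordering */
}

static void line_written_back(void)
{
    if (order_len == 0)
        return;
    printf("write back line %d\n", order[0]);
    for (int i = 1; i < order_len; i++)   /* pop the oldest dirty line */
        order[i - 1] = order[i];
    order_len--;
    dirty_count--;
}

/* An evicted dirty line is removed from the ordering wherever it sits. */
static void line_evicted(int line_id)
{
    for (int i = 0; i < order_len; i++) {
        if (order[i] == line_id) {
            for (int j = i + 1; j < order_len; j++)
                order[j - 1] = order[j];
            order_len--;
            dirty_count--;
            return;
        }
    }
}

int main(void)
{
    line_became_dirty(3);
    line_became_dirty(7);
    line_evicted(3);
    line_written_back();                  /* writes back line 7 */
    printf("dirty lines remaining: %u\n", dirty_count);
    return 0;
}
```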
  • Patent number: 11537531
    Abstract: The disclosed embodiments relate to a computer system with a cache memory that supports tagless addressing. During operation, the system receives a request to perform a memory access, wherein the request includes a virtual address. In response to the request, the system performs an address-translation operation, which translates the virtual address into both a physical address and a cache address. Next, the system uses the physical address to access one or more levels of physically addressed cache memory, wherein accessing a given level of physically addressed cache memory involves performing a tag-checking operation based on the physical address. If the access to the one or more levels of physically addressed cache memory fails to hit on a cache line for the memory access, the system uses the cache address to directly index a cache memory, wherein directly indexing the cache memory does not involve performing a tag-checking operation and eliminates the tag storage overhead.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: December 27, 2022
    Assignee: Rambus Inc.
    Inventors: Hongzhong Zheng, Trung A. Diep
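    Illustrative sketch (not from the patent): the lookup order the abstract describes, where translation yields both a physical address and a cache address, the tagged physically addressed levels are tried first, and a miss falls back to directly indexing a tagless cache. The translation function, index widths, and array sizes are assumptions.
```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative translation result: a physical address plus a cache
 * address produced by the same translation step. */
struct translation {
    uint64_t phys_addr;
    uint32_t cache_addr;  /* direct index into the tagless cache */
};

static struct translation translate(uint64_t virt_addr)
{
    /* Stand-in for a page-table walk that also yields the cache address. */
    struct translation t = { .phys_addr = virt_addr ^ 0xFFF000,
                             .cache_addr = (uint32_t)(virt_addr >> 6) & 0x3FF };
    return t;
}

/* Tagged lookup in the physically addressed levels (always misses here). */
static bool tagged_lookup(uint64_t phys_addr)
{
    (void)phys_addr;
    return false;
}

int main(void)
{
    static uint64_t tagless_cache[1024];   /* indexed directly, no tag array */
    struct translation t = translate(0x7f001040);
    if (!tagged_lookup(t.phys_addr)) {
        /* No tag check: the cache address alone selects the line. */
        uint64_t line = tagless_cache[t.cache_addr];
        printf("direct-indexed line %u -> %llu\n", t.cache_addr,
               (unsigned long long)line);
    }
    return 0;
}
```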
  • Patent number: 11526442
    Abstract: Methods, systems, and devices for metadata management for a cache are described. An interface controller may include a first array and a second array that store metadata for a cache memory. The interface controller may receive an activate command associated with a row of the cache memory. In response to the activate command, the interface controller may communicate storage information for the row of the cache memory from the first array to a first register. The interface controller may receive an access command associated with the row of the cache memory. In response to the access command and based on the storage information in the first register, the interface controller may communicate validity information for the row from the second array to the first register or dirty information for the row from the second array to a second register.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: December 13, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Chinnakrishnan Ballapuram, Taeksang Song, Saira Samar Malik
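    Illustrative sketch (not from the patent): a toy model of staging metadata into registers on activate and access commands. The row count, the split of validity and dirty information, and the read/write distinction used to choose between registers are assumptions.
```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_ROWS 4

static uint8_t  storage_array[NUM_ROWS];   /* first array: storage info   */
static uint16_t valid_array[NUM_ROWS];     /* second array: validity bits */
static uint16_t dirty_array[NUM_ROWS];     /* second array: dirty bits    */

static uint16_t reg1;   /* first register  */
static uint16_t reg2;   /* second register */

/* ACTIVATE: stage the row's storage information into the first register. */
static void on_activate(int row)
{
    reg1 = storage_array[row];
}

/* ACCESS: depending on the staged storage information, stage validity
 * information into the first register or dirty information into the
 * second register (here keyed on whether the access is a write). */
static void on_access(int row, bool is_write)
{
    if (is_write)
        reg2 = dirty_array[row];
    else
        reg1 = valid_array[row];
}

int main(void)
{
    storage_array[2] = 1;
    valid_array[2] = 0x00FF;
    dirty_array[2] = 0x0003;
    on_activate(2);
    on_access(2, false);
    printf("reg1=0x%04X reg2=0x%04X\n", reg1, reg2);
    return 0;
}
```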
  • Patent number: 11526283
    Abstract: An apparatus in an illustrative embodiment comprises at least one processing device comprising a processor and a memory, with the processor coupled to the memory. The at least one processing device is configured to receive in a storage system, from a host device, information that identifies (i) a particular virtual machine implemented by the host device and (ii) a key specific to the virtual machine, to utilize at least a portion of the received information to obtain in the storage system the key specific to the virtual machine from a key management server external to the storage system, to store the obtained key in the storage system in association with one or more parts of the received information, and to utilize the obtained key to process input-output operations that are received in the storage system from the host device and that are identified as being associated with the virtual machine.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: December 13, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Sanjib Mallick, Amit Pundalik Anchi
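    Illustrative sketch (not from the patent): caching a per-virtual-machine key that is fetched once from an external key manager and then reused for I/O tagged with that virtual machine. The table size, identifier format, and the dummy key-fetch stand-in are assumptions.
```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_VMS 8
#define KEY_LEN 32

struct vm_key_entry {
    char    vm_id[37];        /* e.g. a UUID string from the host */
    uint8_t key[KEY_LEN];
    int     in_use;
};

static struct vm_key_entry key_table[MAX_VMS];

/* Placeholder for the call out to the external key management server. */
static void fetch_key_from_kms(const char *vm_id, uint8_t key[KEY_LEN])
{
    memset(key, (uint8_t)vm_id[0], KEY_LEN);  /* dummy key material */
}

/* Return the key associated with a VM, fetching and storing it on first use. */
static const uint8_t *key_for_vm(const char *vm_id)
{
    for (int i = 0; i < MAX_VMS; i++)
        if (key_table[i].in_use && strcmp(key_table[i].vm_id, vm_id) == 0)
            return key_table[i].key;          /* already obtained */
    for (int i = 0; i < MAX_VMS; i++) {
        if (!key_table[i].in_use) {
            snprintf(key_table[i].vm_id, sizeof key_table[i].vm_id, "%s", vm_id);
            fetch_key_from_kms(vm_id, key_table[i].key);
            key_table[i].in_use = 1;
            return key_table[i].key;
        }
    }
    return NULL;                              /* table full */
}

int main(void)
{
    const uint8_t *k = key_for_vm("vm-1234");
    printf("first key byte for vm-1234: 0x%02X\n", k ? k[0] : 0);
    return 0;
}
```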
  • Patent number: 11526435
    Abstract: A storage system and method for automatic data phasing are disclosed. In one embodiment, a storage system is configured to receive, from a host, data to be written in a memory of the storage system and an indication of an expected lifespan of the data; and determine whether to perform a garbage collection operation on the data based on the expected lifespan of the data. Other embodiments are provided.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: December 13, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventor: Ramanathan Muthiah
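    Illustrative sketch (not from the patent): one way a lifespan hint could gate garbage collection, skipping relocation of data that is about to expire anyway. The 10% cutoff and field names are assumptions.
```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct data_block {
    uint64_t written_at_s;        /* when the data was written          */
    uint64_t expected_lifespan_s; /* lifespan hint supplied by the host */
};

/* Relocate only data with a meaningful amount of declared life left;
 * data near or past its expected lifespan is left to be invalidated. */
static bool worth_garbage_collecting(const struct data_block *b, uint64_t now_s)
{
    uint64_t age = now_s - b->written_at_s;
    if (age >= b->expected_lifespan_s)
        return false;                       /* already expired: skip */
    uint64_t remaining = b->expected_lifespan_s - age;
    return remaining > b->expected_lifespan_s / 10; /* >10% life left */
}

int main(void)
{
    struct data_block b = { .written_at_s = 0, .expected_lifespan_s = 3600 };
    printf("GC this block? %s\n",
           worth_garbage_collecting(&b, 3500) ? "yes" : "no");
    return 0;
}
```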
  • Patent number: 11520706
    Abstract: Data caching may include storing data associated with DRAM transaction requests in data storage structures organized in a manner corresponding to the DRAM bank, bank group and rank organization. Data may be selected for transfer to the DRAM by selecting among the data storage structures.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: December 6, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Alain Artieri, Rakesh Kumar Gupta, Subbarao Palacharla, Kedar Bhole, Laurent Rene Moll, Carlo Spitale, Sparsh Singhai, Shyamkumar Thoziyoor, Gopi Tummala, Christophe Avoinne, Samir Ginde, Syed Minhaj Hassan, Jean-Jacques Lecler, Luigi Vinci
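    Illustrative sketch (not from the patent): data-storage structures laid out to mirror the DRAM rank, bank group, and bank organization, with a simple selection pass over them. The geometry, queue depth, and "drain the fullest queue" policy are assumptions standing in for the controller's real arbitration.
```c
#include <stdint.h>
#include <stdio.h>

#define NUM_RANKS 2
#define NUM_BANK_GROUPS 4
#define BANKS_PER_GROUP 4
#define QUEUE_DEPTH 8

/* One small data-storage structure per (rank, bank group, bank), holding
 * write data waiting to be transferred to that specific bank. */
struct bank_queue {
    uint64_t data[QUEUE_DEPTH];
    int count;
};

static struct bank_queue queues[NUM_RANKS][NUM_BANK_GROUPS][BANKS_PER_GROUP];

/* Select a queue to drain next: here, simply the fullest one. */
static struct bank_queue *select_queue(void)
{
    struct bank_queue *best = &queues[0][0][0];
    for (int r = 0; r < NUM_RANKS; r++)
        for (int g = 0; g < NUM_BANK_GROUPS; g++)
            for (int b = 0; b < BANKS_PER_GROUP; b++)
                if (queues[r][g][b].count > best->count)
                    best = &queues[r][g][b];
    return best;
}

int main(void)
{
    queues[1][2][3].count = 5;
    printf("drain queue with %d pending entries\n", select_queue()->count);
    return 0;
}
```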
  • Patent number: 11513798
    Abstract: A system and method are provided for simplifying load acquire and store release semantics that are used in reduced instruction set computing (RISC). Translating the semantics into micro-operations, or low-level instructions used to implement complex machine instructions, can avoid having to implement complicated new memory operations. Using one or more data memory barrier operations in conjunction with load and store operations can provide sufficient ordering as a data memory barrier ensures that prior instructions are performed and completed before subsequent instructions are executed.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: November 29, 2022
    Assignee: Ampere Computing LLC
    Inventors: Matthew Ashcraft, Christopher Nelson
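    Illustrative sketch (not from the patent): the same idea expressed with C11 fences as a stand-in for the data memory barrier micro-operations, where a load-acquire becomes a plain load followed by a barrier and a store-release becomes a barrier followed by a plain store. This mirrors the concept, not any particular micro-op encoding.
```c
#include <stdatomic.h>
#include <stdio.h>

static atomic_int flag;
static int payload;

static void producer(void)
{
    payload = 42;                                      /* ordinary store    */
    atomic_thread_fence(memory_order_release);         /* "DMB" before...   */
    atomic_store_explicit(&flag, 1, memory_order_relaxed); /* ...the store  */
}

static void consumer(void)
{
    if (atomic_load_explicit(&flag, memory_order_relaxed)) { /* plain load  */
        atomic_thread_fence(memory_order_acquire);     /* "DMB" after it    */
        printf("payload = %d\n", payload);             /* now safe to read  */
    }
}

int main(void)
{
    producer();
    consumer();
    return 0;
}
```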
  • Patent number: 11500781
    Abstract: A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: November 15, 2022
    Assignee: RAMBUS INC.
    Inventors: Trung Diep, Hongzhong Zheng
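    Illustrative sketch (not from the patent): splitting a physical address into three portions, indexing cache lines by the second portion, and keeping a table indexed by the first portion that records second-portion values for stored lines. The portion widths and the presence check are assumptions chosen to make the indexing concrete.
```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define THIRD_BITS  6   /* offset within a cache line       */
#define SECOND_BITS 10  /* indexes the cache lines          */
#define FIRST_BITS  8   /* indexes the table of indications */

static uint16_t table[1u << FIRST_BITS];   /* stores second-portion values */
static bool     table_valid[1u << FIRST_BITS];

static uint32_t second_portion(uint64_t pa)
{
    return (uint32_t)(pa >> THIRD_BITS) & ((1u << SECOND_BITS) - 1);
}

static uint32_t first_portion(uint64_t pa)
{
    return (uint32_t)(pa >> (THIRD_BITS + SECOND_BITS)) & ((1u << FIRST_BITS) - 1);
}

/* A line is treated as present when the table entry for the first portion
 * records the same second portion that the address itself carries. */
static bool line_present(uint64_t pa)
{
    uint32_t f = first_portion(pa);
    return table_valid[f] && table[f] == second_portion(pa);
}

int main(void)
{
    uint64_t pa = 0x0012ABC0;
    table[first_portion(pa)] = (uint16_t)second_portion(pa);
    table_valid[first_portion(pa)] = true;
    printf("hit? %s\n", line_present(pa) ? "yes" : "no");
    return 0;
}
```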