Patents Examined by Yaima Rigol
  • Patent number: 11048436
    Abstract: Techniques for block storage using a hybrid memory device are described. In at least some embodiments, a hybrid memory device includes a volatile memory portion, such as dynamic random access memory (DRAM). The hybrid memory device further includes a non-volatile memory portion, such as flash memory. In at least some embodiments, the hybrid memory device can be embodied as a non-volatile dual in-line memory module, or NVDIMM. Techniques discussed herein employ various functionalities to enable the hybrid memory device to be exposed to various entities as an available block storage device.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 29, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Scott Chao-Chueh Lee, Robin A. Alexander, Lee E. Prewitt, Chiuchin Chen, Vladimir Sadovsky
  • Patent number: 11048429
    Abstract: Techniques are disclosed that allow for retroactively capturing a debug/trace-level log without experiencing the severe performance degradation that obtaining such a log would otherwise entail. Trace-level logging is performed by maintaining a buffer of log messages for application events. The buffer is allocated in memory having very fast write speeds, and writing such messages into the buffer has a negligible performance impact. Many of the messages written into the buffer may not be important or useful at the time they are written. However, when a failure occurs, the messages may be useful for figuring out what went wrong. Responsive to detecting a failure or other anomalous event, the buffer of messages is automatically written to permanent storage. Although writing to the permanent storage may be slow, the performance degradation is only incurred when a failure occurs. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: June 29, 2021
    Assignee: Oracle International Corporation
    Inventor: Michael Patrick Rodgers
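    The buffer-then-flush-on-failure pattern described in the abstract above lends itself to a minimal Python sketch. The class name, the buffer capacity, the use of collections.deque, and the plain-text log file are illustrative assumptions, not details taken from the patent.
```python
import collections
import time

class RetroactiveTraceLogger:
    """Keep trace-level messages in a fast in-memory ring buffer and
    write them out only when a failure is detected."""

    def __init__(self, path, capacity=10_000):
        # Bounded deque: the oldest messages are silently discarded, so the
        # steady-state cost is just a cheap in-memory append.
        self._buffer = collections.deque(maxlen=capacity)
        self._path = path

    def trace(self, message):
        # Negligible-cost write into memory; nothing touches disk here.
        self._buffer.append((time.time(), message))

    def on_failure(self, reason):
        # Only on a failure or other anomalous event do we pay for
        # writing the buffered messages to permanent storage.
        with open(self._path, "a") as log_file:
            log_file.write(f"--- failure: {reason} ---\n")
            for timestamp, message in self._buffer:
                log_file.write(f"{timestamp:.6f} {message}\n")
        self._buffer.clear()

# Usage sketch: trace freely, flush retroactively when something breaks.
logger = RetroactiveTraceLogger("/tmp/retro_trace.log")
for i in range(100):
    logger.trace(f"handled request {i}")
try:
    raise RuntimeError("simulated fault")
except RuntimeError as exc:
    logger.on_failure(str(exc))
```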
  • Patent number: 11042330
    Abstract: Provided is a method of storing data in a distributed environment including a plurality of storage devices, the method including: receiving a request to store the data; calculating a hash value by applying a hashing function to a value associated with the data; splitting the hash value into a plurality of weights, each weight corresponding to one of a plurality of chunks; selecting a chunk of the plurality of chunks based on its weight; and storing the data in a corresponding storage device, the corresponding storage device corresponding to the selected chunk. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: June 22, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gunneswara Marripudi, Kumar Kanteti
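    The abstract above does not say how the hash value is split or how a chunk is selected from the weights; the Python sketch below assumes the digest is cut into fixed-width slices, one slice per chunk, and that the chunk with the largest slice value wins (a rendezvous-hashing-style rule). The hashing function, slice width, and chunk-to-device map are likewise assumptions.
```python
import hashlib

def select_chunk(key: str, chunk_ids: list, slice_bits: int = 16) -> str:
    """Split a hash of the key into one weight per chunk and pick a chunk.

    Assumed details (not taken from the patent): SHA-256 as the hashing
    function, fixed-width bit slices as the per-chunk weights, and
    "largest weight wins" as the selection rule.
    """
    digest = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    mask = (1 << slice_bits) - 1

    best_chunk, best_weight = None, -1
    for index, chunk_id in enumerate(chunk_ids):
        # The weight for this chunk is the index-th slice of the digest.
        weight = (digest >> (index * slice_bits)) & mask
        if weight > best_weight:
            best_chunk, best_weight = chunk_id, weight
    return best_chunk

# The data is then stored on the storage device that owns the selected chunk.
chunk_to_device = {"chunk-0": "ssd-a", "chunk-1": "ssd-b", "chunk-2": "ssd-c"}
chunk = select_chunk("object-1234", list(chunk_to_device))
print(chunk, "->", chunk_to_device[chunk])
```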
  • Patent number: 11042477
    Abstract: The present disclosure is directed to a memory management method and to a memory management device arranged to execute memory allocation and/or memory deallocation by use of segregated free lists, which provide information on memory chunks, wherein the memory allocation and/or the memory deallocation are executed according to states of the memory chunks, and wherein the states of the memory chunks comprise: a used state indicating that a memory chunk, which is in the used state, is in use, and is not available for allocation; a linked state indicating that a memory chunk, which is in the linked state, is not used, is linked within a free list of the segregated free lists, and is available for allocation; and a free state indicating that a memory chunk, which is in the free state, is not used, is not linked within any of the segregated free lists, and is not available for allocation. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: June 22, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Aleksandr Aleksandrovich Simak, Peter Sergeevich Krinov, Xuecang Zhang
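    A minimal Python sketch of the three chunk states and the segregated free lists described above. The size classes, the transition rules between states, and the object-based chunk representation are assumptions made for illustration; the patent's allocator operates on raw memory, not Python objects.
```python
from enum import Enum, auto

class ChunkState(Enum):
    USED = auto()    # in use, not available for allocation
    LINKED = auto()  # not used, linked on a segregated free list, allocatable
    FREE = auto()    # not used, not on any free list, not available yet

class Chunk:
    def __init__(self, size):
        self.size = size
        self.state = ChunkState.FREE

class SegregatedAllocator:
    """Toy allocator with free lists segregated by size class."""

    SIZE_CLASSES = (32, 64, 128, 256)

    def __init__(self):
        self.free_lists = {size: [] for size in self.SIZE_CLASSES}

    def _size_class(self, size):
        return next(c for c in self.SIZE_CLASSES if size <= c)

    def release(self, chunk):
        # USED -> LINKED: make the chunk allocatable by linking it.
        chunk.state = ChunkState.LINKED
        self.free_lists[self._size_class(chunk.size)].append(chunk)

    def allocate(self, size):
        # LINKED -> USED: unlink a chunk from the matching free list,
        # or carve a fresh chunk if that list is empty.
        free_list = self.free_lists[self._size_class(size)]
        chunk = free_list.pop() if free_list else Chunk(self._size_class(size))
        chunk.state = ChunkState.USED
        return chunk

allocator = SegregatedAllocator()
a = allocator.allocate(48)   # USED
allocator.release(a)         # LINKED, back on the 64-byte free list
b = allocator.allocate(60)   # reuses the same chunk
print(a is b, b.state)
```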
  • Patent number: 11010210
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for controller address contention assumption. A non-limiting example computer-implemented method includes a shared controller receiving a fetch request for data from a first requesting agent, the request being received via at least one intermediary controller. The shared controller performs an address compare using a memory address of the data. In response to the memory address matching a memory address stored in the shared controller, the shared controller acknowledges the at least one intermediary controller's fetch request, wherein upon acknowledgement, the at least one intermediary controller resets. In response to release of the data by a second requesting agent, the shared controller transmits the data to the first requesting agent. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: May 18, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert J. Sonnelitter, III, Michael Fee, Craig R. Walters, Arthur O'Neill, Matthias Klein
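    A rough Python sketch of the contention-assumption flow described above: on an address match the shared controller acknowledges the intermediary (which can then reset) and itself takes over delivering the data once the current owner releases it. The class names, the dict-based tracking structures, and the print statements are all illustrative assumptions.
```python
class SharedController:
    """Shared controller that assumes address contention on behalf of an
    intermediary controller."""

    def __init__(self):
        self.tracked = {}   # address -> agent currently holding the data
        self.waiters = {}   # address -> agents waiting for the data

    def fetch(self, address, requester, intermediary):
        if address in self.tracked:
            # Address compare matched: assume the contention here, so the
            # intermediary controller can be acknowledged and reset.
            intermediary.acknowledge(address)
            self.waiters.setdefault(address, []).append(requester)
            return None                      # data will be delivered later
        self.tracked[address] = requester
        return f"data@{address:#x}"          # uncontended fetch

    def release(self, address):
        # The second requesting agent releases the data; transmit it to
        # any first requesting agent that was left waiting.
        self.tracked.pop(address, None)
        for waiter in self.waiters.pop(address, []):
            self.tracked[address] = waiter
            waiter.receive(address, f"data@{address:#x}")

class Intermediary:
    def acknowledge(self, address):
        print(f"intermediary reset after acknowledgement for {address:#x}")

class Agent:
    def __init__(self, name):
        self.name = name

    def receive(self, address, data):
        print(f"{self.name} received {data}")

shared = SharedController()
agent1, agent2 = Agent("agent-1"), Agent("agent-2")
shared.tracked[0x80] = agent2                # agent-2 already owns the line
shared.fetch(0x80, agent1, Intermediary())   # contended: acknowledge + queue
shared.release(0x80)                         # data forwarded to agent-1
```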
  • Patent number: 11003493
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: (a) obtaining grooming operation scheduling data specifying a schedule of grooming operations performed by at least first and second layers of the plurality of layers; (b) identifying, using the grooming operation scheduling data, at least one gap in the execution of scheduled operations performed by the storage system; (c) moving an execution time of one or more of the grooming operations into said at least one gap; and (d) repeating steps (a) to (c) to adapt to a changing usage pattern of said storage system. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: May 11, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Grzegorz Piotr Szczepanik, Lukasz Jakub Palus, Kushal Patel, Sarvesh Patel
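    A small Python sketch of steps (a) through (c) above: find idle gaps between already scheduled operations and move grooming operations into them. Representing the schedule as (start, end) intervals on a single timeline, and the first-fit placement rule, are assumptions for illustration.
```python
def find_gaps(busy_intervals, horizon_end):
    """Return the idle (start, end) gaps between sorted busy intervals."""
    gaps, cursor = [], 0
    for start, end in sorted(busy_intervals):
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if horizon_end > cursor:
        gaps.append((cursor, horizon_end))
    return gaps

def reschedule_grooming(grooming_ops, busy_intervals, horizon_end):
    """Move each grooming operation (name, duration) into the first gap
    large enough to hold it; return {name: new_start_time}."""
    gaps = find_gaps(busy_intervals, horizon_end)
    placements = {}
    for name, duration in grooming_ops:
        for i, (gap_start, gap_end) in enumerate(gaps):
            if gap_end - gap_start >= duration:
                placements[name] = gap_start
                gaps[i] = (gap_start + duration, gap_end)  # shrink the used gap
                break
    return placements

# Scheduled operations of two layers on one timeline (arbitrary time units).
busy = [(0, 10), (25, 40), (60, 70)]
grooming = [("defragmentation", 10), ("scrub", 15)]
print(reschedule_grooming(grooming, busy, horizon_end=100))
# Step (d) would repeat this as the usage pattern of the storage system changes.
```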
  • Patent number: 11003576
    Abstract: A computer storage device having a host interface, a controller, non-volatile storage media, and firmware. The firmware instructs the controller to: generate mapping data defining mapping, from logical block addresses in namespaces configured on the non-volatile storage media, to logical block addresses in a capacity of the non-volatile storage media; maintain an active copy of the mapping data; generate cached copies of the mapping data from the active copy; generate a shadow copy from the active copy; implement changes in the shadow copy; after the changes are made in the shadow copy, activate the shadow copy and simultaneously deactivate the previously active copy; and update the cached copies according to the newly activated copy, as a response to the change in active copy identification. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: May 11, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Alex Frolikov
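    The active/shadow/cached-copy scheme above can be sketched in Python, with plain dicts standing in for the mapping data and an integer active-copy id standing in for the activation step; readers refresh their cached copy when they notice the id has changed. All class and method names here are assumptions, not the firmware's actual interfaces.
```python
import copy

class NamespaceMapper:
    """Namespace LBA -> device LBA mapping kept as versioned copies: an
    active copy, a shadow copy that absorbs changes, and reader-side
    cached copies that refresh when the active-copy id changes."""

    def __init__(self, initial_mapping):
        self._copies = {0: dict(initial_mapping)}
        self.active_id = 0

    def begin_update(self):
        # Generate a shadow copy from the active copy and edit it off-line.
        self._shadow_id = self.active_id + 1
        self._copies[self._shadow_id] = copy.deepcopy(self._copies[self.active_id])
        return self._copies[self._shadow_id]

    def commit_update(self):
        # Activate the shadow copy and retire the previously active copy;
        # a single id change stands in for doing both "simultaneously".
        old_id, self.active_id = self.active_id, self._shadow_id
        del self._copies[old_id]

class CachedView:
    """A reader-side cache of the mapping data, re-copied only when the
    active-copy id it last saw has changed."""

    def __init__(self, mapper):
        self._mapper = mapper
        self._seen_id = None
        self._cache = {}

    def lookup(self, namespace_lba):
        if self._seen_id != self._mapper.active_id:
            self._cache = dict(self._mapper._copies[self._mapper.active_id])
            self._seen_id = self._mapper.active_id
        return self._cache[namespace_lba]

mapper = NamespaceMapper({("ns1", 0): 100, ("ns1", 1): 101})
view = CachedView(mapper)
print(view.lookup(("ns1", 0)))   # 100, served from the first active copy
shadow = mapper.begin_update()
shadow[("ns1", 0)] = 200         # the change lands in the shadow copy
mapper.commit_update()           # the shadow copy becomes the active copy
print(view.lookup(("ns1", 0)))   # 200, cache refreshed on the id change
```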
  • Patent number: 10996857
    Abstract: Disclosed are methods, systems, and processes to improve extent map performance. A request for a data block is received. In response to detecting a cache miss, a temporary table is searched for the data block. If the data block is not found in the temporary table, a base table is searched for the data block. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 4, 2021
    Assignee: Veritas Technologies LLC
    Inventors: Yong Yang, Weibao Wu, Gallen Liu
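    The three-level lookup order described above (cache, then temporary table, then base table) is easy to sketch in Python; the dict-based tables and the decision to populate the cache after a successful lookup are assumptions for illustration.
```python
class ExtentMap:
    """Look up a data block in the cache first, then the temporary table,
    and finally the base table (plain dicts stand in for all three)."""

    def __init__(self, base_table):
        self.cache = {}
        self.temp_table = {}
        self.base_table = base_table

    def lookup(self, block_id):
        if block_id in self.cache:                  # cache hit
            return self.cache[block_id]
        extent = self.temp_table.get(block_id)      # cache miss: temporary table
        if extent is None:
            extent = self.base_table.get(block_id)  # finally the base table
        if extent is not None:
            self.cache[block_id] = extent           # warm the cache for next time
        return extent

extent_map = ExtentMap(base_table={"blk-7": ("dev0", 4096, 512)})
extent_map.temp_table["blk-9"] = ("dev1", 8192, 512)
print(extent_map.lookup("blk-9"))   # served from the temporary table
print(extent_map.lookup("blk-7"))   # served from the base table
print(extent_map.lookup("blk-7"))   # now a cache hit
```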
  • Patent number: 10996858
    Abstract: Embodiments of the present disclosure relate to a method and device for migrating data. The method comprises identifying cold data in a primary storage system. The method further comprises, in response to determining that the cold data is in a non-compression state, obtaining the cold data from the primary storage system via a first interface, the first interface being configured for a user to access the primary storage system. The method further comprises obtaining, in response to determining that the cold data is in a compression state, the cold data in the compression state from the primary storage system via a second interface that is different from the first interface. The method further comprises migrating the obtained cold data from the primary storage system to a secondary storage system. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: May 4, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Sen Zhang
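    A minimal Python sketch of the migration flow above, where the read path is chosen by the cold data's compression state. The dict-based stores, the "cold" and "compressed" flags, and the two reader callables are illustrative stand-ins for the primary storage system and its two interfaces.
```python
def migrate_cold_data(primary, secondary, user_read, compressed_read):
    """Move cold objects from primary to secondary storage, choosing the
    read interface by compression state."""
    for name, meta in list(primary.items()):
        if not meta["cold"]:
            continue                          # only cold data is migrated
        if meta["compressed"]:
            payload = compressed_read(name)   # second, dedicated interface
        else:
            payload = user_read(name)         # first, user-facing interface
        secondary[name] = payload
        del primary[name]

primary_store = {
    "a.log": {"cold": True, "compressed": False},
    "b.bin": {"cold": True, "compressed": True},
    "c.db": {"cold": False, "compressed": False},
}
secondary_store = {}
migrate_cold_data(
    primary_store,
    secondary_store,
    user_read=lambda name: f"raw bytes of {name}",
    compressed_read=lambda name: f"compressed bytes of {name}",
)
print(sorted(secondary_store), sorted(primary_store))
```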
  • Patent number: 10990281
    Abstract: A random-access memory (RAM) controller is connected with multiple memories. The random-access memory controller selectively boots at least one memory of the multiple memories based on booting-related information about the multiple memories.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: April 27, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seong-Heon Yu, Joungyeal Kim, Miyoung Woo
  • Patent number: 10990321
    Abstract: Commands in a command queue are received and scheduled. For each of the commands, scheduling includes determining an age of the command based on an entrance time of the command in the command queue. When the age of the command satisfies a first threshold, scheduling includes marking all other commands in the command queue as not issuable when the command is a deterministic command, and marking all other commands in the command queue as not issuable when the command is a non-deterministic command and an intermediate command queue is not empty. Scheduling the command further includes determining whether the command is a read command and marking the command as not issuable when the command is a non-deterministic read command and the intermediate command queue is empty. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: April 27, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Patrick A. La Fratta, Robert Walker
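    The age-based marking rules in the abstract above are sketched in Python below. The command layout, the threshold value, and the boolean flag standing in for "the intermediate command queue is empty" are assumptions; the rules themselves follow the abstract: an old deterministic command blocks all other commands, an old non-deterministic command blocks the others only while the intermediate queue is non-empty, and an old non-deterministic read is itself held back while that queue is empty.
```python
from dataclasses import dataclass
import time

@dataclass
class Command:
    name: str
    deterministic: bool
    is_read: bool
    entered_at: float
    issuable: bool = True

def apply_age_policy(queue, intermediate_queue_empty, age_threshold, now=None):
    """Mark commands in the queue as not issuable according to the
    age-based rules described in the abstract."""
    now = time.monotonic() if now is None else now
    for command in queue:
        age = now - command.entered_at
        if age < age_threshold:
            continue  # only sufficiently old commands trigger the rules
        # Deterministic, or non-deterministic with a non-empty intermediate
        # queue: every other command in the queue becomes not issuable.
        if command.deterministic or not intermediate_queue_empty:
            for other in queue:
                if other is not command:
                    other.issuable = False
        # A non-deterministic read with an empty intermediate queue is
        # itself marked not issuable.
        if command.is_read and not command.deterministic and intermediate_queue_empty:
            command.issuable = False

queue = [
    Command("det-write", deterministic=True, is_read=False, entered_at=0.0),
    Command("nd-read", deterministic=False, is_read=True, entered_at=9.5),
]
apply_age_policy(queue, intermediate_queue_empty=True, age_threshold=5.0, now=10.0)
print([(c.name, c.issuable) for c in queue])
# The aged deterministic write blocks the other command from issuing.
```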
  • Patent number: 10990317
    Abstract: Memory devices and systems with automatic background precondition upon powerup, and associated methods, are disclosed herein. In one embodiment, a memory device includes a memory array having a plurality of memory cells at intersections of memory rows and memory columns. The memory device further includes sense amplifiers corresponding to the memory rows. When the memory device powers on, the memory device writes one or more memory cells of the plurality of memory cells to a random data state before executing an access command received from a user, a memory controller, or a host device of the memory device. In some embodiments, to write the one or more memory cells, the memory device fires multiple memory rows at the same time without powering corresponding sense amplifiers such that data stored on memory cells of the multiple memory rows is overwritten and corrupted.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: April 27, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Anthony D. Veches, Debra M. Bell, James S. Rehmeyer, Robert Bunnell, Nathaniel J. Meier
  • Patent number: 10956086
    Abstract: A memory controller circuit is disclosed which is coupleable to a first memory circuit, such as DRAM, and includes: a first memory control circuit to read from or write to the first memory circuit; a second memory circuit, such as SRAM; a second memory control circuit adapted to read from the second memory circuit in response to a read request when the requested data is stored in the second memory circuit, and otherwise to transfer the read request to the first memory control circuit; predetermined atomic operations circuitry; and programmable atomic operations circuitry adapted to perform at least one programmable atomic operation. The second memory control circuit also transfers a received programmable atomic operation request to the programmable atomic operations circuitry and sets a hazard bit for a cache line of the second memory circuit. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: March 23, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
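    A loose software model of the read path and the programmable-atomic hand-off described above: reads are served from the second (SRAM-like) memory when the line is present, otherwise forwarded to the first (DRAM-like) memory, and a programmable atomic request sets a hazard bit for the affected cache line while it runs. The dicts, the line size, and the callable standing in for the programmable atomic operation are assumptions.
```python
class MemoryControllerSketch:
    """Dicts stand in for the SRAM and DRAM arrays; a set of line
    addresses stands in for the per-line hazard bits."""

    LINE_SIZE = 64

    def __init__(self):
        self.sram = {}             # second memory circuit (cache lines)
        self.dram = {}             # first memory circuit (backing store)
        self.hazard_lines = set()  # lines with a hazard bit set

    def _line(self, address):
        return address // self.LINE_SIZE

    def read(self, address):
        line = self._line(address)
        if line in self.sram:             # served by the second memory
            return self.sram[line]
        value = self.dram.get(line, 0)    # otherwise forwarded to the first
        self.sram[line] = value           # fill the SRAM copy
        return value

    def programmable_atomic(self, address, operation):
        line = self._line(address)
        # Hand the request to the programmable atomic operations circuitry
        # and set the hazard bit for the affected cache line while it runs.
        self.hazard_lines.add(line)
        try:
            self.sram[line] = operation(self.read(address))
        finally:
            self.hazard_lines.discard(line)
        return self.sram[line]

controller = MemoryControllerSketch()
controller.dram[2] = 41                       # line 2 holds the value 41
print(controller.read(128))                   # SRAM miss, filled from DRAM
print(controller.programmable_atomic(128, lambda v: v + 1))  # custom atomic add
```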
  • Patent number: 10942653
    Abstract: A method for performing refresh management in a memory device, the memory device, and a controller thereof are provided. The method may include: monitoring a temperature of the memory device, wherein the temperature is detected through a temperature sensor; updating a recorded highest temperature and a recorded lowest temperature according to said temperature; checking whether a difference between the recorded highest temperature and the recorded lowest temperature is greater than a predetermined temperature threshold; and when the difference is greater than the predetermined temperature threshold, triggering refresh of the memory device. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: March 9, 2021
    Assignee: Silicon Motion, Inc.
    Inventors: Jieh-Hsin Chien, Yi-Hua Pao
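    The temperature-spread rule above reduces to a few lines of Python. The threshold value, the refresh callback, and resetting the recorded extremes after a refresh are assumptions made for the sketch; the abstract itself only describes tracking the extremes and triggering a refresh when their difference exceeds the threshold.
```python
class RefreshManager:
    """Trigger a refresh when the spread between the highest and lowest
    recorded temperatures exceeds a threshold."""

    def __init__(self, threshold_c, refresh):
        self.threshold_c = threshold_c
        self.refresh = refresh
        self.highest = None
        self.lowest = None

    def on_sample(self, temperature_c):
        # Update the recorded extremes with the new sensor reading.
        self.highest = temperature_c if self.highest is None else max(self.highest, temperature_c)
        self.lowest = temperature_c if self.lowest is None else min(self.lowest, temperature_c)
        if self.highest - self.lowest > self.threshold_c:
            self.refresh()
            self.highest = self.lowest = temperature_c  # assumed reset after refresh

manager = RefreshManager(threshold_c=15.0, refresh=lambda: print("refresh triggered"))
for reading in (40.0, 47.0, 52.0, 58.0):   # the spread passes 15 degrees at 58
    manager.on_sample(reading)
```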
  • Patent number: 10942663
    Abstract: Techniques are provided for inlining data in inodes of a file system. In an example, data (e.g., a file) is to be written to storage. Where the data is small enough to fit in an inode, it can be written to a dynamic area of the inode. Where dynamic attributes of the inode conflict with storing the data, the dynamic attributes can be spilled to a metadata block. Where the inlined data becomes too large to be stored in the inode, it can be spilled to a data block, and a metadata tree can be written to the inode. Where data that was previously too large to inline is truncated so that now it can be written to the inode, the data is inlined in the inode from a data block. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: March 9, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Attilio Rao, Dmitri Chmelev
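    A toy Python sketch of the inlining decisions described above: inline small data in the inode's dynamic area, spill the dynamic attributes to a metadata block when they conflict with the data, and spill the data itself to a data block (leaving a stand-in metadata tree in the inode) when it is too large. The 128-byte dynamic area and the dict-based metadata tree are assumptions.
```python
INODE_DYNAMIC_BYTES = 128   # assumed size of the inode's dynamic area

class Inode:
    def __init__(self):
        self.dynamic_attrs = b""   # dynamic attributes stored in the inode
        self.inline_data = None    # file data inlined in the dynamic area
        self.metadata_tree = None  # block addresses when data is spilled

class ToyFS:
    def __init__(self):
        self.metadata_blocks = []
        self.data_blocks = []

    def write(self, inode, data):
        if len(data) + len(inode.dynamic_attrs) <= INODE_DYNAMIC_BYTES:
            # Small enough: keep the data inline in the inode.
            inode.inline_data, inode.metadata_tree = data, None
        elif len(data) <= INODE_DYNAMIC_BYTES:
            # Data fits alone but conflicts with the dynamic attributes:
            # spill the attributes to a metadata block, then inline.
            self.metadata_blocks.append(inode.dynamic_attrs)
            inode.dynamic_attrs = b""
            inode.inline_data, inode.metadata_tree = data, None
        else:
            # Too large to inline: spill the data to a data block and
            # keep a (toy) metadata tree in the inode instead.
            self.data_blocks.append(data)
            inode.inline_data = None
            inode.metadata_tree = {"block": len(self.data_blocks) - 1}

fs, inode = ToyFS(), Inode()
inode.dynamic_attrs = b"x" * 100
fs.write(inode, b"a" * 120)    # attributes spilled, data inlined
fs.write(inode, b"b" * 4096)   # data spilled, metadata tree in the inode
fs.write(inode, b"c" * 16)     # truncated small again: re-inlined
print(inode.inline_data is not None, inode.metadata_tree)
```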
  • Patent number: 10936543
    Abstract: A data management device includes a cache for a data storage device and a processor. The cache includes cache devices that store a block set. The processor obtains a cache modification request that specifies a first block of the block set, updates a copy of a header of the block set in each of the cache devices based on the modification request, updates a copy of meta-data of the block set in each of the cache devices based on the cache modification request, and updates the first block in a first cache device of the cache devices based on the cache modification request. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: March 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Shuang Liang, Jayasekhar Konduru, Mahesh Kamat, Akshay Narayan Muramatti
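    A small Python sketch of the update order described above: the header copy and the metadata copy are refreshed on every cache device, while the block itself is rewritten only on the cache device that holds it. The dict-based structures and the "owner is the device that already stores the block" rule are assumptions.
```python
class CacheDevice:
    """One cache device holding its copy of the block-set header and
    metadata, plus whichever blocks of the set it owns."""

    def __init__(self, name):
        self.name = name
        self.header = {}
        self.metadata = {}
        self.blocks = {}

def apply_cache_modification(devices, block_id, new_block, header_update, meta_update):
    """Apply one cache-modification request: refresh the header and
    metadata copies on every device, then rewrite the target block on
    the single device that holds it."""
    for device in devices:
        device.header.update(header_update)    # header copy on each device
        device.metadata.update(meta_update)    # metadata copy on each device
    owner = next(d for d in devices if block_id in d.blocks)
    owner.blocks[block_id] = new_block         # the block itself, only once

devices = [CacheDevice("cache-0"), CacheDevice("cache-1")]
devices[0].blocks["blk-3"] = b"old"
apply_cache_modification(
    devices,
    block_id="blk-3",
    new_block=b"new",
    header_update={"generation": 7},
    meta_update={"blk-3": {"dirty": True}},
)
print(devices[0].blocks["blk-3"], devices[1].header)
```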
  • Patent number: 10936319
    Abstract: In a decode stage of a hardware processor pipeline, one particular instruction of a plurality of instructions is decoded. It is determined that the particular instruction requires a memory access. Responsive to such determination, it is predicted whether the memory access will result in a cache miss. The predicting in turn includes accessing one of a plurality of entries in a pattern history table stored as a hardware table in the decode stage. The accessing is based, at least in part, upon at least the most recent entry in a global history buffer. The pattern history table stores a plurality of predictions. The global history buffer stores actual results of previous memory accesses as one of cache hits and cache misses. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: June 16, 2018
    Date of Patent: March 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Vijayalakshmi Srinivasan, Brian R. Prasky
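    A software sketch of the decode-stage predictor above: a global history buffer of recent hit/miss outcomes indexes a pattern history table of saturating counters. The history length, the dict-based table representation, and the 2-bit counter width are assumptions; the hardware stores the table in the decode stage rather than in a Python dict.
```python
from collections import deque

class CacheMissPredictor:
    """Pattern history table of saturating counters, indexed by the most
    recent entries of a global history buffer of hit/miss outcomes."""

    def __init__(self, history_length=8, counter_max=3):
        self.history = deque([0] * history_length, maxlen=history_length)
        self.counter_max = counter_max
        self.table = {}          # history pattern -> saturating counter

    def _index(self):
        # The recent global history forms the index into the table.
        return tuple(self.history)

    def predict_miss(self):
        counter = self.table.get(self._index(), 0)
        return counter > self.counter_max // 2   # predict a miss if the counter leans high

    def update(self, was_miss):
        # Train the counter for this pattern, then record the actual
        # outcome (hit or miss) in the global history buffer.
        index = self._index()
        counter = self.table.get(index, 0)
        counter = min(counter + 1, self.counter_max) if was_miss else max(counter - 1, 0)
        self.table[index] = counter
        self.history.append(1 if was_miss else 0)

predictor = CacheMissPredictor()
for outcome in [True, True, False, True]:     # observed misses and hits
    predicted = predictor.predict_miss()
    predictor.update(was_miss=outcome)
    print(f"predicted miss: {predicted}, actual miss: {outcome}")
```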
  • Patent number: 10936230
    Abstract: A computational memory for a computer. The memory includes a memory bank having a selected-row buffer and being configured to store records up to a number, K. The memory also includes an accumulator connected to the memory bank, the accumulator configured to store up to K records. The memory also includes an arithmetic and logic unit (ALU) connected to the accumulator and to the selected-row buffer of the memory bank, the ALU having an indirect network of 2K ports for reading and writing records in the memory bank and the accumulator, and the ALU further physically configured to operate as a sorting network. The memory also includes a controller connected to the memory bank, the ALU, and the accumulator, the controller being hardware configured to direct operation of the ALU.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: March 2, 2021
    Assignee: National Technology & Engineering Solutions of Sandia, LLC
    Inventor: Erik DeBenedictis
  • Patent number: 10922028
    Abstract: A data programming method, a memory storage device and a memory control circuit unit are provided. The method includes presetting a programming mode of a plurality of first type physical erasing units as a first programming mode, and presetting a programming mode of a plurality of second type physical erasing units as a second programming mode. The method also includes obtaining a change parameter according to usage parameters of the first type physical erasing units and the second type physical erasing units. The method further includes determining whether the change parameter matches a first change condition, and if the change parameter matches the first change condition, programming a write-data into the second type physical erasing unit by using the first programming mode.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: February 16, 2021
    Assignee: Hefei Core Storage Electronic Limited
    Inventors: Hao-Zhi Lee, Qi-Ao Zhu, Meng Xiao, Hui Xie
  • Patent number: 10915445
    Abstract: A method, computer readable medium, and system are disclosed for a distributed cache that provides multiple processing units with fast access to a portion of data, which is stored in local memory. The distributed cache is composed of multiple smaller caches, and each of the smaller caches is associated with at least one processing unit. In addition to a shared crossbar network through which data is transferred between processing units and the smaller caches, a dedicated connection is provided between two or more smaller caches that form a partner cache set. Transferring data through the dedicated connections reduces congestion on the shared crossbar network. Reducing congestion on the shared crossbar network increases the available bandwidth and allows the number of processing units to increase. A coherence protocol is defined for accessing data stored in the distributed cache and for transferring data between the smaller caches of a partner cache set.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: February 9, 2021
    Assignee: NVIDIA Corporation
    Inventors: Wishwesh Anil Gandhi, Tanmoy Mandal, Ravi Kiran Manyam, Supriya Shrihari Rao