Patents Examined by William E Baughman
  • Patent number: 11977770
    Abstract: Memory devices, memory systems, and methods of operating memory devices and systems are disclosed in which a memory device can asynchronously indicate to a connected host that information in a mode register has been changed, obviating the need for repeated polling of the information and thereby reducing both command/address bus and data bus bandwidth consumption. In one embodiment, a memory device comprises a memory; a mode register storing information corresponding to the memory; and circuitry configured to, in response to the information in the mode register being modified by the memory device, generate a notification to a connected host device.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: May 7, 2024
    Inventor: Frank F. Ross
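A minimal Python sketch of the asynchronous mode-register notification idea in this abstract, assuming a simple callback-style host interface; the class, field names, and callback signature are illustrative assumptions, not the patented hardware design.

```python
from typing import Callable, Dict


class MemoryDevice:
    """Toy model of a device that notifies the host when it changes its own mode register."""

    def __init__(self, notify_host: Callable[[str, int], None]):
        self._mode_register: Dict[str, int] = {"refresh_rate": 0}
        self._notify_host = notify_host  # hypothetical host-side callback

    def device_update(self, field: str, value: int) -> None:
        # Device-initiated change: push a notification instead of relying on host polling,
        # saving command/address and data bus bandwidth.
        self._mode_register[field] = value
        self._notify_host(field, value)

    def host_read(self, field: str) -> int:
        # The host can still read the register, but no longer has to poll it repeatedly.
        return self._mode_register[field]


if __name__ == "__main__":
    device = MemoryDevice(lambda field, value: print(f"host notified: {field} -> {value}"))
    device.device_update("refresh_rate", 3)  # host is notified asynchronously
```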
  • Patent number: 11977758
    Abstract: Methods, systems, and devices for assigning blocks of memory systems are described. Some memory systems may be configured to initiate an operation to characterize a plurality of blocks of a memory system; identify a first quantity of complete blocks of the plurality of blocks and a second quantity of reduced blocks of the plurality of blocks based at least in part on initiating the operation; determine, for a block of the second quantity of reduced blocks, whether a quantity of planes available for use to store the information in the block satisfies a threshold; and assign the block as a special function block configured to store data associated with a function of the memory system based at least in part on determining that the quantity of planes available for use to store the information in the block of the second quantity of reduced blocks satisfies the threshold.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: May 7, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Deping He, Caixia Yang
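A rough Python sketch of the block characterization and assignment flow, assuming a 4-plane block geometry and a plane-count threshold of 2; both values, and the role labels, are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List

TOTAL_PLANES = 4      # assumed number of planes in a complete block
PLANE_THRESHOLD = 2   # assumed minimum usable planes for a special function block


@dataclass
class Block:
    block_id: int
    usable_planes: int          # planes still available to store information
    role: str = "unassigned"


def characterize_and_assign(blocks: List[Block]) -> None:
    complete = [b for b in blocks if b.usable_planes == TOTAL_PLANES]
    reduced = [b for b in blocks if b.usable_planes < TOTAL_PLANES]
    for b in complete:
        b.role = "user_data"
    for b in reduced:
        # Reduced blocks with enough usable planes are assigned as special function
        # blocks; the rest are left out of the pool in this sketch.
        b.role = "special_function" if b.usable_planes >= PLANE_THRESHOLD else "retired"


if __name__ == "__main__":
    pool = [Block(0, 4), Block(1, 3), Block(2, 1)]
    characterize_and_assign(pool)
    for block in pool:
        print(block)
```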
  • Patent number: 11977768
    Abstract: Methods, systems, and devices for write buffer extensions for storage interface controllers are described. Apparatuses and methods are presented in which a buffer may be used to temporarily store data from an application if the memory device is in an INACTIVE power mode. This may allow the memory device to remain asleep. The buffer may be positioned on the host device so that the power mode of the memory device may not affect it. That way, data may be stored in the buffer without waking up the memory device. If the memory device is in an ACTIVE power mode, the data that has been temporarily stored in the buffer may be sent to the memory device for storage. During read operations, if the requested data is stored in the buffer, it may be used instead of data in the memory device.
    Type: Grant
    Filed: June 20, 2022
    Date of Patent: May 7, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Sharath Chandra Ambula, Sushil Kumar, Venkata Kiran Kumar Matturi
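A small Python sketch of a host-side write buffer that absorbs writes while the memory device is in an INACTIVE power mode and flushes them once the device is ACTIVE; the dictionaries standing in for the buffer and the device, and the method names, are illustrative assumptions.

```python
from enum import Enum
from typing import Dict


class PowerMode(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"


class HostWriteBuffer:
    """Host-side buffer that absorbs writes while the memory device sleeps."""

    def __init__(self) -> None:
        self.mode = PowerMode.INACTIVE
        self._buffer: Dict[int, bytes] = {}   # pending writes, keyed by address
        self._device: Dict[int, bytes] = {}   # stand-in for the memory device

    def write(self, addr: int, data: bytes) -> None:
        if self.mode is PowerMode.INACTIVE:
            self._buffer[addr] = data          # device stays asleep
        else:
            self._device[addr] = data

    def read(self, addr: int) -> bytes:
        # Buffered data is newer than anything on the device, so prefer it.
        return self._buffer.get(addr, self._device.get(addr, b""))

    def wake(self) -> None:
        self.mode = PowerMode.ACTIVE
        self._device.update(self._buffer)      # flush pending writes to the device
        self._buffer.clear()
```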
  • Patent number: 11971784
    Abstract: To perform Recovery Point Objective (RPO) driven backup scheduling, the illustrative data storage management system is enhanced in several dimensions. Illustrative enhancements include: streamlining the user interface to take in fewer parameters; backup job scheduling is largely automated based on several factors, and includes automatic backup level conversion for legacy systems; backup job priorities are dynamically adjusted to re-submit failed data objects with an “aggressive” schedule in time to meet the RPO; only failed items are resubmitted for failed backup jobs.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: April 30, 2024
    Assignee: Commvault Systems, Inc.
    Inventors: Bhavyan Bharatkumar Mehta, Anand Vibhor, Amey Vijaykumar Karandikar, Gokul Pattabiraman, Hemant Mishra
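A toy Python sketch of RPO-driven scheduling and failed-item resubmission, assuming a simple "schedule early enough to leave room for a retry" rule with a 25% margin; the margin, the priority label, and the function names are assumptions, not Commvault's algorithm.

```python
from datetime import datetime, timedelta
from typing import List


def next_backup_due(last_success: datetime, rpo: timedelta, margin: float = 0.25) -> datetime:
    """Schedule the next job early enough that a retry can still land inside the RPO."""
    return last_success + rpo * (1 - margin)


def resubmit_failed(failed_items: List[str]) -> List[dict]:
    # Only the failed objects are re-submitted, at an elevated ("aggressive") priority.
    return [{"item": item, "priority": "aggressive"} for item in failed_items]


if __name__ == "__main__":
    print("next job due:", next_backup_due(datetime(2024, 1, 1), timedelta(hours=4)))
    print(resubmit_failed(["/db/orders.ibd"]))
```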
  • Patent number: 11960721
    Abstract: A method for dynamically storing keys and values includes receiving a request for storing one or more keys in a key value Solid State drive (KV-SSD). The method further includes performing a storage operation for storing each key of the one or more keys in a node of a data structure of the KV-SSD. The storage operation includes allocating a first region in the node for storing the key, such that a size of the first region is equal to a size of the key. The storage operation further includes allocating a second region in the node for storing key metadata associated with the key, such that the second region is of a predetermined size. The storage operation further includes storing the key in the first region and the key metadata in the second region of the node.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Srikanth Tumkur Shivanand, Kapil Garg, Paul Justin K, Sarath Chandra Reddy, Sri Gobicca Kms
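A minimal Python sketch of the node layout idea: each key occupies a region allocated to exactly its own size, followed by a fixed-size metadata region; the 16-byte metadata size and the index structure are assumptions.

```python
from typing import List, Tuple

METADATA_SIZE = 16  # assumed fixed size, in bytes, of the per-key metadata region


class Node:
    """Toy node layout: each key gets a region exactly its own size, plus fixed-size metadata."""

    def __init__(self) -> None:
        self._buf = bytearray()
        self._index: List[Tuple[int, int]] = []   # (offset, key_length) per stored key

    def store(self, key: bytes, metadata: bytes) -> None:
        if len(metadata) != METADATA_SIZE:
            raise ValueError("metadata region has a predetermined size")
        offset = len(self._buf)
        self._buf += key            # first region: allocated to exactly the key's size
        self._buf += metadata       # second region: predetermined size
        self._index.append((offset, len(key)))

    def key_at(self, i: int) -> bytes:
        offset, key_len = self._index[i]
        return bytes(self._buf[offset:offset + key_len])


if __name__ == "__main__":
    node = Node()
    node.store(b"user:42", b"\x00" * METADATA_SIZE)
    print(node.key_at(0))
```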
  • Patent number: 11960727
    Abstract: A system and corresponding method perform large memory transaction (LMT) stores. The system comprises a processor associated with a data-processing width and a processor accelerator. The processor accelerator performs a LMT store of a data set to a coprocessor in response to an instruction from the processor targeting the coprocessor. The data set corresponds to the instruction. The LMT store includes storing data from the data set, atomically, to the coprocessor based on a LMT line (LMTLINE). The LMTLINE is wider than the data-processing width. The processor accelerator sends, to the processor, a response to the instruction. The response is based on completion of the LMT store of the data set in its entirety. The processor accelerator enables the processor to perform useful work in parallel with the LMT store, thereby improving processing performance of the processor.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: April 16, 2024
    Assignee: Marvell Asia Pte Ltd
    Inventors: Aadeetya Shreedhar, Jason D. Zebchuk, Wilson P. Snyder, II, Albert Ma, Joseph Featherston
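A loose Python analogy for the LMT-store flow, using a thread to stand in for the processor accelerator: the wide store proceeds line by line while the "processor" keeps working, and a completion event stands in for the response; the 128-byte LMTLINE width and all names are assumptions.

```python
import threading
import time

LMTLINE_BYTES = 128   # assumed LMT line width, wider than the CPU's native store width


def lmt_store(coprocessor: dict, data: bytes, done: threading.Event) -> None:
    """Stand-in for the processor accelerator: store the whole data set, then signal completion."""
    for offset in range(0, len(data), LMTLINE_BYTES):
        coprocessor[offset] = data[offset:offset + LMTLINE_BYTES]   # one LMTLINE-wide store
    done.set()                                                      # response to the instruction


if __name__ == "__main__":
    coprocessor, done = {}, threading.Event()
    payload = bytes(4 * LMTLINE_BYTES)
    threading.Thread(target=lmt_store, args=(coprocessor, payload, done)).start()
    work = 0
    while not done.is_set():   # the processor does useful work in parallel with the store
        work += 1
        time.sleep(0.0001)
    print("LMT store complete:", len(coprocessor), "lines; work units done meanwhile:", work)
```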
  • Patent number: 11954030
    Abstract: Aspects of the disclosure relate to a dynamic caching platform. The dynamic caching platform may train a machine learning model based on historical complexity score information. The dynamic caching platform may receive information streams from a client metaverse device and a metaverse host system. The dynamic caching platform may generate a complexity score based on the interaction information streams using the machine learning model. The dynamic caching platform may compare the complexity score to complexity thresholds. Based on the comparison, the dynamic caching platform may identify caching rules. The dynamic caching platform may cache interaction information based on the caching rules. The dynamic caching platform may update the complexity score using the machine learning model. The dynamic caching platform may update the caching rules based on the updated complexity score. The dynamic caching platform may cache interaction information based on the updated caching rules.
    Type: Grant
    Filed: November 21, 2022
    Date of Patent: April 9, 2024
    Assignee: Bank of America Corporation
    Inventors: Shailendra Singh, Vinod Maghnani, Ashish Kumar Dwivedi
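A small Python sketch mapping a complexity score to caching rules via thresholds; the threshold values, rule fields, and TTLs are illustrative assumptions, not the platform's actual rules.

```python
def caching_rules(complexity_score: float) -> dict:
    """Pick caching rules by comparing the complexity score to illustrative thresholds."""
    if complexity_score >= 0.8:
        return {"cache_locally": True, "ttl_seconds": 300, "prefetch": True}
    if complexity_score >= 0.4:
        return {"cache_locally": True, "ttl_seconds": 60, "prefetch": False}
    return {"cache_locally": False, "ttl_seconds": 0, "prefetch": False}


if __name__ == "__main__":
    # As the model updates the complexity score, the selected rules change with it.
    for score in (0.9, 0.5, 0.1):
        print(score, caching_rules(score))
```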
  • Patent number: 11947823
    Abstract: A data storage and retrieval system for a computer memory including a memory slice formed of segments and adapted to contain one or more documents and a checkpoint adapted to persist the memory slice. The checkpoint includes a document vector containing a document pointer corresponding to a document. The document pointer includes a segment identifier identifying a logical segment of the memory slice and an offset value defining a relative memory location of the document within the identified segment. There are checkpoint memory blocks, each storing a copy of a corresponding segment of the memory slice. The segment identifier of the document pointer identifies a checkpoint memory block and the offset value of the document pointer defines a relative location of the document within the checkpoint memory block.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: April 2, 2024
    Assignee: SAP SE
    Inventors: Christian Bensberg, Steffen Geissinger
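A compact Python sketch of the document-pointer idea: because each checkpoint memory block stores a copy of a slice segment, the same (segment identifier, offset) pair resolves a document in both the live slice and the checkpoint; the data structures are assumptions.

```python
from typing import Dict, List, Tuple

DocumentPointer = Tuple[int, int]   # (segment identifier, offset within that segment)


def resolve(pointer: DocumentPointer, segments: Dict[int, List[str]]) -> str:
    """Resolve a document pointer against the live slice segments or the checkpoint blocks.

    Each checkpoint memory block stores a copy of the corresponding slice segment,
    so the same (segment_id, offset) pair is valid against either structure.
    """
    segment_id, offset = pointer
    return segments[segment_id][offset]


if __name__ == "__main__":
    live_slice = {0: ["doc-a", "doc-b"], 1: ["doc-c"]}
    checkpoint = {seg: list(docs) for seg, docs in live_slice.items()}   # per-segment copies
    ptr: DocumentPointer = (1, 0)
    assert resolve(ptr, live_slice) == resolve(ptr, checkpoint) == "doc-c"
    print("pointer", ptr, "resolves to", resolve(ptr, checkpoint))
```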
  • Patent number: 11941275
    Abstract: Certain embodiments described herein relate to an improved disk usage growth prediction system. In some embodiments, one or more components in an information management system can determine usage status data of a given storage device, perform a validation check on the usage status data using multiple prediction models, compare validation results of the multiple prediction models to identify the best performing prediction model, generate a disk usage growth prediction using the identified prediction model, and adjust the available space of the storage device according to the disk usage growth prediction.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: March 26, 2024
    Assignee: Commvault Systems, Inc.
    Inventors: Bheemesh R. Dwarampudi, Vibhor Mishra, Pavan Kumar Reddy Bedadala
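A toy Python sketch of the model-selection step: back-test several simple growth models on recent usage, keep the best performer, and use it to predict growth; the models and validation rule are stand-ins, not the patented prediction models.

```python
from typing import Callable, Dict, List

Model = Callable[[List[float], int], float]   # (usage history, steps ahead) -> predicted usage


def last_value(history: List[float], steps: int) -> float:
    return history[-1]


def linear_trend(history: List[float], steps: int) -> float:
    slope = (history[-1] - history[0]) / max(len(history) - 1, 1)
    return history[-1] + slope * steps


def best_model(history: List[float], models: Dict[str, Model]) -> str:
    """Validation check: back-test each model on the latest point and keep the best performer."""
    train, actual = history[:-1], history[-1]
    errors = {name: abs(model(train, 1) - actual) for name, model in models.items()}
    return min(errors, key=errors.get)


if __name__ == "__main__":
    usage_gb = [100.0, 110.0, 121.0, 133.0, 146.0]
    chosen = best_model(usage_gb, {"last_value": last_value, "linear_trend": linear_trend})
    print("selected model:", chosen)
    print("predicted usage 30 steps out:", linear_trend(usage_gb, 30))
```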
  • Patent number: 11922030
    Abstract: According to one embodiment, a memory device includes a first nonvolatile memory die, a second nonvolatile memory die, a controller, and a first temperature sensor and a second temperature sensor incorporated respectively in the first nonvolatile memory die and the second nonvolatile memory die. The controller reads temperatures measured by the first and second temperature sensors, from the first and second nonvolatile memory dies. When at least one of the temperatures read from the first and second nonvolatile memory dies is equal to or higher than a threshold temperature, the controller reduces a frequency of issue of commands to the first and second nonvolatile memory dies or a speed of access to the first and second nonvolatile memory dies.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: March 5, 2024
    Assignee: Kioxia Corporation
    Inventors: Atsushi Kondo, Ryo Yonezawa
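A minimal Python sketch of the threshold check: if any die temperature reaches the threshold, the command-issue frequency (or access speed) is reduced; the 85 °C threshold and the rate values are assumptions.

```python
from typing import List

THRESHOLD_C = 85.0    # assumed threshold temperature
NORMAL_RATE = 1.0     # relative command-issue frequency
THROTTLED_RATE = 0.5  # assumed reduced frequency while any die is hot


def command_issue_rate(die_temperatures_c: List[float]) -> float:
    """Reduce command issue (or access speed) if any die is at or above the threshold."""
    if any(temp >= THRESHOLD_C for temp in die_temperatures_c):
        return THROTTLED_RATE
    return NORMAL_RATE


if __name__ == "__main__":
    print(command_issue_rate([70.0, 72.5]))   # both dies cool -> 1.0
    print(command_issue_rate([70.0, 86.0]))   # one die hot    -> 0.5
```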
  • Patent number: 11922058
    Abstract: Embodiments of a three-dimensional (3D) memory device and a method of operating the 3D memory device are provided. The 3D memory device includes an array of 3D NAND memory cells, an array of static random-access memory (SRAM) cells, and a peripheral circuit. The array of SRAM cells and the peripheral circuit arranged at one side are bonded with the array of 3D NAND memory cells at another side to form a chip. Data is received from a host through the peripheral circuit, buffered in the array of SRAM cells, and transmitted from the array of SRAM cells to the array of 3D NAND memory cells. The data is programmed into the array of 3D NAND memory cells.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: March 5, 2024
    Assignee: YANGTZE MEMORY TECHNOLOGIES CO., LTD.
    Inventors: Yue Ping Li, Wei Jun Wan, Chun Yuan Hou
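A very rough Python sketch of the data path described: host data is buffered in the SRAM array and later programmed into the 3D NAND array; this models only the buffering flow, not the bonded-die structure, and all names are assumptions.

```python
from collections import deque


class BondedMemoryChip:
    """Data-path sketch only: host data is buffered in SRAM, then programmed into 3D NAND."""

    def __init__(self) -> None:
        self.sram = deque()      # array of SRAM cells acting as a buffer
        self.nand_pages = []     # stand-in for the array of 3D NAND memory cells

    def receive_from_host(self, data: bytes) -> None:
        self.sram.append(data)   # data arrives through the peripheral circuit and is buffered

    def program(self) -> None:
        while self.sram:
            self.nand_pages.append(self.sram.popleft())   # transmit from SRAM and program


if __name__ == "__main__":
    chip = BondedMemoryChip()
    chip.receive_from_host(b"page-0")
    chip.receive_from_host(b"page-1")
    chip.program()
    print(chip.nand_pages)
```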
  • Patent number: 11922040
    Abstract: Embodiments of the present disclosure relate to a memory system and an operating method of the memory system. According to embodiments of the present disclosure, a memory system may divide and manage a plurality of memory dies into a plurality of memory die groups, may set a first super memory block including at least one of the memory blocks included in a first memory die group and a second super memory block including at least one of the memory blocks included in a second memory die group, may determine whether to set an extended super memory block in which all or part of the first super memory block and all or part of the second super memory block are merged, and may write the write data requested by a host to the extended super memory block in an interleaving manner.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: March 5, 2024
    Assignee: SK hynix Inc.
    Inventor: Youn Won Park
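A small Python sketch of interleaved writes across an extended super memory block whose member blocks come from two memory die groups; the block naming and round-robin placement are illustrative assumptions.

```python
from typing import Dict, List


def write_interleaved(extended_super_block: List[str],
                      data_chunks: List[bytes]) -> Dict[str, List[bytes]]:
    """Round-robin the write data across the member blocks of the extended super block."""
    layout: Dict[str, List[bytes]] = {block: [] for block in extended_super_block}
    for i, chunk in enumerate(data_chunks):
        block = extended_super_block[i % len(extended_super_block)]
        layout[block].append(chunk)
    return layout


if __name__ == "__main__":
    # Member blocks drawn from two memory die groups, merged into one extended super block.
    extended = ["grp0_die0_blk3", "grp0_die1_blk3", "grp1_die0_blk7", "grp1_die1_blk7"]
    chunks = [bytes([i]) * 4 for i in range(8)]
    for block, written in write_interleaved(extended, chunks).items():
        print(block, "receives", len(written), "chunks")
```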
  • Patent number: 11907113
    Abstract: According to one embodiment, a magnetic disk device comprises magnetic disks, heads, and a controller. Within a predetermined recording area constituted of a plurality of mutually adjacent cylinders in the magnetic disks, the controller does not allocate logical addresses to sectors of a first area that is specified so as to correspond to a defect existing in the predetermined recording area, and uniquely allocates logical addresses to sectors of a second area other than the first area. The controller varies the allocation of logical addresses to the sectors of the second area according to the number of defects existing in the predetermined recording area.
    Type: Grant
    Filed: August 4, 2022
    Date of Patent: February 20, 2024
    Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA ELECTRONIC DEVICES & STORAGE CORPORATION
    Inventor: Takeshi Shibasaki
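A minimal Python sketch of the allocation rule: sectors in the defect-corresponding first area receive no logical addresses, while sectors of the second area receive unique logical addresses; the sector numbering and defect set are assumptions.

```python
from typing import Dict, List, Set


def allocate_lbas(sectors: List[int], defective_area: Set[int]) -> Dict[int, int]:
    """Uniquely map logical addresses onto sectors outside the defect-corresponding first area."""
    mapping: Dict[int, int] = {}
    lba = 0
    for sector in sectors:
        if sector in defective_area:
            continue                      # no logical address for the first area
        mapping[lba] = sector
        lba += 1
    return mapping


if __name__ == "__main__":
    sectors = list(range(10))
    # Sectors 3-5 fall in the area specified to correspond to a defect.
    print(allocate_lbas(sectors, defective_area={3, 4, 5}))
```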
  • Patent number: 11899962
    Abstract: According to one embodiment, an information processing apparatus includes a nonvolatile memory and a CPU. The CPU stores first data in the nonvolatile memory, performs a first transmission of a write request associated with the first data to the memory system, and stores management data including information equivalent to the write request in the nonvolatile memory. In response to receiving a first response to the write request transmitted in the first transmission, the CPU adds, to the management data, information indicating that the first response has been received. The CPU deletes the first data and the management data in response to receiving a second response to the write request transmitted in the first transmission after receiving the first response.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: February 13, 2024
    Assignee: Kioxia Corporation
    Inventors: Naoki Esaka, Koichi Nagai, Toyohide Isshi
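A small Python sketch of the host-side bookkeeping: the first data and its management data are kept in nonvolatile memory until both responses to the write request have arrived; the dictionary layout and method names are assumptions.

```python
from typing import Dict


class WriteTracker:
    """Host-side bookkeeping sketch: data and management data persist until the second response."""

    def __init__(self) -> None:
        self.data: Dict[int, bytes] = {}        # first data, kept in nonvolatile memory
        self.management: Dict[int, dict] = {}   # management data, one entry per write request

    def submit(self, request_id: int, payload: bytes) -> None:
        self.data[request_id] = payload
        self.management[request_id] = {"request": request_id, "first_response": False}

    def on_first_response(self, request_id: int) -> None:
        # Record that the first response to the write request has been received.
        self.management[request_id]["first_response"] = True

    def on_second_response(self, request_id: int) -> None:
        if self.management[request_id]["first_response"]:
            del self.data[request_id]           # safe to discard once both responses arrived
            del self.management[request_id]
```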
  • Patent number: 11892912
    Abstract: Methods and systems for backing up and restoring sets of electronic files using sets of pseudo-virtual disks are described. The sets of electronic files may be sourced from or be stored using one or more different data sources including one or more real machines and/or one or more virtual machines. A first snapshot of the sets of electronic files may be aggregated from the different data sources and stored using a first pseudo-virtual disk. A second snapshot of the sets of electronic files may be aggregated from the different data sources subsequent to the generation of the first pseudo-virtual disk and stored using the first pseudo-virtual disk or a second pseudo-virtual disk different from the first pseudo-virtual disk.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: February 6, 2024
    Assignee: Rubrik, Inc.
    Inventor: Soham Mazumdar
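A toy Python sketch of aggregating file sets from several data sources into snapshots stored on a pseudo-virtual disk; the source callables and snapshot layout are illustrative assumptions.

```python
from typing import Callable, Dict, List

Source = Callable[[], Dict[str, bytes]]   # returns path -> contents for one data source


def take_snapshot(sources: List[Source]) -> Dict[str, Dict[str, bytes]]:
    """Aggregate the file sets of several real/virtual machines into one snapshot."""
    return {f"source_{i}": source() for i, source in enumerate(sources)}


def vm_files() -> Dict[str, bytes]:
    return {"/etc/hosts": b"127.0.0.1 localhost"}


def physical_files() -> Dict[str, bytes]:
    return {"C:/data/report.txt": b"q1 numbers"}


if __name__ == "__main__":
    pseudo_virtual_disk = [take_snapshot([vm_files, physical_files])]      # first snapshot
    pseudo_virtual_disk.append(take_snapshot([vm_files, physical_files]))  # later snapshot
    print(len(pseudo_virtual_disk), "snapshots stored on the pseudo-virtual disk")
```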
  • Patent number: 11886260
    Abstract: Aspects of thermal management of a non-volatile storage device are provided. In various embodiments, a storage device includes corresponding memory locations on two or more dies. Corresponding memory locations on each die form an addressable group. A controller in thermal communication with each of the dies may detect an excess temperature on one of the dies while performing sequential host writes. Upon such detection, the controller may disable all writes to the detected die while continuing to perform writes to the memory locations of the other dies without throttling the other dies. The controller may then reactivate writes to the detected die when the temperature drops below a threshold.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: January 30, 2024
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Sridhar Prudviraj Gunda, Kiran Kumar Eemani, Praveen Kumar Boda
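A minimal Python sketch of per-die thermal gating: only the over-temperature die has writes disabled, the other dies continue without throttling, and the die is reactivated once it cools below a lower threshold; both temperature values are assumptions.

```python
from typing import Dict, List, Set

DISABLE_C = 85.0   # assumed over-temperature threshold for a die
RESUME_C = 75.0    # assumed temperature at which writes to the die resume


def writable_dies(die_temps_c: Dict[int, float], disabled: Set[int]) -> List[int]:
    """Disable writes only on the hot die; the remaining dies keep writing, unthrottled."""
    for die, temp in die_temps_c.items():
        if temp >= DISABLE_C:
            disabled.add(die)
        elif die in disabled and temp < RESUME_C:
            disabled.discard(die)              # reactivate once the die has cooled down
    return [die for die in die_temps_c if die not in disabled]


if __name__ == "__main__":
    disabled: Set[int] = set()
    print(writable_dies({0: 70.0, 1: 90.0}, disabled))   # die 1 disabled -> [0]
    print(writable_dies({0: 70.0, 1: 74.0}, disabled))   # die 1 cooled   -> [0, 1]
```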
  • Patent number: 11880308
    Abstract: A cache subsystem is disclosed. The cache subsystem includes a cache configured to store information in cache lines arranged in a plurality of ways. A requestor circuit generates a request to access a particular cache line in the cache. A prediction circuit is configured to generate a prediction of which of the ways includes the particular cache line. A comparison circuit verifies the prediction by comparing a particular address tag associated with the particular cache line to a cache tag corresponding to a predicted one of the ways. Responsive to determining that the prediction was correct, a confirmation indication is stored indicating the correct prediction. For a subsequent request for the particular cache line, the cache is configured to forego a verification of the prediction that the particular cache line is included in the one of the ways based on the confirmation indication.
    Type: Grant
    Filed: September 20, 2022
    Date of Patent: January 23, 2024
    Assignee: Apple Inc.
    Inventors: Ronald P. Hall, Mary D. Brown, Balaji Kadambi, Mahesh K. Reddy
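A compact Python sketch of way prediction with a confirmation indication: the first hit on the predicted way verifies the tag and sets the indication, and a subsequent access to the same line skips the verification; the structure and access pattern are assumptions, not Apple's design.

```python
from typing import Dict, List, Optional, Tuple


class WayPredictedCache:
    """Sketch of way prediction with a per-line confirmation bit that skips tag verification."""

    def __init__(self, num_ways: int):
        # per way: set_index -> (address tag, confirmed)
        self.ways: List[Dict[int, Tuple[int, bool]]] = [dict() for _ in range(num_ways)]

    def fill(self, way: int, set_index: int, tag: int) -> None:
        self.ways[way][set_index] = (tag, False)

    def lookup(self, set_index: int, tag: int, predicted_way: int) -> Optional[int]:
        entry = self.ways[predicted_way].get(set_index)
        if entry is None:
            return None                                   # miss or misprediction
        stored_tag, confirmed = entry
        if confirmed:
            return predicted_way                          # forego verification this time
        if stored_tag == tag:                             # verify the prediction once
            self.ways[predicted_way][set_index] = (stored_tag, True)
            return predicted_way
        return None


if __name__ == "__main__":
    cache = WayPredictedCache(num_ways=4)
    cache.fill(way=2, set_index=5, tag=0xABC)
    print(cache.lookup(5, 0xABC, predicted_way=2))   # verified hit -> 2, confirmation stored
    print(cache.lookup(5, 0xABC, predicted_way=2))   # hit again, verification skipped
```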
  • Patent number: 11874773
    Abstract: Systems, methods, and apparatuses relating to a dual spatial pattern prefetcher are described.
    Type: Grant
    Filed: December 28, 2019
    Date of Patent: January 16, 2024
    Assignee: Intel Corporation
    Inventors: Rahul Bera, Anant Vithal Nori, Sreenivas Subramoney
  • Patent number: 11868621
    Abstract: A data storage system can employ a read destructive memory configured with multiple levels. A non-volatile memory unit can be programmed with a first logical state in response to a first write voltage of a first hysteresis loop by a write controller prior to being programmed to a second logical state in response to a second write voltage of the first hysteresis loop, as directed by the write controller. The first and second logical states may be present concurrently in the non-volatile memory unit and subsequently read concurrently as the first logical state and the second logical state.
    Type: Grant
    Filed: June 20, 2022
    Date of Patent: January 9, 2024
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Jon D. Trantham, Praveen Viraraghavan, John W. Dykes, Ian J. Gilbert, Sangita Shreedharan Kalarickal, Matthew J. Totin, Mohamad El-Batal, Darshana H. Mehta
  • Patent number: 11860788
    Abstract: Data can be prefetched in a distributed storage system. For example, a computing device can receive, from a message queue, a message with metadata associated with at least one request for an input/output (IO) operation. The computing device can determine, based on the message from the message queue, an additional IO operation predicted to be requested by a client subsequent to the at least one request for the IO operation. The computing device can send a notification to a storage node of a plurality of storage nodes associated with the additional IO operation for prefetching data of the additional IO operation prior to the client requesting the additional IO operation.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: January 2, 2024
    Assignee: Red Hat, Inc.
    Inventors: Gabriel Zvi BenHanokh, Yehoshua Salomon
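A small Python sketch of queue-driven prefetching: a message's IO metadata is used to predict the next IO, and the storage node that owns the predicted block is notified to prefetch it; the sequential-read prediction rule and the modulo placement rule are assumptions.

```python
from typing import Dict, List, Optional


def predict_next_io(message: Dict) -> Optional[Dict]:
    # Assumed prediction rule: a read of block N is often followed by a read of block N + 1.
    if message.get("op") == "read":
        return {"op": "read", "block": message["block"] + 1}
    return None


def handle_queue_message(message: Dict, storage_nodes: Dict[int, List], num_nodes: int) -> None:
    """Consume IO metadata from the queue and notify the owning node to prefetch the predicted IO."""
    predicted = predict_next_io(message)
    if predicted is None:
        return
    owner = predicted["block"] % num_nodes                  # assumed data placement rule
    storage_nodes[owner].append(("prefetch", predicted["block"]))


if __name__ == "__main__":
    nodes: Dict[int, List] = {0: [], 1: [], 2: []}
    handle_queue_message({"op": "read", "block": 41, "client": "c1"}, nodes, num_nodes=3)
    print(nodes)   # node 0 is notified to prefetch block 42 before the client asks for it
```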