Patents Examined by Gurtej Bansal
  • Patent number: 11977754
    Abstract: In accordance with one implementation, a method for adaptive in-field recalibration includes detecting a potential environmental disturbance for a first storage node in a mass storage system based on an indicator external to the first storage node, and initiating a recalibration of an operational parameter of the first storage node responsive to the detection.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: May 7, 2024
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Stephen S. Huh, Christopher R. Fulkerson
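The detection-and-recalibration flow in the entry above (patent 11977754) can be illustrated with a short sketch. The node model, the enclosure telemetry keys, and the recalibrate hook below are hypothetical; they only show the idea of triggering a recalibration from an indicator that is external to the affected storage node.

```python
# Hypothetical sketch: trigger recalibration of a storage node when an
# indicator *external* to that node (here, a fan-failure flag reported by
# the enclosure) suggests an environmental disturbance.

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.write_current = 1.00  # example operational parameter

    def recalibrate(self):
        # Placeholder for an in-field recalibration routine.
        self.write_current *= 1.02
        print(f"{self.name}: recalibrated write_current to {self.write_current:.2f}")

def check_external_indicators(node, enclosure_telemetry):
    # The indicator comes from outside the node itself (enclosure sensors,
    # neighbouring nodes, etc.), not from the node's own error counters.
    disturbance = (enclosure_telemetry.get("fan_failed")
                   or enclosure_telemetry.get("neighbor_vibration_alarm"))
    if disturbance:
        node.recalibrate()

node = StorageNode("node-0")
check_external_indicators(node, {"fan_failed": True})
```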
  • Patent number: 11954366
    Abstract: Issuing a constant stream of fixed commands to the memory dies of a data storage device creates hardware and firmware overhead that impacts performance at the flash interface module (FIM), because the FIM must handle both the fixed commands and the overhead associated with them. To avoid this impact on FIM performance, multiple fixed commands may be combined into individual multi-commands that are provided to the memory dies. Multi-commands reduce hardware and firmware overhead at the FIM relative to individually issued fixed commands, which improves performance of the data storage device because saturation of the FIM is decreased.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: April 9, 2024
    Assignee: Western Digital Technologies, Inc.
    Inventors: Dinesh Kumar Agarwal, Vijay Sivasankaran, Mikhail Palityka
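As a rough illustration of the multi-command idea in patent 11954366 above, the sketch below batches identical per-die commands into single multi-command descriptors so the flash interface module handles one descriptor instead of many. The command names, grouping rule, and batch size are assumptions, not the patent's actual command set.

```python
# Hypothetical sketch: instead of issuing one fixed command per die (each
# with its own firmware/hardware overhead at the flash interface module),
# fold identical per-die commands into a single multi-command descriptor.

from dataclasses import dataclass, field

@dataclass
class MultiCommand:
    opcode: str
    die_ids: list = field(default_factory=list)

def build_multi_commands(fixed_commands, max_dies_per_multi=4):
    """Group identical fixed commands addressed to different dies."""
    grouped = {}
    for opcode, die_id in fixed_commands:
        grouped.setdefault(opcode, []).append(die_id)

    multis = []
    for opcode, dies in grouped.items():
        for i in range(0, len(dies), max_dies_per_multi):
            multis.append(MultiCommand(opcode, dies[i:i + max_dies_per_multi]))
    return multis

# Eight per-die polls collapse into two multi-commands the FIM handles once each.
polls = [("READ_STATUS", die) for die in range(8)]
for mc in build_multi_commands(polls):
    print(mc)
```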
  • Patent number: 11947834
    Abstract: A method to provide network storage services to a remote host system, including: generating, from packets received from the remote host system, first control messages and first data messages; buffering, in a random-access memory of a memory sub-system, the first control messages for a local host system to fetch the first control messages, process the first control messages, and generate second control messages; sending the first data messages to a storage device of the memory sub-system without the first data messages being buffered in the random-access memory; communicating the second control messages generated by the local host system to the storage device of the memory sub-system; and processing, within the storage device, the second control messages and the first data messages to provide the network storage services.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: April 2, 2024
    Assignee: Micron Technology, Inc.
    Inventor: Luca Bert
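A toy model of the split routing described in patent 11947834 above: control messages derived from network packets are staged in a RAM buffer for the local host, while data messages bypass that buffer and go straight to the storage device. All queue and message names are invented for illustration.

```python
# Hypothetical sketch: control messages derived from network packets are
# buffered in RAM for the local host to process, while data messages bypass
# that buffer and go directly to the storage device.

from collections import deque

control_buffer = deque()   # stands in for the memory sub-system's RAM buffer
storage_device = []        # stands in for the storage device's input queue

def ingest(message):
    if message["kind"] == "control":
        control_buffer.append(message)      # local host will fetch these
    else:  # data message
        storage_device.append(message)      # not buffered in RAM

def local_host_step():
    # The local host fetches first control messages and emits second
    # control messages, which are forwarded to the storage device.
    while control_buffer:
        first = control_buffer.popleft()
        second = {"kind": "control", "derived_from": first["id"]}
        storage_device.append(second)

ingest({"kind": "control", "id": 1})
ingest({"kind": "data", "id": 2, "payload": b"..."})
local_host_step()
print(storage_device)
```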
  • Patent number: 11941252
    Abstract: Provided are methods, apparatuses, systems, and computer-readable storage media for reducing an open time of a solid-state drive (SSD). In an embodiment, a method includes dividing a logical-to-physical (L2P) address mapping table of the SSD into a plurality of segments. The method further includes assigning one journal buffer of a plurality of journal buffers to each segment of the plurality of segments. The method further includes recreating, during a power on sequence of the SSD, a portion of the plurality of segments by replaying a first subset of the plurality of journal buffers. The method further includes sending, to a host device, a device-ready signal upon successful recreation of the portion of the plurality of segments. The method further includes recreating, in a background mode, a remaining portion of the plurality of segments by replaying a second subset of the plurality of journal buffers.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: March 26, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Tushar Tukaram Patil, Anantha Sharma, Sharath Kumar Kodase, Suman Prakash Balakrishnan
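The power-on sequence in patent 11941252 above lends itself to a short sketch: the L2P table is divided into segments, each segment has its own journal buffer, a first subset of journals is replayed before device-ready is signalled, and the remaining segments are rebuilt afterwards. The segment count, journal format, and choice of the critical subset below are assumptions rather than details from the patent.

```python
# Hypothetical sketch of the power-on flow: replay journals for a critical
# subset of L2P segments, signal device-ready, then replay the remainder
# as if in a background mode.

NUM_SEGMENTS = 4
l2p = [dict() for _ in range(NUM_SEGMENTS)]      # per-segment L2P maps
journals = [[] for _ in range(NUM_SEGMENTS)]     # one journal buffer per segment

def log_write(lba, ppa):
    seg = lba % NUM_SEGMENTS
    journals[seg].append((lba, ppa))

def replay(segment_ids):
    for seg in segment_ids:
        for lba, ppa in journals[seg]:
            l2p[seg][lba] = ppa

# Some writes were journaled before an unsafe shutdown.
for lba in range(16):
    log_write(lba, ppa=1000 + lba)

critical = [0, 1]                       # first subset: replay before ready
replay(critical)
print("device ready")                   # host sees a short open time
replay([s for s in range(NUM_SEGMENTS) if s not in critical])  # background replay
print("all segments rebuilt:", sum(len(seg) for seg in l2p) == 16)
```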
  • Patent number: 11940913
    Abstract: A method for signal request caching is described. Signal requests are received at a signal processor from a plurality of computing devices. The received signal requests are routed to a signal data store. An ingestion rate of the received signal requests is monitored by the signal processor. When the ingestion rate meets a signal request rate threshold of the signal data store, overflow signal requests of the received signal requests are automatically routed to an intermediate cache instead of the signal data store. The overflow signal requests within the intermediate cache are aggregated into one or more signal packages, each of the one or more signal packages containing a plurality of overflow signal requests. The one or more signal packages are stored at the signal data store.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: March 26, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bo Liu, Ke Wang, Ahmed Mohamed
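A minimal sketch of the overflow routing in patent 11940913 above: when the measured ingestion rate exceeds what the signal data store can absorb, requests are diverted to an intermediate cache and later flushed as aggregated packages. The rate threshold and package size are invented values.

```python
# Hypothetical sketch: route overflow signal requests to an intermediate
# cache once the ingestion rate hits the data store's threshold, then
# aggregate the cached requests into packages and store the packages.

DATA_STORE_RATE_LIMIT = 3       # requests per tick the store can absorb
PACKAGE_SIZE = 4

data_store = []                 # what ultimately lands in the signal data store
intermediate_cache = []

def ingest_tick(requests):
    for i, req in enumerate(requests):
        if i < DATA_STORE_RATE_LIMIT:
            data_store.append(req)              # within threshold: direct route
        else:
            intermediate_cache.append(req)      # overflow: intermediate cache

def flush_packages():
    while len(intermediate_cache) >= PACKAGE_SIZE:
        package = [intermediate_cache.pop(0) for _ in range(PACKAGE_SIZE)]
        data_store.append({"package": package})  # aggregated package stored once

ingest_tick([f"sig{i}" for i in range(10)])       # 3 direct, 7 overflow
flush_packages()
print(len(data_store), "entries in store;", len(intermediate_cache), "still cached")
```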
  • Patent number: 11934659
    Abstract: A processing device illustratively includes a processor coupled to a memory, and is configured to initiate a background copy process in a host device to copy data from a first storage system to a second storage system. The processing device receives input-output (IO) processing pressure feedback from at least one of the first and second storage systems, and adjusts one or more characteristics of the background copy process based at least in part on the received IO processing pressure feedback. The processing device may comprise, for example, host level mirroring (HLM) logic configured to control execution of the background copy process for one or more logical storage devices. Adjusting one or more characteristics of the background copy process based at least in part on the received IO processing pressure feedback may comprise, for example, reducing a rate of the background copy process responsive to the received IO processing pressure feedback.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: March 19, 2024
    Assignee: Dell Products L.P.
    Inventors: Sanjib Mallick, Vinay G. Rao, Arieh Don
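The feedback loop in patent 11934659 above reduces to a simple rule: lower the background copy rate when the storage systems report I/O processing pressure and let it recover otherwise. The pressure scale and scaling factors below are assumptions used only to show the shape of the adjustment.

```python
# Hypothetical sketch: adjust the rate of a host-driven background copy
# based on I/O processing pressure feedback from the storage arrays.

copy_rate_mb_s = 200.0

def adjust_copy_rate(rate, pressure_feedback):
    """pressure_feedback: 0.0 (idle) .. 1.0 (saturated), as reported by the
    source/target storage systems. The scale factors are illustrative."""
    if pressure_feedback > 0.8:
        return rate * 0.5          # back off hard under heavy pressure
    if pressure_feedback > 0.5:
        return rate * 0.8          # back off gently
    return min(rate * 1.1, 400.0)  # otherwise creep back toward a ceiling

for feedback in (0.2, 0.6, 0.9, 0.3):
    copy_rate_mb_s = adjust_copy_rate(copy_rate_mb_s, feedback)
    print(f"feedback={feedback:.1f} -> copy rate {copy_rate_mb_s:.0f} MB/s")
```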
  • Patent number: 11921642
    Abstract: A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: March 5, 2024
    Assignee: RAMBUS INC.
    Inventors: Trung Diep, Hongzhong Zheng
  • Patent number: 11921626
    Abstract: A processing-in-memory device includes: a memory; a register configured to store offset information; and an internal processor configured to: receive an instruction and a reference physical address of the memory from a memory controller, determine an offset physical address of the memory based on the offset information, determine a target physical address of the memory based on the reference physical address and the offset physical address, and perform the instruction by accessing the target physical address.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: March 5, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hosang Yoon, Seungwon Lee
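The address arithmetic in patent 11921626 above comes down to a few lines: the internal processor adds an offset held in an on-device register to the reference physical address supplied by the memory controller, then executes the instruction at the resulting target address. The word-addressed memory model below is an assumption.

```python
# Hypothetical sketch: a processing-in-memory device resolves the target
# physical address from a controller-supplied reference address plus an
# offset kept in an internal register, then executes the instruction there.

class ProcessingInMemory:
    def __init__(self, size_words=1024):
        self.memory = [0] * size_words
        self.offset_register = 0           # offset information

    def set_offset(self, offset_words):
        self.offset_register = offset_words

    def execute(self, instruction, reference_addr, value=None):
        target = reference_addr + self.offset_register   # target physical address
        if instruction == "STORE":
            self.memory[target] = value
        elif instruction == "LOAD":
            return self.memory[target]

pim = ProcessingInMemory()
pim.set_offset(128)
pim.execute("STORE", reference_addr=16, value=42)   # actually lands at word 144
print(pim.execute("LOAD", reference_addr=16))       # -> 42
```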
  • Patent number: 11921639
    Abstract: A method for caching data, and a host device and a storage system that cache data. The method includes determining that a first file in a storage device is a first predetermined type of file; reallocating a logical address of a predetermined logical address region to the first file; and updating a first logical-to-physical address (L2P) table, corresponding to the predetermined logical address region, in a cache of the host device. The updated first L2P table includes a mapping relationship between the logical address reallocated for the first file and a physical address of the first file.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: March 5, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Heng Zhang, Yuanyuan Ye, Huimei Xiong, Yunchang Liang
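A compact sketch of the remapping step in patent 11921639 above: a file identified as the predetermined type is reassigned logical addresses inside a reserved region, and the host-cached L2P table covering that region is updated with the file's physical addresses. The region bounds and mapping containers are invented.

```python
# Hypothetical sketch: move a "predetermined type" file into a reserved
# logical address region and update the host-cached L2P table that covers
# that region.

RESERVED_REGION = range(0x1000, 0x2000)      # predetermined logical address region

device_l2p = {}                              # device-side logical -> physical map
host_l2p_cache = {}                          # host-side cache of the reserved region
next_reserved = RESERVED_REGION.start

def reallocate_to_reserved(old_lbas):
    """Give the file new LBAs inside the reserved region, keeping its PPAs."""
    global next_reserved
    new_lbas = []
    for old in old_lbas:
        ppa = device_l2p.pop(old)
        new = next_reserved
        next_reserved += 1
        device_l2p[new] = ppa
        host_l2p_cache[new] = ppa            # refresh the host's first L2P table
        new_lbas.append(new)
    return new_lbas

device_l2p.update({10: 5001, 11: 5002})      # file currently at arbitrary LBAs
print(reallocate_to_reserved([10, 11]))      # e.g. [4096, 4097]
print(host_l2p_cache)
```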
  • Patent number: 11914900
    Abstract: A storage system receives an instruction to cancel an in-progress read/write command. The storage system allows data associated with the command to continue to be processed by a data path in the storage system even though the command was cancelled. However, before the data is actually transferred out of the data path, a controller determines that the command was cancelled and prevents the data from being transferred out, while internally indicating that the transfer was complete. This provides a faster cancellation process than methods that attempt to stop the data from being processed by the data path.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: February 27, 2024
    Assignee: Western Digital Technologies, Inc.
    Inventors: Amir Segev, Shay Benisty
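The cancellation behaviour in patent 11914900 above, letting the data path run to completion but gating the final transfer and reporting it as done, can be shown with a tiny state machine. The command structure and flag names are hypothetical.

```python
# Hypothetical sketch: a cancelled command's data keeps flowing through the
# data path, but the controller suppresses the final transfer to the host
# and simply marks the transfer complete internally.

class Command:
    def __init__(self, cid):
        self.cid = cid
        self.cancelled = False
        self.transfer_done = False

def data_path_finish(cmd, data, host_output):
    # Data processing (DMA, decoding, ECC, ...) ran to completion regardless
    # of cancellation; only the last hop out of the data path is gated.
    if not cmd.cancelled:
        host_output.append((cmd.cid, data))
    cmd.transfer_done = True       # internally indicate completion either way

host_output = []
cmd = Command(cid=7)
cmd.cancelled = True               # abort request arrived mid-flight
data_path_finish(cmd, b"sector data", host_output)
print(cmd.transfer_done, host_output)   # True [] -> nothing left the device
```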
  • Patent number: 11914526
    Abstract: Provided herein may be a storage device and a method of operating the same. The method of operating a storage device including a replay protected memory block (RPMB) may include receiving a write request for the RPMB from an external host, selectively storing data in the RPMB based on an authentication operation, receiving a read request from the external host, and providing result data to the external host in response to the read request, wherein the read request includes a message indicating that a read command to be subsequently received from the external host is a command related to the result data.
    Type: Grant
    Filed: December 19, 2022
    Date of Patent: February 27, 2024
    Assignee: SK hynix Inc.
    Inventor: Kwang Su Kim
  • Patent number: 11907124
    Abstract: Aspects include using a shadow copy of a level 1 (L1) cache in a cache hierarchy. A method includes maintaining the shadow copy of the L1 cache in the cache hierarchy. The maintaining includes updating the shadow copy of the L1 cache with memory content changes to the L1 cache a number of pipeline cycles after the L1 cache is updated with the memory content changes.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: February 20, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yair Fried, Aaron Tsai, Eyal Naor, Christian Jacobi, Timothy Bronson, Chung-Lung K. Shum
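The maintenance rule in patent 11907124 above, echoing every L1 update into a shadow copy a fixed number of pipeline cycles later, can be mocked up with a small delay queue. The delay value and store format below are assumptions.

```python
# Hypothetical sketch: updates applied to the L1 cache are echoed into a
# shadow copy of the L1 a fixed number of "pipeline cycles" later.

from collections import deque

DELAY_CYCLES = 3

l1 = {}
shadow_l1 = {}
pending = deque()        # (apply_at_cycle, address, value)
cycle = 0

def write_l1(address, value):
    l1[address] = value
    pending.append((cycle + DELAY_CYCLES, address, value))

def tick():
    global cycle
    cycle += 1
    while pending and pending[0][0] <= cycle:
        _, address, value = pending.popleft()
        shadow_l1[address] = value            # shadow catches up after the delay

write_l1(0x40, "A")
for _ in range(2):
    tick()
print(shadow_l1.get(0x40))   # None: shadow not yet updated
tick()
print(shadow_l1.get(0x40))   # 'A': updated DELAY_CYCLES after the L1 write
```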
  • Patent number: 11907562
    Abstract: In one embodiment, a method comprises maintaining state information regarding a data replication status for a storage object of a storage node of a primary storage cluster, with the storage object being replicated to a replicated storage object of a secondary storage cluster, and temporarily disallowing input/output (I/O) operations when the storage object has a connection loss or failure. The method further includes initiating a resynchronization between the storage object and the replicated storage object, including initiating asynchronous persistent inflight tracking and replay of any I/O operations that are missing from one of a first Op log of the primary storage cluster and a second Op log of the secondary storage cluster, and allowing new I/O operations to be handled by the storage object of the primary storage cluster without waiting for completion of the asynchronous persistent inflight tracking and replay at the secondary storage cluster.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: February 20, 2024
    Assignee: NetApp, Inc.
    Inventors: Krishna Murthy Chandraiah Setty Narasingarayanapeta, Akhil Kaushik
  • Patent number: 11899583
    Abstract: Various implementations described herein are directed to a device with a multi-layered logic structure with multiple layers including a first layer and a second layer arranged vertically in a stacked configuration. The device may have a first cache memory with first interconnect logic disposed in the first layer. The device may have a second cache memory with second interconnect logic disposed in the second layer, wherein the second interconnect logic in the second layer is linked to the first interconnect logic in the first layer.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: February 13, 2024
    Assignee: Arm Limited
    Inventors: Joshua Randall, Alejandro Rico Carro, Dam Sunwoo, Saurabh Pijuskumar Sinha, Jamshed Jalal
  • Patent number: 11899586
    Abstract: A memory address may be received at an m-way set-associative cache, which may store a set of cache entries. The memory address may be partitioned into a tag, an index, and an offset. The m-way set-associative cache may include a first structure to store a first subset of tag bits corresponding to the set of cache entries and a second structure to store a second subset of tag bits corresponding to the set of cache entries. The index may be used to select a first set of entries from the first structure. A first portion of tag bits of the memory address may be matched with the first set of entries. A cache status may be determined based on matching the first portion of tag bits of the memory address with the first set of entries.
    Type: Grant
    Filed: September 28, 2022
    Date of Patent: February 13, 2024
    Assignee: Synopsys, Inc.
    Inventor: Karthik Thucanakkenpalayam Sundararajan
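A stripped-down model of the lookup in patent 11899586 above: the address is partitioned into tag, index, and offset; the tag bits are split across two structures; and a match against the first portion of the tag can rule out a hit early. The bit widths, way count, and partial-tag split below are invented.

```python
# Hypothetical sketch of an m-way set-associative lookup whose tag bits are
# split across two structures; a failed partial-tag match against the first
# structure rules the lookup a miss early.

OFFSET_BITS, INDEX_BITS = 6, 7            # 64 B lines, 128 sets (illustrative)
WAYS = 4
PARTIAL_TAG_BITS = 8                      # low tag bits kept in the first structure

NUM_SETS = 1 << INDEX_BITS
first_struct = [[None] * WAYS for _ in range(NUM_SETS)]   # partial tags
second_struct = [[None] * WAYS for _ in range(NUM_SETS)]  # remaining tag bits

def split(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

def fill(addr, way):
    tag, index, _ = split(addr)
    first_struct[index][way] = tag & ((1 << PARTIAL_TAG_BITS) - 1)
    second_struct[index][way] = tag >> PARTIAL_TAG_BITS

def lookup(addr):
    tag, index, _ = split(addr)
    low, high = tag & ((1 << PARTIAL_TAG_BITS) - 1), tag >> PARTIAL_TAG_BITS
    for way in range(WAYS):
        if first_struct[index][way] == low:           # first portion matches...
            if second_struct[index][way] == high:     # ...confirm with the rest
                return "hit"
    return "miss"                                     # no partial match: early miss

fill(0x12345040, way=0)
print(lookup(0x12345040), lookup(0x99999040))   # hit miss
```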
  • Patent number: 11899614
    Abstract: Embodiments described herein provide techniques to facilitate instruction-based control of memory attributes. One embodiment provides a graphics processor comprising a processing resource, a memory device, a cache coupled with the processing resource and the memory device, and circuitry to process a memory access message received from the processing resource. The memory access message enables access to data of the memory device. To process the memory access message, the circuitry is configured to determine one or more cache attributes that indicate whether the data should be read from or stored to the cache. The cache attributes may be provided by the memory access message or stored in state data associated with the data to be accessed by the message.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: February 13, 2024
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Altug Koker, Varghese George, Mike Macpherson, Aravindh Anantaraman, Abhishek R. Appu, Elmoustapha Ould-Ahmed-Vall, Nicolas Galoppo von Borries, Ben J. Ashbaugh
  • Patent number: 11893238
    Abstract: According to one embodiment, a memory system includes a non-volatile semiconductor memory, a block management unit, and a transcription unit. The semiconductor memory includes a plurality of blocks to which data can be written in both a first mode and a second mode. The block management unit manages a block that stores therein no valid data as a free block. When the number of free blocks managed by the block management unit is smaller than or equal to a predetermined threshold value, the transcription unit selects one or more used blocks that store therein valid data as transcription source blocks and transcribes the valid data stored in the transcription source blocks to free blocks in the second mode.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: February 6, 2024
    Assignee: KIOXIA CORPORATION
    Inventors: Hiroshi Yao, Shinichi Kanno, Kazuhiro Fukutomi
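The trigger in patent 11893238 above is essentially a garbage-collection policy: once the free-block count falls to a threshold, valid data from used blocks is transcribed into free blocks using the second (denser) write mode, which frees the source blocks. The block model, the threshold, and the assumption that three first-mode blocks fit into one second-mode block are illustrative only.

```python
# Hypothetical sketch: when free blocks fall to a threshold, pick used blocks
# as transcription sources and rewrite their valid data into a free block in
# the denser "second mode", turning the sources into free blocks.

FREE_BLOCK_THRESHOLD = 2

class Block:
    def __init__(self, bid):
        self.bid = bid
        self.valid_data = []
        self.mode = None            # "first" (e.g. fewer bits/cell) or "second"

blocks = [Block(i) for i in range(6)]
for b in blocks[:5]:                # five used blocks written in the first mode
    b.valid_data = [f"d{b.bid}"]
    b.mode = "first"

def free_blocks():
    return [b for b in blocks if not b.valid_data]

def maybe_transcribe():
    if len(free_blocks()) > FREE_BLOCK_THRESHOLD:
        return
    sources = [b for b in blocks if b.valid_data and b.mode == "first"][:3]
    target = free_blocks()[0]
    target.mode = "second"                     # transcription uses the second mode
    for src in sources:                        # assume 3 first-mode blocks fit in 1
        target.valid_data += src.valid_data
        src.valid_data = []                    # sources become free blocks

maybe_transcribe()
print([(b.bid, b.mode, b.valid_data) for b in blocks])
```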
  • Patent number: 11893268
    Abstract: A method includes calculating, by a data storage device processor, at least one access trajectory from a first disc surface location to at least one second disc surface location at which at least one primary data access operation is to be carried out. The method also includes determining, by the data storage device processor, whether an opportunity to commence at least one secondary data access operation exists along or proximate to the at least one access trajectory from the first disc surface location to the at least one second disc surface location.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: February 6, 2024
    Assignee: Seagate Technology LLC
    Inventors: Brian T. Edgar, Mark A. Gaertner
  • Patent number: 11886354
    Abstract: Techniques are disclosed relating to cache thrash detection. In some embodiments, cache controller circuitry is configured to monitor and track performance metrics across multiple levels of a cache hierarchy, detect cache thrashing based on one or more performance metrics, and modify a cache insertion policy to mitigate cache thrashing. Disclosed techniques may advantageously detect and reduce or avoid cache thrashing, which may increase processor performance, decrease power consumption for a given workload, or both, relative to traditional techniques.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: January 30, 2024
    Assignee: Apple Inc.
    Inventors: Anwar Q. Rohillah, Tyler J. Huberty
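A toy version of the control loop in patent 11886354 above: track hit-rate metrics across two cache levels, flag thrashing when lines are evicted before reuse, and switch the insertion policy to keep part of the working set resident. The metric names, thresholds, and policy labels are invented, not taken from the patent.

```python
# Hypothetical sketch: monitor simple metrics across two cache levels,
# declare thrashing when the miss/eviction pattern looks pathological, and
# flip the insertion policy to mitigate it.

metrics = {
    "l1_hit_rate": 0.35,
    "l2_hit_rate": 0.30,
    "evicted_before_reuse": 0.85,   # fraction of lines evicted without a re-hit
}

insertion_policy = "insert_at_mru"

def detect_thrashing(m):
    # Illustrative rule: poor hit rates at both levels plus lines dying
    # before reuse suggests the working set is cycling through the cache.
    return (m["l1_hit_rate"] < 0.5
            and m["l2_hit_rate"] < 0.5
            and m["evicted_before_reuse"] > 0.7)

def update_insertion_policy(m, policy):
    if detect_thrashing(m):
        return "insert_at_lru_mostly"   # keep part of the working set resident
    return policy

insertion_policy = update_insertion_policy(metrics, insertion_policy)
print(insertion_policy)
```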
  • Patent number: 11886347
    Abstract: A computing architecture comprises an off-chip memory, an on-chip cache unit, a prefetching unit, a global scheduler, a transmitting unit, a pre-recombination network, a post-recombination network, a main computing array, a write-back cache unit, a data dependence controller, and an auxiliary computing array. The architecture prefetches data tiles into the on-chip cache and performs computation tile by tile; during tile computation, a tile exchange network recombines the data structure, and a data dependence module handles data dependence relationships that may exist between different tiles. The architecture increases data utilization and data-processing flexibility, thereby reducing cache misses and memory bandwidth pressure.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: January 30, 2024
    Assignee: Xi'an Jiaotong University
    Inventors: Tian Xia, Pengju Ren, Haoran Zhao, Zehua Li, Wenzhe Zhao, Nanning Zheng