Patents Examined by Reginald G. Bragdon
-
Patent number: 12373345
Abstract: Method and apparatus for managing a front-end cache formed of ferroelectric memory element (FME) cells. Prior to storage of writeback data associated with a pending write command from a client device, an intelligent cache manager circuit forwards a first status value indicating that sufficient capacity is available in the front-end cache for the writeback data. Non-requested speculative readback data previously transferred to the front-end cache from the main NVM memory store may be jettisoned to accommodate the writeback data. A second status value may be supplied to the client device if insufficient capacity is available to store the writeback data in the front-end cache, in which case a different, non-FME based cache may be used. Mode select inputs can be supplied by the client device to specify a particular quality of service level for the front-end cache, enabling selection of suitable writeback and speculative readback data processing strategies.
Type: Grant
Filed: November 6, 2023
Date of Patent: July 29, 2025
Assignee: SEAGATE TECHNOLOGY LLC
Inventors: Jon D. Trantham, Praveen Viraraghavan, John W. Dykes, Ian J. Gilbert, Sangita Shreedharan Kalarickal, Matthew J. Totin, Mohamad El-Batal, Darshana H. Mehta
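The admission decision described above can be sketched as follows. This is a minimal, purely illustrative model (class and constant names are my own, not from the patent): speculative readback entries may be jettisoned to make room for writeback data, and a busy status is returned only when that is not enough.

```python
# Hypothetical sketch of the front-end cache admission logic; all names
# are illustrative assumptions, not taken from the patent.

class FrontEndCache:
    CACHE_READY = 0      # first status value: capacity is available
    CACHE_BUSY = 1       # second status value: fall back to a non-FME cache

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.speculative = []  # (tag, size) of non-requested readback data

    def add_speculative(self, tag, size):
        if self.used + size <= self.capacity:
            self.speculative.append((tag, size))
            self.used += size

    def admit_writeback(self, size):
        """Return a status value; jettison speculative data if needed."""
        while self.used + size > self.capacity and self.speculative:
            _, s = self.speculative.pop(0)
            self.used -= s
        if self.used + size > self.capacity:
            return self.CACHE_BUSY
        self.used += size
        return self.CACHE_READY
```

A client receiving `CACHE_BUSY` would then direct the writeback data to the alternate cache instead.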
-
Patent number: 12373124
Abstract: A method, computer program product, and computing system for identifying a change in a capacity of a cloud-deployed storage system. A capacity ratio and an IOPS ratio are determined for the cloud-deployed storage system. A portion of a cloud-deployed storage device is modified based upon, at least in part, one or more of the capacity ratio and the IOPS ratio. The portion of the cloud-deployed storage device is mapped to a portion of a logical storage device.
Type: Grant
Filed: October 20, 2023
Date of Patent: July 29, 2025
Assignee: Dell Products L.P.
Inventors: Dmitry Krivenok, Amitai Alkalay
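One way the two ratios could drive the sizing of the mapped portion is sketched below. This is an assumption-laden illustration (function and key names are mine): the portion is sized for the more demanding of the two dimensions.

```python
# Illustrative sketch only; the sizing policy and all names are assumptions.

def plan_mapping(required_gib, provisioned_gib, required_iops, provisioned_iops):
    """Size the portion of a cloud storage device to map to a logical device."""
    capacity_ratio = required_gib / provisioned_gib
    iops_ratio = required_iops / provisioned_iops
    # Provision for the more demanding dimension so both targets are met.
    fraction = min(1.0, max(capacity_ratio, iops_ratio))
    return {
        "capacity_ratio": capacity_ratio,
        "iops_ratio": iops_ratio,
        "mapped_gib": fraction * provisioned_gib,
    }
```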
-
Patent number: 12373342
Abstract: An apparatus comprises at least one processing device configured to receive unmap requests for freeing up data previously written to one or more storage regions of at least one storage device of a distributed storage system, the unmap requests being received from two or more write cache instances of two or more storage nodes of the distributed storage system. The processing device is also configured to identify at least a subset of the unmap requests which are directed to a given storage region of the at least one storage device. The processing device is further configured to provide the subset of the unmap requests to at least one storage controller associated with the at least one storage device of the distributed storage system responsive to determining that at least one designated unmap condition has been met.
Type: Grant
Filed: September 21, 2023
Date of Patent: July 29, 2025
Assignee: Dell Products L.P.
Inventors: Doron Tal, Yosef Shatsky, Ali Aiouaz, Amitai Alkalay
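The coalescing behavior described above can be sketched in a few lines. Names are illustrative, and the "designated unmap condition" is modeled here as a simple per-region count threshold (one of many possible conditions):

```python
# Minimal sketch: unmap requests from multiple write-cache instances are
# grouped by storage region and forwarded to the storage controller only
# once a designated condition (here, a count threshold) is met.
from collections import defaultdict

class UnmapCoalescer:
    def __init__(self, flush_threshold):
        self.flush_threshold = flush_threshold
        self.pending = defaultdict(list)  # storage region -> queued requests

    def submit(self, region, request):
        """Queue a request; return the batch to forward, or None."""
        self.pending[region].append(request)
        if len(self.pending[region]) >= self.flush_threshold:
            return self.pending.pop(region)
        return None
```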
-
Patent number: 12360913
Abstract: A virtual memory system for managing a virtual memory page table for a central processing unit and a system of encoding a virtual address (VA) is disclosed. The system includes a memory storing an encoded virtual address, a virtual page number having a settable bitfield that is set according to page size and offset, and virtual memory addressing circuitry. The virtual memory addressing circuitry is configured with a zero detector logic circuit and a virtual page number (VPN) multiplexer. The zero detector logic circuit is configured to read bits of the encoded virtual address and output the page size. The virtual page number (VPN) multiplexer is configured to select the virtual page number based on the page size and output an index to a page table.
Type: Grant
Filed: July 6, 2023
Date of Patent: July 15, 2025
Assignee: KING FAHD UNIVERSITY OF PETROLEUM AND MINERALS
Inventor: Muhamed Fawzi Mudawar
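To make the zero-detector idea concrete, here is one plausible (but entirely assumed) encoding: the page size is encoded in the run of trailing 1-bits of the address, which a zero detector can find by locating the lowest 0 bit; the VPN is then the address shifted by the resulting offset width. The patent's actual bit layout is not reproduced here.

```python
# Purely illustrative encoding; the base page size and the trailing-ones
# scheme are assumptions, not the patent's layout.

BASE_PAGE_BITS = 12  # 4 KiB base page (assumption)

def decode_va(va):
    # Zero detector: find the lowest 0 bit of the encoded address.
    trailing_ones = 0
    while (va >> trailing_ones) & 1:
        trailing_ones += 1
    page_bits = BASE_PAGE_BITS + trailing_ones  # page size = 2**page_bits
    vpn = va >> page_bits                       # VPN mux output: page-table index
    return page_bits, vpn
```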
-
Patent number: 12360906
Abstract: Provided is a method of data storage, the method including receiving, at a host of a key-value store, a request to access a data node stored on a storage device of the key-value store, locating an address corresponding to the data node in a host cache on the host, and determining that the data node is in a kernel cache on the storage device.
Type: Grant
Filed: June 13, 2022
Date of Patent: July 15, 2025
Assignees: Samsung Electronics Co., Ltd., Virginia Tech Intellectual Properties, INC.
Inventors: Naga Sanjana Bikonda, Wookhee Kim, Madhava Krishnan Ramanathan, Changwoo Min, Vishwanath Maram
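The two-tier lookup path can be sketched as below, with the caches modeled as plain dictionaries. All names are illustrative assumptions:

```python
# Minimal sketch: the host caches node addresses, and the storage device's
# kernel cache holds the node itself, keyed by that address.

def access_data_node(key, host_cache, kernel_cache):
    """Return (address, node) if both lookups hit, else None."""
    address = host_cache.get(key)        # locate the address on the host
    if address is None:
        return None
    node = kernel_cache.get(address)     # confirm the node on the device
    if node is None:
        return None
    return address, node
```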
-
Patent number: 12360904
Abstract: A heterogeneous computing system performs data synchronization. The heterogeneous computing system includes a system memory, a cluster, and a processing unit outside the cluster. The cluster includes a sync circuit, inner processors, and a snoop filter. The sync circuit is operative to receive a sync command indicating a sync address range. The sync command is issued by one of the processing unit and the inner processors. The sync circuit further determines whether addresses recorded in the snoop filter fall within the sync address range. In response to a determination that a recorded address falls within the sync address range, the sync circuit notifies a target one of the inner processors that owns a cache line having the recorded address to take a sync action on the cache line.
Type: Grant
Filed: September 25, 2023
Date of Patent: July 15, 2025
Assignee: MediaTek Inc.
Inventors: Hsing-Chuang Liu, Yu-Shu Chen, Hong-Yi Chen
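The sync circuit's range check can be sketched as a filter over snoop-filter entries. This is an illustrative model (names assumed); the snoop filter is represented as a mapping from cache-line address to owning processor:

```python
# Illustrative sketch: scan snoop-filter entries and report every cache line
# whose address falls in the sync address range, with its owner to notify.

def addresses_to_sync(snoop_filter, sync_start, sync_end):
    """snoop_filter maps cache-line address -> owning inner-processor id.

    Returns (address, owner) pairs that need a sync action."""
    return [(addr, owner)
            for addr, owner in sorted(snoop_filter.items())
            if sync_start <= addr < sync_end]
```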
-
Patent number: 12360872
Abstract: Methods, systems, and devices for a performance benchmark for a host performance booster are described. The memory system may receive a plurality of read commands from a host system. The memory system may detect a pattern of random physical addresses as part of the plurality of read commands and increase an amount of space in a cache of the memory system based on the detected pattern. In some cases, the amount of space may be used for mapping between logical block addresses and physical addresses. The memory system may determine, for a different plurality of read commands, whether a rate of cache hits for a portion of the mapping satisfies a threshold. In some cases, the memory system may determine whether to activate a host performance booster mode based on determining whether the rate of cache hits satisfies the threshold.
Type: Grant
Filed: March 16, 2021
Date of Patent: July 15, 2025
Assignee: Micron Technology, Inc.
Inventors: Bin Zhao, Lingyun Wang
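The activation decision reduces to a hit-rate threshold test, which can be sketched in a few lines. The threshold value and names are illustrative assumptions:

```python
# Minimal sketch: track cache hits for logical-to-physical mapping lookups
# and enable the host performance booster mode once the hit rate satisfies
# a threshold. The 0.9 default is an arbitrary illustration.

def should_activate_hpb(hits, lookups, threshold=0.9):
    if lookups == 0:
        return False            # no evidence yet; stay in the default mode
    return hits / lookups >= threshold
```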
-
Communication of data relocation information by storage device to host to improve system performance
Patent number: 12360696
Abstract: An apparatus comprises a controller comprising an interface comprising circuitry to communicate with a host computing device; and a relocation manager comprising circuitry, the relocation manager to provide, for the host computing device, an identification of a plurality of data blocks to be relocated within a non-volatile memory; and relocate at least a subset of the plurality of data blocks in accordance with a directive provided by the host computing device in response to the identification of the plurality of data blocks to be relocated.
Type: Grant
Filed: June 16, 2020
Date of Patent: July 15, 2025
Assignee: SK Hynix NAND Product Solutions Corp.
Inventors: Bishwajit Dutta, Sanjeev N. Trika
-
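The device-host exchange above can be sketched as: the device reports candidate blocks, the host answers with a directive, and the device relocates only the approved subset. All names are illustrative assumptions:

```python
# Minimal sketch of acting on a host directive; names are assumptions.

def relocate(candidate_blocks, host_directive):
    """candidate_blocks: block ids the device identified for relocation.
    host_directive: set of block ids the host approves for relocation."""
    relocated = [b for b in candidate_blocks if b in host_directive]
    deferred = [b for b in candidate_blocks if b not in host_directive]
    return relocated, deferred
```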
Patent number: 12353752
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and a controller coupled to the one or more substrates, the controller including circuitry to control access to NAND-based storage media that includes a plurality of NAND devices, maintain respective read disturb (RD) counters for each of two or more tracked units at respective granularities, maintain respective global RD counters for each of the two or more tracked units and, in response to a read request, increment one or more global RD counters that correspond to the read request, determine if a global RD counter for a tracked unit matches a random number associated with the tracked unit and, if so determined, increment a RD counter for the tracked unit that corresponds to the read request and generate a new random number for the tracked unit. Other embodiments are disclosed and claimed.
Type: Grant
Filed: September 22, 2021
Date of Patent: July 8, 2025
Inventors: Mohammad Nasim Imtiaz Khan, Yogesh B. Wakchaure, Eric Hoffman, Neal Mielke, Shirish Bahirat, Cole Uhlman, Ye Zhang, Anand Ramalingam
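The sampled-counting scheme can be sketched as follows. The counter width and names are illustrative assumptions: a cheap global counter is bumped on every read, and the tracked counter is bumped only when the global counter matches the unit's current random number, after which a new random number is drawn.

```python
# Illustrative sketch of random-number-gated read-disturb counting.
import random

class SampledReadDisturbCounter:
    def __init__(self, sample_range, seed=None):
        self.sample_range = sample_range
        self.rng = random.Random(seed)
        self.global_count = 0
        self.tracked_count = 0
        self.target = self.rng.randrange(sample_range)  # unit's random number

    def on_read(self):
        self.global_count = (self.global_count + 1) % self.sample_range
        if self.global_count == self.target:
            self.tracked_count += 1                      # sampled increment
            self.target = self.rng.randrange(self.sample_range)
```

On average the tracked counter advances once per `sample_range / 2` reads or so, which keeps fine-grained counters cheap to maintain.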
-
Patent number: 12353290
Abstract: One or more embodiments of the invention improve upon the traditional method of performing a backup by having a data protection manager or similar component of the system determine, when a backup is requested, which backup agent should initially perform the backup. That backup agent may then determine, among the other applicable backup agents, which backup types are needed and the order in which each backup agent performs the backup, when more than one backup agent is appropriate. This allows for a more efficient backup, while avoiding collisions between two or more backup agents trying to simultaneously back up the same data.
Type: Grant
Filed: July 25, 2022
Date of Patent: July 8, 2025
Assignee: Dell Products L.P.
Inventors: Sunil Yadav, Shelesh Chopra, Preeti Varma
-
Patent number: 12346614
Abstract: Aspects of the present disclosure configure a system component, such as a memory sub-system controller, to dynamically generate Redundant Array of Independent Nodes (RAIN) parity information for zone-based memory allocations. The RAIN parity information is generated for a given zone or set of zones on the basis of whether the given zone or set of zones satisfies a zone completeness criterion. The zone completeness criterion can represent a specified size such that when a given zone reaches the specified size, the parity information for that zone is generated.
Type: Grant
Filed: March 22, 2024
Date of Patent: July 1, 2025
Assignee: Micron Technology, Inc.
Inventor: Luca Bert
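The completeness-gated parity generation can be sketched with XOR parity (a common RAID/RAIN-style choice, used here purely as an illustration; the patent does not specify the parity scheme):

```python
# Illustrative sketch: parity for a zone is generated only once the zone
# reaches its completeness size. XOR parity and all names are assumptions.

def maybe_generate_parity(zone_blocks, zone_complete_size):
    """zone_blocks: list of equal-length bytes objects written to the zone."""
    if len(zone_blocks) < zone_complete_size:
        return None                       # zone not complete yet; defer
    parity = bytearray(len(zone_blocks[0]))
    for block in zone_blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)
```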
-
Patent number: 12346258
Abstract: Metadata page prefetch processing for incoming IO operations is provided to increase storage system performance by reducing the frequency of metadata page miss events during IO processing. When an IO is received at a storage system, the IO is placed in an IO queue to be scheduled for processing by an IO processing thread. A metadata page prefetch thread reads the LBA address of the IO and determines whether all of the metadata page(s) that will be needed by the IO processing thread are contained in IO thread metadata resources. In response to a determination that one or more of the required metadata pages are not contained in IO thread metadata resources, the metadata page prefetch thread instructs a MDP thread to move the required metadata page(s) from metadata storage to IO thread metadata resources. The IO processing thread then implements the IO operation using the prefetched metadata.
Type: Grant
Filed: January 1, 2024
Date of Patent: July 1, 2025
Assignee: Dell Products, L.P.
Inventors: Ramesh Doddaiah, Sandeep Chandrashekhara, Mohammed Aamir Vt, Mohammed Asher
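The prefetch step can be sketched as below. This collapses the separate prefetch and MDP threads into one function for clarity; all names are illustrative assumptions:

```python
# Illustrative sketch: before an IO is scheduled, check which metadata pages
# for its LBA are already resident and fetch the missing ones, standing in
# for the prefetch thread instructing the MDP thread.

def prefetch_metadata(lba, pages_for_lba, thread_resources, metadata_store):
    """Ensure every metadata page needed for this LBA is resident.

    pages_for_lba: callable mapping an LBA to the page ids it needs."""
    needed = pages_for_lba(lba)
    missing = [p for p in needed if p not in thread_resources]
    for page in missing:
        thread_resources[page] = metadata_store[page]  # move into resources
    return missing
```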
-
Patent number: 12339771
Abstract: Workload distribution in a system including a non-volatile memory device is disclosed. A request is received including an address associated with a memory location of the non-volatile memory device. A hash value is calculated based on the address. A list of node values is searched, and one of the node values in the list is identified based on the hash value. A processor is identified based on the one of the node values, and the address is stored in association with the processor. The request is transmitted to the processor for accessing the memory location.
Type: Grant
Filed: November 16, 2021
Date of Patent: June 24, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jingpei Yang, Jing Yang, Rekha Pitchumani
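The hash-then-search dispatch can be sketched as below. The hash function, the wrap-around search (a consistent-hashing-style choice), and all names are my assumptions, not details from the patent:

```python
# Illustrative sketch: hash the request address, binary-search a sorted list
# of node values for the first node at or above the hash (wrapping around),
# and route the request to the processor mapped to that node.
import bisect
import zlib

def pick_processor(address, node_values, node_to_processor):
    """node_values must be sorted; node_to_processor maps node -> cpu id."""
    h = zlib.crc32(address.to_bytes(8, "little")) % (max(node_values) + 1)
    i = bisect.bisect_left(node_values, h) % len(node_values)
    return node_to_processor[node_values[i]]
```

Because the hash is deterministic, repeated requests for the same address always land on the same processor, which matches the "address is stored in association with the processor" step.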
-
Patent number: 12340125
Abstract: Systems, apparatuses, and methods related to data reconstruction based on queue depth comparison are described. To avoid accessing the "congested" channel, a read command to access the "congested" channel can be executed by accessing the other, relatively "idle" channels and utilizing data read from the "idle" channels to reconstruct data corresponding to the read command.
Type: Grant
Filed: December 6, 2023
Date of Patent: June 24, 2025
Assignee: Micron Technology, Inc.
Inventors: Patrick Estep, Sean S. Eilert, Ameen D. Akel
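With an XOR-parity stripe (used here purely as an illustration of "reconstruct from the other channels"), the queue-depth comparison can be sketched as:

```python
# Illustrative sketch: if the channel holding the requested chunk has a
# deeper queue than a threshold, reconstruct that chunk by XOR-ing the
# chunks on the other, idle channels instead of waiting. Names and the
# parity scheme are assumptions.

def read_chunk(target, channels, queue_depths, congestion_threshold):
    """channels: one stripe of equal-length bytes chunks whose XOR is zero."""
    if queue_depths[target] <= congestion_threshold:
        return channels[target]              # direct read from idle channel
    out = bytearray(len(channels[0]))        # reconstruct from the others
    for i, chunk in enumerate(channels):
        if i != target:
            for j, b in enumerate(chunk):
                out[j] ^= b
    return bytes(out)
```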
-
Patent number: 12332779
Abstract: A data storage device and method for race-based data access in a multiple host memory buffer system are provided. In one embodiment, the data storage device stores data in a plurality of host memory buffers in the host instead of in just the host memory buffer usually associated with the data. To read the data, the data storage device sends read commands to all of the host memory buffers. That way, even if some of the host memory buffers are busy, the data can be returned from another one of the host memory buffers. In future reads in similar workloads, a read command can be sent to the host memory buffer that returned the data. Other embodiments are possible, and each of the embodiments can be used alone or together in combination.
Type: Grant
Filed: July 18, 2023
Date of Patent: June 17, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Shay Benisty, Ariel Navon
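The race can be simulated with threads: the same read is issued to every buffer and the first reply wins, so one busy buffer cannot stall the read. The winner's index is what a device could remember for future reads. Buffer contents, delays, and names are illustrative assumptions:

```python
# Simulated sketch of race-based reads across host memory buffers.
import concurrent.futures
import time

def race_read(buffers, latencies, address):
    """buffers: list of dicts; latencies: per-buffer delay in seconds."""
    def read_from(i):
        time.sleep(latencies[i])             # simulated buffer busyness
        return i, buffers[i][address]

    with concurrent.futures.ThreadPoolExecutor(len(buffers)) as pool:
        futures = [pool.submit(read_from, i) for i in range(len(buffers))]
        winner, data = next(concurrent.futures.as_completed(futures)).result()
        return winner, data                  # remember winner for next time
```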
-
Patent number: 12327025
Abstract: A storage system having a plurality of control units that perform read control and write control of data stored in a storage. Each of the plurality of control units has: a processor; a first memory connected to the processor and storing software for executing read control and write control processes; a network interface for connecting to a control unit network that connects the plurality of control units; and a second memory connected to the network interface and storing control information of the data subject to read control and write control, as well as cache data of the storage.
Type: Grant
Filed: March 8, 2023
Date of Patent: June 10, 2025
Assignee: Hitachi Vantara, Ltd.
Inventors: Norio Chujo, Kentaro Shimada
-
Patent number: 12292978
Abstract: A new approach is proposed to support SRAM-less bootup of an electronic device. A portion of a cache unit of a processor is utilized as an SRAM to maintain data to be accessed via read and/or write operations for bootup of the electronic device. First, the portion of the cache unit is mapped to a region of a memory which has not been initialized. The processor reads data from a non-modifiable storage to be used for the bootup process of the electronic device and writes the data into the portion of the cache unit serving as the SRAM. To prevent having to read or write to the uninitialized memory, any read operation to the memory region returns a specific value and any write operation to the memory region is dropped. The processor then accesses the data stored in the portion of the cache unit to boot up the electronic device.
Type: Grant
Filed: November 2, 2021
Date of Patent: May 6, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Ramacharan Sundararaman, Avinash Sodani
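The read/write rule for the uninitialized region can be modeled in a few lines. The specific value returned, the address split, and the names are illustrative assumptions:

```python
# Illustrative model of cache-as-RAM bootup: addresses inside the cache
# region behave like RAM; accesses that fall through to the uninitialized
# memory return a fixed value on read and are dropped on write.

UNINIT_READ_VALUE = 0xFFFFFFFF  # the "specific value", chosen arbitrarily

class CacheAsRam:
    def __init__(self, cache_size):
        self.cache_size = cache_size
        self.cache = {}

    def read(self, addr):
        if addr < self.cache_size:
            return self.cache.get(addr, 0)
        return UNINIT_READ_VALUE     # read of uninitialized memory

    def write(self, addr, value):
        if addr < self.cache_size:
            self.cache[addr] = value  # served by the cache region
        # else: write to uninitialized memory is silently dropped
```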
-
Patent number: 12287734
Abstract: In some examples, a computer identifies a plurality of memory servers accessible by the computer to perform remote access over a network of data stored by the plurality of memory servers, sends allocation requests to allocate memory segments to place interleaved data of the computer across the plurality of memory servers, and receives, at the computer in response to the allocation requests, metadata relating to the memory segments at the plurality of memory servers, the metadata comprising addresses of the memory segments at the plurality of memory servers. The computer uses the metadata to access, by the computer, the interleaved data at the plurality of memory servers, the interleaved data comprising blocks of data distributed across the memory segments.
Type: Grant
Filed: July 27, 2022
Date of Patent: April 29, 2025
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Syed Ismail Faizan Barmawer, Gautham Bhat Kumbla, Mashood Abdulla Kodavanji, Clarete Riana Crasta, Sharad Singhal, Ramya Ahobala Rao
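A round-robin interleaving (one common choice, assumed here for illustration) shows how the segment-address metadata suffices to locate any block:

```python
# Illustrative sketch: blocks are distributed round-robin across memory
# segments on different memory servers; the segment base addresses returned
# as metadata are enough to compute any block's location. Names assumed.

def place_block(block_index, segment_addresses, block_size):
    """Return (server_index, address) for an interleaved block."""
    server = block_index % len(segment_addresses)
    slot = block_index // len(segment_addresses)
    return server, segment_addresses[server] + slot * block_size
```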
-
Patent number: 12282676
Abstract: A cluster storage system takes snapshots that are consistent across all storage nodes. The storage system can nearly instantaneously promote a set of consistent snapshots to their respective base volumes to restore the base volumes to be the same as the snapshots. Given these two capabilities, users can restore the system to a recovery point of the user's choice, by turning off storage service I/O, promoting the snapshots constituting the recovery point, rebooting their servers, and resuming storage service I/O.
Type: Grant
Filed: March 2, 2023
Date of Patent: April 22, 2025
Assignee: Nvidia Corporation
Inventors: Siamak Nazari, David Dejong, Srinivasa Murthy, Shayan Askarian Namaghi, Roopesh Tamma
-
Patent number: 12277073
Abstract: According to a quantization interconnect apparatus and an operating method thereof of an exemplary embodiment of the present disclosure, in a quantized artificial neural network accelerator system, quantization is performed in the interconnect bus according to a precision, without separate processing by the CPU/GPU, so that compared with quantization performed by a host processor and an accelerator according to a related-art quantization method, the number of instructions is reduced, improving performance and memory efficiency. Further, the computational burden of the host processor is reduced, reducing power consumption and improving performance.
Type: Grant
Filed: September 21, 2023
Date of Patent: April 15, 2025
Assignee: Kwangwoon University Industry-Academic Collaboration Foundation
Inventors: Young Ho Gong, Woo Hyuck Park, Ye Bin Kwon, Donggyu Sim
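The operation the interconnect offloads is essentially precision reduction in transit. A uniform quantizer (assumed here; the patent does not specify the scheme, and all names are illustrative) shows the computation being moved off the host processor:

```python
# Illustrative sketch: uniform quantization of values to a given bit
# precision, as the interconnect might apply to data in transit.

def quantize(values, bits, v_min, v_max):
    """Uniformly quantize floats in [v_min, v_max] to (2**bits - 1) levels."""
    levels = (1 << bits) - 1
    scale = (v_max - v_min) / levels
    return [round((min(max(v, v_min), v_max) - v_min) / scale) for v in values]
```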