Patents Examined by Nathan Sadler
-
Patent number: 12373351
Abstract: A distributed metadata cache for a distributed object store includes a plurality of cache entries, an active-cache-entry set and an unreferenced-cache-entry set. Each cache entry includes information relating to whether at least one input/output (IO) thread is referencing the cache entry and information relating to whether the cache entry is no longer referenced by at least one IO thread. Each cache entry in the active-cache-entry set includes information that indicates that at least one IO thread is actively referencing the cache entry. Each cache entry in the unreferenced-cache-entry set is eligible for eviction from the distributed metadata cache by including information that indicates that the cache entry is no longer actively referenced by an IO thread.
Type: Grant
Filed: August 18, 2023
Date of Patent: July 29, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Vijaya Kumar Jakkula, Siva Ramineni, Venkata Bhanu Prakash Gollapudi
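The two-set layout described in this abstract maps naturally onto a reference-counted cache. The sketch below is a minimal, single-node Python illustration under assumed names (CacheEntry, acquire/release); it is not the patented distributed implementation.

```python
# Minimal sketch of an active/unreferenced cache-entry split (illustrative only;
# names and structure are assumptions, not the patented design).
class CacheEntry:
    def __init__(self, key, metadata):
        self.key = key
        self.metadata = metadata
        self.ref_count = 0          # how many IO threads currently reference the entry

class MetadataCache:
    def __init__(self):
        self.entries = {}           # key -> CacheEntry
        self.active = set()         # keys referenced by at least one IO thread
        self.unreferenced = set()   # keys eligible for eviction

    def acquire(self, key, metadata=None):
        """Called by an IO thread before using an entry."""
        entry = self.entries.get(key)
        if entry is None:
            entry = CacheEntry(key, metadata)
            self.entries[key] = entry
        entry.ref_count += 1
        self.unreferenced.discard(key)
        self.active.add(key)
        return entry

    def release(self, key):
        """Called by an IO thread when it is done with an entry."""
        entry = self.entries[key]
        entry.ref_count -= 1
        if entry.ref_count == 0:    # no IO thread references it any more
            self.active.discard(key)
            self.unreferenced.add(key)

    def evict_one(self):
        """Evict any entry from the unreferenced set; active entries are never evicted."""
        if self.unreferenced:
            key = self.unreferenced.pop()
            del self.entries[key]
            return key
        return None
```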
-
Patent number: 12367142
Abstract: A decompression apparatus includes a cache memory and a decoder. The decoder is to receive a compressed input data stream including literals and matches. Each literal represents a data value, and each match represents a respective sequence of literals by a respective offset pointing to a respective past occurrence of the sequence of literals. The decoder is to decompress the input data stream by replacing each match with the corresponding past occurrence, so as to produce an output data stream. In replacing a given match with the corresponding past occurrence, the decoder is to (i) when the offset indicates that the past occurrence is cached in the cache memory, retrieve the past occurrence from the cache memory, and (ii) when the offset indicates that the past occurrence is not contained in the cache memory, fetch the past occurrence from an external memory.
Type: Grant
Filed: January 23, 2024
Date of Patent: July 22, 2025
Assignee: Mellanox Technologies, Ltd
Inventors: Hillel Chapman, Saleh Bohsas
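As a rough illustration of the offset-based routing the abstract describes, the sketch below decompresses an LZ-style token stream, serving recent history from a small cache while older history comes from the full output buffer standing in for external memory. The token format and CACHE_SIZE are assumptions, not the device's actual compressed format or decoder.

```python
# Illustrative LZ-style decompression in which recent history is kept in a small cache
# and older history lives in "external memory". This models only the offset-based
# routing decision from the abstract, not the actual hardware decoder.
from collections import deque

CACHE_SIZE = 8          # assumed size of the fast history cache, in bytes

def decompress(tokens):
    external = bytearray()            # full output history ("external memory")
    cache = deque(maxlen=CACHE_SIZE)  # most recent CACHE_SIZE output bytes

    def emit(byte):
        external.append(byte)
        cache.append(byte)

    for token in tokens:
        if token[0] == "lit":                       # ("lit", value): copy literal through
            emit(token[1])
        else:                                       # ("match", offset, length)
            _, offset, length = token
            for _ in range(length):
                if offset <= len(cache):
                    byte = cache[-offset]           # past occurrence is cached: fast path
                else:
                    byte = external[-offset]        # not cached: fetch from external memory
                emit(byte)
    return bytes(external)

# "abcabcabc": three literals followed by a match of length 6 at offset 3.
tokens = [("lit", ord("a")), ("lit", ord("b")), ("lit", ord("c")), ("match", 3, 6)]
assert decompress(tokens) == b"abcabcabc"
```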
-
Patent number: 12360702
Abstract: A solid state drive (SSD) includes a NAND memory and an SSD controller. The SSD controller includes an interface coupled to a host machine, a nonvolatile memory controller coupled to the interface, and a processor coupled to the nonvolatile memory controller. The SSD controller is configured to: receive, via the interface, a write command from the host machine; process, by the nonvolatile memory controller, the write command; transmit, from the nonvolatile memory controller to the processor, a system message; process, by the processor according to the Zoned Namespaces (ZNS) protocol, the system message; obtain, by the nonvolatile memory controller via the interface, host data for storage from the host machine; and write the host data to the NAND memory based on a result of processing the system message. Processing the system message by the processor and obtaining the host data by the nonvolatile memory controller are executed in parallel.
Type: Grant
Filed: February 24, 2023
Date of Patent: July 15, 2025
Assignee: T-Head (Shanghai) Semiconductor Co., Ltd.
Inventors: Yuming Xu, Jiu Heng, Fei Xue, Wentao Wu, Jifeng Wang, Jiajing Jin, Xiang Gao
-
Patent number: 12360677
Abstract: Aspects of a storage device are provided for handling detection and operations associated with an erase block type of the block. The storage device includes one or more non-volatile memories each including a block, and one or more controllers operable to cause the storage device to perform erase type detection and associated operations for single blocks or metablocks. For instance, the controller(s) may erase the block prior to a power loss event, perform at least one read of the block following the power loss event, identify the erase block type of the block in response to the at least one read, and program the block based on the identified erase block type without performing a subsequent erase prior to the program. The controller(s) may also perform metablock operations associated with the identified erase block type. Thus, unnecessary erase operations during recovery from an ungraceful shutdown (UGSD) may be mitigated.
Type: Grant
Filed: October 11, 2023
Date of Patent: July 15, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: YunKyu Lee, SangYun Jung, Minyoung Kim, SeungBeom Seo, MinWoo Lee
-
Patent number: 12360903
Abstract: The described technology provides a method including receiving a request for allocating an incoming cacheline to one of a plurality of SFT entries in a snoop filter (SFT), performing a tag lookup function for a tag of the incoming cacheline in the SFT, in response to determining that the incoming cacheline is not part of an existing sector of any of the plurality of SFT entries, finding one or more candidate SFT entries, wherein the candidate SFT entries can be converted to an aggregated entry, selecting one of the candidate SFT entries, and allocating the incoming cacheline to the selected SFT entry.
Type: Grant
Filed: May 31, 2023
Date of Patent: July 15, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dimitrios Kaseridis, Mukund Ramakrishna
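To make the aggregation idea concrete, here is a toy Python model in which per-line entries from the same sector can be merged into one aggregated entry to free a slot for the incoming cacheline. The entry fields, sharer tracking, and candidate-selection policy are assumptions for illustration, not the patented allocation logic.

```python
# Rough sketch of sector aggregation in a snoop-filter table (SFT). Field names and
# the candidate-selection policy are assumptions, not the patented mechanism.
LINES_PER_SECTOR = 4

class LineEntry:
    """Tracks one or more cachelines of a sector plus their sharers."""
    def __init__(self, line, sharers):
        self.sector = line // LINES_PER_SECTOR
        self.lines = {line % LINES_PER_SECTOR}
        self.sharers = set(sharers)

class SnoopFilter:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = []

    def lookup(self, line):
        sector, off = divmod(line, LINES_PER_SECTOR)
        return next((e for e in self.entries
                     if e.sector == sector and off in e.lines), None)

    def allocate(self, line, requester):
        hit = self.lookup(line)
        if hit:                                   # line already tracked by an entry
            hit.sharers.add(requester)
            return hit
        if len(self.entries) < self.capacity:     # free slot available
            self.entries.append(LineEntry(line, {requester}))
            return self.entries[-1]
        # Table full: find candidate entries from the same sector that can be
        # converted into a single aggregated entry, freeing a slot.
        by_sector = {}
        for e in self.entries:
            by_sector.setdefault(e.sector, []).append(e)
        for sector, group in by_sector.items():
            if len(group) > 1:                    # candidates found
                merged = group[0]
                for other in group[1:]:
                    merged.lines |= other.lines
                    merged.sharers |= other.sharers   # coarser, aggregated tracking
                    self.entries.remove(other)
                self.entries.append(LineEntry(line, {requester}))
                return self.entries[-1]
        return None   # no candidates; a real SFT would evict with back-invalidation

sf = SnoopFilter(capacity=2)
sf.allocate(0, "core0")     # sector 0, line 0
sf.allocate(1, "core1")     # sector 0, line 1 (separate entry, table now full)
sf.allocate(9, "core0")     # sector 2: the two sector-0 entries merge to make room
print(len(sf.entries), sorted(sf.entries[0].lines), sorted(sf.entries[0].sharers))
# 2 [0, 1] ['core0', 'core1']
```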
-
Patent number: 12353326
Abstract: A control method, for controlling a reading operation of a memory device, includes the following steps. A toggle signal is provided to the memory device, and the toggle signal has a toggle frequency. A reading operation of a page of the memory device is performed according to the toggle signal, wherein the page includes a plurality of chunks. The toggle frequency is set as a target toggle frequency, and the reading operation of a first chunk of the page is performed according to the target toggle frequency, so as to receive a data signal of the memory device. After the reading operation of the first chunk is completed, the toggle frequency is selectively adjusted to perform the reading operation of a second chunk after the first chunk according to a stable state of the data signal and the data strobe signal.
Type: Grant
Filed: April 19, 2024
Date of Patent: July 8, 2025
Assignee: MACRONIX INTERNATIONAL CO., LTD.
Inventors: Shih-Chou Juan, Shun-Li Cheng, Hung-Yi Chiang
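The per-chunk adjustment can be pictured as a simple control loop: read a chunk at the current toggle frequency, then lower the frequency for the next chunk if the data and data-strobe signals were not stable. The frequency ladder and stability callback below are hypothetical stand-ins, not the controller's actual behavior.

```python
# Hypothetical model of per-chunk toggle-frequency adjustment. The stability check and
# frequency steps are illustrative assumptions.
FREQ_STEPS_MHZ = [200, 166, 133, 100]   # assumed ladder of toggle frequencies

def read_chunk(chunk, freq_mhz):
    # Stand-in for the actual toggled read transfer at the given frequency.
    return bytes(chunk)

def read_page(chunks, signal_is_stable, target_mhz=200):
    """Read each chunk of a page, dropping the toggle frequency whenever the
    data/data-strobe signals were not stable during the previous chunk."""
    freq = target_mhz
    data = []
    for i, chunk in enumerate(chunks):
        data.append(read_chunk(chunk, freq))          # perform the chunk read at `freq`
        if not signal_is_stable(i, freq):             # e.g. signal eye too narrow
            lower = [f for f in FREQ_STEPS_MHZ if f < freq]
            if lower:
                freq = lower[0]                       # selectively adjust for the next chunk
    return data, freq

# Example: the second chunk's read was marginal, so later chunks use a lower frequency.
unstable_after = {1}
data, final = read_page([b"\x01"] * 4, lambda i, f: i not in unstable_after)
print(final)   # 166
```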
-
Patent number: 12353765
Abstract: A data storage device comprising a data port, configured to transceive data via a wired communication channel, a control port, configured to transceive data via a peer-to-peer wireless communication channel, a non-volatile storage medium, and a controller. In response to receiving, from a user device, via the control port, a command to enable control channel access, the controller performs an unlocking process, and, in response to completing the unlocking process, transitions from a locked state to a control channel access state. In response to being in the control channel access state, and in response to receiving, from the user device, via the control port, a write command, the controller stores write data in the storage medium, and, in response to receiving, from a host computer, via the data port, a command to access the storage medium, the controller transmits, to the host computer, a locked state indication.
Type: Grant
Filed: August 10, 2023
Date of Patent: July 8, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Ramanathan Muthiah, Sundararajan Rajagopal
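The locked and control-channel-access behavior reads as a small state machine. The Python sketch below models it with placeholder command names and a stand-in credential check; per the abstract, data-port accesses to the medium still receive a locked-state indication while the drive is in the control-channel-access state.

```python
# Simplified state machine for the locked / control-channel-access behaviour described
# above. Command names and the unlock check are hypothetical placeholders.
class SecureDrive:
    LOCKED = "locked"
    CONTROL_ACCESS = "control_channel_access"

    def __init__(self, secret):
        self._secret = secret
        self.state = self.LOCKED
        self._storage = {}

    def control_port_command(self, cmd, **kw):
        """Commands arriving over the peer-to-peer wireless control channel."""
        if cmd == "enable_control_channel_access":
            if kw.get("credential") == self._secret:     # stand-in for the unlocking process
                self.state = self.CONTROL_ACCESS         # locked -> control channel access
            return self.state
        if cmd == "write" and self.state == self.CONTROL_ACCESS:
            self._storage[kw["lba"]] = kw["data"]        # store write data in the medium
            return "ok"
        return "rejected"

    def data_port_command(self, cmd, **kw):
        """Commands from a host computer over the wired data channel. Per the abstract,
        medium access over the data port during control-channel access gets a
        locked-state indication rather than data."""
        return "locked"

drive = SecureDrive(secret="1234")
print(drive.control_port_command("write", lba=0, data=b"x"))      # rejected (still locked)
drive.control_port_command("enable_control_channel_access", credential="1234")
print(drive.control_port_command("write", lba=0, data=b"hello"))  # ok
print(drive.data_port_command("read", lba=0))                     # locked
```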
-
Patent number: 12353741
Abstract: Various implementations described herein relate to systems and methods for managing superblocks, including a non-volatile storage including a superblock and a controller configured to notify a host of a size of the superblock, determine a stream that aligns with the superblock, write data corresponding to the stream to the superblock, and determine that writing the data corresponding to the stream has completed.
Type: Grant
Filed: April 18, 2024
Date of Patent: July 8, 2025
Assignee: KIOXIA CORPORATION
Inventors: Steven Wells, Neil Buxton, Nigel Horspool, Mohinder Saluja, Paul Suhler
-
Patent number: 12299286
Abstract: An apparatus to facilitate in-place memory copy during remote data transfer in a heterogeneous compute environment is disclosed. The apparatus includes a processor to receive data via a network interface card (NIC) of a hardware accelerator device; identify a destination address of memory of the hardware accelerator device to write the data; determine that access control bits of the destination address in page tables maintained by a memory management unit (MMU) indicate that memory pages of the destination address are both registered and free; write the data to the memory pages of the destination address; and update the access control bits for memory pages of the destination address to indicate that the memory pages are restricted, wherein setting the access control bits to restricted prevents the NIC and a compute kernel of the hardware accelerator device from accessing the memory pages.
Type: Grant
Filed: March 1, 2024
Date of Patent: May 13, 2025
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Sarbartha Banerjee
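A toy model of the access-control-bit flow: pages marked both registered and free accept the incoming write, and are then flipped to restricted so neither the NIC nor a compute kernel can access them. The flag names and page-table shape are assumptions, not Intel's actual page-table encoding.

```python
# Toy model of the access-control-bit flow described in the abstract. A page may be
# written only if it is marked both registered and free; after the write it is marked
# restricted so further NIC/kernel access is blocked. Flag names are assumptions.
from enum import Flag, auto

class PageBits(Flag):
    REGISTERED = auto()
    FREE = auto()
    RESTRICTED = auto()

class PageTable:
    def __init__(self, num_pages):
        self.bits = [PageBits.REGISTERED | PageBits.FREE for _ in range(num_pages)]
        self.pages = [b""] * num_pages

    def nic_write(self, page_idx, data):
        b = self.bits[page_idx]
        # Write only if the destination page is both registered and free.
        if (PageBits.REGISTERED in b) and (PageBits.FREE in b) and (PageBits.RESTRICTED not in b):
            self.pages[page_idx] = data
            # Update access control bits: restricted blocks further access.
            self.bits[page_idx] = PageBits.REGISTERED | PageBits.RESTRICTED
            return True
        return False

pt = PageTable(num_pages=4)
assert pt.nic_write(2, b"payload") is True
assert pt.nic_write(2, b"second write") is False   # page is now restricted
```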
-
Patent number: 12299035
Abstract: The technology provides a consistent hashing approach that can be used with many types of hash functions. This approach, called flip hashing, enables dynamic adjustment of a hash table while satisfying balance and monotonicity requirements. Flip hashing is particularly applicable to database and load rebalancing applications due to its low computational cost and ease of implementation. As computing resources are added to a system, keys are remapped evenly across the newly added resources, e.g., by one or more load-balancing or routing servers. This enables upscaling of the system to minimize hotspot issues. The computational cost for a flip hash approach is effectively constant regardless of the number of resources. This can provide fast response times to queries and avoid overloading of routing servers.
Type: Grant
Filed: December 22, 2023
Date of Patent: May 13, 2025
Assignee: Datadog Inc.
Inventors: Charles-Philippe Masson, Homin K. Lee
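Flip hashing itself is defined by the inventors; as a generic stand-in, the published jump consistent hash (Lamping and Veach, 2014) below illustrates the balance and monotonicity properties the abstract refers to: when a bucket is added, keys either stay where they were or move only to the new bucket.

```python
# Not flip hashing: this is the well-known jump consistent hash, shown only to
# illustrate the balance/monotonicity properties described in the abstract.
def jump_hash(key: int, num_buckets: int) -> int:
    b, j = -1, 0
    while j < num_buckets:
        b = j
        key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF  # 64-bit LCG step
        j = int(float(b + 1) * (float(1 << 31) / float((key >> 33) + 1)))
    return b

# Monotonicity check: growing from 10 to 11 buckets only moves keys into bucket 10.
moved = 0
for k in range(100_000):
    before, after = jump_hash(k, 10), jump_hash(k, 11)
    assert before == after or after == 10
    moved += before != after
print(f"{moved} of 100000 keys remapped")   # roughly 1/11 of the keys
```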
-
Patent number: 12299287
Abstract: Techniques for providing host applications the ability to dynamically manage application-specific functionality of storage applications. The techniques include providing, on a storage system, a storage application such as a data object processing pipeline including a series of pipeline elements (PEs), and receiving, at the storage system from a host computer, a command containing parameters for configuring an application of a specified PE from among the series of PEs.
Type: Grant
Filed: December 15, 2022
Date of Patent: May 13, 2025
Assignee: Dell Products L.P.
Inventors: Philippe Armangau, Alan L. Taylor, Vasu Subramanian
-
Patent number: 12293085
Abstract: Some data storage devices have a plurality of memory dies that can be read in parallel for certain types of read requests. Read requests pertaining to a garbage collection operation are often generated sequentially and, thus, are not eligible for parallel execution in the memory dies. In an example data storage device presented herein, such read requests are consolidated and sent to the memory for execution in parallel across the memory dies.
Type: Grant
Filed: July 25, 2023
Date of Patent: May 6, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Pradeep Seetaram Hegde, Ramanathan Muthiah, Nagaraj Dandigenahalli Rudrappa, Vimal Kumar Jain
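The consolidation step can be sketched as grouping sequentially generated garbage-collection reads by die so each group can be issued concurrently. The block-to-die mapping and request format below are assumptions, not the device's firmware interface.

```python
# Illustrative consolidation of sequentially generated garbage-collection reads into
# one batch per memory die so the dies can be read in parallel.
from collections import defaultdict

NUM_DIES = 4

def die_of(physical_block):
    return physical_block % NUM_DIES        # assumed block-to-die interleaving

def consolidate_gc_reads(read_requests):
    """Group single-block GC reads by die; each group can be issued concurrently."""
    per_die = defaultdict(list)
    for req in read_requests:
        per_die[die_of(req["block"])].append(req)
    return per_die

# Sequential GC reads for blocks 0..7 become four per-die batches of two reads each,
# which the backend can dispatch to all four dies at once.
reads = [{"block": b, "page": 0} for b in range(8)]
batches = consolidate_gc_reads(reads)
print({die: [r["block"] for r in reqs] for die, reqs in sorted(batches.items())})
# {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```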
-
Patent number: 12287971
Abstract: In a disclosed method of operating a memory system, whether a first condition is satisfied is determined. The first condition is associated with free blocks and garbage collection (GC) target blocks from among a plurality of memory blocks. In response to the first condition being satisfied, a size of a data sample associated with executions of a host input/output request and GC is adjusted. The data sample is generated based on the adjusted size of the data sample. The data sample includes a downscaled current valid page count (VPC) ratio and a first number of previous host input/output request to GC processing ratios. A current host input/output request to GC processing ratio is calculated based on the data sample. The host input/output request and the GC are performed based on the current host input/output request to GC processing ratio.
Type: Grant
Filed: October 12, 2023
Date of Patent: April 29, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Changho Choi, Young Bong Kim, Eun-Kyung Choi
-
Patent number: 12282429
Abstract: An apparatus includes a processor core and a memory hierarchy. The memory hierarchy includes main memory and one or more caches between the main memory and the processor core. A plurality of hardware pre-fetchers are coupled to the memory hierarchy and a pre-fetch control circuit is coupled to the plurality of hardware pre-fetchers. The pre-fetch control circuit is configured to compare changes in one or more cache performance metrics over two or more sampling intervals and control operation of the plurality of hardware pre-fetchers in response to a change in one or more performance metrics between at least a first sampling interval and a second sampling interval.
Type: Grant
Filed: September 13, 2022
Date of Patent: April 22, 2025
Assignee: Huawei Technologies Co., Ltd.
Inventors: Elnaz Ebrahimi, Ehsan Khish Ardestani Zadeh, Wei-Yu Chen, Liang Peng
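One way to picture the control loop: compare a cache metric between two sampling intervals and throttle the pre-fetchers when it degrades. The metric (miss rate), threshold, and simple on/off control below are illustrative assumptions; the patent covers hardware control of multiple pre-fetchers.

```python
# Sketch of throttling hardware prefetchers when a cache metric degrades between two
# sampling intervals. Metric, threshold, and on/off policy are assumed for illustration.
class PrefetchControl:
    def __init__(self, degrade_threshold=0.02):
        self.degrade_threshold = degrade_threshold
        self.prev_miss_rate = None
        self.prefetchers_enabled = True

    def end_of_interval(self, misses, accesses):
        miss_rate = misses / max(accesses, 1)
        if self.prev_miss_rate is not None:
            delta = miss_rate - self.prev_miss_rate      # change between the two intervals
            if delta > self.degrade_threshold:
                self.prefetchers_enabled = False         # prefetching appears to hurt
            elif delta < -self.degrade_threshold:
                self.prefetchers_enabled = True          # metrics improved, re-enable
        self.prev_miss_rate = miss_rate
        return self.prefetchers_enabled

ctl = PrefetchControl()
print(ctl.end_of_interval(misses=100, accesses=10_000))   # True  (first interval, no baseline)
print(ctl.end_of_interval(misses=600, accesses=10_000))   # False (miss rate jumped by 5%)
print(ctl.end_of_interval(misses=150, accesses=10_000))   # True  (miss rate recovered)
```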
-
Patent number: 12282423
Abstract: Some data storage devices select blocks of memory from a free block pool and randomly allocate the blocks as primary and secondary blocks to redundantly store data in a write operation. However, some blocks, such as blocks on the edge of a plane, may not serve well as primary blocks. One example data storage device presented herein addresses this problem by allocating such blocks as secondary blocks instead of primary blocks.
Type: Grant
Filed: July 5, 2023
Date of Patent: April 22, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Manoj M. Shenoy, Lakshmi Sowjanya Sunkavelli, Niranjani Rajagopal
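The allocation rule amounts to steering edge-of-plane blocks toward the secondary role. A minimal sketch follows, assuming a simple positional edge test and a flat free pool; both are placeholders for whatever criterion the device actually uses.

```python
# Minimal sketch of the allocation rule described above: blocks on the edge of a plane
# are steered to the secondary (redundant) role rather than the primary role.
BLOCKS_PER_PLANE = 1024
EDGE_MARGIN = 8     # assumed width of the "edge" region of a plane

def is_edge_block(block):
    pos = block % BLOCKS_PER_PLANE
    return pos < EDGE_MARGIN or pos >= BLOCKS_PER_PLANE - EDGE_MARGIN

def allocate_primary_secondary(free_pool):
    """Pick a primary and a secondary block, preferring non-edge blocks as primary."""
    primary = next((b for b in free_pool if not is_edge_block(b)), free_pool[0])
    remaining = [b for b in free_pool if b != primary]
    # Edge blocks serve fine as secondary copies, so prefer them there.
    secondary = next((b for b in remaining if is_edge_block(b)), remaining[0])
    return primary, secondary

pool = [0, 3, 500, 1023]          # 0, 3 and 1023 are edge blocks in this model
print(allocate_primary_secondary(pool))   # (500, 0)
```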
-
Patent number: 12271314
Abstract: A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.
Type: Grant
Filed: November 14, 2023
Date of Patent: April 8, 2025
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
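The claimed sequence is easiest to read as an ordered script. The sketch below only records the steps in order using stand-in controller objects; it is not the actual L1/L2 hardware handshake.

```python
# Ordered summary of the cache-resize handshake described above, using stand-in
# controller objects that simply log each step.
class Controller:
    def __init__(self, name, log):
        self.name, self.log = name, log
    def do(self, step):
        self.log.append(f"{self.name}: {step}")

def resize_l1_main_cache(new_size):
    log = []
    l1, l2 = Controller("L1", log), Controller("L2", log)
    l1.do("service pending CPU reads/writes")
    l1.do("stall new CPU reads/writes")
    l1.do("write back and invalidate L1 main cache")
    l2.do("flush pipeline (on L1-invalidated indication)")
    l2.do("stall requests from all masters")
    l2.do("clear shadow L1 main cache")
    l2.do(f"resize shadow L1 main cache to {new_size}")
    l1.do(f"resize L1 main cache to {new_size} and resume")
    return log

for step in resize_l1_main_cache(new_size=32 * 1024):
    print(step)
```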
-
Patent number: 12265474
Abstract: Techniques are disclosed relating to dynamically allocating and mapping private memory for requesting circuitry. Disclosed circuitry may receive a private address and translate the private address to a virtual address (which an MMU may then translate to a physical address to actually access a storage element). In some embodiments, private memory allocation circuitry is configured to generate page table information and map private memory pages for requests if the page table information is not already set up. In various embodiments, this may advantageously allow dynamic private memory allocation, e.g., to efficiently allocate memory for graphics shaders with different types of workloads. Disclosed caching techniques for page table information may improve performance relative to traditional techniques. Further, disclosed embodiments may facilitate memory consolidation across a device such as a graphics processor.
Type: Grant
Filed: October 19, 2023
Date of Patent: April 1, 2025
Assignee: Apple Inc.
Inventors: Justin A. Hensley, Karl D. Mann, Yoong Chert Foo, Terence M. Potter, Frank W. Liljeros, Ralph C. Taylor
-
Patent number: 12248782
Abstract: An apparatus employed in a processing device comprises a processor configured to process packed data of a predefined data structure. A memory fetch device is coupled to the processor and is configured to determine addresses of the packed data for the processor. The packed data is stored on a memory device that is coupled to the processor. The memory fetch device is further configured to provide output data based on the addresses of the packed data to the processor, where the output data is configured according to the predefined data structure.
Type: Grant
Filed: August 25, 2020
Date of Patent: March 11, 2025
Assignee: Infineon Technologies AG
Inventors: Andrew Stevens, Wolfgang Ecker, Sebastian Prebeck
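A software analogue of computing packed-field addresses for a predefined record layout is shown below; the layout, field names, and offsets are assumptions chosen only to illustrate the address arithmetic, not the apparatus itself.

```python
# Illustrative address computation for packed records: given a record layout, the
# helpers compute the byte address of each field and return data arranged according
# to the predefined structure. Layout values are assumptions.
import struct

RECORD_FORMAT = "<HBI"                   # packed little-endian: u16, u8, u32 (7 bytes)
RECORD_SIZE = struct.calcsize(RECORD_FORMAT)
FIELD_OFFSETS = {"id": 0, "flags": 2, "value": 3}

def field_address(base_addr, record_index, field):
    """Byte address of one field of one packed record in memory."""
    return base_addr + record_index * RECORD_SIZE + FIELD_OFFSETS[field]

def fetch_record(memory, base_addr, record_index):
    start = base_addr + record_index * RECORD_SIZE
    return dict(zip(("id", "flags", "value"),
                    struct.unpack_from(RECORD_FORMAT, memory, start)))

mem = bytearray(64)
struct.pack_into(RECORD_FORMAT, mem, 2 * RECORD_SIZE, 7, 1, 0xDEADBEEF)
print(field_address(0, 2, "value"))      # 17
print(fetch_record(mem, 0, 2))           # {'id': 7, 'flags': 1, 'value': 3735928559}
```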
-
Patent number: 12242378
Abstract: Devices and techniques are disclosed wherein an end user can remotely trigger direct data management activities of a data storage device (DSD), such as creating a data snapshot, resetting a snapshot, and setting permissions at the DSD via a remote mobile device app interface.
Type: Grant
Filed: June 27, 2022
Date of Patent: March 4, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Ramanathan Muthiah, Balaji Thraksha Venkataramanan
-
Patent number: 12229444
Abstract: Methods, systems, and devices for command scheduling for a memory system are described. A memory system may be configured to analyze a received command during an initialization procedure for one or more components. In some examples, the memory system may initialize an interface and one or more processing elements as part of an initialization procedure upon transitioning from a first power mode to a second power mode. Accordingly, the command may be analyzed while the processing elements are being initialized such that, upon the processing elements being fully initialized, the command may be processed (e.g., executed).
Type: Grant
Filed: August 22, 2022
Date of Patent: February 18, 2025
Assignee: Micron Technology, Inc.
Inventors: Domenico Francesco De Angelis, Crescenzo Attanasio, Carminantonio Manganelli
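The overlap the abstract describes can be modeled in software as analyzing the command on one thread while initialization proceeds on another, then executing once both finish. The thread-based sketch below is purely illustrative of the scheduling idea; the sleeps and function names are placeholders.

```python
# Sketch of analysing a command while processing elements are still initialising, then
# executing it once initialisation completes. Purely illustrative; the patented
# scheduler does this inside the memory system.
import threading
import time

def initialize_processing_elements(ready_event):
    time.sleep(0.05)                 # stand-in for bringing up interface + processing elements
    ready_event.set()

def analyze_command(cmd):
    time.sleep(0.01)                 # stand-in for decode/validation work
    return {"opcode": cmd["opcode"], "valid": True}

def handle_command_during_init(cmd):
    ready = threading.Event()
    threading.Thread(target=initialize_processing_elements, args=(ready,)).start()
    plan = analyze_command(cmd)      # analysis overlaps with initialisation
    ready.wait()                     # processing elements fully initialised
    return f"executed {plan['opcode']}" if plan["valid"] else "rejected"

print(handle_command_during_init({"opcode": "READ", "addr": 0x100}))   # executed READ
```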