Patents Examined by Nathan Sadler
-
Patent number: 12293085
Abstract: Some data storage devices have a plurality of memory dies that can be read in parallel for certain types of read requests. Read requests pertaining to a garbage collection operation are often generated sequentially and, thus, are not eligible for parallel execution in the memory dies. In an example data storage device presented herein, such read requests are consolidated and sent to the memory for execution in parallel across the memory dies.
Type: Grant
Filed: July 25, 2023
Date of Patent: May 6, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Pradeep Seetaram Hegde, Ramanathan Muthiah, Nagaraj Dandigenahalli Rudrappa, Vimal Kumar Jain
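The consolidation idea can be sketched as follows. This is a hypothetical model, not the patented implementation: the die count, the address-to-die mapping, and all names are illustrative assumptions.

```python
# Hypothetical sketch: group sequentially generated garbage-collection
# reads by destination die so that one read per die can be issued in
# parallel, instead of executing the stream strictly in order.

NUM_DIES = 4  # illustrative die count


def die_of(address):
    # Assume addresses are interleaved across dies round-robin
    # (an assumption for illustration, not taken from the patent).
    return address % NUM_DIES


def consolidate(gc_read_addresses):
    """Split a sequential stream of GC reads into per-die batches."""
    batches = {die: [] for die in range(NUM_DIES)}
    for addr in gc_read_addresses:
        batches[die_of(addr)].append(addr)
    return batches


batches = consolidate([0, 1, 2, 3, 4, 8, 5])
# Reads 0, 4, 8 all target die 0 and still run serially on that die,
# but each "round" can now issue up to one read per die in parallel.
```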
-
Patent number: 12287971
Abstract: In a disclosed method of operating a memory system, it is determined whether a first condition is satisfied. The first condition is associated with free blocks and garbage collection (GC) target blocks from among a plurality of memory blocks. In response to the first condition being satisfied, a size of a data sample associated with executions of a host input/output request and GC is adjusted. The data sample is generated based on the adjusted size of the data sample. The data sample includes a downscaled current valid page count (VPC) ratio and a first number of previous host input/output request to GC processing ratios. A current host input/output request to GC processing ratio is calculated based on the data sample. The host input/output request and the GC are performed based on the current host input/output request to GC processing ratio.
Type: Grant
Filed: October 12, 2023
Date of Patent: April 29, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Changho Choi, Young Bong Kim, Eun-Kyung Choi
-
Patent number: 12282423
Abstract: Some data storage devices select blocks of memory from a free block pool and randomly allocate the blocks as primary and secondary blocks to redundantly store data in a write operation. However, some blocks, such as blocks on the edge of a plane, may not serve well as primary blocks. One example data storage device presented herein addresses this problem by allocating such blocks as secondary blocks instead of primary blocks.
Type: Grant
Filed: July 5, 2023
Date of Patent: April 22, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Manoj M. Shenoy, Lakshmi Sowjanya Sunkavelli, Niranjani Rajagopal
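The allocation policy can be sketched in a few lines. The plane geometry and the edge test below are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: route plane-edge blocks to the secondary
# (redundant-copy) role and keep interior blocks for the primary role.

BLOCKS_PER_PLANE = 4  # illustrative plane geometry


def is_edge_block(block):
    # Treat the first and last block position of each plane as "edge"
    # blocks (an illustrative criterion).
    pos = block % BLOCKS_PER_PLANE
    return pos == 0 or pos == BLOCKS_PER_PLANE - 1


def allocate(free_blocks):
    """Partition free blocks into primary and secondary roles."""
    primary, secondary = [], []
    for blk in free_blocks:
        (secondary if is_edge_block(blk) else primary).append(blk)
    return primary, secondary


allocate([0, 1, 2, 3, 4, 5])  # primary=[1, 2, 5], secondary=[0, 3, 4]
```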
-
Patent number: 12282429
Abstract: An apparatus includes a processor core and a memory hierarchy. The memory hierarchy includes main memory and one or more caches between the main memory and the processor core. A plurality of hardware pre-fetchers are coupled to the memory hierarchy and a pre-fetch control circuit is coupled to the plurality of hardware pre-fetchers. The pre-fetch control circuit is configured to compare changes in one or more cache performance metrics over two or more sampling intervals and control operation of the plurality of hardware pre-fetchers in response to a change in one or more performance metrics between at least a first sampling interval and a second sampling interval.
Type: Grant
Filed: September 13, 2022
Date of Patent: April 22, 2025
Assignee: Huawei Technologies Co., Ltd.
Inventors: Elnaz Ebrahimi, Ehsan Khish Ardestani Zadeh, Wei-Yu Chen, Liang Peng
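One simple form of interval-based prefetcher control can be sketched as below. The metric (miss rate), the threshold, and the flip-on-degradation policy are illustrative assumptions, not the claimed control logic:

```python
# Hypothetical sketch: compare a cache metric across two sampling
# intervals and adjust the prefetcher enable in response.

DEGRADATION_THRESHOLD = 0.05  # illustrative


def adjust(miss_rate_prev, miss_rate_curr, prefetchers_enabled):
    # If the miss rate rose noticeably over the interval, the current
    # prefetcher setting is presumed unhelpful: try the opposite one.
    if miss_rate_curr - miss_rate_prev > DEGRADATION_THRESHOLD:
        return not prefetchers_enabled
    return prefetchers_enabled


adjust(0.10, 0.20, True)   # large degradation: disable prefetchers
adjust(0.10, 0.12, True)   # within threshold: keep current setting
```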
-
Patent number: 12271314
Abstract: A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.
Type: Grant
Filed: November 14, 2023
Date of Patent: April 8, 2025
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
-
Patent number: 12265474
Abstract: Techniques are disclosed relating to dynamically allocating and mapping private memory for requesting circuitry. Disclosed circuitry may receive a private address and translate the private address to a virtual address (which an MMU may then translate to a physical address to actually access a storage element). In some embodiments, private memory allocation circuitry is configured to generate page table information and map private memory pages for requests if the page table information is not already set up. In various embodiments, this may advantageously allow dynamic private memory allocation, e.g., to efficiently allocate memory for graphics shaders with different types of workloads. Disclosed caching techniques for page table information may improve performance relative to traditional techniques. Further, disclosed embodiments may facilitate memory consolidation across a device such as a graphics processor.
Type: Grant
Filed: October 19, 2023
Date of Patent: April 1, 2025
Assignee: Apple Inc.
Inventors: Justin A. Hensley, Karl D. Mann, Yoong Chert Foo, Terence M. Potter, Frank W. Liljeros, Ralph C. Taylor
-
Patent number: 12248782
Abstract: An apparatus employed in a processing device comprises a processor configured to process data of a predefined data structure. A memory fetch device is coupled to the processor and is configured to determine addresses of the packed data for the processor. The packed data is stored on a memory device that is coupled to the processor. The memory fetch device is further configured to provide output data based on the addresses of the packed data to the processor, where the output data is configured according to the predefined data structure.
Type: Grant
Filed: August 25, 2020
Date of Patent: March 11, 2025
Assignee: Infineon Technologies AG
Inventors: Andrew Stevens, Wolfgang Ecker, Sebastian Prebeck
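Address determination for a packed layout can be illustrated with a small sketch. The structure definition (field name and size pairs) is a hypothetical stand-in for whatever predefined data structure the hardware would use:

```python
# Hypothetical sketch: compute byte addresses of fields packed
# back-to-back in a flat buffer, given a predefined (name, size)
# structure description.

structure = [("id", 4), ("flags", 1), ("payload", 8)]  # illustrative


def field_offsets(structure, base=0):
    """Return the starting byte address of each packed field."""
    offsets, addr = {}, base
    for name, size in structure:
        offsets[name] = addr
        addr += size  # next field starts right after this one
    return offsets


field_offsets(structure)  # {'id': 0, 'flags': 4, 'payload': 5}
```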
-
Patent number: 12242378
Abstract: Devices and techniques are disclosed wherein an end user can remotely trigger direct data management activities of a data storage device (DSD), such as creating a data snapshot, resetting a snapshot, and setting permissions at the DSD via a remote mobile device app interface.
Type: Grant
Filed: June 27, 2022
Date of Patent: March 4, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Ramanathan Muthiah, Balaji Thraksha Venkataramanan
-
Patent number: 12229444
Abstract: Methods, systems, and devices for command scheduling for a memory system are described. A memory system may be configured to analyze a received command during an initialization procedure for one or more components. In some examples, the memory system may initialize an interface and one or more processing elements as part of an initialization procedure upon transitioning from a first power mode to a second power mode. Accordingly, the command may be analyzed while the processing elements are being initialized such that, upon the processing elements being fully initialized, the command may be processed (e.g., executed).
Type: Grant
Filed: August 22, 2022
Date of Patent: February 18, 2025
Assignee: Micron Technology, Inc.
Inventors: Domenico Francesco De Angelis, Crescenzo Attanasio, Carminantonio Manganelli
-
Patent number: 12223206
Abstract: A data storage device and method for dynamic controller memory buffer allocation are disclosed. In one embodiment, a data storage device is provided comprising a memory and a controller with a controller memory buffer. The controller is configured to communicate with the memory and is further configured to configure a size of the controller memory buffer; receive a request from the host to modify the size of the controller memory buffer during operation of the data storage device; and determine whether to grant the request to modify the size of the controller memory buffer. Other embodiments are possible, and each of the embodiments can be used alone or together in combination.
Type: Grant
Filed: July 18, 2023
Date of Patent: February 11, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Judah Gamliel Hahn, Alexander Bazarsky, Micha Yonin
-
Patent number: 12216937
Abstract: When user data in a first memory is to be programmed into a second memory and the size of the user data is smaller than a unit program size of the second memory, final data having a size equal to the unit program size may be produced by concatenating the user data and meta data, and the final data may then be programmed into the second memory. The second memory may be non-volatile memory, and the meta data may be meta data for the second memory. In some cases, dummy data may also be concatenated with the user data and meta data to produce the final data. Accordingly, it is possible to perform the program operation according to the unit program size and improve the program operation efficiency by reducing the number of program operations performed to store meta data.
Type: Grant
Filed: May 30, 2023
Date of Patent: February 4, 2025
Assignee: SK hynix Inc.
Inventor: Jung Woo Kim
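The concatenate-and-pad step can be sketched concretely. The unit program size and the zero-byte dummy fill are illustrative assumptions:

```python
# Hypothetical sketch: build a final buffer of exactly the unit program
# size by concatenating user data, metadata, and dummy padding.

UNIT_PROGRAM_SIZE = 16  # illustrative, in bytes


def build_final(user_data: bytes, meta: bytes) -> bytes:
    """Concatenate user data and metadata, then pad with dummy bytes."""
    final = user_data + meta
    if len(final) > UNIT_PROGRAM_SIZE:
        raise ValueError("data exceeds unit program size")
    # Dummy data fills out the buffer to the unit program size so a
    # single program operation covers both user data and metadata.
    return final + b"\x00" * (UNIT_PROGRAM_SIZE - len(final))


buf = build_final(b"userdata", b"meta")  # 8 + 4 bytes, padded to 16
```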
-
Patent number: 12204450
Abstract: A computing system performs shared cache allocation to allocate cache resources to groups of tasks. The computing system monitors the bandwidth at a memory hierarchy device that is at a next level to the cache in a memory hierarchy of the computing system. The computing system estimates a change in dynamic power from a corresponding change in the bandwidth before and after the cache resources are allocated. The allocation of the cache resources is adjusted according to an allocation policy that receives inputs including the estimated change in the dynamic power and a performance indication of task execution.
Type: Grant
Filed: August 17, 2023
Date of Patent: January 21, 2025
Assignee: MediaTek Inc.
Inventors: Yu-Pin Chen, Jia-Ming Chen, Chien-Yuan Lai, Ya Ting Chang, Cheng-Tse Chen
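A bandwidth-to-power estimate of this kind can be sketched with a simple proportional model. The energy-per-byte coefficient is purely illustrative, not a figure from the patent:

```python
# Hypothetical sketch: estimate the change in dynamic power at the
# next-level memory device as proportional to the change in bandwidth
# observed before and after a cache allocation.

ENERGY_PER_BYTE = 2e-9  # illustrative joules per byte transferred


def delta_power(bw_before_bps, bw_after_bps):
    """Estimated dynamic-power change (watts) from a bandwidth change
    (bytes/second) at the next memory level."""
    return (bw_after_bps - bw_before_bps) * ENERGY_PER_BYTE


delta_power(1e9, 2e9)  # bandwidth rose by 1 GB/s -> power rose ~2 W
```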
-
Patent number: 12197729
Abstract: A method, computer program product, and computing system for processing a plurality of input/output (IO) requests for a storage object of a storage system. A sampling interval may be determined for the plurality of IO requests for the storage object based upon, at least in part, a machine learning model processing the plurality of IO requests. The plurality of IO requests may be sampled using the determined sampling interval. The plurality of sampled IO requests may be processed using the machine learning model.
Type: Grant
Filed: May 2, 2023
Date of Patent: January 14, 2025
Assignee: Dell Products L.P.
Inventors: Shaul Dar, Ramakanth Kanagovi, Guhesh Swaminathan, Rajan Kumar
-
Patent number: 12197347
Abstract: Methods, apparatus, systems and articles of manufacture to reduce bank pressure using aggressive write merging are disclosed. An example apparatus includes a first cache storage; a second cache storage; a store queue coupled to at least one of the first cache storage and the second cache storage and operable to: receive a first memory operation; process the first memory operation for storing a first set of data in at least one of the first cache storage and the second cache storage; receive a second memory operation; and prior to storing the first set of data in the at least one of the first cache storage and the second cache storage, merge the first memory operation and the second memory operation.
Type: Grant
Filed: July 31, 2023
Date of Patent: January 14, 2025
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
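Write merging in a store queue can be illustrated as follows. The operation representation (a line address plus a byte-offset map) is a hypothetical model for the sketch:

```python
# Hypothetical sketch: merge two pending writes to the same cache line
# before either is committed, so a single bank access suffices.


def merge_writes(op1, op2):
    """Each op is (line_address, {byte_offset: value}).
    Returns the merged op, or None if the lines differ."""
    addr1, bytes1 = op1
    addr2, bytes2 = op2
    if addr1 != addr2:
        return None  # different cache lines: cannot merge
    merged = dict(bytes1)
    merged.update(bytes2)  # the later write wins on overlapping bytes
    return (addr1, merged)


merge_writes((0x80, {0: 1, 1: 2}), (0x80, {1: 9, 2: 3}))
# -> (0x80, {0: 1, 1: 9, 2: 3}): one bank access instead of two
```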
-
Patent number: 12164920
Abstract: The present disclosure describes techniques for offloading data processing and knowledge synthesis. A set of flags may indicate information about the memory pages in a first memory and may be manageable by at least one central processing unit (CPU). A memory page may be flushed to a second memory if the memory page is associated with a first flag. The first flag may indicate that the memory page is ready to be flushed to the second memory. The second memory may be configured to store a sequence of states of each of the memory pages. Data patterns and relations among the data patterns may be determined by data processing units (DPUs) based on the sequence of states of each of the memory pages. A knowledge base may be built in a third memory based on the data patterns and the relations among the data patterns.
Type: Grant
Filed: November 9, 2022
Date of Patent: December 10, 2024
Assignees: Lemon Inc., Beijing Youzhuju Network Technology Co. Ltd.
Inventors: Viacheslav Dubeyko, Jian Wang
-
Patent number: 12164810
Abstract: Systems and methods are disclosed including a processing device operatively coupled to a memory device. The processing device performs operations comprising receiving a memory access command; responsive to detecting that the memory access command satisfies a trigger condition, recording, in a set of registers, data associated with a plurality of events performed by processing the memory access command; and responsive to detecting that the set of registers comprises the data, disabling write operations on the set of registers.
Type: Grant
Filed: June 29, 2023
Date of Patent: December 10, 2024
Assignee: Micron Technology, Inc.
Inventor: Chandra M. Guda
-
Patent number: 12164791
Abstract: Methods, systems, and devices for initializing memory systems are described. A memory system may transmit, to a host system over a first channel, signaling indicative of a first set of values for a set of parameters associated with communicating information over a second channel between a storage device of the memory system and a memory device of the memory system. The host system may transmit, to the memory system, additional signaling associated with the first set of values for the set of parameters. For instance, the host system may transmit a second set of values for the set of parameters, an acknowledgement to use the first set of values, or a command to perform a training operation on the second channel to identify a second set of values for the set of parameters. The memory system may communicate the information over the second channel based on the additional signaling.
Type: Grant
Filed: July 14, 2022
Date of Patent: December 10, 2024
Assignee: Micron Technology, Inc.
Inventors: Erik V. Pohlmann, Scott Schlachter, Won Ho Choi
-
Patent number: 12141479
Abstract: This application describes systems and methods for facilitating memory access in flash drives. An example method performed by a memory controller may include receiving, from a host, a write command comprising data to be written into a flash memory; splitting the data into a first portion and a second portion; storing the first portion into a static random-access memory (SRAM) in the memory controller; storing the second portion into a dynamic random-access memory (DRAM) communicatively coupled with the memory controller; initiating a configuration operation corresponding to the write command; fetching the first portion from the SRAM and the second portion from the DRAM in response to a flash translation layer indicating a ready status to store the data into the flash memory; combining the fetched first portion and the fetched second portion; and storing the combined first portion and second portion into the flash memory.
Type: Grant
Filed: February 24, 2023
Date of Patent: November 12, 2024
Assignee: T-Head (Shanghai) Semiconductor Co., Ltd.
Inventors: Jifeng Wang, Yuming Xu, Wentao Wu, Fei Xue, Xiang Gao, Jiajing Jin
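The split/recombine flow can be sketched with byte buffers standing in for the SRAM and DRAM. The capacity and the split-at-capacity policy are illustrative assumptions:

```python
# Hypothetical sketch: split incoming write data between a small fast
# buffer (SRAM stand-in) and a larger buffer (DRAM stand-in), then
# recombine the portions once the flash is ready to be programmed.

SRAM_CAPACITY = 8  # illustrative, in bytes


def split(data: bytes):
    """First portion goes to SRAM, the remainder to DRAM."""
    return data[:SRAM_CAPACITY], data[SRAM_CAPACITY:]


def combine(sram_portion: bytes, dram_portion: bytes) -> bytes:
    """Reassemble the original write data in order."""
    return sram_portion + dram_portion


first, second = split(b"0123456789abcdef")
combine(first, second)  # round-trips to b"0123456789abcdef"
```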
-
Patent number: 12135617
Abstract: Systems, methods, and computer readable media for preventing data loss at ephemeral and/or volatile storage of a local storage system are provided. These techniques may include synchronizing the state of the ephemeral storage system to a cloud-based storage system and capturing a cloud snapshot of the cloud-based storage system. In the event of a failure at the volatile storage, the cloud-based snapshot can be used as a restore point for the cloud-based storage system, the state of which can then be synchronized back to the volatile storage. Additionally, the local storage system includes durable storage for storing transaction logs. After synchronizing the state of the cloud-based storage system to the volatile storage, the local storage system can play back transactions in the transaction log to restore the volatile storage to its state at the time of the failure.
Type: Grant
Filed: May 24, 2022
Date of Patent: November 5, 2024
Assignee: TESSELL, INC.
Inventors: Balasubrahmanyam Kuchibhotla, Uday Kiran Jonnala, Kamaldeep Singh Khanuja, Maneesh Rawat, Manish Pratap Singh, Bakul Banthia
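The snapshot-plus-log-replay recovery pattern can be sketched generically. The state model (a key-value dict) and sequence-numbered log entries are illustrative assumptions, not the patented design:

```python
# Hypothetical sketch: restore volatile state from a snapshot, then
# replay durable-log transactions committed after the snapshot point.


def restore(snapshot_state, log, snapshot_seq):
    """snapshot_state: dict captured at sequence number snapshot_seq.
    log: iterable of (seq, key, value) transactions from durable storage.
    Returns the state as of the last logged transaction."""
    state = dict(snapshot_state)
    for seq, key, value in log:
        if seq > snapshot_seq:  # only replay post-snapshot transactions
            state[key] = value
    return state


restore({"a": 1}, [(1, "a", 2), (3, "b", 7)], snapshot_seq=2)
# transaction 1 is already in the snapshot; only transaction 3 replays
```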
-
Patent number: 12131054
Abstract: A storage device includes a memory module including a memory device, a module board including a memory controller configured to control the memory device, and a memory connector disposed on one side of the module board. The storage device also includes a first enclosure disposed on a first surface of the memory module, a second enclosure disposed on a second surface opposite to the first surface of the memory module, and a first sensor disposed on the first enclosure and configured to detect a state and provide a signal for the state to the memory controller. The first enclosure includes a first long side extending in a first direction and a first short side extending in a second direction perpendicular to the first direction. A ratio of the first long side to the first short side ranges from 1.2 to 3.5.
Type: Grant
Filed: July 29, 2022
Date of Patent: October 29, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kyoung Eun Lee, Yusuf Cinar, Hyun Joon Yoo, Byung Il Lee