Patents Examined by Hashem Farrokh
-
Patent number: 12379873
Abstract: The present disclosure generally relates to read and write operations utilizing barrier commands. Using barrier commands and a snapshot of doorbell states of submission queues (SQs), the necessary write commands to perform a read may be identified and executed to reduce any wait time of the host. As such, host delays during reads and writes are reduced. In the absence of a barrier command, the host needs to wait for writes to complete before performing a read. When a barrier command is used, the host needs to wait only for the barrier command to complete before performing a read. The controller will execute the post-barrier reads only after completing the pre-barrier writes. As will be discussed herein, the controller completes the barrier command as soon as a doorbell snapshot is taken, even though the pre-barrier writes may not yet be completed.
Type: Grant
Filed: July 12, 2023
Date of Patent: August 5, 2025
Assignee: Sandisk Technologies, Inc.
Inventors: Amir Segev, Shay Benisty, Rotem Sela
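A minimal Python sketch of the doorbell-snapshot idea described above (queue layout and method names are illustrative, not taken from the patent): the controller reports the barrier as complete as soon as it snapshots the submission-queue state, but releases post-barrier reads only once the writes captured in that snapshot have finished.

```python
from collections import deque

class Controller:
    def __init__(self):
        self.sq = deque()              # submission queue of ("write"|"read", id)
        self.completed_writes = set()  # ids of finished writes
        self.barrier_snapshot = None   # write ids pending at barrier time

    def submit(self, kind, cmd_id):
        self.sq.append((kind, cmd_id))

    def barrier(self):
        # Snapshot the doorbell state: every write currently queued is "pre-barrier".
        self.barrier_snapshot = {i for k, i in self.sq if k == "write"}
        return "barrier complete"      # completed immediately, writes may still be pending

    def finish_write(self, cmd_id):
        self.completed_writes.add(cmd_id)

    def can_execute_read(self):
        # Post-barrier reads run only after all pre-barrier writes have finished.
        if self.barrier_snapshot is None:
            return True
        return self.barrier_snapshot <= self.completed_writes

c = Controller()
c.submit("write", 1); c.submit("write", 2)
print(c.barrier())            # host is released right away
print(c.can_execute_read())   # False: writes 1 and 2 not yet done
c.finish_write(1); c.finish_write(2)
print(c.can_execute_read())   # True
```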
-
Patent number: 12373366
Abstract: In certain embodiments, a memory module includes a printed circuit board (PCB) having an interface that couples it to a host system for provision of power, data, address and control signals. First, second, and third buck converters receive a pre-regulated input voltage and produce first, second and third regulated voltages. A converter circuit reduces the pre-regulated input voltage to provide a fourth regulated voltage. Synchronous dynamic random access memory (SDRAM) devices are coupled to one or more regulated voltages of the first, second, third and fourth regulated voltages, and a voltage monitor circuit monitors an input voltage and produces a signal in response to the input voltage having a voltage amplitude that is greater than a threshold voltage.
Type: Grant
Filed: January 24, 2022
Date of Patent: July 29, 2025
Assignee: NETLIST, INC.
Inventors: Chi-She Chen, Jeffrey C. Solomon, Scott H. Milton, Jayesh Bhakta
-
Patent number: 12373136
Abstract: Systems, methods, and data storage devices for host storage command management for dynamically allocated floating namespaces are described. A data storage device may support multiple host namespaces allocated in its non-volatile storage medium and include a floating namespace pool that includes at least some data units from those host namespaces. Host storage commands to be processed using the floating namespace pool may be received and payload sizes may be determined. A next host storage command may be determined based on the relative payload sizes and executed using a data unit from the floating namespace pool, for example, based on allocating virtual command queues to the floating namespace pool and sorting the incoming host storage commands by payload size.
Type: Grant
Filed: July 20, 2023
Date of Patent: July 29, 2025
Assignee: Western Digital Technologies, Inc.
Inventors: Pavan Gururaj, Dinesh Babu, Sridhar Sabesan
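A small sketch of the command-sorting behavior described above, under the assumption that the virtual queue serves smaller payloads first (the abstract only says ordering is based on relative payload size; the smallest-first policy, class names, and data-unit labels here are illustrative):

```python
import heapq

class FloatingNamespacePool:
    """Illustrative model: data units borrowed from host namespaces plus a
    virtual command queue ordered by payload size (smallest first here,
    purely as an example ordering)."""
    def __init__(self, data_units):
        self.free_units = list(data_units)
        self.queue = []          # heap of (payload_size, seq, command)
        self._seq = 0

    def enqueue(self, command, payload_size):
        heapq.heappush(self.queue, (payload_size, self._seq, command))
        self._seq += 1

    def execute_next(self):
        if not self.queue or not self.free_units:
            return None
        size, _, command = heapq.heappop(self.queue)
        unit = self.free_units.pop()     # allocate a data unit from the pool
        return f"executed {command} ({size} B) on unit {unit}"

pool = FloatingNamespacePool(data_units=["ns1-u7", "ns2-u3"])
pool.enqueue("WRITE A", 16384)
pool.enqueue("WRITE B", 4096)
print(pool.execute_next())   # WRITE B first: smaller payload
```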
-
Patent number: 12373133
Abstract: Methods, systems, and devices for memory operations are described. A host system may obtain data for writing to a memory system. The host system may send, to the memory system, an indication that the data is to be written to the memory system, and the memory system may remove invalid data at the memory system until the memory system has sufficient resources to store the data. Based on the memory system having sufficient resources, the memory system may delay background operations at the memory system until the data has been written to the memory system. The memory system may also create a restore point based on the memory system having sufficient resources and receiving the data. In other examples, the removal of invalid data at the memory system may be delayed until after the data is written to the memory system.
Type: Grant
Filed: April 24, 2024
Date of Patent: July 29, 2025
Assignee: Micron Technology, Inc.
Inventors: Roberto Izzi, Reshmi Basu, Luca Porzio, Christian M. Gyllenskog
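A toy model of the flow described above, assuming simple unit counters for free and invalid (reclaimable) space; the counters, restore-point string, and method names are illustrative:

```python
class MemorySystem:
    def __init__(self, free_units, invalid_units):
        self.free = free_units
        self.invalid = invalid_units          # reclaimable units holding invalid data
        self.background_ops_deferred = False

    def prepare_for_write(self, needed_units):
        # Host signalled that `needed_units` of data are coming: reclaim
        # invalid data only until there is enough room, then stop.
        while self.free < needed_units and self.invalid > 0:
            self.invalid -= 1
            self.free += 1
        if self.free >= needed_units:
            self.background_ops_deferred = True   # hold background work until the write lands
            return True
        return False

    def write(self, units):
        self.free -= units
        restore_point = f"restore point after writing {units} units"  # illustrative
        self.background_ops_deferred = False      # background operations may resume
        return restore_point

mem = MemorySystem(free_units=2, invalid_units=8)
if mem.prepare_for_write(needed_units=5):
    print(mem.write(5))
```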
-
Patent number: 12373098
Abstract: A victim management unit (MU) for performing a media management operation is identified. The victim MU stores valid data. A source cursor associated with the victim MU is identified from an ordered set of cursors. A target cursor following the source cursor in the ordered set of cursors referencing one or more available MUs is identified. In response to determining that the source cursor is a last cursor in the ordered set of cursors, the source cursor is utilized as the target cursor. The valid data is associated with the identified target cursor.
Type: Grant
Filed: June 28, 2024
Date of Patent: July 29, 2025
Assignee: Micron Technology, Inc.
Inventor: Luca Bert
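The cursor-selection rule above reduces to a short function; this is a sketch only, with cursor names invented for the example:

```python
def select_target_cursor(cursors, source_cursor):
    """Pick the cursor that will receive valid data from a victim MU.

    `cursors` is an ordered list; each cursor is assumed to reference one or
    more available management units (names are illustrative).
    """
    i = cursors.index(source_cursor)
    if i == len(cursors) - 1:
        return source_cursor              # last cursor: use the source as the target
    return cursors[i + 1]                 # otherwise the following cursor

cursors = ["host-write-cursor", "gc-cursor-1", "gc-cursor-2"]
print(select_target_cursor(cursors, "host-write-cursor"))  # gc-cursor-1
print(select_target_cursor(cursors, "gc-cursor-2"))        # gc-cursor-2 (last)
```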
-
Patent number: 12366968
Abstract: Implementations described herein relate to host device initiated low temperature thermal throttling. A memory device may receive, from a host device, a low temperature thermal throttling command that indicates for the memory device to initiate a thermal throttling operation based on a temperature of the memory device not satisfying a temperature threshold. The low temperature thermal throttling command may indicate an amount of dummy data to be moved from the host device to a particular location of the memory device associated with the thermal throttling operation. The memory device may perform the thermal throttling operation based on moving the dummy data from the host device to the particular location of the memory device. The memory device may complete the thermal throttling operation based on moving the amount of dummy data from the host device to the particular location of the memory device.
Type: Grant
Filed: November 16, 2023
Date of Patent: July 22, 2025
Assignee: Micron Technology, Inc.
Inventor: Marco Redaelli
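A rough sketch of the command handling described above. It assumes the dummy-data writes are intended to raise the device temperature, which is an interpretation of "low temperature thermal throttling" rather than a statement from the abstract; the chunk size, warming increment, and class layout are all illustrative:

```python
class MemoryDevice:
    def __init__(self, temperature_c, threshold_c=0):
        self.temperature_c = temperature_c
        self.threshold_c = threshold_c
        self.scratch_location = []        # destination block for the dummy data

    def handle_low_temp_throttle_cmd(self, dummy_bytes, chunk=4096):
        # Command only takes effect while the device is below the threshold.
        if self.temperature_c >= self.threshold_c:
            return "temperature ok, command ignored"
        moved = 0
        while moved < dummy_bytes:
            self.scratch_location.append(b"\x00" * chunk)   # move one chunk of dummy data
            moved += chunk
            self.temperature_c += 0.1                        # assumed warming effect per chunk
        return f"throttling operation complete, moved {moved} dummy bytes"

dev = MemoryDevice(temperature_c=-15)
print(dev.handle_low_temp_throttle_cmd(dummy_bytes=64 * 1024))
```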
-
Patent number: 12361270
Abstract: A neural core, a neural processing device including the same, and a method for loading data of a neural processing device are provided. The neural core comprises a processing unit configured to perform operations, an L0 memory configured to store input data, and an LSU configured to perform a load task and a store task of data between the processing unit and the L0 memory, wherein the LSU comprises a local memory load unit configured to transmit the input data in the L0 memory to the processing unit, and the local memory load unit comprises a target decision module configured to identify and retrieve the input data in the L0 memory, a transformation logic configured to transform the input data and thereby generate transformed data, and an output FIFO configured to receive the transformed data and transmit the transformed data to the processing unit in the received order.
Type: Grant
Filed: March 6, 2024
Date of Patent: July 15, 2025
Assignee: Rebellions Inc.
Inventors: Jinseok Kim, Kyeongryeol Bong, Jinwook Oh, Yoonho Boo
-
Patent number: 12360898
Abstract: Various example embodiments of a processor cache are presented herein. The processor cache may be configured to support redirection of memory blocks between cache lines, including between cache lines in different sets of the processor cache, thereby improving the efficiency of the processor cache by increasing the utilization of the processor cache and reducing cache misses for the processor cache. The redirection of memory blocks between cache lines may include redirection of a memory block from being stored in a first cache line of a first set of the processor cache which is the default set for the memory block (e.g., when the first set does not have any empty cache lines) to a second cache line of a second set of the processor cache which is not the default set for the memory block (e.g., the second set may be any set having at least one empty cache line).
Type: Grant
Filed: December 21, 2023
Date of Patent: July 15, 2025
Assignee: Nokia Solutions and Networks Oy
Inventor: Pranjal Kumar Dutta
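A toy set-associative cache illustrating the redirection idea above: when a block's default set has no empty line, the block is placed in another set that does, and a redirect record lets lookups find it. Geometry, eviction handling, and the redirect table are simplified assumptions, not the patent's implementation:

```python
class RedirectingCache:
    def __init__(self, num_sets=4, ways=2):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [dict() for _ in range(num_sets)]   # set index -> {block: data}
        self.redirect = {}                              # block -> actual (non-default) set

    def _default_set(self, block_addr):
        return block_addr % self.num_sets

    def insert(self, block_addr, data):
        home = self._default_set(block_addr)
        if len(self.sets[home]) < self.ways:
            self.sets[home][block_addr] = data
            return home
        # Default set full: redirect to any set with an empty cache line.
        for idx, s in enumerate(self.sets):
            if len(s) < self.ways:
                s[block_addr] = data
                self.redirect[block_addr] = idx
                return idx
        raise MemoryError("no empty cache line anywhere (eviction not modelled)")

    def lookup(self, block_addr):
        idx = self.redirect.get(block_addr, self._default_set(block_addr))
        return self.sets[idx].get(block_addr)

cache = RedirectingCache()
for addr in (0, 4, 8):          # all map to default set 0; the third is redirected
    print("stored in set", cache.insert(addr, f"data{addr}"))
print(cache.lookup(8))          # found via the redirect record
```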
-
Patent number: 12360899
Abstract: Techniques are disclosed relating to graphics processor data caches. In some embodiments, datapath circuitry executes instructions that operate on input operands from architectural registers. Data cache circuitry caches architectural register data for the datapath circuitry. Scoreboard circuitry tracks, for a given architectural register: map information that indicates whether the architectural register is mapped to an entry of the data cache circuitry, and a pointer to the entry of the data cache circuitry. Tiered scoreboard circuitry and data storage circuitry may be implemented (e.g., to provide fast scoreboard access for active threads and to give a landing spot for long-latency data retrieval operations). Various disclosed techniques may improve cache performance, reduce power consumption, reduce area, or some combination thereof.
Type: Grant
Filed: January 11, 2024
Date of Patent: July 15, 2025
Assignee: Apple Inc.
Inventors: Winnie W. Yeung, Zelin Zhang, Cheng Li, Hungse Cha, Leela Kishore Kothamasu
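A minimal model of just the tracked scoreboard state described above (mapped flag plus a pointer into the data cache); the tiering, the datapath, and replacement policy are omitted, and all names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScoreboardEntry:
    mapped: bool = False                 # is the architectural register resident in the data cache?
    cache_ptr: Optional[int] = None      # index of the data-cache entry holding it

class RegisterFileCache:
    def __init__(self, num_regs, cache_entries):
        self.scoreboard = [ScoreboardEntry() for _ in range(num_regs)]
        self.data_cache = [None] * cache_entries

    def fill(self, reg, value, entry):
        # Record both the map information and the pointer for this register.
        self.data_cache[entry] = value
        self.scoreboard[reg] = ScoreboardEntry(mapped=True, cache_ptr=entry)

    def read_operand(self, reg):
        sb = self.scoreboard[reg]
        if sb.mapped:
            return self.data_cache[sb.cache_ptr]       # fast path: cached register data
        raise LookupError(f"r{reg} not resident; fetch from backing register storage")

rfc = RegisterFileCache(num_regs=8, cache_entries=4)
rfc.fill(reg=3, value=42, entry=1)
print(rfc.read_operand(3))
```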
-
Patent number: 12346259
Abstract: Cache resource prioritization for Heterogeneous Multi-Processing (HMP) architecture. One embodiment is an apparatus including HMP cores, a cache including cache partitions assigned to different levels of the HMP cores, a performance monitoring unit (PMU) configured to track a performance attribute for each of the cache partitions, and a controller. The controller is configured, for each of a plurality of successive time windows, to: obtain a performance value for each of the cache partitions based on the performance attribute tracked by the PMU, determine a predicted cache size for each of the cache partitions based on performance values obtained for previous time windows, calculate a cache size for each of the cache partitions by multiplying the predicted cache size with a weighted value, and direct a size adjustment of each of the cache partitions based on the calculated cache size.
Type: Grant
Filed: April 15, 2024
Date of Patent: July 1, 2025
Assignee: QUALCOMM INCORPORATED
Inventors: Aiqun Yu, Yiwei Huang, Junwei Liao
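A sketch of the per-window calculation described above. The predictor here is a plain mean of previous windows and the partition names, units, and weights are invented for the example; the patent does not specify the prediction function:

```python
def resize_partitions(history, weights):
    """history: partition -> list of per-window performance values.
    weights:  partition -> weighting factor applied to the prediction."""
    new_sizes = {}
    for part, values in history.items():
        predicted = sum(values) / len(values)        # predicted cache size from previous windows
        new_sizes[part] = predicted * weights[part]  # multiply by the weighted value
    return new_sizes

history = {"big-cores": [512, 640, 576], "little-cores": [128, 96, 160]}
weights = {"big-cores": 1.2, "little-cores": 0.8}
print(resize_partitions(history, weights))   # calculated sizes to apply this window
```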
-
Patent number: 12339774
Abstract: Systems and methods of the present disclosure enable intelligent dynamic caching of data by accessing an activity history of historical electronic activity data entries associated with a user account, and utilizing a trained entity relevancy machine learning model to predict a degree of relevance of each entity associated with the historical electronic activity data entries in the activity history based at least in part on model parameters and activity attributes of each electronic activity data entry. A set of relevant entities is determined based at least in part on the degree of relevance of each entity. Pre-cached entities are identified based on pre-cached entity data records cached on the user device, and un-cached relevant entities from the set of relevant entities are identified based on the pre-cached entities. The cache on the user device is updated to cache the un-cached entity data records associated with the un-cached relevant entities.
Type: Grant
Filed: April 16, 2024
Date of Patent: June 24, 2025
Assignee: Capital One Services, LLC
Inventors: Shabnam Kousha, Lin Ni Lisa Cheng, Asher Smith-Rose, Joshua Edwards, Tyler Maiman
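A compact sketch of the caching update flow described above. The relevance scorer here is a stand-in callable rather than the trained model the abstract refers to, and the threshold, entity names, and attributes are illustrative:

```python
def update_cache(activity_history, pre_cached, relevance_model, threshold=0.5):
    """activity_history: list of (entity, attributes) entries for the account.
    relevance_model: callable returning a relevance score in [0, 1]."""
    scores = {}
    for entity, attrs in activity_history:
        scores[entity] = max(scores.get(entity, 0.0), relevance_model(attrs))
    relevant = {e for e, s in scores.items() if s >= threshold}   # set of relevant entities
    to_fetch = relevant - pre_cached          # un-cached relevant entities
    return pre_cached | to_fetch, to_fetch    # updated cache contents, newly cached entities

history = [("coffee-shop", {"visits": 9}), ("airline", {"visits": 1})]
toy_model = lambda attrs: min(1.0, attrs["visits"] / 10)   # toy relevance scorer
cache, fetched = update_cache(history, pre_cached={"airline"}, relevance_model=toy_model)
print(fetched)   # {'coffee-shop'} gets pulled into the on-device cache
```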
-
Patent number: 12321269
Abstract: Techniques may include receiving a first request for a conformance check for a conformance pair, the conformance pair including a variable type and a particular protocol. The first request can identify a first pointer. The technique can include determining, using the first pointer, that a conformance check result is not cached for the conformance pair. In response to determining that the conformance check result is not cached for a variable, the electronic device may perform the conformance check for the conformance pair and store a result of the conformance check in an index table in persistent memory in association with at least a portion of the bits in the first pointer. The technique can include referencing the index table on subsequent requests for a conformance check.
Type: Grant
Filed: December 20, 2022
Date of Patent: June 3, 2025
Assignee: Apple Inc.
Inventors: Mohamadou A. Abdoulaye, Peter Cooper, Michael J. Ash, Davide Italiano, Nick Kledzik
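A toy version of the cached conformance check above: results are stored in an index table keyed by a portion of the type pointer's bits, and later requests consult the table before redoing the slow check. The bit width, the stand-in slow check, and the in-memory dict (the abstract describes persistent memory) are illustrative simplifications:

```python
class ConformanceCache:
    def __init__(self, index_bits=8):
        self.mask = (1 << index_bits) - 1
        self.table = {}                      # (portion of pointer bits, protocol) -> bool

    def conforms(self, type_pointer, protocol, slow_check):
        key = (type_pointer & self.mask, protocol)
        if key in self.table:                # cached result: skip the expensive check
            return self.table[key]
        result = slow_check(type_pointer, protocol)
        self.table[key] = result             # a real table would also disambiguate colliding pointers
        return result

cc = ConformanceCache()
slow = lambda ptr, proto: True               # stand-in for the real conformance walk
print(cc.conforms(0xDEADBEEF, "Equatable", slow))   # computed and stored in the index table
print(cc.conforms(0xDEADBEEF, "Equatable", slow))   # served from the index table
```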
-
Patent number: 12314590
Abstract: Aspects of the present disclosure configure a system component, such as a memory sub-system controller, to provide superblock management based on memory component reliabilities.
Type: Grant
Filed: February 26, 2024
Date of Patent: May 27, 2025
Assignee: Micron Technology, Inc.
Inventor: Tomer Eliash
-
Patent number: 12292835
Abstract: Embodiments of this application provide a photographing method and related apparatus, applied to terminal technologies. The method includes: while the terminal device displays the photo previewing interface, preview frames are buffered in a cache queue; in response to a photo-taking operation in the previewing interface, a selected image from the cache queue is managed in an undeletable state; after the algorithm processing based on the selected image is completed, the selected image is deleted; and the terminal device generates a photo based on the processed image. In this way, the selected image in the cache queue is managed undeletably, so that it is not cleared while the terminal device generates the picture. The cache queue can therefore retain the selected image for a long time, and the terminal device does not need to copy and store the selected image. As a result, the large memory occupation caused by copying is reduced, and power is saved.
Type: Grant
Filed: January 9, 2023
Date of Patent: May 6, 2025
Assignee: HONOR DEVICE CO., LTD.
Inventor: Jirun Xu
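A toy preview cache illustrating the pin-instead-of-copy idea above: a frame selected for capture is marked undeletable so normal eviction skips it, and the mark is released once processing finishes. The queue capacity, eviction policy, and in-line "processing" step are assumptions for the sketch:

```python
class PreviewCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.frames = []                 # list of [frame_id, pinned]

    def push_preview_frame(self, frame_id):
        self.frames.append([frame_id, False])
        # Evict the oldest unpinned frames when over capacity.
        while len(self.frames) > self.capacity:
            for i, (fid, pinned) in enumerate(self.frames):
                if not pinned:
                    del self.frames[i]
                    break
            else:
                break                    # everything pinned: nothing to evict

    def take_photo(self, frame_id):
        for frame in self.frames:
            if frame[0] == frame_id:
                frame[1] = True          # pin: no copy needed while the algorithm runs
                processed = f"processed({frame_id})"    # stand-in for the algorithm processing
                frame[1] = False         # release once the photo is generated
                return processed
        return None

cache = PreviewCache()
for i in range(6):
    cache.push_preview_frame(i)
print(cache.take_photo(5))
```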
-
Patent number: 12293078
Abstract: This application discloses a storage space organization method and an electronic device. A kernel of the electronic device includes a file system and a block layer, and the method includes: monitoring, by the electronic device, input/output (IO) requests through the block layer, and determining, by the block layer when an IO is released, whether all the IOs have been released; and updating, by the electronic device when all the IOs have been released, a state of the file system to an idle state through the block layer, to trigger the electronic device to perform first garbage collection processing through the file system, where the state of the file system includes the idle state and a busy state.
Type: Grant
Filed: April 19, 2023
Date of Patent: May 6, 2025
Assignee: Honor Device Co., Ltd.
Inventors: Dachen Jin, Jian Dang
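A minimal sketch of the block-layer monitoring described above: an in-flight IO counter, an idle/busy file-system state, and a callback standing in for the file system's garbage collection (the counter and callback are illustrative, not the kernel interfaces involved):

```python
class BlockLayer:
    def __init__(self, on_idle):
        self.inflight = 0
        self.fs_state = "idle"           # file-system state: "idle" or "busy"
        self.on_idle = on_idle

    def submit_io(self):
        self.inflight += 1
        self.fs_state = "busy"

    def release_io(self):
        self.inflight -= 1
        if self.inflight == 0:           # all IOs have been released
            self.fs_state = "idle"
            self.on_idle()               # trigger the first garbage collection processing

bl = BlockLayer(on_idle=lambda: print("file system idle -> run garbage collection"))
bl.submit_io(); bl.submit_io()
bl.release_io()                          # still busy: one IO outstanding
bl.release_io()                          # prints the GC trigger
```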
-
Patent number: 12282686
Abstract: A first set of physical units of a storage device of a storage system is selected for performance of low latency access operations, wherein other access operations are performed by remaining physical units of the storage device. A determination as to whether a triggering event has occurred that causes a selection of a new set of physical units of the storage device for the performance of low latency access operations is made. A second set of physical units of the storage device is selected for the performance of low latency access operations upon determining that the triggering event has occurred.
Type: Grant
Filed: March 27, 2023
Date of Patent: April 22, 2025
Assignee: PURE STORAGE, INC.
Inventors: Hari Kannan, Boris Feigin, Ying Gao, John Colgrove
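A rough model of the partitioning described above: a subset of physical units is reserved for low-latency operations, the rest serve other accesses, and a triggering event causes a new subset to be selected. The set size, trigger, and random reselection are placeholders; the abstract does not say how the new set is chosen:

```python
import random

class StorageDevice:
    def __init__(self, num_units=16, low_latency_count=4):
        self.units = list(range(num_units))
        self.low_latency_count = low_latency_count
        self.low_latency_set = set(self.units[:low_latency_count])

    def route(self, op):
        # Low-latency operations go to the reserved set; everything else to the remaining units.
        pool = self.low_latency_set if op == "low-latency-read" \
            else set(self.units) - self.low_latency_set
        return random.choice(sorted(pool))

    def on_trigger_event(self):
        # e.g. wear or load imbalance detected: select a new set of physical units.
        self.low_latency_set = set(random.sample(self.units, self.low_latency_count))

dev = StorageDevice()
print("low-latency op served by unit", dev.route("low-latency-read"))
dev.on_trigger_event()
print("new low-latency set:", sorted(dev.low_latency_set))
```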
-
Patent number: 12271316
Abstract: A memory system includes a firmware unit and a cache module that includes a cache controller and a cache memory. The cache controller receives an I/O message that includes a local message ID (LMID) and data to be written to a logical drive (LD), stores the data in a cache segment (CS) row of the cache memory, and sends an ID of the CS row to the firmware unit. The firmware unit, in response to receiving the ID of the CS row, acquires a timestamp and stores the timestamp to check against a cache flush timeout for the CS row. The firmware unit periodically checks for the cache flush timeout and, in response to detecting the cache flush timeout, sends a flush command with the ID of the CS row to the cache controller. The cache controller, in response to receiving the flush command, flushes the data of the CS row.
Type: Grant
Filed: February 16, 2023
Date of Patent: April 8, 2025
Assignee: Avago Technologies International Sales Pte. Limited
Inventor: Arun Prakash Jana
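A sketch of the controller/firmware split described above, collapsed into one class for brevity: the "controller" methods store data in a CS row and flush it, the "firmware" methods timestamp the row and poll for the flush timeout. The timeout value, polling model, and method names are illustrative:

```python
import time

class CacheModule:
    def __init__(self, flush_timeout_s=0.05):
        self.rows = {}             # cs_row_id -> data stored by the cache controller
        self.timestamps = {}       # cs_row_id -> time the firmware received the row ID
        self.flush_timeout_s = flush_timeout_s

    # cache-controller side
    def handle_io(self, cs_row_id, data):
        self.rows[cs_row_id] = data
        self.firmware_note_row(cs_row_id)        # "send the ID of the CS row to the firmware"

    def flush(self, cs_row_id):
        print(f"flushing row {cs_row_id}: {self.rows.pop(cs_row_id)!r}")
        self.timestamps.pop(cs_row_id, None)

    # firmware side
    def firmware_note_row(self, cs_row_id):
        self.timestamps[cs_row_id] = time.monotonic()   # acquire and store the timestamp

    def firmware_poll(self):
        now = time.monotonic()
        for row, ts in list(self.timestamps.items()):
            if now - ts >= self.flush_timeout_s:  # cache flush timeout detected
                self.flush(row)                   # "send a flush command with the CS row ID"

cm = CacheModule()
cm.handle_io(cs_row_id=7, data=b"logical-drive write")
time.sleep(0.06)
cm.firmware_poll()
```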
-
Patent number: 12260121
Abstract: A method of a flash memory device to be used in a storage device and coupled to a flash memory controller of the storage device through a specific communication interface, the flash memory device comprising an input/output (I/O) control circuit, a command register, an address register, a memory cell array at least having a first plane and a second plane which is different from the first plane, at least one address decoder, and a control circuit having a specific buffer. The method comprises: buffering command information of a command signal, sent from the flash memory controller and transmitted through the I/O control circuit, into the command register; buffering address information of the command signal, sent from the flash memory controller and transmitted through the I/O control circuit, into the address register; and controlling the specific buffer to store transmission history information of the specific communication interface.
Type: Grant
Filed: May 29, 2023
Date of Patent: March 25, 2025
Assignee: Silicon Motion, Inc.
Inventors: Tsu-Han Lu, Hsiao-Chang Yen
-
Patent number: 12260095
Abstract: A memory device includes a memory cell array including a plurality of memory cells connected to a plurality of word lines, a peripheral circuit configured to perform a program operation and a verify operation on selected memory cells among the plurality of memory cells, a compensation operation controller configured to determine a compensation value for a plurality of verify voltages according to a progress degree of the program operation and a target program state based on compensation information during the verify operation, and a verify operation controller configured to control the peripheral circuit to perform the verify operation on the selected memory cells among the plurality of memory cells based on the plurality of verify voltages and the compensation value.
Type: Grant
Filed: November 23, 2022
Date of Patent: March 25, 2025
Assignee: SK hynix Inc.
Inventor: Ka Young Cho
-
Patent number: 12259825
Abstract: Systems and methods are disclosed for concurrent support for multiple cache inclusivity schemes using low priority evict operations. For example, some methods may include: receiving a first eviction message having a lower priority than probe messages from a first inner cache; receiving a second eviction message having a higher priority than probe messages from a second inner cache; transmitting a third eviction message, determined based on the first eviction message, having the lower priority than probe messages to circuitry that is closer to memory in a cache hierarchy; and transmitting a fourth eviction message, determined based on the second eviction message, having the lower priority than probe messages to the circuitry that is closer to memory in the cache hierarchy.
Type: Grant
Filed: December 20, 2023
Date of Patent: March 25, 2025
Assignee: SiFive, Inc.
Inventors: Wesley Waylon Terpstra, Richard Van, Eric Andrew Gouldey
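A small sketch of the priority normalization described above: eviction messages arrive from inner caches at either lower or higher priority than probes, but whatever arrives is forwarded toward memory at the lower-than-probe priority, so both inclusivity schemes share the same outer level. The numeric priority levels and message structure are invented for the example:

```python
from dataclasses import dataclass

PROBE_PRIORITY = 5          # illustrative priority level for probe messages

@dataclass
class EvictMsg:
    source: str
    priority: int           # relative to PROBE_PRIORITY

def forward_toward_memory(msg):
    # Regardless of the incoming eviction's priority, the outgoing eviction
    # sent to the circuitry closer to memory uses the lower-than-probe priority.
    return EvictMsg(source=msg.source, priority=PROBE_PRIORITY - 1)

low = EvictMsg("inner-cache-A", priority=PROBE_PRIORITY - 1)    # lower than probes
high = EvictMsg("inner-cache-B", priority=PROBE_PRIORITY + 1)   # higher than probes
print(forward_toward_memory(low))    # forwarded at lower-than-probe priority
print(forward_toward_memory(high))   # also normalised to lower-than-probe priority
```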