Patents Examined by Jae U Yu
  • Patent number: 11301383
    Abstract: A method is described for managing the issuance and fulfillment of memory commands. The method includes receiving, by a cache controller of a memory subsystem, a first memory command corresponding to a set of memory devices. In response, the cache controller adds the first memory command to a cache controller command queue such that the cache controller command queue stores a first set of memory commands and sets a priority of the first memory command to either a high or low priority based on (1) whether the first memory command is of a first or second type and (2) an origin of the first memory command.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: April 12, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Patrick A. La Fratta, Cagdas Dirik, Laurent Isenegger, Robert M. Walker
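The priority rule the abstract describes, setting a queued command high or low based on its type and its origin, can be sketched minimally; the command types, origin labels, and the specific rule below are assumptions for illustration, not the patent's actual scheme.

```python
from collections import deque
from dataclasses import dataclass
from enum import Enum

class CmdType(Enum):
    READ = 1   # hypothetical "first type"
    WRITE = 2  # hypothetical "second type"

class Priority(Enum):
    HIGH = 1
    LOW = 2

@dataclass
class MemoryCommand:
    cmd_type: CmdType
    origin: str                      # e.g. "host" or "internal" (assumed labels)
    priority: Priority = Priority.LOW

class CacheControllerQueue:
    """Toy cache-controller command queue: the priority of each command
    is set on enqueue from (1) the command's type and (2) its origin."""
    def __init__(self):
        self.queue = deque()

    def enqueue(self, cmd: MemoryCommand) -> MemoryCommand:
        # Assumed rule: host-originated reads are latency-sensitive -> high.
        if cmd.cmd_type is CmdType.READ and cmd.origin == "host":
            cmd.priority = Priority.HIGH
        else:
            cmd.priority = Priority.LOW
        self.queue.append(cmd)
        return cmd
```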
  • Patent number: 11294595
    Abstract: An adaptive-feedback-based read-look-ahead management system and method are provided. In one embodiment, a method for stream management is presented that is performed in a storage system. The method comprises performing a read look ahead operation for each of a plurality of streams; determining a success rate of the read look ahead operation of each of the plurality of streams; and allocating more of the memory for a stream that has a success rate above a threshold than for a stream that has a success rate below the threshold. Other embodiments are provided.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: April 5, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Shay Benisty, Ariel Navon, Alexander Bazarsky
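The allocation step above, giving more read-look-ahead memory to streams whose success rate clears a threshold, can be sketched as follows; the doubled weighting for above-threshold streams is an assumption, since the abstract only requires that such streams receive more memory.

```python
def allocate_rla_memory(success_rates, total_budget, threshold=0.5):
    """Split a read-look-ahead buffer budget across streams so that
    streams whose RLA success rate exceeds the threshold get a larger
    share than streams below it."""
    # Assumed weighting: above-threshold streams count double.
    weights = {s: (2 if r > threshold else 1) for s, r in success_rates.items()}
    total_w = sum(weights.values())
    return {s: total_budget * w / total_w for s, w in weights.items()}
```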
  • Patent number: 11287985
    Abstract: A data storage network may have multiple data storage devices that each include a device buffer. A network buffer and a buffer circuit can reside in a network controller, with the buffer circuit arranged to divide and store data associated with a data access request across the network buffer and the device buffer of a first data storage device.

    Type: Grant
    Filed: May 17, 2017
    Date of Patent: March 29, 2022
    Inventors: Phillip R. Colline, Michael Barrell
  • Patent number: 11288205
    Abstract: A processor maintains an access log indicating a stream of cache misses at a cache of the processor. In response to each of at least a subset of cache misses at the cache, the processor records a corresponding entry in the access log, indicating a physical memory address of the memory access request that resulted in the corresponding miss. In addition, the processor maintains an address translation log that indicates a mapping of physical memory addresses to virtual memory addresses. In response to an address translation (e.g., a page walk) that translates a virtual address to a physical address, the processor stores a mapping of the physical address to the corresponding virtual address at an entry of the address translation log. Software executing at the processor can use the two logs for memory management.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: March 29, 2022
    Assignees: Advanced Micro Devices, Inc., ATI TECHNOLOGIES ULC
    Inventors: Benjamin T. Sander, Mark Fowler, Anthony Asaro, Gongxian Jeffrey Cheng, Mike Mantor
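The two logs described above, a miss log of physical addresses and a translation log of physical-to-virtual mappings, can be joined by software to recover which virtual addresses missed. The class below is a minimal sketch of that idea; the names and the join method are illustrative assumptions.

```python
class MemoryLogs:
    """Sketch of the two logs: cache misses record physical addresses,
    address translations (e.g. page walks) record PA -> VA mappings;
    software joins them to find the virtual addresses that missed."""
    def __init__(self):
        self.access_log = []       # physical addresses of cache misses
        self.translation_log = {}  # physical address -> virtual address

    def record_miss(self, pa):
        self.access_log.append(pa)

    def record_translation(self, va, pa):
        self.translation_log[pa] = va

    def missed_virtual_addresses(self):
        # None for misses whose translation was never logged.
        return [self.translation_log.get(pa) for pa in self.access_log]
```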
  • Patent number: 11289148
    Abstract: A memory control apparatus controls access to a DRAM having a plurality of banks. The apparatus comprises a first generating unit configured to generate an access command in accordance with an access request for the DRAM and store the access command in a buffer; a second generating unit configured to generate a bank-designated refresh request for the DRAM; and an issuing unit configured to issue a DRAM command to the DRAM based on an access command stored in the buffer and a refresh request generated by the second generating unit. The second generating unit determines the bank for which the refresh request is generated based on the access time for each bank of the DRAM by the one or more access commands stored in the buffer.
    Type: Grant
    Filed: September 16, 2020
    Date of Patent: March 29, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Makoto Fujiwara
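One plausible reading of the bank selection above is that refresh targets the bank whose buffered access commands would be disturbed the least. The helper below is a sketch under that assumption, using pending-command count as a stand-in for the per-bank access time.

```python
def pick_refresh_bank(num_banks, buffered_cmd_banks):
    """Choose the bank to refresh as the one with the fewest pending
    access commands in the buffer, so refresh interferes least with
    buffered traffic (metric and tie-break are assumptions)."""
    pending = [0] * num_banks
    for bank in buffered_cmd_banks:
        pending[bank] += 1
    # min() breaks ties toward the lowest bank index.
    return min(range(num_banks), key=lambda b: pending[b])
```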
  • Patent number: 11288191
    Abstract: An apparatus to facilitate memory flushing is disclosed. The apparatus comprises a cache memory; one or more processing resources; tracker hardware to dispatch workloads for execution at the processing resources and to monitor the workloads to track completion of the execution; range-based flush (RBF) hardware to process RBF commands and generate a flush indication to flush data from the cache memory; and a flush controller to receive the flush indication and perform a flush operation to discard data from the cache memory at an address range provided in the flush indication.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: March 29, 2022
    Assignee: Intel Corporation
    Inventors: Hema Chand Nalluri, Aditya Navale, Altug Koker, Brandon Fliflet, Jeffery S. Boles, James Valerio, Vasanth Ranganathan, Anirban Kundu, Pattabhiraman K
  • Patent number: 11275690
    Abstract: Techniques are disclosed for transferring a message between a sender agent and a receiver agent via a shared memory having a main memory and a cache. Feedback data indicative of a number of read messages in the shared memory is generated by the receiver agent. The feedback data is sent from the receiver agent to the sender agent. A number of unread messages in the shared memory is estimated by the sender agent based on the number of read messages. A threshold for implementing a caching policy is set by the sender agent based on the feedback data. The message is designated as cacheable if the number of unread messages is less than the threshold and as non-cacheable if the number of unread messages is greater than the threshold. The message is written to the shared memory based on the designation.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: March 15, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Michael Zuzovski, Ofer Naaman, Adi Habusha
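The sender-side logic above, estimating unread messages from the receiver's read-count feedback and comparing against a threshold, can be sketched minimally; the fixed threshold and the string designations below are illustrative assumptions (the abstract has the sender set the threshold from the feedback itself).

```python
class SenderAgent:
    """Sender-side sketch: estimates unread messages in the shared memory
    from the receiver's reported read count and designates each new
    message cacheable or non-cacheable against a threshold."""
    def __init__(self, threshold: int):
        self.sent = 0  # messages written so far
        self.threshold = threshold

    def write_message(self, reported_read: int) -> str:
        # Unread estimate = messages sent minus messages the receiver
        # reports having read (the feedback data in the abstract).
        unread = self.sent - reported_read
        self.sent += 1
        # Few unread messages: the receiver keeps up, so a cached write
        # is likely to be consumed soon -> designate cacheable.
        return "cacheable" if unread < self.threshold else "non-cacheable"
```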
  • Patent number: 11275684
    Abstract: Systems and methods are disclosed for employing a media read cache in a storage device. In certain embodiments, an apparatus may comprise a data storage drive including a volatile read cache, and a disc memory including a primary data storage region of the storage device configured for long-term storage of data via persistent logical block address to physical block address mapping, and a media read cache region configured to store a copy of data from the volatile read cache. The data storage drive may be configured to perform a read operation including: retrieve read data from the volatile read cache based on determining that the read data is available in the volatile read cache, and retrieve the read data from the media read cache based on determining that the read data is not available in the volatile read cache and is available in the media read cache.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 15, 2022
    Assignee: Seagate Technology LLC
    Inventors: Raye A. Sosseh, Brian T. Edgar, Mark A. Gaertner
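The tiered read path in the abstract, volatile read cache first, then the on-disc media read cache copy, then the primary region, can be sketched with plain dictionaries standing in for the three storage regions (an illustrative simplification, not the drive's actual data path).

```python
def read(lba, volatile_cache, media_read_cache, primary):
    """Read path sketch: try the volatile read cache, then the media
    read cache region on disc, then fall back to primary storage.
    Returns the data and which tier served it."""
    if lba in volatile_cache:
        return volatile_cache[lba], "volatile"
    if lba in media_read_cache:
        return media_read_cache[lba], "media"
    return primary[lba], "primary"
```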
  • Patent number: 11269520
    Abstract: A system is disclosed. The system may include a computer system, which may include a processor that may execute instructions of an application that accesses an object using an object command, and a memory storing the instructions of the application. The computer system may also include a conversion module to convert the object command to a key-value (KV) command. Finally, the system may include a storage device storing data for the object and processing the object using the KV command.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: March 8, 2022
    Inventors: Anand Subramanian, Oscar Prem Pinto
  • Patent number: 11263133
    Abstract: Coherency control circuitry (10) supports processing of a safe-speculative-read transaction received from a requesting master device (4). The safe-speculative-read transaction is of a type requesting that target data is returned to a requesting cache (11) of the requesting master device (4) while prohibiting any change in coherency state associated with the target data in other caches (12) in response to the safe-speculative-read transaction. In response, at least when the target data is cached in a second cache associated with a second master device, at least one of the coherency control circuitry (10) and the second cache (12) is configured to return a safe-speculative-read response while maintaining the target data in the same coherency state within the second cache. This helps to mitigate against speculative side-channel attacks.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: March 1, 2022
    Assignee: Arm Limited
    Inventors: Andreas Lars Sandberg, Stephan Diestelhorst, Nikos Nikoleris, Ian Michael Caulfield, Peter Richard Greenhalgh, Frederic Claude Marie Piry, Albin Pierrick Tonnerre
  • Patent number: 11262951
    Abstract: Apparatuses and methods related to generating access commands based on memory characteristics are described. Generating the access commands can include providing a first access command to a memory system of a plurality of memory systems and receiving, at a host coupled to the memory system, data corresponding to characteristics of a memory device of the memory system from a controller of the memory system, where the characteristics are based at least in part on processing of the first access command. Generating access commands can also include generating, at the host, a second access command based on the data and transmitting the second access command to at least the memory system.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: March 1, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Honglin Sun
  • Patent number: 11256620
    Abstract: Systems and methods are disclosed that include a memory device and a processing device coupled to the memory device. The processing device can determine an amount of valid blocks in a memory device of a memory sub-system. The processing device can then determine a surplus amount of valid blocks on the memory device based on the amount of valid blocks. The processing device can then configure a size of a cache of the memory device based on the surplus amount of valid blocks.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: February 22, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Kevin R. Brandt, Peter Feeley, Kishore Kumar Muchherla, Yun Li, Sampath K. Ratnam, Ashutosh Malshe, Christopher S. Hale, Daniel J. Hubbard
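The sizing step above, deriving a cache size from the surplus of valid blocks, can be sketched as a simple function; the reserved-block baseline and the linear scaling are assumptions, since the abstract only says the cache size is configured based on the surplus.

```python
def configure_cache_size(valid_blocks, reserved_blocks, blocks_per_cache_unit):
    """Size the cache from the surplus of valid blocks beyond what the
    device must keep in reserve; shrinks to zero when there is no surplus."""
    surplus = max(0, valid_blocks - reserved_blocks)
    return surplus // blocks_per_cache_unit
```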
  • Patent number: 11249907
    Abstract: Systems, apparatuses, and methods related to a write-back cache policy to limit data transfer time to a memory device are described. A controller can orchestrate performance of operations to write data to a cache according to a write-back policy and write addresses associated with the data to a buffer. The controller can further orchestrate performance of operations to limit an amount of data stored by the buffer and/or a quantity of addresses stored in the buffer. In response to a power failure, the controller can cause the data stored in the cache to be flushed to a persistent memory device in communication with the cache.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: February 15, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 11243884
    Abstract: A method of prefetching target data includes, in response to detecting a lock-prefixed instruction for execution in a processor, determining a predicted target memory location for the lock-prefixed instruction based on control flow information associating the lock-prefixed instruction with the predicted target memory location. Target data is prefetched from the predicted target memory location to a cache coupled with the processor, and after completion of the prefetching, the lock-prefixed instruction is executed in the processor using the prefetched target data.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: February 8, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Susumu Mashimo, John Kalamatianos
  • Patent number: 11243767
    Abstract: A caching device, an instruction cache, a system for processing an instruction, a method and apparatus for processing data and a medium are provided. The caching device includes a first queue, a second queue, a write port group, a read port, a first pop-up port, a second pop-up port and a press-in port. The write port group is configured to write cache data into a set storage address in the first queue and/or the second queue; the read port is configured to read all cache data from the first queue and/or the second queue at one time; the press-in port is configured to press cache data into the first queue and/or the second queue; the first pop-up port is configured to pop up cache data from the first queue; and the second pop-up port is configured to pop up cache data from the second queue.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: February 8, 2022
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Chao Tang, Xueliang Du, Yingnan Xu, Kang An
  • Patent number: 11221952
    Abstract: In vSAN (virtual Storage Area Network) systems, pooled storage resources may be organized into logical disk groups. One drive of a disk group may be designated for caching storage operations directed at the remaining drives of the disk group that provide permanent storage. Each cache drive is partitioned into an allocation for read operations and an allocation for write operations. Embodiments provide the vSAN system with use of virtual cache that is backed by the cache drives of each disk group in the vSAN system. Embodiments adjust the cache memory allocations for individual cache drives of each disk group, while utilizing the virtual cache that adheres to a fixed cache allocation ratio required by the vSAN system. The number and type of cache misses by each of the individual cache drives is monitored and used to adjust the sizes of the read and write cache allocations in each cache drive.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: January 11, 2022
    Assignee: Dell Products, L.P.
    Inventors: Vaideeswaran Ganesan, Deepaganesh Paulraj, Vinod P S, Ankit Singh
  • Patent number: 11216374
    Abstract: A router device may receive a request for access to a file from a user device, wherein a master version of the file is stored in a data structure associated with a server device. The router device may generate, based on the request, a copy of a cached version of the file, wherein the cached version of the file is stored in a data structure associated with the router device. The router device may send the copy of the cached version of the file to the user device.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: January 4, 2022
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Jonathan Emerson Hirko, Rory Liam Connolly, Wei G. Tan, Nikolay Kulikaev, Manian Krishnamoorthy
  • Patent number: 11216382
    Abstract: A cache system may maintain size and/or request rate metrics for objects in a lower level cache and for objects in a higher level cache. When an L1 cache does not have an object, it requests the object from an L2 cache and sends to the L2 cache aggregate size and request rate metrics for objects in the L1 cache. The L2 cache may obtain a size metric and a request rate metric for the requested object and then determine, based on the aggregate size and request rate metrics for the objects in the L1 cache and the size metric and the request rate metric for the requested object in the L2 cache, an indication of whether or not the L1 cache should cache the requested object. The L2 cache provides the object and the indication to the L1 cache.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: January 4, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Karthik Uthaman, Ronil Sudhir Mokashi, Prashant Verma
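The L2-side decision above, weighing the requested object's size and request rate against aggregate metrics for what L1 already holds, can be sketched with a request-rate-per-byte comparison; that density heuristic is an assumption, as the abstract does not specify the exact formula.

```python
def should_l1_cache(l1_total_size, l1_total_rate, obj_size, obj_rate):
    """L2-side hint sketch: advise L1 to cache the object only if its
    request rate per byte beats the aggregate rate per byte of the
    objects L1 already caches (assumed heuristic)."""
    l1_density = l1_total_rate / l1_total_size
    return (obj_rate / obj_size) > l1_density
```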
  • Patent number: 11216362
    Abstract: A data storage device includes a nonvolatile memory device including an address mapping table; a memory including a sequential map table in which sequential map entries for consecutive logical block addresses among logical block addresses are stored, the logical block addresses being received with write requests from a host device; and a processor configured to read one or more map segments, including logical block addresses of which mapping information is to be updated, from the address mapping table when a map update operation is triggered, store the read one or more map segments in the memory, sequentially change physical block addresses mapped to the respective logical block addresses to be updated, using a first sequential map entry including the logical block addresses to be updated which are stored in the sequential map table, and store the changed physical block addresses in the memory.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: January 4, 2022
    Assignee: SK hynix Inc.
    Inventors: Young Ick Cho, Byeong Gyu Park, Sung Kwan Hong
  • Patent number: 11210234
    Abstract: A processor includes a cache having two or more test regions and a larger non-test region. The processor further includes a cache controller that applies different cache replacement policies to the different test regions of the cache, and a performance monitor that measures performance metrics for the different test regions, such as a cache hit rate at each test region. Based on the performance metrics, the cache controller selects a cache replacement policy for the non-test region, such as selecting the replacement policy associated with the test region having the better performance metrics among the different test regions. The processor deskews the memory access measurements in response to a difference in the amount of accesses to the different test regions exceeding a threshold.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: December 28, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Paul Moyer, John Kelley
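The selection step above is a form of set dueling: each test region runs a different replacement policy, and the non-test region adopts whichever policy measured better. The helper below sketches only the selection, using hit rate as the performance metric; the per-policy counter layout is an assumption.

```python
def select_policy(test_metrics):
    """Given {policy_name: (hits, accesses)} measured for each test
    region, return the policy with the higher hit rate for use in the
    non-test region. Hit rate normalizes for unequal access counts."""
    return max(test_metrics, key=lambda p: test_metrics[p][0] / test_metrics[p][1])
```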