Patents Examined by Aracelis Ruiz
  • Patent number: 11675702
    Abstract: Prefetch circuitry generates prefetch requests to prefetch information to a cache, based on prediction information trained using a training table comprising training entries. A given training entry associates: a program counter indication for a trigger training memory access; a region indication indicative of a memory address region comprising a target address specified by the trigger training memory access; corresponding prediction information trained based on subsequent training memory access requests specifying target addresses in the same region as the target address of the trigger training memory access; and first and second replacement policy information. The first replacement policy information is used for replacement of an entry with another entry for the same program counter indication but a different region. The second replacement policy information is used for replacement of an entry with another entry for a different program counter indication.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: June 13, 2023
    Assignee: Arm Limited
    Inventors: Ugo Castorina, Damien Matthieu Valentin Cathrine
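The two-level replacement policy in this abstract can be modeled in a few lines. The sketch below is illustrative only (entry fields, ages, and the region granularity are assumptions, not Arm's design): entries are tagged by (PC, region), and eviction prefers a stale entry sharing the trigger's PC (policy 1) before falling back to a cross-PC victim (policy 2).

```python
# Illustrative training table with two replacement policies (not Arm's implementation).
from dataclasses import dataclass, field

@dataclass
class TrainingEntry:
    pc: int                 # program counter of the trigger access
    region: int             # memory region of the trigger's target address
    prediction: set = field(default_factory=set)  # trained offsets within the region
    same_pc_age: int = 0    # policy 1: age among entries sharing this PC
    cross_pc_age: int = 0   # policy 2: age against entries with other PCs

class TrainingTable:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = []

    def train(self, pc, addr, region_bits=12):
        region, offset = addr >> region_bits, addr & ((1 << region_bits) - 1)
        for e in self.entries:                  # age every entry on each access
            e.same_pc_age += 1
            e.cross_pc_age += 1
        for e in self.entries:                  # subsequent access in same region: train
            if e.pc == pc and e.region == region:
                e.prediction.add(offset)
                e.same_pc_age = e.cross_pc_age = 0
                return e
        if len(self.entries) >= self.capacity:  # need a victim
            same_pc = [e for e in self.entries if e.pc == pc]
            # Prefer policy 1 (same PC, different region); fall back to policy 2.
            pool, key = (same_pc, lambda e: e.same_pc_age) if same_pc else \
                        (self.entries, lambda e: e.cross_pc_age)
            self.entries.remove(max(pool, key=key))
        e = TrainingEntry(pc=pc, region=region)
        self.entries.append(e)
        return e
```

A trigger access allocates an entry; later accesses by the same PC within the same 4 KiB region accumulate offsets into the prediction set.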
  • Patent number: 11657865
    Abstract: A dynamic memory system having multiple memory regions respectively storing multiple types of data. A controller is coupled to the dynamic memory system via a communication channel and is operative to: monitor usage of a communication bandwidth of the communication channel; determine to reduce a memory bandwidth penalty caused by refreshing the dynamic memory system; and in response, reduce a refresh rate of at least one of the memory regions based on a type of data stored in the respective memory region.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: May 23, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Gil Golov
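The policy described here — trade refresh overhead for bus bandwidth, but only for regions whose data type tolerates it — can be sketched as follows. All names, thresholds, and tolerance factors are illustrative assumptions, not Micron's design:

```python
# Hypothetical per-region refresh adjustment driven by bus utilization.
NORMAL_REFRESH_HZ = 16.0   # nominal refresh rate (illustrative)

# Per data type: factor by which refresh may be reduced (assumed values).
REFRESH_TOLERANCE = {
    "error_tolerant_video": 0.25,  # can be refreshed 4x less often
    "recoverable_cache": 0.5,
    "critical_state": 1.0,         # never reduced
}

def adjust_refresh(regions, bus_utilization, threshold=0.8):
    """regions: {region_name: data_type}. Returns {region_name: refresh_hz},
    reducing refresh for tolerant regions only when channel bandwidth
    usage exceeds the threshold."""
    rates = {}
    for region, data_type in regions.items():
        factor = REFRESH_TOLERANCE.get(data_type, 1.0)
        if bus_utilization > threshold:
            rates[region] = NORMAL_REFRESH_HZ * factor
        else:
            rates[region] = NORMAL_REFRESH_HZ
    return rates
```

Below the utilization threshold every region keeps its nominal rate; above it, only tolerant regions are relaxed.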
  • Patent number: 11656993
    Abstract: The present disclosure generally relates to prefetching data from one or more CPUs prior to the data being requested by a host device. The prefetched data is prefetched from memory and stored in cache. If a host device requests data that is not already in cache, then a determination is made regarding whether the data is scheduled to be written into cache. If the data is not in cache and is not scheduled to be written into cache, then the data is retrieved from memory and delivered to the host device. If the data is scheduled to be written into cache, or is currently being written into cache, then the request to retrieve the data is delayed or scheduled to retrieve the data once the data is in cache. If the data is already in cache, the data is delivered to the host device.
    Type: Grant
    Filed: July 5, 2022
    Date of Patent: May 23, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventor: Kevin James Wendzel
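The abstract's read path is a three-way decision: serve from cache, wait for a scheduled/in-flight cache fill, or fall through to memory. A minimal model (function and structure names are assumptions, not Western Digital's API):

```python
# Illustrative host-read path honoring prefetches that are still in flight.
def read(host_addr, cache, pending_writes, memory):
    """Serve a host read per the abstract's decision tree."""
    if host_addr in cache:                 # already prefetched: serve from cache
        return cache[host_addr], "cache"
    if host_addr in pending_writes:        # fill scheduled or in progress:
        cache[host_addr] = pending_writes.pop(host_addr)  # wait for the fill
        return cache[host_addr], "waited"
    return memory[host_addr], "memory"     # not prefetched: fetch from memory
```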
  • Patent number: 11650934
    Abstract: One example method includes a cache eviction operation. Entries in a cache are maintained in an entry list that includes a recent list and a frequent list. When an eviction operation is initiated or triggered, timestamps of last access for the entries are adjusted by corresponding adjustment values. Candidates for eviction are identified based on the adjusted timestamps of last access. At least some of the candidates are evicted from the cache.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: May 16, 2023
    Assignee: DELL PRODUCTS L.P.
    Inventors: Keyur B. Desai, Xiaobing Zhang
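The adjusted-timestamp idea can be sketched briefly: entries on the frequent list receive a larger adjustment credit than entries on the recent list, so frequently used entries look newer when candidates are chosen. The adjustment values and eviction fraction below are illustrative, not Dell's:

```python
# Illustrative eviction-candidate selection using adjusted last-access times.
def eviction_candidates(entries, keep_fraction=0.5):
    """entries: list of (key, last_access_ts, list_name) where list_name is
    'recent' or 'frequent'. Returns keys to evict, oldest adjusted time first."""
    adjustment = {"recent": 0, "frequent": 30}   # credit per list (assumed)
    adjusted = [(key, last + adjustment[lst]) for key, last, lst in entries]
    adjusted.sort(key=lambda kv: kv[1])          # oldest adjusted time first
    n_evict = len(adjusted) - int(len(adjusted) * keep_fraction)
    return [key for key, _ in adjusted[:n_evict]]
```

With a 30-unit credit, a frequent-list entry last touched at t=80 outlives a recent-list entry last touched at t=100.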
  • Patent number: 11640321
    Abstract: A memory allocation method, device, and recording medium in a multi-core processor system are disclosed. According to an embodiment, a method for allocating a shared variable to a memory in a multi-core processor system comprises mapping each task to a core, allocating unshared variables to memories (times of access to which are sequentially minimized) in descending order of actual variable access count, calculating an actual variable access count per core, selecting the core with the highest actual variable access count, and allocating the shared variable to a memory of the selected core.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: May 2, 2023
    Assignee: Research & Business Foundation Sungkyunkwan University
    Inventors: Jaewook Jeon, Junyoung Moon, Doyeon Kim, Minhee Jo, Jaewan Park
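The final steps of the method reduce to a small computation: sum each core's actual accesses to the shared variable across its tasks, then place the variable in the memory of the core with the highest total. A minimal sketch (names are illustrative):

```python
# Illustrative shared-variable placement: pick the core that accesses it most.
def place_shared_variable(task_to_core, task_access_counts):
    """task_to_core: {task: core}; task_access_counts: {task: actual access
    count for the shared variable}. Returns the core whose memory should
    hold the shared variable."""
    per_core = {}
    for task, count in task_access_counts.items():
        core = task_to_core[task]
        per_core[core] = per_core.get(core, 0) + count
    return max(per_core, key=per_core.get)   # core with highest access count
```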
  • Patent number: 11630589
    Abstract: Aspects of the innovations herein are consistent with a storage system for storing variable sized objects. According to certain implementations, the storage system may be a transaction-based system that uses variable sized objects to store data, and/or may be implemented using data stores, such as arrays of disks arranged in ranks. In some exemplary implementations, each rank may include multiple stripes, each stripe may be read and written as a convenient unit for maximum performance, and/or a rank manager may be provided to dynamically configure the ranks. In certain implementations, the storage system may include a stripe space table that contains entries describing the amount of space used in each stripe. Further, an object map may provide entries for each object in the storage system describing the location, the length, and/or the version of the object.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: April 18, 2023
    Assignee: Primos Storage Technology, LLC
    Inventor: Robert E. Cousins
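The two bookkeeping structures the abstract names — a stripe space table and an object map recording location, length, and version — can be sketched together. Field names and the first-fit placement are assumptions for illustration, not the patented design:

```python
# Illustrative stripe space table + object map for variable-sized objects.
from dataclasses import dataclass

@dataclass
class ObjectEntry:
    stripe: int     # which stripe holds the object
    offset: int     # byte offset within the stripe
    length: int
    version: int

class VariableObjectStore:
    def __init__(self, num_stripes, stripe_size):
        self.stripe_size = stripe_size
        self.space_used = [0] * num_stripes   # stripe space table
        self.object_map = {}                  # object id -> ObjectEntry

    def put(self, obj_id, length):
        """Place an object in the first stripe with enough free space,
        bumping its version if it already exists."""
        for stripe, used in enumerate(self.space_used):
            if self.stripe_size - used >= length:
                prev = self.object_map.get(obj_id)
                version = prev.version + 1 if prev else 1
                self.object_map[obj_id] = ObjectEntry(stripe, used, length, version)
                self.space_used[stripe] += length
                return self.object_map[obj_id]
        raise MemoryError("no stripe has room")
```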
  • Patent number: 11620225
    Abstract: A circuit and corresponding method map memory addresses onto cache locations within set-associative (SA) caches of various cache sizes. The circuit comprises a modulo-arithmetic circuit that performs a plurality of modulo operations on an input memory address and produces a plurality of modulus results based on the plurality of modulo operations performed. The plurality of modulo operations performed are based on a cache size associated with an SA cache. The circuit further comprises a multiplexer circuit and an output circuit. The multiplexer circuit outputs selected modulus results by selecting modulus results from among the plurality of modulus results produced. The selecting is based on the cache size. The output circuit outputs a cache location within the SA cache based on the selected modulus results and the cache size. Such mapping of the input memory address onto the cache location is performed at a lower cost relative to a general-purpose divider.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: April 4, 2023
    Assignee: Marvell Asia Pte Ltd
    Inventor: Albert Ma
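The circuit's structure — several modulus results computed in parallel, with a multiplexer selecting the one the configured cache size needs — has a direct software analogue. The supported set counts below are invented for illustration; the point is the parallel-compute-then-select shape, which in hardware avoids a general-purpose divider:

```python
# Software model of the parallel-modulo + multiplexer mapping circuit.
SUPPORTED_SET_COUNTS = [512, 768, 1024, 1536]   # assumed SA-cache geometries

def map_address(addr, num_sets):
    """Compute addr mod n for each supported geometry (the parallel
    modulo-arithmetic circuit), then 'mux' out the result selected by the
    configured cache size."""
    assert num_sets in SUPPORTED_SET_COUNTS
    modulus_results = {n: addr % n for n in SUPPORTED_SET_COUNTS}
    return modulus_results[num_sets]             # multiplexer select
```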
  • Patent number: 11615028
    Abstract: A method, computer program product, and computing system for receiving a flush request for a metadata page stored in a storage array of a multi-node storage system. The flush request may be queued on a flush request lock queue on at least one node of the multi-node storage system. One or more flush requests may be processed, via multiple nodes of the multi-node storage system, on the metadata page based upon, at least in part, the flush request lock queue.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: March 28, 2023
    Assignee: EMC IP Holding Company, LLC
    Inventors: Jenny Derzhavetz, Vladimir Shveidel, Dror Zalstein, Bar David
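The flush-request lock queue can be modeled minimally: requests from multiple nodes for the same metadata page are queued and drained in arrival order. The structure below is illustrative, not EMC's implementation:

```python
# Illustrative per-page flush request lock queue for a multi-node system.
from collections import deque

class FlushLockQueue:
    def __init__(self):
        self.queues = {}    # metadata page id -> deque of (node, request)

    def enqueue(self, page_id, node, request):
        self.queues.setdefault(page_id, deque()).append((node, request))

    def drain(self, page_id):
        """Process queued flushes for a page in arrival order."""
        processed = []
        q = self.queues.get(page_id, deque())
        while q:
            node, request = q.popleft()
            processed.append((node, request))   # the flush itself would run here
        return processed
```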
  • Patent number: 11609860
    Abstract: In various embodiments, a computing system includes a plurality of processing units that share access to a system cache. A cache management application receives resource savings information for each processing unit. The resource savings information indicates amounts of a resource (e.g., power) that are saved when different units of the system cache are allocated to a processing unit. The cache management application determines the number of units of system cache to allocate to each processing unit based on the received resource savings information.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: March 21, 2023
    Assignee: NVIDIA CORPORATION
    Inventor: Arnab Banerjee
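One natural reading of this allocation is a greedy marginal-savings loop: hand each successive cache unit to whichever processing unit reports the largest saving for its next unit. The greedy choice and the numbers are illustrative assumptions, not NVIDIA's algorithm:

```python
# Illustrative greedy allocation of cache units by reported resource savings.
def allocate_cache_units(savings, total_units):
    """savings: {unit_name: [power saved by its 1st, 2nd, ... cache unit]}.
    Returns {unit_name: cache units allocated}."""
    alloc = {name: 0 for name in savings}
    for _ in range(total_units):
        # Next marginal saving for each processing unit; 0 once exhausted.
        best = max(savings, key=lambda n: (savings[n][alloc[n]]
                                           if alloc[n] < len(savings[n]) else 0))
        if alloc[best] >= len(savings[best]):
            break                                # nobody saves anything further
        alloc[best] += 1
    return alloc
```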
  • Patent number: 11604739
    Abstract: A conditional direct memory access (DMA) channel activation system for executing a complex data transfer in a system-on-chip, comprising: a look-up table constructed and arranged to store elements of an activation profile; and a trigger circuit that controls a DMA transaction according to the activation profile of the look-up table.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: March 14, 2023
    Assignee: NXP USA, Inc.
    Inventors: Viktor Fellinger, Osvaldo Israel Romero Cortez, John Mitchell
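The look-up table plus trigger circuit pairing can be modeled as a table of activation profiles keyed by event, with the trigger starting a transfer only when an observed event matches a profile. Names and profile fields are illustrative, not NXP's:

```python
# Toy model of conditional DMA activation from a look-up table of profiles.
def make_trigger(lookup_table):
    """lookup_table: {event: {"channel": n, "length": bytes}}.
    Returns (event handler, list of started transfers)."""
    started = []
    def on_event(event):
        profile = lookup_table.get(event)
        if profile is not None:                    # activation condition met
            started.append((profile["channel"], profile["length"]))
            return True                            # DMA transaction activated
        return False
    return on_event, started
```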
  • Patent number: 11604735
    Abstract: Aspects of a storage device are provided that allow a controller to leverage cache to minimize occurrence of HMB address overlaps between different HMB requests. The storage device may include a cache and a controller coupled to the cache. The controller may store in the cache, in response to a HMB read request, first data from a HMB at a first HMB address. The controller may also store in the cache, in response to an HMB write request, second data from the HMB at a second HMB address. The controller may refrain from processing subsequent HMB requests in response to an overlap of the first HMB address with an address range including the second HMB address, and the controller may resume processing the subsequent HMB requests after the first data is stored. As a result, turnaround time delays for HMB requests may be reduced and performance may be improved.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: March 14, 2023
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Amir Segev, Dinesh Kumar Agarwal, Vijay Sivasankaran, Nava Eisenstein, Jonathan Journo
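The stall decision hinges on a half-open interval overlap test between a pending request's address range and any in-flight write's range. A minimal model (not WD firmware):

```python
# Illustrative HMB address-range overlap check used to gate new requests.
def ranges_overlap(a_addr, a_len, b_addr, b_len):
    """True if [a_addr, a_addr+a_len) intersects [b_addr, b_addr+b_len)."""
    return a_addr < b_addr + b_len and b_addr < a_addr + a_len

def should_stall(pending_read, inflight_writes):
    """pending_read: (addr, len); inflight_writes: list of (addr, len)."""
    addr, length = pending_read
    return any(ranges_overlap(addr, length, w_addr, w_len)
               for w_addr, w_len in inflight_writes)
```

Once the overlapping data has landed in cache, processing of the stalled requests resumes.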
  • Patent number: 11604733
    Abstract: An apparatus has processing circuitry to perform data processing, at least one architectural register to store at least one partition identifier selection value which is programmable by software processed by the processing circuitry; a set-associative cache comprising a plurality of sets each comprising a plurality of ways; and partition identifier selecting circuitry to select, based on the at least one partition identifier selection value stored in the at least one architectural register, a selected partition identifier to be specified by a cache access request for accessing the set-associative cache.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: March 14, 2023
    Assignee: Arm Limited
    Inventor: Steven Douglas Krueger
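The selection step can be illustrated with an invented register layout: the architectural register holds several software-programmed partition IDs, and the selection logic picks which one a cache access request carries. The two-field layout below is purely an assumption for illustration, not the architected format:

```python
# Toy model: select a partition ID from a software-programmed register.
def select_partition_id(register_value, security_state):
    """Assumed layout: bits [7:0] hold the secure partition ID, bits [15:8]
    the non-secure one. The selection picks the field a cache access uses."""
    secure_id = register_value & 0xFF
    nonsecure_id = (register_value >> 8) & 0xFF
    return secure_id if security_state == "secure" else nonsecure_id
```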
  • Patent number: 11599473
    Abstract: Aspects of the present disclosure relate to an apparatus comprising prefetch information storage circuitry and prefetch training circuitry. The prefetch training circuitry comprises a plurality of entries, and is configured to: allocate a given entry to a given data address region; receive access information indicative of data accesses within the given data address region; based on said access information, train prefetch information associated with the given data address region, the prefetch information being indicative of a pattern of said data accesses within the given data address region; and responsive to an eviction condition being met after an elapsed period, since said allocation of the given entry, has exceeded a threshold, perform an eviction comprising transferring the prefetch information associated with the given data address region to the prefetch information storage circuitry.
    Type: Grant
    Filed: October 20, 2021
    Date of Patent: March 7, 2023
    Assignee: Arm Limited
    Inventors: Devin S Lafford, Alexander Cole Shulyak
  • Patent number: 11593263
    Abstract: Technologies for addressing individual bits in memory include a device having a memory that includes partitions that each have tiles, in which each tile stores an individual bit. The device also includes circuitry to receive a request to access (e.g., read or write) a sequence of bits in a partition. The request specifies a logical row or column address. For each bit in the sequence, a corresponding tile is determined from the logical row or column address, and that tile is accessed to read or write the bit.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: February 28, 2023
    Assignee: Intel Corporation
    Inventors: Jawad B. Khan, Richard Coulson
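The tile geometry can be pictured as a grid with one bit per tile, where a request walks a logical row or column and resolves each bit to its tile. The wraparound and grid shape below are illustrative, not Intel's layout:

```python
# Toy model of per-bit tile addressing within a partition.
class Partition:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.tiles = [[0] * cols for _ in range(rows)]  # one bit per tile

    def write_row(self, row, start_col, bits):
        """Write a sequence of bits along a logical row."""
        for i, bit in enumerate(bits):
            self.tiles[row][(start_col + i) % self.cols] = bit

    def read_col(self, col, start_row, count):
        """Read a sequence of bits along a logical column."""
        return [self.tiles[(start_row + i) % self.rows][col]
                for i in range(count)]
```

Because every bit has its own tile, row-order and column-order access are symmetric, which is the point of the scheme.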
  • Patent number: 11580027
    Abstract: Graphics processors for implementing multi-tile memory management are disclosed. In one embodiment, a graphics processor includes a first graphics device having a local memory, a second graphics device having a local memory, and a graphics driver to provide a single virtual allocation with a common virtual address range to mirror a resource to each local memory of the first and second graphics devices.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: February 14, 2023
    Assignee: Intel Corporation
    Inventors: Zack S. Waters, Travis Schluessler, Michael Apodaca, Ankur Shah
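The mirroring described here — one virtual address range backed by a resource copy in each device's local memory — can be sketched abstractly. The write-to-all-mirrors behavior is an assumption for illustration, not Intel's driver logic:

```python
# Illustrative mirrored allocation: one virtual range, one copy per device.
class MirroredAllocation:
    def __init__(self, base, size, devices):
        self.base, self.size = base, size
        self.copies = {dev: bytearray(size) for dev in devices}  # per-GPU copy

    def write(self, offset, data):
        for copy in self.copies.values():        # keep mirrors consistent
            copy[offset:offset + len(data)] = data

    def read(self, device, offset, length):
        """Either device resolves the same virtual offset from local memory."""
        return bytes(self.copies[device][offset:offset + length])
```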
  • Patent number: 11567861
    Abstract: In an embodiment, a system may support programmable hashing of address bits at a plurality of levels of granularity to map memory addresses to memory controllers and ultimately at least to memory devices. The hashing may be programmed to distribute pages of memory across the memory controllers, and consecutive blocks of the page may be mapped to physically distant memory controllers. In an embodiment, address bits may be dropped from each level of granularity, forming a compacted pipe address to save power within the memory controller. In an embodiment, a memory folding scheme may be employed to reduce the number of active memory devices and/or memory controllers in the system when the full complement of memory is not needed.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: January 31, 2023
    Assignee: Apple Inc.
    Inventors: Steven Fishwick, Jeffry E. Gonion, Per H. Hammarlund, Eran Tamari, Lior Zimet, Gerard R. Williams, III
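A simple XOR-fold hash illustrates the spirit of distributing consecutive blocks across controllers. The hash choice, block size, and controller count are assumptions, not Apple's scheme:

```python
# Illustrative XOR-fold hash mapping addresses to memory controllers.
def controller_for(addr, num_controllers=8, block_bits=7):
    """Hash the address bits above the block offset down to a controller
    index, so consecutive blocks of a page spread across controllers."""
    assert num_controllers & (num_controllers - 1) == 0  # power of two
    index_bits = num_controllers.bit_length() - 1
    x = addr >> block_bits          # drop within-block offset bits
    h = 0
    while x:                        # XOR-fold remaining address bits
        h ^= x & (num_controllers - 1)
        x >>= index_bits
    return h
```

With these parameters, eight consecutive 128-byte blocks land on eight distinct controllers, which is the distribution property the abstract is after.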
  • Patent number: 11567876
    Abstract: Cache slots on a storage system may be shared between entities processing write operations for logical storage unit (LSU) tracks and entities performing remote replication for write operations for the LSU tracks. If a new write operation is received on a first storage system (S1) for a track of an LSU (R1) when the cache slot mapped to the R1 track is locked by a process currently transmitting data of the cache slot to a second storage system (S2), a new cache slot may be allocated to the R1 track, the data of the original cache slot copied to the new cache slot, and the new write operation for the R1 track initiated on S1 using the new cache slot; while the data of the original cache slot is independently, and perhaps concurrently, transmitted to S2 to be replicated in R2, the LSU on S2 that is paired with R1.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: January 31, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Bhaskar Bora, Benjamin Yoder
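The slot handoff in this abstract is essentially copy-on-write: if the R1 track's slot is locked by an in-flight transmit to S2, a new slot gets a copy of the data and takes the new write, while the old slot keeps feeding replication. A minimal model (not EMC's code):

```python
# Illustrative copy-on-write cache-slot handoff during remote replication.
def write_track(track, new_data, slots, locked):
    """slots: {track: bytearray of track data}; locked: set of tracks whose
    current slot is being transmitted to S2.
    Returns (slot taking the write, slot still feeding replication)."""
    old_slot = slots[track]
    if track in locked:                      # transmit in progress on old slot
        slots[track] = bytearray(old_slot)   # new slot: copy of original data
    slots[track][:len(new_data)] = new_data  # new write lands in the new slot
    return slots[track], old_slot            # old slot untouched for S2
```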
  • Patent number: 11567860
    Abstract: A memory system includes a storage medium and a controller. The storage medium includes a plurality of memory regions. The controller stores data corresponding to a write request into a memory region of a random attribute or a memory region of a sequential attribute among the memory regions and updates logical-to-physical (L2P) information corresponding to the stored data. When storing the data into the memory region of the random attribute, the controller also updates physical-to-logical (P2L) information corresponding to the stored data within a P2L table of that memory region.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: January 31, 2023
    Assignee: SK hynix Inc.
    Inventor: Hye Mi Kang
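The bookkeeping split — L2P always updated, P2L maintained only for random-attribute regions — can be captured in a few lines. Table shapes and names are illustrative, not SK hynix's:

```python
# Illustrative L2P/P2L maintenance split by region attribute.
class MemoryRegion:
    def __init__(self, attribute):
        self.attribute = attribute     # "random" or "sequential"
        self.p2l = {}                  # physical addr -> logical addr

class Controller:
    def __init__(self):
        self.l2p = {}                  # logical addr -> (region, physical)
        self.next_phys = 0

    def write(self, region, logical):
        phys = self.next_phys
        self.next_phys += 1
        self.l2p[logical] = (region, phys)         # always update L2P
        if region.attribute == "random":           # P2L only for random regions
            region.p2l[phys] = logical
        return phys
```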
  • Patent number: 11567897
    Abstract: Systems, devices, apparatuses, components, methods, and techniques for predicting user and media-playback device states are provided. Systems, devices, apparatuses, components, methods, and techniques for representing cached, user-selected, and streaming content are also provided.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: January 31, 2023
    Assignee: Spotify AB
    Inventors: Simon Hofverberg, Fredrik Schmidt, Johan Oskarsson, Ariel Marcus, Chris Doyle, Joseph Tam, Minchull Kim
  • Patent number: 11561901
    Abstract: A data processing system includes a plurality of processor cores, each supported by a respective one of a plurality of vertical cache hierarchies. Based on receiving on a system fabric a cache injection request requesting injection of data into a cache line identified by a target real address, the data is written into a cache in a first vertical cache hierarchy among the plurality of vertical cache hierarchies. Based on a value in a field of the cache injection request, a distribute field is set in a directory entry of the first vertical cache hierarchy. Upon eviction of the cache line from the first vertical cache hierarchy, a determination is made whether the distribute field is set. Based on determining the distribute field is set, a lateral castout of the cache line from the first vertical cache hierarchy to a second vertical cache hierarchy is performed.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: January 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie, Bernard C. Drerup, Hugh Shen, Alexander Michael Taft, Luke Murray, Richard Nicholas
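The inject/evict flow can be modeled minimally: injection records a distribute flag alongside the line; eviction consults the flag and either casts the line out laterally to a peer hierarchy or writes it back toward memory. Names are illustrative, not IBM's:

```python
# Toy model of cache injection with a distribute flag and lateral castout.
class VerticalHierarchy:
    def __init__(self, name):
        self.name = name
        self.lines = {}        # real address -> (data, distribute_flag)

def inject(hierarchy, addr, data, distribute_field):
    """Cache injection: write the line and record the distribute flag."""
    hierarchy.lines[addr] = (data, bool(distribute_field))

def evict(source, addr, peers, memory):
    """Eviction: lateral castout to a peer if the flag is set, else memory."""
    data, distribute = source.lines.pop(addr)
    if distribute and peers:
        peers[0].lines[addr] = (data, True)   # lateral castout
        return "lateral"
    memory[addr] = data                       # normal castout path
    return "memory"
```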