With Multilevel Cache Hierarchies (EPO) Patents (Class 711/E12.024)
  • Patent number: 11954497
    Abstract: A method and system for moving data from a source memory to a destination memory by a processor are disclosed. The processor has a plurality of registers and the source memory stores a sequence of instructions that include one or more load instructions and one or more store instructions. The processor moves the load instructions from the source memory to the destination memory. Then, the processor initiates execution of the load instructions from the destination memory in order to load the data from the source memory to one or more registers in the processor. Execution then returns to the sequence of instructions stored in the source memory, and the processor stores the data from the registers to the destination memory.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: April 9, 2024
    Assignee: Nordic Semiconductor ASA
    Inventor: Chris Smith
  • Patent number: 11954034
    Abstract: A system, method, and storage medium are provided. The system includes a real-time domain including a real-time cache and a non-real-time domain including a non-real-time cache. The system is configured to implement a cache coherency protocol by indicating that a cache line may be shared between the real-time cache and the non-real-time cache.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: April 9, 2024
    Assignee: WOVEN BY TOYOTA, INC.
    Inventor: Jean-Francois Bastien
  • Patent number: 11947457
    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
  • Patent number: 11907122
    Abstract: The disclosure relates to technology for up-evicting cache lines. An apparatus comprises a hierarchy of caches comprising a first cache having a first cache controller and a second cache having a second cache controller. The first cache controller is configured to store cache lines evicted from a first processor group to the first cache and to down-evict cache lines from the first cache to the second cache. The second cache controller is configured to store cache lines evicted from a second processor group into the second cache, to up-evict a first cache line from the second cache to the first cache in response to an eviction of a second cache line from the second processor group to the second cache, and to provide the up-evicted first cache line from the first cache to the second processor group in response to a request from the second processor group.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: February 20, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yuejian Xie, Qian Wang, Xingyu Jiang
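The up-/down-eviction flow in the abstract above can be sketched as a small Python model. This is a hypothetical illustration, not the patented implementation: the class name, the use of `OrderedDict` as an LRU stand-in, and the fixed capacities are all assumptions, and the second cache's capacity is only enforced on group-2 evictions for brevity.

```python
from collections import OrderedDict

class TwoLevelVictimHierarchy:
    """Toy model: group-1 victims fill the first cache and overflow
    ("down-evict") into the second; a group-2 victim arriving at a full
    second cache pushes ("up-evicts") an existing line into the first
    cache, from which group 2 can later retrieve it."""

    def __init__(self, cap1: int, cap2: int):
        self.c1 = OrderedDict()  # first cache, insertion order as LRU proxy
        self.c2 = OrderedDict()  # second cache
        self.cap1, self.cap2 = cap1, cap2

    def evict_from_group1(self, line, data):
        self.c1[line] = data
        if len(self.c1) > self.cap1:
            # Down-evict the oldest first-cache line into the second cache.
            old, v = self.c1.popitem(last=False)
            self.c2[old] = v

    def evict_from_group2(self, line, data):
        if len(self.c2) >= self.cap2:
            # Up-evict an existing second-cache line into the first cache.
            old, v = self.c2.popitem(last=False)
            self.c1[old] = v
        self.c2[line] = data

    def request_from_group2(self, line):
        # An up-evicted line is served back to group 2 from the first cache.
        if line in self.c2:
            return self.c2.pop(line)
        return self.c1.pop(line)
```

For example, with `cap2=1`, a second group-2 eviction up-evicts the first line into `c1`, where a later group-2 request still finds it.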
  • Patent number: 11874323
    Abstract: Disclosed is a JTAG-based burning device, including controllable switches arranged between a TDI terminal of a JTAG host and a first chip, and between two adjacent chips, and further including a master controllable switch module arranged between each chip and a TDO terminal of the JTAG host, wherein the JTAG host may, according to a received burning instruction, control corresponding input terminals of the controllable switches to be connected to corresponding output terminals and also control an output terminal of the master controllable switch module to be connected to the corresponding input terminal. Once the circuit is built, the JTAG chain can thus be adjusted automatically by controlling the connections between the input and output terminals of the corresponding switches, so that firmware can be burned to different chips or chip combinations without manual adjustment, improving test efficiency and simplifying the circuit structure.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 16, 2024
    Assignee: Inspur Suzhou Intelligent Technology Co., Ltd.
    Inventor: Peng Wang
  • Patent number: 11847062
    Abstract: In response to eviction of a first clean data block from an intermediate level of cache in a multi-cache hierarchy of a processing system, a cache controller accesses an address of the first clean data block. The controller initiates a fetch of the first clean data block from a system memory into a last-level cache using the accessed address.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: December 19, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Tarun Nakra, Jay Fleischman, Gautam Tarasingh Hazari, Akhil Arunkumar, William L. Walker, Gabriel H. Loh, John Kalamatianos, Marko Scrbak
  • Patent number: 11847459
    Abstract: Systems and methods related to direct swap caching with zero line optimizations are described. A method for managing a system having a near memory and a far memory comprises receiving a request from a requestor to read a block of data that is either stored in the near memory or the far memory. The method includes analyzing a metadata portion associated with the block of data, the metadata portion comprising both (1) information concerning whether the near memory or the far memory contains the block of data and (2) information concerning whether a data portion associated with the block of data is all zeros. The method further includes, instead of retrieving the data portion from the far memory, synthesizing the data portion corresponding to the block of data to generate a synthesized data portion and transmitting the synthesized data portion to the requestor.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: December 19, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishwar Agarwal, George Chrysos, Oscar Rosell Martinez, Yevgeniy Bak
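The zero-line optimization above reduces to a simple metadata check on the read path. Here is a minimal Python sketch under assumed names (`BlockMetadata`, `read_block`, a 64-byte block size); it is an illustration of the idea, not Microsoft's implementation.

```python
from dataclasses import dataclass

BLOCK_SIZE = 64  # assumed data-portion size in bytes

@dataclass
class BlockMetadata:
    in_near_memory: bool  # True -> the near memory holds the block
    is_all_zero: bool     # True -> the data portion is known to be all zeros

def read_block(meta: BlockMetadata, near: dict, far: dict, addr: int) -> bytes:
    """Serve a read: when the metadata says the block is all zeros,
    synthesize the data portion instead of touching far memory."""
    if meta.is_all_zero:
        return bytes(BLOCK_SIZE)  # synthesized zero data portion
    if meta.in_near_memory:
        return near[addr]
    return far[addr]
```

The point of the scheme is that the `is_all_zero` branch never dereferences `far`, so an all-zero far-memory block costs no far-memory access at all.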
  • Patent number: 11844495
    Abstract: Systems and methods for detecting an error in a surgical system. The surgical system includes a manipulator with a base and a plurality of links and the manipulator supports a surgical tool. The system includes a navigation system with a tracker and a localizer to monitor a state of the tracker. Controller(s) determine values of a first transform between a state of the base of the manipulator and a state of one or both of the localizer and the tracker of the navigation system. The controller(s) determine values of a second transform between the state of the localizer and the state of the tracker. The controller(s) combine values of the first transform and the second transform to determine whether an error has occurred relating to one or both of the manipulator and the localizer.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: December 19, 2023
    Assignee: MAKO Surgical Corp.
    Inventor: Michael Dale Dozeman
  • Patent number: 11843378
    Abstract: Embodiments of the present invention relate to an architecture that uses hierarchical statistically multiplexed counters to extend counter life by orders of magnitude. Each level includes statistically multiplexed counters. The statistically multiplexed counters include P base counters and S subcounters, wherein the S subcounters are dynamically concatenated with the P base counters. When a row overflow in a level occurs, counters in the next level above are used to extend counter life. The hierarchical statistically multiplexed counters can be used with an overflow FIFO to further extend counter life.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: December 12, 2023
    Assignee: Marvell Asia Pte. Ltd.
    Inventors: Weihuang Wang, Gerald Schmidt, Srinath Atluri, Weinan Ma, Shrikant Sundaram Lnu
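The overflow-carry idea above can be illustrated with a toy multi-level counter: each level holds a few bits, and an overflow at one level carries into the level above, multiplying the effective range. This is a hypothetical simplification (a plain positional counter) that ignores the statistical multiplexing of subcounters.

```python
class HierarchicalCounter:
    """Toy model of counter-life extension: `levels` small counters of
    `bits_per_level` bits each; an overflow at level i carries into
    level i+1, so total range is (2**bits_per_level) ** levels."""

    def __init__(self, levels: int, bits_per_level: int):
        self.limit = 1 << bits_per_level
        self.levels = [0] * levels

    def increment(self):
        for i in range(len(self.levels)):
            self.levels[i] += 1
            if self.levels[i] < self.limit:
                return
            self.levels[i] = 0  # overflow: carry into the next level up

    def value(self) -> int:
        total, scale = 0, 1
        for v in self.levels:
            total += v * scale
            scale *= self.limit
        return total
```

With 4-bit levels, three levels extend a 16-count counter to 4096 counts, which is the "orders of magnitude" effect the abstract describes.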
  • Patent number: 11803482
    Abstract: Process-dedicated in-memory translation lookaside buffers (TLBs) (mTLBs) for augmenting a memory management unit (MMU) TLB for translating virtual addresses (VAs) to physical addresses (PAs) in a processor-based system are disclosed. In disclosed examples, a dedicated in-memory TLB is supported in system memory for each process so that one process's cached page table entries do not displace another process's cached page table entries. When a process is scheduled to execute in a central processing unit (CPU), the in-memory TLB address stored for such process can be used by a page table walker circuit in the CPU MMU to access the dedicated in-memory TLB for the executing process to perform VA to PA translations in the event of a TLB miss to the MMU TLB. If a TLB miss occurs to the in-memory TLB, the page table walker circuit can walk the page table in the MMU.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: October 31, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Madhavan Thirukkurungudi Venkataraman, Thomas Philip Speier
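The two-level lookup above (MMU TLB, then a per-process in-memory TLB, then a page table walk) can be modeled in a few lines of Python. This is a hypothetical software analogy — dictionaries standing in for hardware structures, with assumed names like `TwoLevelTLB` — not the disclosed circuit.

```python
class TwoLevelTLB:
    """Each process gets its own in-memory TLB, so one process's cached
    translations never displace another's; a miss there falls back to a
    full page table walk, and the result is cached at both levels."""

    def __init__(self, page_table):
        self.mmu_tlb = {}            # shared MMU TLB: (pid, va) -> pa
        self.in_memory = {}          # pid -> dedicated in-memory TLB
        self.page_table = page_table # pid -> {va: pa}, walked on misses

    def translate(self, pid: int, va: int) -> int:
        if (pid, va) in self.mmu_tlb:          # MMU TLB hit
            return self.mmu_tlb[(pid, va)]
        proc_tlb = self.in_memory.setdefault(pid, {})
        if va not in proc_tlb:                 # in-memory TLB miss: walk
            proc_tlb[va] = self.page_table[pid][va]
        pa = proc_tlb[va]
        self.mmu_tlb[(pid, va)] = pa
        return pa
```

Because `in_memory` is keyed by process, process 2 filling its TLB cannot evict process 1's entries — the isolation property the abstract emphasizes.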
  • Patent number: 11782874
    Abstract: Providing cache updates in a multi-node system through a service component between a lower level component and a next higher level component by maintaining a ledger storing an incrementing number indicating a present state of the datasets in a cache of the lower level component. The service component receives a data request to the lower level component from the higher level component including an appended last entry number accessed by the higher level component. It determines if the appended last entry number matches a current entry number in the ledger for any requested dataset. No match indicates that some data in the higher level component cache is stale. It then sends updated information for the stale data to the higher level component. The higher level component invalidates its cache entries and updates the appended last entry number to match a current entry number in the ledger.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: October 10, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Sirisha Kaipa, Madhura Srinivasa Raghavan, Neha R. Naik
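The ledger mechanism above amounts to comparing a monotonically increasing per-dataset entry number against the last number the requester saw. A minimal Python sketch, with assumed names (`CacheLedger`, `record_change`, `handle_request`):

```python
class CacheLedger:
    """The service component increments an entry number each time a
    dataset changes; a requester appends the last entry number it saw,
    and a mismatch means its cached copy of that dataset is stale."""

    def __init__(self):
        self.current = {}  # dataset name -> latest entry number

    def record_change(self, dataset: str):
        self.current[dataset] = self.current.get(dataset, 0) + 1

    def handle_request(self, dataset: str, last_seen: int):
        """Return (is_stale, current_entry_number); the requester uses
        the returned number to update its appended last entry number."""
        cur = self.current.get(dataset, 0)
        return (last_seen != cur, cur)
```

After the higher-level component invalidates and adopts the returned number, the next request compares equal and no update traffic is needed.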
  • Patent number: 11775216
    Abstract: A system includes a memory device, and a processing device, operatively coupled with the memory device, to perform operations including receiving a media access operation command to perform a media access operation with respect to a memory location residing on the memory device, determining whether there exists another memory location access at the memory location, in response to determining that another memory location access exists at the memory location, determining whether the media access operation command is a read command, and in response to determining that the media access operation command is a read command, servicing the media access operation command from a media buffer. The media buffer maintains data associated with the completed write operation.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: October 3, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Fangfang Zhu, Jiangli Zhu, Juane Li
  • Patent number: 11726975
    Abstract: A system for unloading tables of a database is provided. In some aspects, the system performs operations including determining that a number of accesses to a table occurring within a time period has satisfied an access threshold. The operations may further include identifying, in response to the determining, a first timestamp indicating a most recent access to the table. The operations may further include determining whether a difference between a current timestamp and the first timestamp satisfies a first time threshold. The operations may further include comparing, in response to the difference satisfying the first time threshold, a ratio of the difference and a size of the table to a ratio threshold. The operations may further include unloading, in response to satisfying the ratio threshold, the table. The operations may further include adjusting, based on the feedback, the first time threshold and/or the ratio threshold.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: August 15, 2023
    Assignee: SAP SE
    Inventors: Klaus Otto Mueller, Thomas Legler
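The unload decision above chains three checks: an access-count threshold, an idle-time threshold, and a ratio of idle time to table size. A hypothetical Python sketch (parameter names and units are assumptions; the feedback-based threshold adjustment is omitted):

```python
def should_unload(access_count: int, last_access_ts: float, now: float,
                  table_size: int, access_threshold: int,
                  time_threshold: float, ratio_threshold: float) -> bool:
    """Decide whether to unload a table: it must have been accessed
    often enough in the window to be worth evaluating, been idle longer
    than the time threshold, and have an idle-time-to-size ratio that
    meets the ratio threshold."""
    if access_count < access_threshold:
        return False                    # not enough accesses to evaluate
    idle = now - last_access_ts
    if idle < time_threshold:
        return False                    # touched too recently
    return idle / table_size >= ratio_threshold
```

The ratio test biases the policy toward unloading tables that are both long idle and small in payoff per byte kept resident; the patent additionally adjusts `time_threshold` and `ratio_threshold` from feedback.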
  • Patent number: 11720495
    Abstract: In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The CPU is arranged to execute program instructions to manipulate data in at least a first or second secure context. Each of the first and second caches stores a secure code for indicating the at least first or second secure contexts by which data for a respective cache line is received. The first and second level caches maintain coherency in response to comparing the secure codes of respective lines of cache and executing a cache coherency operation in response.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 8, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria
  • Patent number: 11675704
    Abstract: In a ray tracer, a cache for streaming workloads groups ray requests for coherent successive bounding volume hierarchy traversal operations by sending common data down an attached data path to all ray requests in the group at the same time or about the same time. Grouping the requests provides good performance with a smaller number of cache lines.
    Type: Grant
    Filed: September 23, 2021
    Date of Patent: June 13, 2023
    Assignee: NVIDIA Corporation
    Inventors: Greg Muthler, Timo Aila, Tero Karras, Samuli Laine, William Parsons Newhall, Jr., Ronald Charles Babich, Jr., John Burgess, Ignacio Llamas
  • Patent number: 11650957
    Abstract: Provided are a computer program product, system, and method for receiving, at a cache node, notification of changes to files in a source file system served from a cache file system at the cache node. A cache file system is established at the cache node as a local share of a source file system at the source node. The source node establishes a local share of the cache file system at the cache node. Notification is received, from the source node, that the source node modified a source control file for a source file at the source node. In response to receiving the notification, a cache control file, for a cached file in the cache file system, is updated to indicate the source file at the source node is modified. A request is sent to the source node to obtain data for the source file indicated as modified.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: May 16, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Venkateswara Rao Puvvada, Karrthik K G, Saket Kumar, Ravi Kumar Komanduri
  • Patent number: 11650934
    Abstract: One example method includes a cache eviction operation. Entries in a cache are maintained in an entry list that includes a recent list and a frequent list. When an eviction operation is initiated or triggered, timestamps of last access for the entries are adjusted by corresponding adjustment values. Candidates for eviction are identified based on the adjusted timestamps of last access. At least some of the candidates are evicted from the cache.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: May 16, 2023
    Assignee: DELL PRODUCTS L.P.
    Inventors: Keyur B. Desai, Xiaobing Zhang
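The adjusted-timestamp eviction above can be sketched in Python: each entry belongs to the recent or the frequent list, its last-access timestamp is shifted by a per-list adjustment (e.g. giving frequent entries extra credit), and entries whose adjusted timestamp falls below a cutoff become candidates. Names and the cutoff mechanism are assumptions for illustration.

```python
def select_eviction_candidates(entries, adjustments, cutoff):
    """entries: list of (key, list_name, last_access_ts) where list_name
    is 'recent' or 'frequent'; adjustments: list_name -> value added to
    the timestamp before comparison. Returns keys eligible for eviction."""
    candidates = []
    for key, list_name, ts in entries:
        adjusted = ts + adjustments[list_name]
        if adjusted < cutoff:          # looks old even after adjustment
            candidates.append(key)
    return candidates
```

With a positive adjustment for the frequent list, a frequent entry with the same raw timestamp as a recent entry survives an eviction pass that the recent entry does not.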
  • Patent number: 11625587
    Abstract: An artificial intelligence integrated circuit is provided. The artificial intelligence integrated circuit includes a flash memory, a dynamic random access memory (DRAM), and a memory controller. The flash memory is configured to store a logical-to-physical mapping (L2P) table that is divided into a plurality of group-mapping (G2P) tables. The memory controller includes a first processing core and a second processing core. The first processing core receives a host access command from a host. When a specific G2P table corresponding to a specific logical address in the host access command is not stored in the DRAM, the first processing core determines whether the second processing core has loaded the specific G2P table from the flash memory to the DRAM according to the values in a first column in a first bit map and in a second column of a second bit map.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: April 11, 2023
    Assignee: GLENFLY TECHNOLOGY CO., LTD.
    Inventor: Deming Gu
  • Patent number: 11621042
    Abstract: Disclosed in some examples are methods, systems, memory devices, and machine-readable mediums which increase read throughput by introducing a delay prior to issuing a command to increase the chances that read commands can be executed in parallel. Upon receipt of a read command, if there are no other read commands in the command queue for a given portion (e.g., plane or plane group) of the die, the controller can delay issuing the read command for a delay period using a timer. If, during the delay period, an eligible read command is received, the delayed command and the newly received command are both issued in parallel using a multi-plane read. If no eligible read command is received during the delay period, the read command is issued after the delay period expires.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: April 4, 2023
    Assignee: Micron Technology, Inc.
    Inventor: David Aaron Palmer
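The delayed-issue scheme above trades a small added latency for a chance to pair reads into a multi-plane operation. A hypothetical event-driven sketch with simulated integer time (class and method names are assumptions, and same-plane arrivals are handled in a deliberately simplified way):

```python
class PlaneReadCoalescer:
    """Hold a read to an idle plane for `delay` time units; if a read to
    a different plane arrives inside the window, issue both together as
    a multi-plane read, otherwise issue the held read alone on expiry."""

    def __init__(self, delay: int):
        self.delay = delay
        self.pending = None  # (plane, cmd, deadline) or None
        self.issued = []     # tuples of commands issued together

    def submit(self, plane: int, cmd: str, now: int):
        self.expire(now)
        if self.pending is None:
            self.pending = (plane, cmd, now + self.delay)
        elif plane != self.pending[0]:
            # Eligible pair: issue both in parallel as a multi-plane read.
            self.issued.append((self.pending[1], cmd))
            self.pending = None
        else:
            # Same plane cannot pair: issue the held read alone and open
            # a new delay window for the newcomer (simplification).
            self.issued.append((self.pending[1],))
            self.pending = (plane, cmd, now + self.delay)

    def expire(self, now: int):
        if self.pending is not None and now >= self.pending[2]:
            self.issued.append((self.pending[1],))  # window expired
            self.pending = None
```

A read arriving within the window for another plane issues immediately as a pair, so the delay is only ever paid when no parallelism materializes.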
  • Patent number: 11602401
    Abstract: Systems and methods for operating a robotic surgical system are provided. The system includes a surgical tool, a manipulator comprising links for controlling the tool, and a navigation system including a tracker and a localizer to monitor a state of the tracker. Controller(s) determine a relationship between one or more components of the manipulator and one or more components of the navigation system by utilizing kinematic measurement data from the manipulator and navigation data from the navigation system. The controller(s) utilize the relationship to determine whether an error has occurred relating to at least one of the manipulator and the navigation system. The error is at least one of undesired movement of the manipulator, undesired movement of the localizer, failure of any one or more components of the manipulator or the localizer, and/or improper calibration data.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: March 14, 2023
    Assignee: Mako Surgical Corp.
    Inventor: Michael Dale Dozeman
  • Patent number: 11599469
    Abstract: A computer system includes a first core including a first local cache and a second core including a second local cache. The first core and the second core are coupled through a remote link. A shared cache is coupled to the first core and to the second core. The shared cache includes an ownership table that includes a plurality of entries indicating if a cache line is stored solely in the first local cache or solely in the second local cache. The remote link includes a first link between the first core and the shared cache and a second link between the second core and the shared cache.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: March 7, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Louis-Philippe Hamelin, Chang Hoon Lee, John Edward Vincent, Olivier D'Arcy, Guy-Armand Kamendje Tchokobou
  • Patent number: 11593108
    Abstract: Aspects are provided for sharing instruction cache footprint between multiple threads. A set/way pointer to an instruction cache line is derived from a system memory address associated with an instruction fetch from a memory page. It is determined that the instruction cache line is shareable between a first thread and a second thread. An alias table entry is created indicating that other instruction cache lines associated with the memory page are also shareable between threads. Another instruction fetch is received from another thread requesting an instruction from another system memory address associated with the memory page. A further set/way pointer to another instruction cache line is derived from the other system memory address. It is determined that the other instruction cache line is shareable based on the alias table entry.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: February 28, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sheldon Bernard Levenstein, Nicholas R. Orzol, Christian Gerhard Zoellin, David Campbell
  • Patent number: 11586382
    Abstract: A data processing system includes first memory system including a first nonvolatile memory device; a second memory system including a second nonvolatile memory device; and a master system including a third nonvolatile memory device. The master system classifies any one of the first memory system and the second memory system as a first slave system and the other as a second slave system depending on a predetermined reference, wherein the master system is coupled to a host, and includes a write buffer for temporarily storing a plurality of write data, and wherein the master system classifies the write data, into first write data grouped into a transaction and second write data which are not grouped into the transaction, stores the second write data in the third nonvolatile memory device, and stores the first write data in the first nonvolatile memory device or the second nonvolatile memory device.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: February 21, 2023
    Assignee: SK hynix Inc.
    Inventor: Hae-Gi Choi
  • Patent number: 11567874
    Abstract: An apparatus includes a CPU core, a first memory cache with a first line size, and a second memory cache having a second line size larger than the first line size. Each line of the second memory cache includes an upper half and a lower half. A memory controller subsystem is coupled to the CPU core and to the first and second memory caches. Upon a miss in the first memory cache for a first target address, the memory controller subsystem determines that the first target address resulting in the miss maps to the lower half of a line in the second memory cache, retrieves the entire line from the second memory cache, and returns the entire line from the second memory cache to the first memory cache.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: January 31, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Bipin Prasad Heremagalur Ramaprasad, David Matthew Thompson, Abhijeet Ashok Chachad, Hung Ong
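The half-line mapping above is a simple address computation: with the second-level line twice the first-level line size, an address falls in the lower half of its L2 line exactly when its offset within that line is less than one L1 line. A hypothetical Python sketch (line sizes and the `l2_fetch` callback are assumptions):

```python
L1_LINE = 64            # assumed first-level cache line size in bytes
L2_LINE = 2 * L1_LINE   # second-level lines are twice as wide

def maps_to_lower_half(addr: int) -> bool:
    """True when `addr` falls in the lower half of its L2 cache line."""
    return (addr % L2_LINE) < L1_LINE

def l1_miss_fill(addr: int, l2_fetch):
    """On an L1 miss mapping to the lower half of an L2 line, the scheme
    returns the entire L2 line (both halves), so the upper half arrives
    without a separate request; otherwise only the requested half is
    returned. `l2_fetch(base)` stands in for the second-level lookup."""
    base = addr - (addr % L2_LINE)
    if maps_to_lower_half(addr):
        return (l2_fetch(base), l2_fetch(base + L1_LINE))
    return (l2_fetch(base + L1_LINE),)
```

Returning the whole line on a lower-half miss acts as an implicit prefetch of the upper half for sequential access patterns.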
  • Patent number: 11561900
    Abstract: A data processing system includes system memory and a plurality of processor cores each supported by a respective one of a plurality of vertical cache hierarchies. A first vertical cache hierarchy records information indicating communication of cache lines between the first vertical cache hierarchy and others of the plurality of vertical cache hierarchies. Based on selection of a victim cache line for eviction, the first vertical cache hierarchy determines, based on the recorded information, whether to perform a lateral castout of the victim cache line to another of the plurality of vertical cache hierarchies rather than to system memory and selects, based on the recorded information, a second vertical cache hierarchy among the plurality of vertical cache hierarchies as a recipient of the victim cache line via a lateral castout. Based on the determination, the first vertical cache hierarchy performs a castout of the victim cache line.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: January 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Bernard C. Drerup, Guy L. Guthrie, Jeffrey A. Stuecheli, Alexander Michael Taft, Derek E. Williams
  • Patent number: 11556474
    Abstract: Embodiments are provided for an integrated semi-inclusive hierarchical metadata predictor. A hit in a second-level structure is determined, the hit being associated with a line of metadata in the second-level structure. Responsive to determining that a victim line of metadata in a first-level structure meets at least one condition, the victim line of metadata is stored in the second-level structure. The line of metadata from the second-level structure is stored in a first-level structure to be utilized to facilitate performance of a processor, the line of metadata from the second-level structure including entries for a plurality of instructions.
    Type: Grant
    Filed: August 19, 2021
    Date of Patent: January 17, 2023
    Assignee: International Business Machines Corporation
    Inventors: James Bonanno, Adam Benjamin Collura, Edward Thomas Malley, Brian Robert Prasky
  • Patent number: 11520701
    Abstract: Methods and systems associated with caches are disclosed. One disclosed system includes at least one memory storing at least two data structures. The at least two data structures include a first data structure and a second data structure. The system also includes at least two caches with a first cache which caches the first data structure and a second cache which caches the second data structure. The system also includes a controller communicatively coupled to the at least two caches. The controller separately configures the first cache based on the first data structure and the second cache based on the second data structure. The system also comprises at least one processor communicatively coupled to the at least two caches. The processor accesses each of the at least two data structures using the at least two caches and during the execution of a complex computation.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: December 6, 2022
    Assignee: Tenstorrent Inc.
    Inventors: Ljubisa Bajic, Davor Capalija, Ivan Matosevic, Alex Cejkov
  • Patent number: 11520585
    Abstract: In at least one embodiment, a processing unit includes a processor core and a vertical cache hierarchy including at least a store-through upper-level cache and a store-in lower-level cache. The upper-level cache includes a data array and an effective address (EA) directory. The processor core includes an execution unit, an address translation unit, and a prefetch unit configured to initiate allocation of a directory entry in the EA directory for a store target EA without prefetching a cache line of data into the corresponding data entry in the data array. The processor core caches EA-to-RA address translation information for the store target EA in the directory entry, such that a subsequent demand store access that hits in the directory entry can avoid a performance penalty associated with address translation by the translation unit.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: December 6, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Brian W. Thompto, George W. Rohrbaugh, III, Mohit Karve, Vivek Britto
  • Patent number: 11474944
    Abstract: This invention involves a cache system in a digital data processing apparatus including: a central processing unit core; a level one instruction cache; and a level two cache. The cache lines in the second level cache are twice the size of the cache lines in the first level instruction cache. The central processing unit core requests additional program instructions when needed via a request address. Upon a miss in the level one instruction cache that causes a hit in the upper half of a level two cache line, the level two cache supplies the upper half of the level two cache line to the level one instruction cache. On a following level two cache memory cycle, the level two cache supplies the lower half of the cache line to the level one instruction cache. This cache technique thus prefetches the lower half of the level two cache line employing fewer resources than an ordinary prefetch.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: October 18, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Oluleye Olorode, Ramakrishnan Venkatasubramanian, Hung Ong
  • Patent number: 11467937
    Abstract: An electronic device includes a cache with a cache controller and a cache memory. The electronic device also includes a cache policy manager. The cache policy manager causes the cache controller to use two or more cache policies for cache operations in each of multiple test regions in the cache memory, with different configuration values for the two or more cache policies being used in each test region. The cache policy manager selects a selected configuration value for at least one cache policy of the two or more cache policies based on performance metrics for cache operations while using the different configuration values for the two or more cache policies in the test regions. The cache policy manager causes the cache controller to use the selected configuration value when using the at least one cache policy for cache operations in a main region of the cache memory.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: October 11, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kelley, Paul Moyer
  • Patent number: 11461011
    Abstract: The present disclosure provides techniques for implementing an apparatus, which includes processing circuitry that performs an operation based on a target data block, a processor-side cache that implements a first cache line, a memory-side cache that implements a second cache line having a line width greater than that of the first cache line, and a memory array. The apparatus includes one or more memory controllers that, when the target data block results in a cache miss, determine a row address that identifies a memory cell row as storing the target data block, instruct the memory array to successively output multiple data blocks from the memory cell row to enable the memory-side cache to store each of the multiple data blocks in the second cache line, and instruct the memory-side cache to output the target data block to a coherency bus to enable the processing circuitry to perform the operation based on the target data block.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: October 4, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Richard C. Murphy, Anton Korzh, Stephen S. Pawlowski
  • Patent number: 11455253
    Abstract: An apparatus comprises first-level and second-level set-associative caches each comprising the same number of sets of cache entries. Indexing circuitry generates, based on a lookup address, a set index identifying which set of the first-level set-associative cache or the second-level set-associative cache is a selected set of cache entries to be looked up for information associated with the lookup address. The indexing circuitry generates the set index using an indexing scheme which maps the lookup address to the same set index for both the first-level set-associative cache and the second-level set-associative cache. This can make migration of cached information between the cache levels more efficient, which can be particularly useful for caches with high access frequency, such as branch target buffers for a branch predictor.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: September 27, 2022
    Assignee: Arm Limited
    Inventors: Yasuo Ishii, James David Dundas, Chang Joo Lee, Muhammad Umar Farooq
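A minimal sketch of the shared indexing scheme: both cache levels have the same number of sets, so one index function places an entry in the same set at either level, and migration needs no re-indexing. The parameters (64-byte lines, 256 sets) are assumptions, not taken from the patent:

```python
LINE_BYTES = 64
NUM_SETS = 256   # assumed: same set count at both levels

def set_index(address: int) -> int:
    """Map a lookup address to a set index; identical for both cache levels."""
    return (address // LINE_BYTES) % NUM_SETS

def migrate(l1_sets, l2_sets, address):
    """Move a cached entry between levels without recomputing placement."""
    idx = set_index(address)            # the same index is valid in both levels
    entry = l1_sets[idx].pop(address)
    l2_sets[idx][address] = entry
    return idx
```

Because the index never changes across levels, a migrating entry touches only the two corresponding sets, which is what makes the scheme attractive for high-frequency structures like branch target buffers.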
  • Patent number: 11442851
    Abstract: A processing-in-memory includes: a memory; a register configured to store offset information; and an internal processor configured to: receive an instruction and a reference physical address of the memory from a memory controller, determine an offset physical address of the memory based on the offset information, determine a target physical address of the memory based on the reference physical address and the offset physical address, and perform the instruction by accessing the target physical address.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: September 13, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hosang Yoon, Seungwon Lee
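The address resolution described in the abstract can be modeled directly; this is an illustrative sketch with assumed names, not Samsung's design:

```python
class ProcessingInMemory:
    """Toy PIM device: the host supplies a reference physical address, and
    the internal processor adds a device-side offset register to form the
    target physical address before executing the instruction locally."""

    def __init__(self, size: int):
        self.memory = [0] * size
        self.offset_register = 0      # offset information stored in the register

    def set_offset(self, offset: int):
        self.offset_register = offset

    def execute(self, instruction: str, reference_addr: int, value=None):
        target = reference_addr + self.offset_register   # target physical address
        if instruction == "store":
            self.memory[target] = value
        elif instruction == "load":
            return self.memory[target]
```

Keeping the offset inside the device means the memory controller can issue instructions against a fixed reference address while the internal processor redirects them.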
  • Patent number: 11366765
    Abstract: In an approach to optimizing metadata management to boost overall system performance, a cache for a storage system is initialized. Responsive to receiving a cache hit from a host cache during a host I/O operation, a first metadata of a plurality of metadata is transferred to a storage cache, where the first metadata is associated with a user data from the host I/O operation, and further wherein the first metadata is deleted from the host cache. Responsive to determining that the storage cache is full, a second metadata of the plurality of metadata is destaged from the storage cache, where the second metadata is destaged by moving the second metadata to the host cache, and further wherein the second metadata is deleted from the storage cache.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: June 21, 2022
    Assignee: International Business Machines Corporation
    Inventors: Qiang Xie, Hui Zhang, Hong Qing Zhou, Yongjie Gong, Ping Hp He
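The metadata shuttling between the two caches can be sketched as follows. The FIFO destage order and all names are assumptions for illustration; the patent does not specify the eviction policy:

```python
from collections import OrderedDict

def on_host_cache_hit(key, host_cache, storage_cache, capacity):
    """On a host-cache hit, move the hit item's metadata to the storage
    cache (deleting it from the host cache); if the storage cache is full,
    destage its oldest metadata back to the host cache first."""
    meta = host_cache.pop(key)                     # deleted from the host cache
    if len(storage_cache) >= capacity:
        old_key, old_meta = storage_cache.popitem(last=False)   # oldest entry
        host_cache[old_key] = old_meta             # destaged to the host cache
    storage_cache[key] = meta
```

The two caches thus act as overflow for each other, keeping hot metadata close to where it was last used.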
  • Patent number: 11354243
    Abstract: A processing unit for a data processing system includes a processor core that issues memory access requests and a cache memory coupled to the processor core. The cache memory includes a reservation circuit that tracks reservations established by the processor core via load-reserve requests and a plurality of read-claim (RC) state machines for servicing memory access requests of the processor core. The cache memory, responsive to receipt from the processor core of a store-conditional request specifying a store target address, allocates an RC state machine among the plurality of RC state machines to process the store-conditional request and transfers responsibility for tracking a reservation for the store target address from the reservation circuit to the RC state machine.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: June 7, 2022
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen, Sanjeev Ghai, Luke Murray
  • Patent number: 11321201
    Abstract: Provided are a computer program product, system, and method for using a mirroring cache list to mirror modified tracks for a primary storage in a cache to a secondary storage. Indication is made of a modified track for the primary storage stored in the cache in a mirroring cache list. The mirroring cache list is processed to select modified tracks in the cache to transfer to the secondary storage that have not yet been transferred. The selected modified tracks are transferred to the secondary storage. Indication of a modified track is removed from the mirroring cache list in response to demoting the modified track from the cache.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: May 3, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh Mohan Gupta, Kevin J. Ash, Kyler A. Anderson, Matthew J. Kalos
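A minimal sketch of the mirroring cache list's lifecycle (names assumed): modified tracks are recorded, drained to the secondary storage, and dropped from the list if the track is demoted first:

```python
class MirroringCacheList:
    """Tracks modified cache tracks that still need mirroring to secondary."""

    def __init__(self):
        self.pending = []                # modified tracks not yet transferred

    def record_write(self, track):
        if track not in self.pending:
            self.pending.append(track)   # indicate the modified track

    def drain(self, secondary):
        """Transfer not-yet-mirrored tracks to the secondary storage."""
        while self.pending:
            secondary.append(self.pending.pop(0))

    def on_demote(self, track):
        if track in self.pending:
            self.pending.remove(track)   # demoted from cache: drop the entry
```

Removing entries on demotion keeps the list bounded by cache contents rather than by total write traffic.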
  • Patent number: 11307836
    Abstract: Disclosed are a general machine learning model generation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring task parameters of a machine learning task (S1201); performing classification processing on the task parameters to obtain task instructions and model parameters (S1202); aggregating the task instructions and the model parameters according to a data type to obtain stack data and heap data (S1203); and integrating the stack data and the heap data to obtain a general machine learning model (S1204). By means of the method, compiled results of a corresponding general model in the running of an algorithm can be directly executed, which avoids repetitive compilation, thus greatly improving the efficiency of machine learning algorithm implementation and shortening the time from compilation to obtaining execution results.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: April 19, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Weijian Du, Linyang Wu, Xunyu Chen
  • Patent number: 10685718
    Abstract: Disclosed in some examples are methods, systems, memory devices, and machine-readable mediums which increase read throughput by introducing a delay prior to issuing a command to increase the chances that read commands can be executed in parallel. Upon receipt of a read command, if there are no other read commands in the command queue for a given portion (e.g., plane or plane group) of the die, the controller can delay issuing the read command for a delay period using a timer. If, during the delay period, an eligible read command is received, the delayed command and the newly received command are both issued in parallel using a multi-plane read. If no eligible read command is received during the delay period, the read command is issued after the delay period expires.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: June 16, 2020
    Assignee: Micron Technology, Inc.
    Inventor: David Aaron Palmer
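The delay-then-coalesce policy can be modeled as a simple scheduler. This toy assumes discrete arrival times and treats any read to a different plane as eligible; a same-plane arrival during the window simply flushes the held read, which simplifies the real controller's behavior:

```python
def schedule_reads(arrivals, delay: int):
    """arrivals: list of (time, plane), in time order.
    A read to an idle queue is held for `delay`; if an eligible read to a
    different plane arrives in that window, both issue as one multi-plane
    batch. Returns the list of issued batches."""
    batches, pending = [], None          # pending = (deadline, [held reads])
    for time, plane in arrivals:
        if pending and time <= pending[0] and plane != pending[1][0][1]:
            pending[1].append((time, plane))   # coalesce: multi-plane read
            batches.append(pending[1])
            pending = None
        else:
            if pending:
                batches.append(pending[1])     # window closed: issue alone
            pending = (time + delay, [(time, plane)])
    if pending:
        batches.append(pending[1])
    return batches
```

The trade-off is a small added latency on isolated reads in exchange for higher throughput when reads to different planes cluster.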
  • Patent number: 10545820
    Abstract: A memory device, a memory system, and a method of operating the same. The memory device includes a memory cell array including a plurality of memory cells and a write command determination unit (WCDU) that determines whether a write command input to the memory device is (to be) accompanied by a masking signal. The WCDU produces a first control signal if the input write command is (to be) accompanied by a masking signal. A data masking unit combines a portion of read data read from the memory cell array with a corresponding portion of input write data corresponding to the write command and generates modulation data in response to the first control signal. An error correction code (ECC) engine generates parity of the modulation data.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: January 28, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Jong-Wook Park
  • Patent number: 10037247
    Abstract: The memory system may include a memory device including a plurality of sub-memory devices coupled to a channel; and a controller suitable for controlling the memory device to store first data into a selected sub-memory device and at least one idle sub-memory device among the sub-memory devices during a first program operation to the selected sub-memory device, and to perform a second program operation to the selected sub-memory device with the first data stored in the idle sub-memory device when the first program operation to the selected sub-memory device fails.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: July 31, 2018
    Assignee: SK Hynix Inc.
    Inventors: Sung Yeob Cho, Dong Yeob Chun
  • Patent number: 9672041
    Abstract: A method for compressing instructions is provided, which includes the following steps. Analyze the program code to be executed by a processor to find one or more instruction groups in the program code according to a preset condition. Each of the instruction groups includes one or more instructions in sequential order. Sort the one or more instruction groups according to a cost function of each of the one or more instruction groups. Put the first X of the sorted one or more instruction groups into an instruction table. X is a value determined according to the cost function. Replace each of the one or more instruction groups in the program code that are put into the instruction table with a corresponding execution-on-instruction-table (EIT) instruction. The EIT instruction has a parameter referring to the corresponding instruction group in the instruction table.
    Type: Grant
    Filed: August 1, 2013
    Date of Patent: June 6, 2017
    Assignee: ANDES TECHNOLOGY CORPORATION
    Inventors: Wei-Hao Chiao, Hong-Men Su, Haw-Luen Tsai
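The table-based compression can be sketched as below. Raw group frequency stands in for the patent's cost function, and fixed-length groups are an assumed simplification:

```python
from collections import Counter

def compress(program, group_len=2, table_size=1):
    """Place the top-X most frequent instruction groups in an instruction
    table and replace each occurrence in the program with an EIT reference
    whose parameter indexes the table."""
    groups = [tuple(program[i:i + group_len])
              for i in range(len(program) - group_len + 1)]
    table = [g for g, _ in Counter(groups).most_common(table_size)]
    out, i = [], 0
    while i < len(program):
        g = tuple(program[i:i + group_len])
        if g in table:
            out.append(("EIT", table.index(g)))  # parameter refers to the table
            i += group_len
        else:
            out.append(program[i])
            i += 1
    return out, table
```

Each EIT instruction costs one slot but stands for a whole group, so compression wins whenever a group recurs often enough to amortize its table entry.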
  • Patent number: 9645936
    Abstract: Systems, methods, and other embodiments associated with providing limited writing in a memory hierarchy are described. According to one embodiment, an apparatus includes a plurality of memory devices sequentially configured in a memory hierarchy. The apparatus also includes a first logic configured to execute a first command to initiate a sequence of write operations to write data to the plurality of memory devices in the memory hierarchy via propagation of the data sequentially through the memory hierarchy. The apparatus further includes a second logic configured to execute a second command to initiate an interruption of the sequence of write operations by indicating to at least one memory device of the plurality of memory devices to terminate the propagation of the data through the memory hierarchy prior to completing the propagation.
    Type: Grant
    Filed: March 26, 2015
    Date of Patent: May 9, 2017
    Assignee: MARVELL INTERNATIONAL LTD.
    Inventors: Kim Schuttenberg, Richard Bryant
  • Patent number: 9032157
    Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache comprising a plurality of cache rows (1210), each cache row comprising a cache line (1214) and an image modification flag (1217) indicating a modification of said cache line caused by the running of the virtual machine; and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit comprises a replication manager adapted to define a log (200) in the memory prior to running the virtual machine in said first operating mode; and said cache further includes a cache controller (122) adapted to periodically check said image modification flags; write only the memory address of the flagged cache lines in the defined log and subsequently clear the image modification flags.
    Type: Grant
    Filed: December 11, 2012
    Date of Patent: May 12, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy L. Guthrie, Geraint North, William J. Starke, Phillip G. Williams
  • Patent number: 8972663
    Abstract: A method for cache coherence, including: broadcasting, by a requester cache (RC) over a partially-ordered request network (RN), a peer-to-peer (P2P) request for a cacheline to a plurality of slave caches; receiving, by the RC and over the RN while the P2P request is pending, a forwarded request for the cacheline from a gateway; receiving, by the RC and after receiving the forwarded request, a plurality of responses to the P2P request from the plurality of slave caches; setting an intra-processor state of the cacheline in the RC, wherein the intra-processor state also specifies an inter-processor state of the cacheline; and issuing, by the RC, a response to the forwarded request after setting the intra-processor state and after the P2P request is complete; and modifying, by the RC, the intra-processor state in response to issuing the response to the forwarded request.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: March 3, 2015
    Assignee: Oracle International Corporation
    Inventors: Paul N. Loewenstein, Stephen E. Phillips, David Richard Smentek, Connie Wai Mun Cheung, Serena Wing Yee Leung, Damien Walker, Ramaswamy Sivaramakrishnan
  • Patent number: 8966178
    Abstract: Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Karl A. Nielsen
  • Patent number: 8959279
    Abstract: Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.
    Type: Grant
    Filed: May 4, 2012
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Karl A. Nielsen
  • Patent number: 8943287
    Abstract: A multi-core processor system includes a number of cores, a memory system, and a common access bus. Each core includes a core processor; a dedicated core cache operatively connected to the core processor; and, a core processor rate limiter operatively connected to the dedicated core cache. The memory system includes physical memory; a memory controller connected to the physical memory; and, a dedicated memory cache connected to the memory controller. The common access bus interconnects the cores and the memory system. The core processor rate limiters are configured to constrain the rate at which data is accessed by each respective core processor from the memory system so that each core processor memory access is capable of being limited to an expected value.
    Type: Grant
    Filed: July 17, 2012
    Date of Patent: January 27, 2015
    Assignee: Rockwell Collins, Inc.
    Inventors: David A. Miller, David C. Matthews
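A token-bucket limiter is one common way to constrain a core's memory access rate to an expected value; the patent does not name a mechanism, so the bucket model and its parameters here are assumptions:

```python
class CoreRateLimiter:
    """Per-core limiter: each memory access spends one token; tokens refill
    at a fixed rate, bounding the core's sustained access rate while the
    burst size bounds short-term spikes on the common access bus."""

    def __init__(self, tokens_per_tick: int, burst: int):
        self.rate, self.burst = tokens_per_tick, burst
        self.tokens = burst

    def tick(self):
        self.tokens = min(self.burst, self.tokens + self.rate)

    def try_access(self) -> bool:
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # access deferred: budget exhausted
```

Bounding each core's rate makes worst-case interference on the shared bus analyzable, which is the usual motivation in avionics-grade multi-core designs.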
  • Patent number: 8924623
    Abstract: A method for managing multi-layered data structures in a pipelined memory architecture, comprising the steps of: providing a multi-level data structure where each level corresponds to a memory access; and storing each level in a separate memory block with respect to the other levels. In this way, a more efficient usage of memory is achieved.
    Type: Grant
    Filed: July 28, 2010
    Date of Patent: December 30, 2014
    Assignee: Oricane AB
    Inventor: Mikael Sundström
  • Patent number: 8909866
    Abstract: A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the access request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request.
    Type: Grant
    Filed: November 6, 2012
    Date of Patent: December 9, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kalamatianos, Ravindra Nath Bhargava, Ramkumar Jayaseelan
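The routing decision described above can be sketched as a small policy function; the thresholds and the flat per-level MAB accounting are assumptions for illustration:

```python
def route_prefetch(level, mab_used, mab_slots, confidence,
                   fullness_threshold=0.75, min_confidence=0.5):
    """Return the cache level that should receive a prefetch targeted at
    `level`: transfer it one level down the hierarchy when that level's
    miss-address-buffer slots are too full, or when the prefetch's
    confidence is below the level's minimum."""
    if mab_used[level] / mab_slots[level] > fullness_threshold:
        return level + 1          # MAB pressure: demote to next lower level
    if confidence < min_confidence:
        return level + 1          # low-confidence prefetch also goes lower
    return level
```

Demoted prefetches still land in a lower-level cache, so the data remains closer than memory and can be promoted on a subsequent demand access.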
  • Patent number: 8856444
    Abstract: Data caching for use in a computer system including a lower cache memory and a higher cache memory. The higher cache memory receives a fetch request and then determines the state of the entry to be replaced next. If that state indicates that the entry is exclusively owned or modified, the state is changed such that a following cache access is processed at a higher speed compared to an access processed if the state stayed unchanged.
    Type: Grant
    Filed: April 28, 2012
    Date of Patent: October 7, 2014
    Assignee: International Business Machines Corporation
    Inventors: Christian Habermann, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann