Cache With Interleaved Addressing (EPO) Patents (Class 711/E12.047)
  • Patent number: 11914518
    Abstract: A cache is provided having a plurality of entries for storing data. In response to a given access request, lookup circuitry performs a lookup operation in the cache to determine whether one of the entries in the cache is allocated to store data associated with the memory address indicated by the given access request, with a hit indication or a miss indication being generated dependent on the outcome of that lookup operation. During a single lookup period, the lookup circuitry is configured to perform lookup operations in parallel for up to N access requests. In addition, allocation circuitry is provided that is able to determine, during the single lookup period, at least N candidate entries for allocation from amongst the plurality of entries, and to cause one of the candidate entries to be allocated for each of the up to N access requests for which the lookup circuitry generates a miss indication.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: February 27, 2024
    Assignee: Arm Limited
    Inventors: Yoav Asher Levy, Elad Kadosh, Jakob Axel Fries, Lior-Levi Bandal
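    A minimal Python sketch of the mechanism in the abstract above, offered as an illustration only: the entry count, the victim-selection policy, and all names are assumptions, not Arm's design. It shows up to N lookups handled in one lookup period while at least N candidate entries are pre-selected, so every miss can allocate immediately.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Entry:
          tag: Optional[int] = None          # None marks an invalid (free) entry
          age: int = 0                       # crude age counter used for victim choice

      class ParallelLookupCache:
          def __init__(self, num_entries: int, n_ports: int):
              self.entries = [Entry() for _ in range(num_entries)]
              self.n_ports = n_ports         # up to N lookups per lookup period

          def _pick_candidates(self, n: int) -> list:
              # Allocation circuitry: choose at least n candidates up front,
              # invalid entries first, then the oldest valid ones.
              ranked = sorted(range(len(self.entries)),
                              key=lambda i: (self.entries[i].tag is not None,
                                             -self.entries[i].age))
              return ranked[:n]

          def lookup_period(self, addresses: list) -> list:
              assert len(addresses) <= self.n_ports
              candidates = self._pick_candidates(len(addresses))
              results = []
              for addr in addresses:
                  for e in self.entries:
                      e.age += 1
                  hit = next((e for e in self.entries if e.tag == addr), None)
                  if hit is not None:
                      hit.age = 0
                      results.append(f"hit  {addr:#x}")
                  else:
                      victim = self.entries[candidates.pop(0)]
                      victim.tag, victim.age = addr, 0
                      results.append(f"miss {addr:#x} -> allocated")
              return results

      cache = ParallelLookupCache(num_entries=8, n_ports=4)
      print(cache.lookup_period([0x10, 0x20, 0x30, 0x10]))   # last request hits the line just allocated

    Choosing the candidates before the hit/miss outcomes are known mirrors the abstract's point: each miss can be given an entry within the same lookup period rather than in a later pass.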
  • Patent number: 11855889
    Abstract: An information processing method includes: receiving, from a requester, a search request directed at a memory circuit that searches information stored in a memory; storing order information indicating the order in which requests are issued; determining, based on the request and a predetermined condition under which the memory circuit should not perform a search, whether to have the memory circuit perform the search; creating a predetermined response when the memory circuit is not made to perform the search; and returning the memory circuit's responses and the predetermined responses to the requester in issue order, based on the order information.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: December 26, 2023
    Assignee: FUJITSU LIMITED
    Inventor: Takashi Shimizu
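    A small Python sketch of the ordering idea above, illustrative only (the bypass condition, data structures, and names are assumptions, not Fujitsu's design): some requests are answered with a predetermined response instead of being sent to the search circuit, yet every response is returned in the order the requests were issued.

      from collections import deque

      class SearchFrontEnd:
          def __init__(self, table, skip_keys):
              self.table = table            # stands in for the memory being searched
              self.skip_keys = skip_keys    # assumed "predetermined requirement" not to search
              self.order = deque()          # order information: request ids in issue order
              self.ready = {}               # responses produced so far, keyed by request id

          def issue(self, req_id, key):
              self.order.append(req_id)     # record the issue order first
              if key in self.skip_keys:
                  # Do not disturb the search circuit; create a predetermined response.
                  self.ready[req_id] = (key, "predetermined: not-found")
              else:
                  # The real search, modelled here as an immediate dictionary lookup.
                  self.ready[req_id] = (key, self.table.get(key, "not-found"))

          def return_in_order(self):
              # Hand responses back strictly in issue order, per the stored order information.
              while self.order and self.order[0] in self.ready:
                  req_id = self.order.popleft()
                  yield req_id, self.ready.pop(req_id)

      fe = SearchFrontEnd(table={"a": 1, "b": 2}, skip_keys={"b"})
      for i, key in enumerate(["a", "b", "c"]):
          fe.issue(i, key)
      print(list(fe.return_in_order()))     # [(0, ('a', 1)), (1, ('b', 'predetermined: not-found')), (2, ('c', 'not-found'))]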
  • Patent number: 11615030
    Abstract: According to one embodiment, a cache memory system includes a cache memory and a cache controller. The cache memory can store first data to be read or written by a processor. The cache controller is configured to execute a refresh. The refresh includes reading the first data stored in the cache memory and writing the read first data to the cache memory. When executing the refresh, the cache controller is configured to exchange the first data stored in a first area of the cache memory for second data stored in a second area of the cache memory.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: March 28, 2023
    Assignee: Kioxia Corporation
    Inventor: Shohei Onishi
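    A toy Python model of the refresh-with-exchange behaviour described above, illustrative only (the controller is reduced to a Python list, and the area indices are assumptions): the refresh reads stored data back and rewrites it, and while doing so exchanges the contents of two areas.

      class RefreshedCache:
          def __init__(self, areas):
              self.areas = list(areas)       # each element stands for the data held in one area

          def refresh_and_exchange(self, first_area: int, second_area: int):
              # Refresh: read the stored data...
              first_data = self.areas[first_area]
              second_data = self.areas[second_area]
              # ...and write it back, with the two areas exchanged.
              self.areas[first_area] = second_data
              self.areas[second_area] = first_data

      cache = RefreshedCache(["A-data", "B-data", "C-data"])
      cache.refresh_and_exchange(0, 1)
      print(cache.areas)                     # ['B-data', 'A-data', 'C-data']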
  • Patent number: 11573899
    Abstract: Low latency is provided in a non-uniform cache access (“NUCA”) cache in a computing environment. A first compressed cache line is interleaved with a second compressed cache line in a single cache line of the NUCA cache: data of the first compressed cache line is stored in one or more even sectors of the single cache line, spilling into zero or more odd sectors once it fills the even sectors, while data of the second compressed cache line is stored in one or more odd sectors, spilling into zero or more even sectors once it fills the odd sectors.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: February 7, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bulent Abali, Alper Buyuktosunoglu, Brian Robert Prasky, Deanna Postles Dunn Berger
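    A Python sketch of the even/odd sector interleave above, with an assumed 8-sector line and an assumed spill order (not IBM's implementation): the first compressed line fills even sectors and spills into odd ones, the second fills odd sectors and spills into even ones.

      def interleave(line_a_chunks, line_b_chunks, total_sectors=8):
          physical = [None] * total_sectors
          even = [i for i in range(total_sectors) if i % 2 == 0]
          odd = [i for i in range(total_sectors) if i % 2 == 1]

          def place(chunks, primary, secondary, name):
              slots = primary + secondary          # spill into the other parity after primary fills
              assert len(chunks) <= len(slots), "compressed line too large for the physical line"
              for chunk, slot in zip(chunks, slots):
                  assert physical[slot] is None, "the two compressed lines do not fit together"
                  physical[slot] = (name, chunk)

          place(line_a_chunks, even, odd, "A")     # first compressed line: even sectors first
          place(line_b_chunks, odd, even, "B")     # second compressed line: odd sectors first
          return physical

      # Line A compresses into 3 sectors and line B into 4, so both fit in one 8-sector line.
      print(interleave(["a0", "a1", "a2"], ["b0", "b1", "b2", "b3"]))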
  • Patent number: 11456034
    Abstract: Methods, systems, and devices for fully associative cache management are described. A memory subsystem may receive an access command for storing a first data word in a storage component associated with an address space. The memory subsystem may include a fully associative cache for storing the data words associated with the storage component. The memory subsystem may determine an address within the cache to store the first data word. For example, the memory subsystem may determine an address of the cache indicated by an address pointer (e.g., based on the order of the addresses) and determine a quantity of accesses associated with the data word stored in that cache address. Based on the indicated cache address and the quantity of accesses, the memory subsystem may store the first data word in the indicated cache address or a second cache address sequential to the indicated cache address.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: September 27, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Joseph T. Pawlowski
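    A hedged Python sketch of the placement policy above (the 'hot' threshold, the single-step fallback, and all names are assumptions, not Micron's controller): an address pointer walks the fully associative cache in address order, and a new data word replaces the pointed-to entry unless that entry has been accessed often, in which case the word goes to the next sequential cache address.

      class FullyAssociativeCache:
          def __init__(self, size: int, hot_threshold: int = 2):
              self.size = size
              self.hot_threshold = hot_threshold   # assumed cutoff for "frequently accessed"
              self.entries = [None] * size         # each entry: [data_word, access_count] or None
              self.pointer = 0                     # address pointer, advanced in address order

          def access(self, index: int):
              if self.entries[index] is not None:
                  self.entries[index][1] += 1      # track accesses to guide later placement

          def store(self, data_word):
              candidate = self.pointer
              entry = self.entries[candidate]
              if entry is not None and entry[1] >= self.hot_threshold:
                  # The pointed-to word is frequently accessed: keep it and use the
                  # cache address sequential to the indicated one instead.
                  candidate = (candidate + 1) % self.size
              self.entries[candidate] = [data_word, 0]
              self.pointer = (candidate + 1) % self.size
              return candidate

      cache = FullyAssociativeCache(size=4)
      for word in ["w0", "w1", "w2", "w3"]:
          cache.store(word)
      cache.access(0); cache.access(0)             # entry 0 becomes frequently accessed
      print(cache.store("w4"))                     # 1: entry 0 is preserved, w4 lands next to it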
  • Patent number: 8127082
    Abstract: A method and apparatus for allowing multiple devices to access an address translation cache while cache maintenance operations are occurring at the same time. By interleaving commands that require address translation with maintenance operations that may normally take many cycles, address translation requests gain faster access to the address translation cache than if maintenance operations were allowed to stall those commands until the maintenance operation completed.
    Type: Grant
    Filed: February 1, 2006
    Date of Patent: February 28, 2012
    Assignee: International Business Machines Corporation
    Inventors: Chad B. McBride, Andrew H. Wottreng, John D. Irish
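    A simplified Python scheduling sketch of the idea above, not IBM's hardware: a long maintenance operation on the translation cache is broken into steps, and pending translation lookups are slotted in between those steps instead of waiting for the whole operation to finish.

      from collections import deque

      def interleave_schedule(translation_requests, maintenance_steps):
          lookups = deque(translation_requests)
          maintenance = deque(maintenance_steps)
          timeline = []
          while lookups or maintenance:
              if lookups:
                  timeline.append(f"lookup {lookups.popleft()}")      # translations are not stalled
              if maintenance:
                  timeline.append(f"maint  {maintenance.popleft()}")  # one maintenance step per slot
          return timeline

      # Three translation requests arrive while a four-step invalidation walk is in progress.
      for slot in interleave_schedule(["va0", "va1", "va2"], ["inv0", "inv1", "inv2", "inv3"]):
          print(slot)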
  • Patent number: 7996646
    Abstract: In one embodiment, an apparatus comprises a queue comprising a plurality of entries and a control unit coupled to the queue. The control unit is configured to allocate a first queue entry to a store memory operation, and is configured to write a first even offset, a first even mask, a first odd offset, and a first odd mask corresponding to the store memory operation to the first entry. A group of contiguous memory locations is logically divided into alternately-addressed even and odd byte ranges. A given store memory operation writes at most one even byte range and one adjacent odd byte range. The first even offset identifies a first even byte range that is potentially written by the store memory operation, and the first odd offset identifies a first odd byte range that is potentially written by the store memory operation.
    Type: Grant
    Filed: March 10, 2010
    Date of Patent: August 9, 2011
    Assignee: Apple Inc.
    Inventors: Tse-yu Yeh, Daniel C. Murray, Po-Yung Chang, Anup S. Mehta
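    A worked Python example of the even/odd offset-and-mask bookkeeping above, assuming 8-byte ranges and a simple encoding (the real widths and encodings are not taken from the patent): a store is split into the even byte range and the adjacent odd byte range it may write, each with a byte mask.

      RANGE_BYTES = 8                      # assumed size of each even/odd byte range

      def store_entry(addr: int, size: int):
          """Derive even/odd range offsets and byte masks for a store of `size` bytes at `addr`."""
          assert 1 <= size <= RANGE_BYTES, "a store touches at most one even and one adjacent odd range"
          even_off = odd_off = None
          even_mask = odd_mask = 0
          for byte_addr in range(addr, addr + size):
              rng = byte_addr // RANGE_BYTES             # which byte range holds this byte
              bit = 1 << (byte_addr % RANGE_BYTES)       # the byte's position within its range
              if rng % 2 == 0:                           # even byte range
                  even_off, even_mask = rng // 2, even_mask | bit
              else:                                      # odd byte range
                  odd_off, odd_mask = rng // 2, odd_mask | bit
          return {"even_offset": even_off, "even_mask": f"{even_mask:08b}",
                  "odd_offset": odd_off, "odd_mask": f"{odd_mask:08b}"}

      # A 4-byte store at address 6 ends even range 0 and starts the adjacent odd range 0.
      print(store_entry(addr=6, size=4))
      # {'even_offset': 0, 'even_mask': '11000000', 'odd_offset': 0, 'odd_mask': '00000011'}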
  • Patent number: 7793048
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment, a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data buses.
    Type: Grant
    Filed: September 9, 2008
    Date of Patent: September 7, 2010
    Assignee: International Business Machines Corporation
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
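    A cycle-level Python toy of the overlapped, interleaved fill above, illustrative only (the beat-to-bus mapping is an assumption, not IBM's datapath): beats of two requested memory values arrive over two buses whose time spans overlap, and both cache lines are assembled concurrently.

      def interleaved_fill(value_a_beats, value_b_beats):
          line_a, line_b = [], []
          # Interleave beats of value A and value B in time, then spread them over two buses.
          beats = []
          for i in range(max(len(value_a_beats), len(value_b_beats))):
              if i < len(value_a_beats):
                  beats.append(("A", value_a_beats[i]))
              if i < len(value_b_beats):
                  beats.append(("B", value_b_beats[i]))
          for cycle in range(0, len(beats), 2):
              bus0 = beats[cycle]
              bus1 = beats[cycle + 1] if cycle + 1 < len(beats) else None
              for which, data in filter(None, (bus0, bus1)):
                  (line_a if which == "A" else line_b).append(data)
              print(f"cycle {cycle // 2}: bus0={bus0}, bus1={bus1}")
          return line_a, line_b

      line_a, line_b = interleaved_fill(["A0", "A1", "A2", "A3"], ["B0", "B1", "B2", "B3"])
      print("cache line A:", line_a)
      print("cache line B:", line_b)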
  • Patent number: 7783834
    Abstract: A cache memory logically associates a cache line with at least two cache sectors of a cache array wherein different sectors have different output latencies and, for a load hit, selectively enables the cache sectors based on their latency to output the cache line over successive clock cycles. Larger wires having a higher transmission speed are preferably used to output the cache line corresponding to the requested memory block. In the illustrative embodiment the cache is arranged with rows and columns of the cache sectors, and a given cache line is spread across sectors in different columns, with at least one portion of the given cache line being located in a first column having a first latency, and another portion of the given cache line being located in a second column having a second latency greater than the first latency.
    Type: Grant
    Filed: November 29, 2007
    Date of Patent: August 24, 2010
    Assignee: International Business Machines Corporation
    Inventors: Leo James Clark, Guy Lynn Guthrie, Kirk Samuel Livingston, William John Starke
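    A short Python scheduling sketch of the latency-based sector enabling above, with assumed latencies (not IBM's floorplan): the portions of one cache line live in columns with different output latencies, and on a load hit each sector is enabled so its portion is driven on the matching cycle, streaming the line out over successive cycles.

      def stream_line(sectors):
          """sectors: list of (column_latency_in_cycles, data_portion) for one cache line."""
          schedule = {}
          for latency, portion in sectors:
              schedule.setdefault(latency, []).append(portion)
          for cycle in sorted(schedule):
              # The near, low-latency column drives its portion first; farther columns
              # follow on later cycles, so no single long wait gates the whole line.
              print(f"cycle {cycle}: output {schedule[cycle]}")

      stream_line([(1, "bytes[0:32]"),      # portion held in the fast, nearby column
                   (2, "bytes[32:64]")])    # portion held in the slower, distant column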
  • Publication number: 20100191893
    Abstract: A method and system for accessing a single port multi-way cache includes an address multiplexer that simultaneously addresses a set of data and a set of program instructions in the multi-way cache. Duplicate output way multiplexers respectively select data and program instructions read from the cache responsive to the address multiplexer.
    Type: Application
    Filed: April 14, 2010
    Publication date: July 29, 2010
    Applicant: Infineon Technologies AG
    Inventor: Klaus Oberlaender
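    A behavioural Python sketch of the shared single-port access above, illustrative only (the shared-set restriction, the tag scheme, and all names are assumptions, not Infineon's circuit): one read of the indexed set feeds two duplicated output way multiplexers, one selecting the program instruction and one selecting the data word. The same idea applies to publication 20080120466 below, which carries the same abstract.

      class SinglePortMultiWayCache:
          def __init__(self, num_sets: int, num_ways: int):
              # ways[set][way] holds (tag, payload); a payload may be an instruction or data.
              self.ways = [[None] * num_ways for _ in range(num_sets)]
              self.num_sets = num_sets

          def fill(self, set_idx, way, tag, payload):
              self.ways[set_idx][way] = (tag, payload)

          def read(self, instr_addr, data_addr):
              # Address multiplexer: both requests index the same set for this single-port
              # read (a restriction assumed for the sketch); the set's ways are read once.
              set_idx = instr_addr % self.num_sets
              assert data_addr % self.num_sets == set_idx, "sketch assumes a shared set index"
              read_ways = self.ways[set_idx]

              def way_mux(tag):                  # duplicated output way multiplexer
                  for entry in read_ways:
                      if entry is not None and entry[0] == tag:
                          return entry[1]
                  return None

              return way_mux(instr_addr), way_mux(data_addr)

      cache = SinglePortMultiWayCache(num_sets=4, num_ways=2)
      cache.fill(1, 0, tag=0x11, payload="add r1, r2")    # an instruction line
      cache.fill(1, 1, tag=0x21, payload=0xDEADBEEF)      # a data line
      print(cache.read(instr_addr=0x11, data_addr=0x21))  # ('add r1, r2', 3735928559)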
  • Patent number: 7721066
    Abstract: In one embodiment, an apparatus comprises a queue comprising a plurality of entries and a control unit coupled to the queue. The control unit is configured to allocate a first queue entry to a store memory operation, and is configured to write a first even offset, a first even mask, a first odd offset, and a first odd mask corresponding to the store memory operation to the first entry. A group of contiguous memory locations is logically divided into alternately-addressed even and odd byte ranges. A given store memory operation writes at most one even byte range and one adjacent odd byte range. The first even offset identifies a first even byte range that is potentially written by the store memory operation, and the first odd offset identifies a first odd byte range that is potentially written by the store memory operation.
    Type: Grant
    Filed: June 5, 2007
    Date of Patent: May 18, 2010
    Assignee: Apple Inc.
    Inventors: Tse-yu Yeh, Daniel C. Murray, Po-Yung Chang, Anup S. Mehta
  • Publication number: 20080120466
    Abstract: A method and system for accessing a single port multi-way cache includes an address multiplexer that simultaneously addresses a set of data and a set of program instructions in the multi-way cache. Duplicate output way multiplexers respectively select data and program instructions read from the cache responsive to the address multiplexer.
    Type: Application
    Filed: November 20, 2006
    Publication date: May 22, 2008
    Inventor: Klaus Oberlaender