Data Cache Being Concurrently Physically Addressed (epo) Patents (Class 711/E12.063)
  • Patent number: 11954356
    Abstract: An apparatus, method, and system for efficiently identifying and tracking cold memory pages are disclosed. In one embodiment, the apparatus includes one or more processor cores that access memory pages stored in a memory by issuing access requests, and a page index bitmap that tracks the accesses the cores make to those pages. The tracked accesses are usable to identify infrequently-accessed memory pages, which are removed from the memory and stored in secondary storage.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: April 9, 2024
    Assignee: Intel Corporation
    Inventors: Qiuxu Zhuo, Anthony Luck
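The tracking scheme this abstract describes can be sketched in software: one bit per page is set on access, and a periodic scan treats pages whose bits stayed clear as cold. This is a minimal illustrative model, not the patented hardware; the page count, scan policy, and class name are assumptions.

```python
class PageIndexBitmap:
    """Illustrative model of a per-page access bitmap (not the patented hardware)."""

    def __init__(self, num_pages):
        self.bits = [0] * num_pages  # one bit per tracked memory page

    def record_access(self, page_index):
        # A core touching a page sets the page's bit.
        self.bits[page_index] = 1

    def harvest_cold_pages(self):
        # Pages never accessed since the last scan are candidates for
        # demotion to secondary storage; clear all bits for the next epoch.
        cold = [i for i, bit in enumerate(self.bits) if bit == 0]
        self.bits = [0] * len(self.bits)
        return cold

bitmap = PageIndexBitmap(num_pages=8)
for page in (0, 3, 3, 7):
    bitmap.record_access(page)
cold_pages = bitmap.harvest_cold_pages()  # pages 1, 2, 4, 5, 6 were never touched
```

The bitmap keeps the hot path cheap (a single bit set per access) and defers all analysis to the scan.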
  • Patent number: 11797665
    Abstract: A processing system includes a branch prediction structure storing information used to predict the outcome of a branch instruction. The processing system also includes a register storing a first identifier of a first process in response to the processing system changing from a first mode that allows the first process to modify the branch prediction structure to a second mode in which the branch prediction structure is not modifiable. The processing system further includes a processor core that selectively flushes the branch prediction structure based on a comparison of a second identifier of a second process and the first identifier stored in the register. The comparison is performed in response to the second process causing a change from the second mode to the first mode.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: October 24, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David Kaplan, Marius Evers
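The selective-flush policy in this abstract can be modeled simply: a register remembers which process last left the mode in which the predictor is modifiable, and the predictor is flushed only when a different process re-enters that mode. The class and method names below are illustrative assumptions.

```python
class BranchPredictorGuard:
    """Sketch of selective flushing: flush the branch prediction structure
    only when a different process re-enters the modifiable mode."""

    def __init__(self):
        self.saved_id = None   # register written on leaving the modifiable mode
        self.flush_count = 0

    def leave_modifiable_mode(self, process_id):
        # Entering the locked mode: remember who last trained the predictor.
        self.saved_id = process_id

    def enter_modifiable_mode(self, process_id):
        # Returning to the modifiable mode: compare IDs, flush on mismatch.
        if self.saved_id is not None and self.saved_id != process_id:
            self.flush_count += 1  # stands in for flushing predictor state
        return self.flush_count

guard = BranchPredictorGuard()
guard.leave_modifiable_mode(process_id=11)
guard.enter_modifiable_mode(process_id=11)            # same process: no flush
guard.leave_modifiable_mode(process_id=11)
flushes = guard.enter_modifiable_mode(process_id=42)  # different process: flush
```

The comparison avoids flushing predictor state on every mode change, which is the performance point of the claim.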
  • Patent number: 11625479
    Abstract: A data cache memory mitigates side channel attacks in a processor that comprises the data cache memory and that includes a translation context (TC). A first input receives a virtual memory address. A second input receives the TC. Control logic, with each allocation of an entry of the data cache memory, uses the received virtual memory address and the received TC to perform the allocation of the entry. The control logic also, with each access of the data cache memory, uses the received virtual memory address and the received TC in a correct determination of whether the access hits in the data cache memory. The TC includes a virtual machine identifier (VMID), or a privilege mode (PM) or a translation regime (TR), or both the VMID and the PM or the TR.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: April 11, 2023
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
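The allocation and hit rules here amount to tagging each entry with the translation context as well as the virtual address, so identical virtual addresses from different contexts never share an entry. A toy software model (names and TC encoding are assumptions, not from the patent):

```python
class TcTaggedCache:
    """Toy model: entries are both allocated and looked up by
    (virtual address, translation context), so two contexts using the
    same virtual address cannot alias to one entry."""

    def __init__(self):
        self.entries = {}

    def allocate(self, vaddr, tc, data):
        self.entries[(vaddr, tc)] = data

    def lookup(self, vaddr, tc):
        # A hit requires the TC to match as well as the address.
        return self.entries.get((vaddr, tc))

cache = TcTaggedCache()
cache.allocate(0x1000, tc=("VMID=1", "PM=user"), data="guest A line")
hit = cache.lookup(0x1000, tc=("VMID=1", "PM=user"))
miss = cache.lookup(0x1000, tc=("VMID=2", "PM=user"))  # other VM: miss
```

Because a side channel needs an attacker's access to hit a victim's entry, making the TC part of the tag removes that cross-context hit.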
  • Patent number: 11593109
    Abstract: Aspects are provided for sharing instruction cache footprint between multiple threads using instruction cache set/way pointers and a tracking table. The tracking table is built up over time for shared pages, even when the instruction cache has no access to real addresses or translation information. A set/way pointer to an instruction cache line is derived from the system memory address associated with a first thread's instruction fetch. The set/way pointer is stored as a surrogate for the system memory address in both an instruction cache directory (IDIR) and a tracking table. Another set/way pointer to an instruction cache line is derived from the system memory address associated with a second thread's instruction fetch. A match is detected between the set/way pointer and the other set/way pointer. The instruction cache directory is updated to indicate that the instruction cache line is shared between multiple threads.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: February 28, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sheldon Bernard Levenstein, Nicholas R. Orzol, Christian Gerhard Zoellin, David Campbell
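The mechanism above can be sketched as a table keyed by set/way pointers: the pointer derived from a fetch stands in for the real address, and a second thread arriving at the same pointer marks the line shared. The set-index derivation and table layout below are simplifying assumptions.

```python
def set_way_pointer(system_address, num_sets=64, line_bytes=64):
    # Derive a set index from the address; the way is assigned at fill time.
    # Together (set, way) name a cache line without exposing the real address.
    return (system_address // line_bytes) % num_sets

class SharedFootprintTracker:
    """Toy model: a tracking table maps set/way pointers to the first
    thread that fetched the line; a second thread matching the same
    pointer marks the line as shared."""

    def __init__(self):
        self.tracking = {}   # (set, way) -> first thread to fetch the line
        self.shared = set()  # set/way pointers fetched by more than one thread

    def record_fetch(self, thread_id, set_index, way):
        key = (set_index, way)
        first = self.tracking.setdefault(key, thread_id)
        if first != thread_id:
            self.shared.add(key)      # directory would be updated here
        return key in self.shared

tracker = SharedFootprintTracker()
s = set_way_pointer(0x7F2000)
tracker.record_fetch(thread_id=0, set_index=s, way=1)
is_shared = tracker.record_fetch(thread_id=1, set_index=s, way=1)
```

The point of the surrogate is that sharing can be detected even when the instruction cache never sees real addresses or translation information.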
  • Patent number: 11580031
    Abstract: Systems, methods, and apparatuses relating to hardware for split data translation lookaside buffers. In one embodiment, a processor includes a decode circuit to decode instructions into decoded instructions, an execution circuit to execute the decoded instructions, and a memory circuit comprising a load data translation lookaside buffer circuit and a store data translation lookaside buffer circuit separate and distinct from the load data translation lookaside buffer circuit, wherein the memory circuit sends a memory access request of the instructions to the load data translation lookaside buffer circuit when the memory access request is a load data request and to the store data translation lookaside buffer circuit when the memory access request is a store data request to determine a physical address for a virtual address of the memory access request.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: February 14, 2023
    Assignee: Intel Corporation
    Inventors: Stanislav Shwartsman, Igor Yanover, Assaf Zaltsman, Ron Rais
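The routing rule in this abstract is straightforward to model: loads consult one TLB structure, stores a separate one. Below is a minimal sketch; the page size, fill interface, and names are assumptions for illustration.

```python
class SplitDTLB:
    """Sketch of split data TLBs: load requests consult one TLB,
    store requests a separate one."""

    PAGE = 4096  # assumed page size

    def __init__(self):
        self.load_tlb = {}   # virtual page -> physical page
        self.store_tlb = {}

    def fill(self, kind, vpage, ppage):
        (self.load_tlb if kind == "load" else self.store_tlb)[vpage] = ppage

    def translate(self, kind, vaddr):
        # Route by request type, then translate page number, keep the offset.
        tlb = self.load_tlb if kind == "load" else self.store_tlb
        vpage, offset = divmod(vaddr, self.PAGE)
        ppage = tlb.get(vpage)
        return None if ppage is None else ppage * self.PAGE + offset

dtlb = SplitDTLB()
dtlb.fill("load", vpage=5, ppage=9)
paddr = dtlb.translate("load", 5 * 4096 + 0x10)  # hits the load TLB
store_miss = dtlb.translate("store", 5 * 4096)   # store TLB is separate
```

Splitting lets each structure be sized and ported for its own traffic pattern instead of sharing one set of ports.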
  • Patent number: 11461226
    Abstract: A memory controller having improved reliability and performance controls an operation of a memory device. The memory controller includes a first core configured to receive requests from a host, each request received with a corresponding first logical address associated with data requested from the host and having a first size, and to perform a logical address processing operation of converting the first logical address into a second logical address having a second size different from the first size; and a second core configured to convert the second logical address into a physical address to or from which the data is to be written or read, the physical address representing a position of a memory cell included in the memory device.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: October 4, 2022
    Assignee: SK hynix Inc.
    Inventor: Seung Won Yang
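The two-core split can be sketched as a two-stage translation: the first stage rescales the host logical address from one unit size to another, and the second stage maps the rescaled logical address to a physical location. The 512-byte and 4 KiB sizes below are common flash-controller values chosen for illustration, not taken from the patent.

```python
SECTOR = 512   # assumed first logical-address unit (host side)
PAGE = 4096    # assumed second logical-address unit (internal mapping size)

def first_core(host_lba):
    """First core: convert a host logical address in 512 B units into an
    internal logical page number (4 KiB units) plus a byte offset."""
    byte_addr = host_lba * SECTOR
    return divmod(byte_addr, PAGE)   # (internal logical page, offset)

def second_core(mapping_table, internal_page):
    """Second core: map the internal logical page to a physical page,
    i.e. a position in the memory device."""
    return mapping_table[internal_page]

mapping = {0: 17, 1: 3}                 # illustrative logical->physical map
page, offset = first_core(host_lba=9)   # 9 * 512 = 4608 bytes -> page 1
physical_page = second_core(mapping, page)
```

Separating the unit conversion from the logical-to-physical mapping lets each core's table work at its own granularity.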
  • Patent number: 8775510
    Abstract: The invention provides, in one aspect, an improved system for data access comprising a file server that is coupled to a client device or application executing thereon via one or more networks. The server comprises static storage that is organized in one or more directories, each containing zero, one, or more files. The server also comprises a file system operable, in cooperation with a file system on the client device, to provide authorized applications executing on the client device access to those directories and/or files. Fast file server (FFS) software or other functionality executing on or in connection with the server responds to requests received from the client by transferring requested data to the client device over multiple network pathways. That data can comprise, for example, directory trees, files (or portions thereof), and so forth.
    Type: Grant
    Filed: January 31, 2013
    Date of Patent: July 8, 2014
    Assignee: PME IP Australia Pty Ltd
    Inventors: Malte Westerhoff, Detlev Stalling
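One simple way to realize "transferring requested data over multiple network pathways" is to stripe sequence-numbered chunks round-robin across the paths and reorder on the client. This is an illustrative scheme only; the patent does not specify this chunking or distribution.

```python
def stripe_over_paths(data, num_paths, chunk=4):
    """Illustrative striping (not the patented protocol): split the data
    into fixed-size chunks and deal them round-robin across pathways."""
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    paths = [[] for _ in range(num_paths)]
    for i, c in enumerate(chunks):
        paths[i % num_paths].append((i, c))   # keep sequence numbers
    return paths

def reassemble(paths):
    # The client reorders received chunks by sequence number.
    tagged = [item for p in paths for item in p]
    return b"".join(c for _, c in sorted(tagged))

paths = stripe_over_paths(b"directory tree bytes", num_paths=3)
restored = reassemble(paths)
```

Sequence numbers make the scheme insensitive to per-path delivery order, which is what allows the paths to be used concurrently.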
  • Patent number: 7930514
    Abstract: A method, system, and computer program product for implementing a dual-addressable cache is provided. The method includes adding fields for indirect indices to each congruence class provided in a cache directory. The cache directory is indexed by primary addresses. In response to a request for a primary address based upon a known secondary address corresponding to the primary address, the method also includes generating an index for the secondary address, and inserting or updating one of the indirect indices into one of the fields for a congruence class relating to the secondary address. The indirect index is assigned a value of a virtual index corresponding to the primary address. The method further includes searching congruence classes of each of the indirect indices for the secondary address.
    Type: Grant
    Filed: February 9, 2005
    Date of Patent: April 19, 2011
    Assignee: International Business Machines Corporation
    Inventors: Norbert Hagspiel, Erwin Pfeffer, Bruce A. Wagar
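The indirect-index idea can be modeled with a small directory: entries live in congruence classes indexed by the primary address, and the class selected by the secondary address stores pointers to the primary-side classes that must be searched. The class count and layout below are illustrative assumptions.

```python
class DualAddressableDirectory:
    """Toy model: the directory is indexed by primary addresses; each
    congruence class also carries indirect indices naming the primary-side
    classes where entries for a secondary address may live."""

    NUM_CLASSES = 4  # assumed, for illustration

    def __init__(self):
        self.classes = [{"entries": [], "indirect": set()}
                        for _ in range(self.NUM_CLASSES)]

    def index(self, addr):
        return addr % self.NUM_CLASSES

    def install(self, primary, secondary):
        p_idx = self.index(primary)
        self.classes[p_idx]["entries"].append((primary, secondary))
        # Under the secondary's class, record where the primary lives.
        self.classes[self.index(secondary)]["indirect"].add(p_idx)

    def lookup_by_secondary(self, secondary):
        # Search only the classes named by the indirect indices.
        for p_idx in self.classes[self.index(secondary)]["indirect"]:
            for primary, sec in self.classes[p_idx]["entries"]:
                if sec == secondary:
                    return primary
        return None

directory = DualAddressableDirectory()
directory.install(primary=10, secondary=7)
found = directory.lookup_by_secondary(7)
```

The indirect indices bound the search for a secondary address to a few classes instead of the whole directory.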
  • Patent number: 7900020
    Abstract: The application describes a data processor operable to process data, and comprising: a cache in which a storage location of a data item within said cache is identified by an address, said cache comprising a plurality of storage locations and said data processor comprising a cache directory operable to store a physical address indicator for each storage location comprising stored data; a hash value generator operable to generate a generated hash value from at least some of said bits of said address, said generated hash value having fewer bits than said address; a buffer operable to store a plurality of hash values relating to said plurality of storage locations within said cache; wherein in response to a request to access said data item said data processor is operable to compare said generated hash value with at least some of said plurality of hash values stored within said buffer and in response to a match to indicate an indicated storage location of said data item; and said data processor is operable to access
    Type: Grant
    Filed: January 25, 2008
    Date of Patent: March 1, 2011
    Assignees: ARM Limited, Texas Instruments Incorporated
    Inventors: Barry Duane Williamson, Gerard Richard Williams, Muralidharan Santharaman Chinnakonda
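The hash-hint mechanism can be sketched as a buffer of short hashes, one per storage location: a narrow hash of the requested address is compared against the buffer to indicate a likely location before the full-width directory check. The hash function and buffer size below are assumptions for illustration.

```python
def short_hash(addr, bits=6):
    # Fold the address's bit groups together into a hash that has
    # far fewer bits than the address itself.
    h = 0
    while addr:
        h ^= addr & ((1 << bits) - 1)
        addr >>= bits
    return h

class HashHintBuffer:
    """Toy model: the buffer holds one short hash per cache storage
    location; a matching hash indicates a candidate location, which a
    real design would then confirm against the full physical-address
    directory (not modeled here)."""

    def __init__(self, num_locations):
        self.hashes = [None] * num_locations

    def fill(self, location, addr):
        self.hashes[location] = short_hash(addr)

    def predict(self, addr):
        h = short_hash(addr)
        return [i for i, stored in enumerate(self.hashes) if stored == h]

buf = HashHintBuffer(num_locations=4)
buf.fill(2, 0x12345678)
candidates = buf.predict(0x12345678)  # location 2 should match
```

Because the hash is lossy, a match is only a hint: distinct addresses can collide, which is why the directory confirmation step exists.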
  • Patent number: 7886117
    Abstract: A method of memory management is disclosed. The invention increases bank diversity by splitting requests, and is integrated with re-ordering and priority arbitration mechanisms. The probabilities of both bank conflicts and write-to-read turnaround conflicts are thereby reduced significantly, increasing memory efficiency.
    Type: Grant
    Filed: September 20, 2007
    Date of Patent: February 8, 2011
    Assignee: Realtek Semiconductor Corp.
    Inventor: Chieh-Wen Shih
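Request splitting for bank diversity can be sketched by breaking one request at bank boundaries into per-bank subrequests, which an arbiter can then interleave with other traffic. The bank count, bank size, and address mapping below are illustrative assumptions.

```python
def split_by_bank(start, length, bank_size=8, num_banks=4):
    """Illustrative split (parameters assumed): break one memory request
    into per-bank subrequests so the arbiter can re-order and interleave
    them, avoiding back-to-back accesses to a single bank."""
    subrequests = []
    addr, remaining = start, length
    while remaining > 0:
        bank = (addr // bank_size) % num_banks
        # Run length is capped at the distance to the next bank boundary.
        run = min(remaining, bank_size - addr % bank_size)
        subrequests.append({"bank": bank, "addr": addr, "len": run})
        addr += run
        remaining -= run
    return subrequests

subs = split_by_bank(start=6, length=10)
# The request crosses a bank boundary: bytes [6, 8) land in bank 0
# and bytes [8, 16) in bank 1, giving the arbiter two independent pieces.
```

With subrequests spread across banks, one bank's busy cycles can overlap another's, which is the efficiency gain the abstract claims.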
  • Publication number: 20090327649
    Abstract: A three-tiered TLB architecture in a multithreading processor that concurrently executes multiple instruction threads is provided. A macro-TLB caches address translation information for memory pages for all the threads. A micro-TLB caches the translation information for a subset of the memory pages cached in the macro-TLB. A respective nano-TLB for each of the threads caches translation information only for the respective thread. The nano-TLBs also include replacement information to indicate which entries in the nano-TLB/micro-TLB hold recently used translation information for the respective thread. Based on the replacement information, recently used information is copied to the nano-TLB if evicted from the micro-TLB.
    Type: Application
    Filed: June 30, 2009
    Publication date: December 31, 2009
    Applicant: MIPS Technologies, Inc.
    Inventors: Soumya Banerjee, Michael Gottlieb Jensen, Ryan C. Kinter
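The three-tier lookup path can be modeled as nano-TLB first, micro-TLB next, macro-TLB last, with entries evicted from the micro-TLB preserved in the owning thread's nano-TLB. The replacement policy below is deliberately simplified (every micro eviction is preserved, FIFO order), where the patent tracks recency per thread.

```python
class ThreeTierTLB:
    """Toy lookup path: per-thread nano-TLB, then shared micro-TLB, then
    shared macro-TLB. A simplified stand-in for the patented replacement
    policy: every micro-TLB eviction is copied into the nano-TLB of the
    thread that filled it."""

    def __init__(self, num_threads, micro_capacity=2):
        self.nano = [{} for _ in range(num_threads)]  # per-thread
        self.micro = {}            # vpage -> (ppage, filling thread)
        self.macro = {}            # vpage -> ppage, all threads
        self.micro_capacity = micro_capacity

    def fill_macro(self, vpage, ppage):
        self.macro[vpage] = ppage

    def translate(self, thread, vpage):
        if vpage in self.nano[thread]:
            return self.nano[thread][vpage]
        if vpage in self.micro:
            return self.micro[vpage][0]
        ppage = self.macro.get(vpage)
        if ppage is not None:
            if len(self.micro) >= self.micro_capacity:
                # Evict the oldest micro entry into its thread's nano-TLB.
                old_v, (old_p, old_t) = next(iter(self.micro.items()))
                del self.micro[old_v]
                self.nano[old_t][old_v] = old_p
            self.micro[vpage] = (ppage, thread)
        return ppage

tlb = ThreeTierTLB(num_threads=2)
for v, p in [(1, 100), (2, 200), (3, 300)]:
    tlb.fill_macro(v, p)
tlb.translate(0, 1)             # macro hit, filled into the micro-TLB
tlb.translate(1, 2)             # fills micro-TLB to capacity
tlb.translate(1, 3)             # evicts vpage 1 into thread 0's nano-TLB
nano_hit = tlb.translate(0, 1)  # now served from thread 0's nano-TLB
```

The per-thread nano-TLBs keep each thread's hot translations resident even when other threads churn the shared micro-TLB.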
  • Publication number: 20090164738
    Abstract: A system including a write protected storage device, which utilizes a write cache to hold data intended to be written to the device, determines when data should be allowed to write through to the device instead of being cached. A unique identifier is determined for the requesting process and that identifier is used to check a pre-configured set of processes which have been specified as trusted to write to the device. An exemplary approach uses a dynamic store of process IDs for those processes having made previous requests, a persistent store of application names, and a mapping process to obtain an application name for process IDs which are not yet present in the dynamic store.
    Type: Application
    Filed: December 21, 2007
    Publication date: June 25, 2009
    Applicant: Microsoft Corporation
    Inventors: Shabnam Erfani, Milong Sabandith
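The two-store lookup in this abstract can be modeled directly: a dynamic process-ID store is consulted first, unknown IDs are resolved to an application name, and the name is checked against a persistent set of trusted applications. The mapper, names, and class below are illustrative stand-ins.

```python
class WriteThroughPolicy:
    """Sketch of the trusted-write-through check: a dynamic store caches
    process-ID-to-name resolutions; a persistent store holds the names of
    applications trusted to write through to the protected device."""

    def __init__(self, trusted_names, resolve_name):
        self.trusted_names = set(trusted_names)   # persistent store
        self.pid_cache = {}                       # dynamic store of PIDs
        self.resolve_name = resolve_name          # maps PID -> app name

    def allow_write_through(self, pid):
        if pid not in self.pid_cache:
            # PID not yet in the dynamic store: resolve and remember it.
            self.pid_cache[pid] = self.resolve_name(pid)
        return self.pid_cache[pid] in self.trusted_names

pid_table = {101: "updater.exe", 102: "notepad.exe"}  # stand-in OS mapping
policy = WriteThroughPolicy({"updater.exe"}, pid_table.get)
allowed = policy.allow_write_through(101)  # trusted: bypasses the write cache
cached = policy.allow_write_through(102)   # untrusted: data stays in the cache
```

Caching resolved PIDs means the name-mapping step runs once per process, not once per write request.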