Data Cache Being Concurrently Physically Addressed (epo) Patents (Class 711/E12.063)
-
Patent number: 12147675
Abstract: Provided is a memory module which includes a controller which, upon reception of a read command including a logical address, converts the logical address into a physical address using address lookup information. The controller inputs a first physical address, which is a portion of the physical address obtained by the conversion, to a non-volatile memory via a first address bus terminal, and then inputs a second physical address, which is the rest of the physical address, to the non-volatile memory via a second address bus terminal, thereby reading the data corresponding to the converted physical address from the non-volatile memory.
Type: Grant
Filed: April 16, 2021
Date of Patent: November 19, 2024
Assignee: SONY SEMICONDUCTOR SOLUTIONS CORPORATION
Inventors: Haruhiko Terada, Yotaro Mori, Riichi Nishino, Yoshiyuki Shibahara
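The two-step bus transfer described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the 16-bit address width, the 8-bit first-bus width, and the lookup table contents are all assumptions.

```python
# Hedged sketch of the controller's two-step address transfer: the converted
# physical address is sent to the memory as two portions over two bus terminals.
ADDR_BITS = 16
FIRST_BUS_BITS = 8  # assumed width of the first address bus terminal

address_lookup = {0x10: 0xBEEF}  # illustrative logical -> physical mapping

def issue_read(logical_addr):
    physical = address_lookup[logical_addr]
    # First portion of the physical address goes out on the first bus terminal...
    first = physical >> (ADDR_BITS - FIRST_BUS_BITS)
    # ...then the rest goes out on the second bus terminal.
    second = physical & ((1 << (ADDR_BITS - FIRST_BUS_BITS)) - 1)
    return first, second

def recombine(first, second):
    # What the non-volatile memory reconstructs after both transfers.
    return (first << (ADDR_BITS - FIRST_BUS_BITS)) | second
```

Splitting the address this way lets a narrow bus carry a wider physical address over two transfers.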
-
Patent number: 12014180
Abstract: A dynamically-foldable instruction fetch pipeline receives a first fetch request that includes a fetch virtual address, and includes first, second, and third sub-pipelines that respectively include: a translation lookaside buffer (TLB) that translates the fetch virtual address into a fetch physical address; a tag random access memory (RAM) of a physically-indexed, physically-tagged set-associative instruction cache that receives a set index selecting a set of tag RAM tags for comparison with a tag portion of the fetch physical address to determine the correct way of the instruction cache; and a data RAM of the instruction cache that receives the set index and a way number that together specify a data RAM entry from which to fetch an instruction block. When a control signal indicates a folded mode, the sub-pipelines operate in parallel. When the control signal indicates an unfolded mode, the sub-pipelines operate sequentially.
Type: Grant
Filed: June 8, 2022
Date of Patent: June 18, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
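The latency difference between the two modes can be sketched as below. This is an illustrative model only; the one-cycle-per-stage latency is an assumption, and the real pipeline's folding conditions are not modeled.

```python
# Sketch of folded vs. unfolded fetch: in folded mode the TLB, tag-RAM, and
# data-RAM sub-pipelines start in the same cycle (which presumes a prediction
# of set index and way); in unfolded mode each stage waits for the previous one.
STAGE_LATENCY = 1  # assumed cycles per sub-pipeline stage
STAGES = ["tlb", "tag_ram", "data_ram"]

def fetch_latency(folded: bool) -> int:
    if folded:
        # All three sub-pipelines operate in a parallel manner.
        return STAGE_LATENCY
    # Sequential: translate, then tag compare, then data read.
    return STAGE_LATENCY * len(STAGES)
```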
-
Patent number: 12008375
Abstract: A microprocessor includes a branch target buffer (BTB). Each BTB entry holds: a tag based on at least a portion of a virtual address of a block of instructions previously fetched from a physically-indexed, physically-tagged set-associative instruction cache using a physical address that is a translation of the virtual address; a translated address bit portion of the set index of the instruction cache entry from which the instruction block was previously fetched; and the way number of that instruction cache entry. In response to a BTB hit based on a fetch virtual address, the BTB provides a predicted set index whose translated address bit portion comes from the hit BTB entry, and a predicted way number that is the way number from the hit BTB entry.
Type: Grant
Filed: June 8, 2022
Date of Patent: June 11, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Michael N. Michael, Vihar Soneji
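A BTB entry that caches the cache set index bits and way can be sketched as below. The tag function (64-byte block granularity) and field contents are assumptions for illustration.

```python
# Sketch of a BTB that caches, per entry, the translated set-index bits and the
# way of the instruction-cache entry the block was previously fetched from.
def btb_tag(virtual_addr):
    return virtual_addr >> 6  # assumed: tag derived from the 64-byte block address

btb = {}  # tag -> (translated set-index bit portion, way number)

def btb_update(virtual_addr, set_index_bits, way):
    btb[btb_tag(virtual_addr)] = (set_index_bits, way)

def btb_predict(fetch_virtual_addr):
    # On a hit, the predicted set index bits and way number let the fetch
    # pipeline read the data RAM without waiting for translation and tag compare.
    return btb.get(btb_tag(fetch_virtual_addr))
```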
-
Patent number: 11954356
Abstract: Apparatus, method, and system for efficiently identifying and tracking cold memory pages are disclosed. In one embodiment, the apparatus includes one or more processor cores that access memory pages by issuing access requests to the memory, and a page index bitmap that tracks the accesses made by the processor cores to the memory pages. The tracked accesses are usable to identify infrequently-accessed memory pages, which are removed from the memory and stored in secondary storage.
Type: Grant
Filed: March 29, 2019
Date of Patent: April 9, 2024
Assignee: Intel Corporation
Inventors: Qiuxu Zhuo, Anthony Luck
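The bitmap idea can be sketched in a few lines. The page count and the "scan at end of interval" policy are illustrative assumptions.

```python
# Sketch of a page-index bitmap for cold-page identification: each access sets
# the bit for the touched page; pages whose bit is still clear at the end of an
# observation interval are candidates for demotion to secondary storage.
NUM_PAGES = 64  # assumed tracked-page count
bitmap = 0

def record_access(page_index):
    global bitmap
    bitmap |= 1 << page_index

def cold_pages():
    # Pages never accessed during the interval.
    return [p for p in range(NUM_PAGES) if not (bitmap >> p) & 1]
```

A single bit per page keeps the tracking overhead tiny compared with per-page counters.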
-
Patent number: 11797665
Abstract: A processing system includes a branch prediction structure storing information used to predict the outcome of a branch instruction. The processing system also includes a register that stores a first identifier of a first process in response to the system changing from a first mode, which allows the first process to modify the branch prediction structure, to a second mode in which the branch prediction structure is not modifiable. The processing system further includes a processor core that selectively flushes the branch prediction structure based on a comparison of a second identifier of a second process with the first identifier stored in the register. The comparison is performed in response to the second process causing a change from the second mode back to the first mode.
Type: Grant
Filed: June 27, 2019
Date of Patent: October 24, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: David Kaplan, Marius Evers
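The selective-flush rule can be sketched as below. The class layout and the flush counter are illustrative stand-ins for the hardware register and flush signal.

```python
# Sketch of the selective branch-predictor flush: on leaving the protected
# ("not modifiable") mode, flush only if a different process now gains the
# ability to modify the predictor.
class BranchPredictorGuard:
    def __init__(self):
        self.saved_pid = None   # models the register holding the first identifier
        self.flush_count = 0

    def enter_protected_mode(self, current_pid):
        # Record the process that could modify the predictor before the switch.
        self.saved_pid = current_pid

    def leave_protected_mode(self, new_pid):
        # Flush only when the comparison shows a *different* process.
        if new_pid != self.saved_pid:
            self.flush_count += 1  # stands in for flushing the structure
```

Skipping the flush when the same process returns avoids needlessly discarding useful prediction state.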
-
Patent number: 11625479
Abstract: A data cache memory mitigates side-channel attacks in a processor that comprises the data cache memory and includes a translation context (TC). A first input receives a virtual memory address. A second input receives the TC. Control logic, with each allocation of an entry of the data cache memory, uses the received virtual memory address and the received TC to perform the allocation. The control logic also, with each access of the data cache memory, uses the received virtual memory address and the received TC in a correct determination of whether the access hits in the data cache memory. The TC includes a virtual machine identifier (VMID), or a privilege mode (PM) or a translation regime (TR), or both the VMID and the PM or TR.
Type: Grant
Filed: August 27, 2020
Date of Patent: April 11, 2023
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan
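The effect of including the translation context in allocation and lookup can be sketched as below; the tuple key is an illustrative stand-in for hardware tag bits.

```python
# Sketch of tagging data-cache entries with a translation context (TC): the
# same virtual address used under a different VMID/privilege mode is a miss,
# so one context cannot observe hits created by another.
cache = {}

def allocate(virtual_addr, tc, data):
    cache[(virtual_addr, tc)] = data

def lookup(virtual_addr, tc):
    # Hit only when both the address and the translation context match.
    return cache.get((virtual_addr, tc))
```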
-
Patent number: 11593109
Abstract: Aspects are provided for sharing instruction cache footprint between multiple threads using instruction cache set/way pointers and a tracking table. The tracking table is built up over time for shared pages, even when the instruction cache has no access to real addresses or translation information. A set/way pointer to an instruction cache line is derived from the system memory address associated with a first thread's instruction fetch. The set/way pointer is stored as a surrogate for the system memory address in both an instruction cache directory (IDIR) and a tracking table. Another set/way pointer to an instruction cache line is derived from the system memory address associated with a second thread's instruction fetch. A match is detected between the two set/way pointers, and the instruction cache directory is updated to indicate that the instruction cache line is shared between multiple threads.
Type: Grant
Filed: June 7, 2021
Date of Patent: February 28, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sheldon Bernard Levenstein, Nicholas R. Orzol, Christian Gerhard Zoellin, David Campbell
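The surrogate-pointer mechanism can be sketched as below. The cache geometry and pointer derivation are assumptions; the real design derives the pointer during line fill, not from a software hash.

```python
# Sketch of set/way pointers as address surrogates: a pointer derived from the
# fetch address is recorded in a tracking table; when a second thread produces
# the same pointer, the line is marked shared in the directory.
NUM_SETS, LINE_BYTES = 64, 64  # assumed instruction-cache geometry

def set_way_pointer(system_addr, way):
    return ((system_addr // LINE_BYTES) % NUM_SETS, way)

tracking_table = {}  # pointer -> set of thread ids that fetched through it
shared_lines = set()

def record_fetch(thread_id, system_addr, way):
    ptr = set_way_pointer(system_addr, way)
    threads = tracking_table.setdefault(ptr, set())
    threads.add(thread_id)
    if len(threads) > 1:
        shared_lines.add(ptr)  # the IDIR would be updated to "shared" here
```

Because the pointer is small and derivable without translation data, the mechanism works even when real addresses are unavailable to the cache.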
-
Patent number: 11580031
Abstract: Systems, methods, and apparatuses relating to hardware for split data translation lookaside buffers. In one embodiment, a processor includes a decode circuit to decode instructions, an execution circuit to execute the decoded instructions, and a memory circuit comprising a load data translation lookaside buffer circuit and a separate, distinct store data translation lookaside buffer circuit. The memory circuit sends a memory access request to the load data translation lookaside buffer circuit when the request is a load, and to the store data translation lookaside buffer circuit when the request is a store, to determine a physical address for the virtual address of the request.
Type: Grant
Filed: March 9, 2020
Date of Patent: February 14, 2023
Assignee: Intel Corporation
Inventors: Stanislav Shwartsman, Igor Yanover, Assaf Zaltsman, Ron Rais
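The routing decision can be sketched as below; the dict-based TLBs and the 4 KiB page size are illustrative assumptions.

```python
# Sketch of routing between split load and store data TLBs: the request type
# selects which TLB performs the virtual-to-physical translation.
PAGE = 4096
load_dtlb = {}   # virtual page number -> physical frame number
store_dtlb = {}

def translate(virtual_addr, is_store):
    tlb = store_dtlb if is_store else load_dtlb
    frame = tlb.get(virtual_addr // PAGE)
    if frame is None:
        return None  # would trigger a page-table walk in hardware
    return frame * PAGE + virtual_addr % PAGE
```

Splitting the TLBs lets loads and stores translate concurrently without contending for the same ports.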
-
Patent number: 11461226
Abstract: A memory controller having improved reliability and performance controls an operation of a memory device. The memory controller includes a first core configured to receive requests from a host, each request accompanied by a first logical address associated with the requested data and having a first size, and to convert the first logical address into a second logical address having a second, different size; and a second core configured to convert the second logical address into the physical address to or from which the data is to be written or read, the physical address representing the position of a memory cell in the memory device.
Type: Grant
Filed: July 1, 2020
Date of Patent: October 4, 2022
Assignee: SK hynix Inc.
Inventor: Seung Won Yang
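The two-stage mapping can be sketched as below. The 4 KiB-to-512 B rescaling and the dict-based logical-to-physical table are illustrative assumptions, not the patented unit sizes.

```python
# Sketch of the two-core address pipeline: the first core rescales host logical
# addresses from a first unit size to a second, and the second core maps the
# rescaled address to a physical location.
FIRST_SIZE, SECOND_SIZE = 4096, 512  # assumed unit sizes

def first_core(host_lba):
    # One host-sized unit spans several second-sized units.
    return host_lba * (FIRST_SIZE // SECOND_SIZE)

l2p = {}  # second logical address -> physical address

def second_core(second_lba):
    return l2p.get(second_lba)
```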
-
Patent number: 8775510
Abstract: The invention provides, in one aspect, an improved system for data access comprising a file server coupled to a client device, or an application executing thereon, via one or more networks. The server comprises static storage organized in one or more directories, each containing zero, one, or more files. The server also comprises a file system operable, in cooperation with a file system on the client device, to provide authorized applications executing on the client device access to those directories and/or files. Fast file server (FFS) software or other functionality executing on or in connection with the server responds to requests received from the client by transferring the requested data to the client device over multiple network pathways. That data can comprise, for example, directory trees, files (or portions thereof), and so forth.
Type: Grant
Filed: January 31, 2013
Date of Patent: July 8, 2014
Assignee: PME IP Australia Pty Ltd
Inventors: Malte Westerhoff, Detlev Stalling
-
Patent number: 7930514
Abstract: A method, system, and computer program product for implementing a dual-addressable cache is provided. The method includes adding fields for indirect indices to each congruence class provided in a cache directory, where the cache directory is indexed by primary addresses. In response to a request for a primary address based upon a known secondary address corresponding to it, the method generates an index for the secondary address and inserts or updates one of the indirect indices in one of the fields of the congruence class relating to the secondary address. The indirect index is assigned the value of the virtual index corresponding to the primary address. The method further includes searching the congruence classes of each of the indirect indices for the secondary address.
Type: Grant
Filed: February 9, 2005
Date of Patent: April 19, 2011
Assignee: International Business Machines Corporation
Inventors: Norbert Hagspiel, Erwin Pfeffer, Bruce A. Wagar
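The indirect-index scheme can be sketched as below. The class count, index functions, and dict layout are all illustrative assumptions; the point is only that a secondary address reaches the primary-indexed class through a stored indirect index.

```python
# Sketch of a dual-addressable directory: classes are indexed by primary
# addresses, and an indirect index stored under the secondary address's class
# points back to the primary-address class holding the line.
NUM_CLASSES = 16  # assumed

def primary_index(primary_addr):
    return primary_addr % NUM_CLASSES

def secondary_index(secondary_addr):
    return secondary_addr % NUM_CLASSES

directory = {i: {"lines": {}, "indirect": {}} for i in range(NUM_CLASSES)}

def install(primary_addr, secondary_addr):
    p_idx = primary_index(primary_addr)
    directory[p_idx]["lines"][primary_addr] = secondary_addr
    # Record, under the secondary address's class, where the primary copy lives.
    directory[secondary_index(secondary_addr)]["indirect"][secondary_addr] = p_idx

def find_primary(secondary_addr):
    p_idx = directory[secondary_index(secondary_addr)]["indirect"].get(secondary_addr)
    if p_idx is None:
        return None
    # Search the pointed-to congruence class for the secondary address.
    for primary_addr, sec in directory[p_idx]["lines"].items():
        if sec == secondary_addr:
            return primary_addr
    return None
```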
-
Patent number: 7900020
Abstract: The application describes a data processor comprising: a cache in which the storage location of a data item is identified by an address, the cache comprising a plurality of storage locations and the processor comprising a cache directory operable to store a physical address indicator for each storage location holding data; a hash value generator operable to generate a hash value from at least some of the bits of the address, the generated hash value having fewer bits than the address; and a buffer operable to store a plurality of hash values relating to the storage locations within the cache. In response to a request to access the data item, the data processor compares the generated hash value with at least some of the hash values stored within the buffer and, in response to a match, indicates a storage location of the data item; and the data processor is operable to access …
Type: Grant
Filed: January 25, 2008
Date of Patent: March 1, 2011
Assignees: ARM Limited, Texas Instruments Incorporated
Inventors: Barry Duane Williamson, Gerard Richard Williams, Muralidharan Santharaman Chinnakonda
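The hash-match step can be sketched as below. The XOR-folding hash and the buffer layout are assumptions for illustration only.

```python
# Sketch of hash-based location indication: a short hash of the request address
# is compared against stored hashes to indicate a likely storage location
# before the full physical-address check confirms it.
HASH_BITS = 6  # assumed hash width, fewer bits than the address

def hash_addr(addr):
    # Fold the address down to a few bits (illustrative XOR folding).
    h = 0
    while addr:
        h ^= addr & ((1 << HASH_BITS) - 1)
        addr >>= HASH_BITS
    return h

hash_buffer = {}  # storage location -> stored hash value

def predict_locations(addr):
    h = hash_addr(addr)
    # A match only *indicates* candidate locations; hashes can collide, so a
    # full check against the cache directory must confirm the hit.
    return [loc for loc, stored in hash_buffer.items() if stored == h]
```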
-
Patent number: 7886117
Abstract: A method of memory management is disclosed. The invention increases bank diversity by splitting requests and is integrated with re-ordering and priority-arbitration mechanisms. The probabilities of both bank conflicts and write-to-read turnaround conflicts are therefore reduced significantly, increasing memory efficiency.
Type: Grant
Filed: September 20, 2007
Date of Patent: February 8, 2011
Assignee: Realtek Semiconductor Corp.
Inventor: Chieh-Wen Shih
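Request splitting along bank boundaries can be sketched as below. The four-bank, 64-byte-interleave geometry is an assumption; the arbiter that reorders the resulting sub-requests is not modeled.

```python
# Sketch of splitting a request to increase bank diversity: a request spanning
# several banks becomes per-bank sub-requests, which an arbiter can reorder to
# avoid back-to-back same-bank accesses.
NUM_BANKS, BANK_INTERLEAVE = 4, 64  # assumed: 64-byte interleave across 4 banks

def split_request(addr, length):
    subs = []
    end = addr + length
    while addr < end:
        bank = (addr // BANK_INTERLEAVE) % NUM_BANKS
        chunk_end = min(end, (addr // BANK_INTERLEAVE + 1) * BANK_INTERLEAVE)
        subs.append((bank, addr, chunk_end - addr))  # (bank, start, bytes)
        addr = chunk_end
    return subs
```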
-
Publication number: 20090327649
Abstract: A three-tiered TLB architecture in a multithreading processor that concurrently executes multiple instruction threads is provided. A macro-TLB caches address translation information for memory pages for all the threads. A micro-TLB caches the translation information for a subset of the memory pages cached in the macro-TLB. A respective nano-TLB for each of the threads caches translation information only for that thread. The nano-TLBs also include replacement information indicating which entries in the nano-TLB/micro-TLB hold recently used translation information for the respective thread. Based on the replacement information, recently used information is copied to the nano-TLB if evicted from the micro-TLB.
Type: Application
Filed: June 30, 2009
Publication date: December 31, 2009
Applicant: MIPS Technologies, Inc.
Inventors: Soumya Banerjee, Michael Gottlieb Jensen, Ryan C. Kinter
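The three-level lookup order can be sketched as below. The four-thread count, unlimited capacities, and promote-on-hit policy are simplifying assumptions; the publication's replacement-information mechanism is not modeled.

```python
# Sketch of the nano/micro/macro TLB hierarchy: each thread checks its private
# nano-TLB first, then the shared micro-TLB, then the shared macro-TLB, with
# hits promoted toward the thread so recently used translations stay close.
macro_tlb = {}                          # all threads' pages
micro_tlb = {}                          # subset of the macro-TLB
nano_tlbs = {t: {} for t in range(4)}   # one private nano-TLB per thread

def translate(thread_id, vpage):
    nano = nano_tlbs[thread_id]
    if vpage in nano:
        return nano[vpage]
    if vpage in micro_tlb:
        nano[vpage] = micro_tlb[vpage]  # keep this thread's recent entry nearby
        return nano[vpage]
    if vpage in macro_tlb:
        micro_tlb[vpage] = macro_tlb[vpage]
        return macro_tlb[vpage]
    return None  # would start a page-table walk
```

The per-thread nano-TLB keeps one thread's hot translations from being evicted by the other threads' activity.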
-
Publication number: 20090164738
Abstract: A system including a write-protected storage device, which uses a write cache to hold data intended to be written to the device, determines when data should be allowed to write through to the device instead of being cached. A unique identifier is determined for the requesting process, and that identifier is used to check a pre-configured set of processes that have been specified as trusted to write to the device. An exemplary approach uses a dynamic store of process IDs for processes that have made previous requests, a persistent store of application names, and a mapping process to obtain an application name for process IDs not yet present in the dynamic store.
Type: Application
Filed: December 21, 2007
Publication date: June 25, 2009
Applicant: Microsoft Corporation
Inventors: Shabnam Erfani, Milong Sabandith
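The trusted-write decision can be sketched as below. The store contents, the application names, and the `os_lookup` resolver are hypothetical stand-ins for the OS-level PID-to-application mapping.

```python
# Sketch of the trusted-write decision: a process ID is resolved to an
# application name (cached in a dynamic store) and checked against a persistent
# list of applications trusted to write through to the protected device.
dynamic_store = {}                  # pid -> application name (built up over time)
persistent_store = {"backup.exe"}   # hypothetical trusted-application list

def resolve_app_name(pid, os_lookup):
    if pid not in dynamic_store:
        # Mapping step for PIDs not yet present in the dynamic store.
        dynamic_store[pid] = os_lookup(pid)
    return dynamic_store[pid]

def allow_write_through(pid, os_lookup):
    return resolve_app_name(pid, os_lookup) in persistent_store
```

Caching the PID-to-name mapping avoids repeating the (relatively slow) OS lookup on every write request.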