Associated With Data Cache (epo) Patents (Class 711/E12.062)
-
Patent number: 8909870
Abstract: A storage device includes a non-volatile memory, a cache memory, and a memory controller. The non-volatile memory stores a logical-to-physical address translation table for managing partitioned data and its storage locations. The cache memory holds a data cache and a logical-to-physical address translation table cache containing a portion of the translation table. When the memory controller receives an external read request and no empty entry is available in the data cache, it creates one by evicting, ahead of other partitioned data, partitioned data whose translation table entries are present in the translation table cache, writing that data back to the non-volatile memory.
Type: Grant
Filed: October 26, 2012
Date of Patent: December 9, 2014
Assignee: Hitachi, Ltd.
Inventors: Ryoichi Inada, Ryo Fujita, Takuma Nishimura
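The eviction preference described above can be illustrated with a short sketch. This is not Hitachi's implementation; the `DataCacheSketch` structure, the partition IDs, and the FIFO fallback are all illustrative assumptions.

```cpp
#include <cstdint>
#include <list>
#include <unordered_set>

// Sketch: when the data cache is full, prefer to evict (write back to
// non-volatile memory) partitioned data whose logical-to-physical (L2P)
// table entry is already resident in the L2P table cache, since its
// mapping can be updated without an extra table fetch.
struct DataCacheSketch {
    std::list<uint32_t> entries;             // partition IDs, oldest first
    std::unordered_set<uint32_t> l2pCached;  // partitions with cached L2P entries

    // Pick a victim to free an entry; assumes the cache is non-empty.
    uint32_t pickVictim() const {
        for (uint32_t id : entries)          // first pass: L2P entry cached
            if (l2pCached.count(id))
                return id;
        return entries.front();              // fallback: oldest entry
    }
};
```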
-
Publication number: 20130238875
Abstract: A memory management unit (MMU) can receive an address associated with a page size that is unknown to the MMU. The MMU can concurrently determine whether a translation lookaside buffer data array stores a physical address associated with the address based on different portions of the address, where each portion is associated with a different possible page size. This provides efficient translation lookaside buffer data array access when programs employing different page sizes execute concurrently on a data processing device.
Type: Application
Filed: March 8, 2012
Publication date: September 12, 2013
Applicant: Freescale Semiconductor, Inc.
Inventors: Ravindraraj Ramaraju, Eric V. Fiene, Jogendra C. Sarker
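As a rough illustration of probing one TLB under several candidate page sizes, the loop below tries each size in turn; in hardware the determinations are concurrent. The page-size list and the map-based TLB are illustrative assumptions, not Freescale's design.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

struct Translation { uint64_t physBase; uint64_t pageSize; };

// Candidate page sizes (assumed): 4 KiB, 2 MiB, 1 GiB.
constexpr uint64_t kPageSizes[] = {4096, 2097152, 1073741824};

// Probe the TLB once per candidate size, deriving a different portion
// of the address (the page-aligned base) from it each time.
std::optional<uint64_t> lookup(
        const std::unordered_map<uint64_t, Translation>& tlb,
        uint64_t vaddr) {
    for (uint64_t size : kPageSizes) {
        uint64_t base = vaddr - (vaddr % size);   // tag for this page size
        auto it = tlb.find(base);
        if (it != tlb.end() && it->second.pageSize == size)
            return it->second.physBase + (vaddr % size);
    }
    return std::nullopt;                          // TLB miss
}
```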
-
Patent number: 8397049
Abstract: In an embodiment, a memory management unit (MMU) is configured to retain a block of data that includes multiple page table entries. The MMU checks the block in response to TLB misses and supplies a translation from the block, if found there, without generating a memory read for the translation. In some embodiments, the MMU may also maintain a history of the TLB misses that have used translations from the block, and may generate a prefetch of a second block based on the history. For example, the history may be a list of the Q most recently used page table entries and may show a pattern of accesses nearing the end of the block. In another embodiment, the history may comprise a count of the page table entries in the block that have been used.
Type: Grant
Filed: July 13, 2009
Date of Patent: March 12, 2013
Assignee: Apple Inc.
Inventors: James Wang, Zongjian Chen
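A compact sketch of the history-driven prefetch heuristic: track which entries of the retained PTE block were used recently and prefetch the next block once accesses cluster near its end. The block geometry, the history depth, and the trigger condition are illustrative assumptions.

```cpp
#include <deque>

constexpr int kPtesPerBlock = 8;   // assumed block geometry

struct PteBlockHistory {
    std::deque<int> recent;        // indices of recently used PTEs in block

    void recordUse(int index) {
        recent.push_back(index);
        if (recent.size() > 4) recent.pop_front();   // keep last Q = 4
    }

    // True when all recent accesses fall in the block's upper half,
    // suggesting the walker will soon need the following block.
    bool shouldPrefetchNextBlock() const {
        if (recent.size() < 4) return false;
        for (int i : recent)
            if (i < kPtesPerBlock / 2) return false;
        return true;
    }
};
```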
-
Patent number: 8370582
Abstract: A method of merging subsequent updates to a memory location includes receiving, at a first stage in an update pipeline, a first request to update a status word at a first address of a cache memory and receiving the status word from the cache memory. The method continues with determining, at a stage subsequent to the first stage, that a second request to update the status word has been received. Further included is updating the status word according to the first and second requests to form an updated status word and writing the updated status word to the cache memory.
Type: Grant
Filed: January 26, 2010
Date of Patent: February 5, 2013
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Chris Brueggen
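The merge step might look like the following, where a later pipeline stage folds a second pending update into the in-flight one so that a single write reaches the cache. The OR-merge semantics for the status word are an illustrative assumption.

```cpp
#include <cstdint>
#include <optional>

struct UpdateRequest { uint64_t addr; uint64_t setBits; };

struct MergingStage {
    std::optional<UpdateRequest> inFlight;   // update received at stage one

    // Returns true if the new request was absorbed into the in-flight
    // update, so only the merged status word is written back.
    bool tryMerge(const UpdateRequest& r) {
        if (inFlight && inFlight->addr == r.addr) {
            inFlight->setBits |= r.setBits;  // apply both updates at once
            return true;
        }
        return false;
    }
};
```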
-
Patent number: 8370578
Abstract: Provided are a storage controller, and a method of controlling the same, which avoid access conflicts on a parallel bus connected to a local memory when part of the local memory's storage area is used as cache memory. The storage controller, which controls data between a host system and a storage apparatus, comprises: a data transfer control unit that controls data transfer on the basis of read/write requests from the host system; a cache memory connected to the data transfer control unit via a parallel bus; a control unit connected to the data transfer control unit via a serial bus; and a local memory connected to the control unit via a parallel bus. The control unit decides whether to assign the storage area for the data from a cache segment of the cache memory or of the local memory on the basis of the CPU operating rate and the path utilization of the parallel bus connected to the cache memory.
Type: Grant
Filed: March 3, 2011
Date of Patent: February 5, 2013
Assignee: Hitachi, Ltd.
Inventors: Yoshihiro Yoshii, Mitsuru Inoue, Kentaro Shimada, Sadahiro Sugimoto
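The assignment decision reduces to a two-input policy, sketched below. The threshold values are illustrative assumptions; the abstract states only that the CPU operating rate and the parallel-bus path utilization drive the choice.

```cpp
enum class Segment { CacheMemory, LocalMemory };

// Place a new cache segment in local memory only when the CPU is
// lightly loaded and the bus to the cache memory is busy; otherwise
// default to the cache memory.
Segment chooseSegment(double cpuOperatingRate, double cacheBusUtilization) {
    const double kCpuBusyThreshold = 0.7;   // assumed threshold
    const double kBusBusyThreshold = 0.8;   // assumed threshold
    if (cpuOperatingRate < kCpuBusyThreshold &&
        cacheBusUtilization > kBusBusyThreshold)
        return Segment::LocalMemory;        // relieve the cache-memory bus
    return Segment::CacheMemory;
}
```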
-
Publication number: 20120226871
Abstract: This invention is a method and system for replacing an entry in a cache memory (a replacement policy). The cache is divided into a high-priority class and a low-priority class. Upon a request for information such as data, an instruction, or an address translation, the processor searches the cache. If there is a cache miss, the processor locates the information elsewhere, typically in memory, and the found information replaces an existing entry in the cache. The entry selected for replacement (eviction) is chosen from within the low-priority class using a FIFO algorithm. Upon a cache hit, the processor performs a read, write, or execute using or upon the information. If the performed operation was a write, the information is placed into the high-priority class. If the high-priority class is full, an entry within it is selected for removal based on a FIFO algorithm and re-classified into the low-priority class.
Type: Application
Filed: March 3, 2011
Publication date: September 6, 2012
Applicant: International Business Machines Corporation
Inventors: Jason Frederick Cantin, Prasenjit Chakraborty
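The two-class policy can be sketched as a pair of FIFO queues. The class capacity is an illustrative assumption, and removal of a promoted tag from the low-priority queue is omitted for brevity.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

struct TwoClassFifo {
    std::deque<uint64_t> low, high;   // oldest tags at the front
    std::size_t highCapacity = 4;     // assumed class size

    // Eviction always chooses from the low-priority class, FIFO order;
    // assumes the low class is non-empty.
    uint64_t evict() {
        uint64_t victim = low.front();
        low.pop_front();
        return victim;
    }

    void insertOnMiss(uint64_t tag) { low.push_back(tag); }

    // A write promotes the tag; a full high class demotes its oldest
    // entry back into the low-priority class (also FIFO).
    void promoteOnWrite(uint64_t tag) {
        if (high.size() == highCapacity) {
            low.push_back(high.front());
            high.pop_front();
        }
        high.push_back(tag);
    }
};
```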
-
Publication number: 20120198121
Abstract: A method for minimizing cache conflict misses is disclosed. A translation table capable of facilitating the translation of a virtual address to a real address during a cache access is provided. The translation table includes multiple entries, and each entry includes a page number field and a hash value field. A hash value is generated from a first group of bits within a virtual address, and the hash value is stored in the hash value field of an entry within the translation table. In response to a match on the entry within the translation table during a cache access, the hash value of the matched entry is retrieved from the translation table and concatenated with a second group of bits within the virtual address to form a set of indexing bits to index into a cache set.
Type: Application
Filed: January 28, 2011
Publication date: August 2, 2012
Applicant: International Business Machines Corporation
Inventors: Robert H. Bell, Jr., Men-Chow Chiang, Hong L. Hua
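The index construction is straightforward to sketch: a small hash of high-order virtual-address bits, stored in the translation entry, is concatenated with low-order address bits to form the set index. The bit positions, widths, and hash function below are illustrative assumptions.

```cpp
#include <cstdint>

constexpr int kHashBits = 3;   // assumed width of the stored hash field
constexpr int kLowBits  = 4;   // assumed bits taken directly from the VA

// Hash of the first group of bits (high-order VA bits), computed when
// the translation table entry is installed.
uint32_t makeHash(uint64_t vaddr) {
    uint64_t high = vaddr >> 16;
    return static_cast<uint32_t>((high ^ (high >> kHashBits)) &
                                 ((1u << kHashBits) - 1));
}

// On a translation match, concatenate the stored hash with the second
// group of bits to form the indexing bits for the cache set.
uint32_t setIndex(uint32_t storedHash, uint64_t vaddr) {
    uint32_t low = static_cast<uint32_t>((vaddr >> 6) &
                                         ((1u << kLowBits) - 1));
    return (storedHash << kLowBits) | low;
}
```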
-
Patent number: 8074028
Abstract: A multi-tiered caching and cache indexing system is provided. A cache management system uses a memory-based object index to reference or identify corresponding objects stored on disk. The memory used to index objects may grow proportionally or in relation to growth in the size of the disk. The techniques described herein minimize, reduce, or maintain the size of the memory for an object index even when the size of the storage for the objects changes. These techniques allow more efficient use of memory for object indexing while the disk size for object storage increases or decreases.
Type: Grant
Filed: March 12, 2007
Date of Patent: December 6, 2011
Assignee: Citrix Systems, Inc.
Inventor: Robert Plamondon
-
Publication number: 20110153952
Abstract: Systems, methods, and apparatus for flushing a plurality of cache lines and/or invalidating a plurality of translation look-aside buffer (TLB) entries are described. In one such method, a single instruction is received that includes a first field indicating that a plurality of cache lines of the processor are to be flushed, and in response to the single instruction, the plurality of cache lines of the processor are flushed.
Type: Application
Filed: December 22, 2009
Publication date: June 23, 2011
Inventors: Martin G. Dixon, Scott D. Rodgers
-
Patent number: 7966452
Abstract: A cache memory processing system is disclosed that is coupled to a main memory and a processing unit. The cache memory processing system includes an input, a low-order-bit data path, a high-order-bit data path, and an output. The input receives input data that includes at least one low-order input bit and at least one high-order input bit. The low-order-bit data path processes the at least one low-order input bit and provides at least one low-order output bit. The high-order-bit data path processes the at least one high-order input bit and provides at least one high-order output bit; it includes at least one exclusive-OR (XOR) gate. The output provides the at least one low-order output bit and the at least one high-order output bit.
Type: Grant
Filed: January 22, 2008
Date of Patent: June 21, 2011
Assignee: Board of Governors for Higher Education, State of Rhode Island and Providence Plantations
Inventor: Qing Yang
-
Publication number: 20110078388
Abstract: In computing environments that use virtual addresses (or other indirectly usable addresses) to access memory, the virtual addresses are translated to absolute addresses (or other directly usable addresses) prior to accessing memory. To facilitate memory access, however, address translation is omitted in certain circumstances, including when the data to be accessed is within the same unit of memory as the instruction accessing the data. In this case, the absolute address of the data is derived from the absolute address of the instruction, thus avoiding address translation for the data. Further, in some circumstances, access checking for the data is also omitted.
Type: Application
Filed: September 29, 2009
Publication date: March 31, 2011
Applicant: International Business Machines Corporation
Inventors: Viktor S. Gyuris, Ali Sheikh, Kirk A. Stewart
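The derivation step amounts to swapping the page offset, as in the sketch below. Treating the unit of memory as a 4 KiB page is an illustrative assumption.

```cpp
#include <cstdint>
#include <optional>

constexpr uint64_t kPageMask = ~0xFFFULL;   // assumed 4 KiB unit of memory

// Reuse the instruction's translation when the data lies in the same
// unit of memory; otherwise fall back to normal address translation.
std::optional<uint64_t> deriveDataAddress(uint64_t instrVirt,
                                          uint64_t instrAbs,
                                          uint64_t dataVirt) {
    if ((instrVirt & kPageMask) != (dataVirt & kPageMask))
        return std::nullopt;                // different unit: translate
    return (instrAbs & kPageMask) | (dataVirt & ~kPageMask);
}
```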
-
Publication number: 20110072235
Abstract: One embodiment of the present invention sets forth a system and method for supporting high-throughput virtual-to-physical address translation using compressed TLB cache lines with variable address range coverage. The amount of memory covered by a TLB cache line depends on the page size and the page table entry (PTE) compression level. When a TLB miss occurs, a cache line is allocated with an assumed address range that may be larger or smaller than the address range of the PTE data actually returned. Subsequent requests that hit a cache line with a fill pending are queued until the fill completes. When the fill completes, the cache line's address range is set to the address range of the PTE data returned. Queued requests are replayed, and any that fall outside the actual address range are reissued, potentially generating additional misses and fills.
Type: Application
Filed: August 5, 2010
Publication date: March 24, 2011
Inventors: James Leroy Deming, Mark Allen Mosley, William Craig McKnight, Emmett M. Kilgrariff, Steven E. Molnar, Colyn Scott Case
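The replay step can be sketched as follows: requests queued against a fill-pending line are checked against the actual address range once the fill returns, and any outside it are reissued. The types and field names are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

struct PendingLine {
    uint64_t base, span;              // coverage assumed at allocation
    std::vector<uint64_t> waiters;    // requests queued on this fill
};

// When the fill completes, adopt the real coverage of the returned PTE
// data and return the queued addresses that must be reissued as misses.
std::vector<uint64_t> completeFill(PendingLine& line,
                                   uint64_t actualBase,
                                   uint64_t actualSpan) {
    std::vector<uint64_t> reissue;
    for (uint64_t a : line.waiters)
        if (a < actualBase || a >= actualBase + actualSpan)
            reissue.push_back(a);     // outside the real range: replay misses
    line.base = actualBase;           // set range to the PTE data returned
    line.span = actualSpan;
    line.waiters.clear();
    return reissue;
}
```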
-
Publication number: 20110010521
Abstract: In an embodiment, a memory management unit (MMU) is configured to retain a block of data that includes multiple page table entries. The MMU checks the block in response to TLB misses and supplies a translation from the block, if found there, without generating a memory read for the translation. In some embodiments, the MMU may also maintain a history of the TLB misses that have used translations from the block, and may generate a prefetch of a second block based on the history. For example, the history may be a list of the Q most recently used page table entries and may show a pattern of accesses nearing the end of the block. In another embodiment, the history may comprise a count of the page table entries in the block that have been used.
Type: Application
Filed: July 13, 2009
Publication date: January 13, 2011
Inventors: James Wang, Zongjian Chen
-
Publication number: 20100312957
Abstract: A translation look-aside buffer (TLB) has a TAG memory for determining whether a desired translated address is stored in the TLB. A TAG portion of an address is compared to the contents of the TAG memory without requiring a read of the TAG memory, because the TAG memory has a storage portion constructed as a CAM. For each row of the CAM, a match determination indicates whether the TAG portion is the same as the contents of that row. A decoder decodes an index portion of the address and provides an output for each row. On a per-row basis, the output of the decoder is logically combined with the match signals to determine whether there is a hit in the TAG memory. If there is a hit, the translated address corresponding to the index portion of the address is output as the selected translated address.
Type: Application
Filed: June 9, 2009
Publication date: December 9, 2010
Inventors: Ravindraraj Ramaraju, Jogendra C. Sarker, Vu N. Tran
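The per-row combination reduces to an AND of the CAM match vector with a one-hot decoder output, as sketched below with an assumed eight-row geometry.

```cpp
#include <array>
#include <bitset>
#include <cstdint>

constexpr int kRows = 8;   // assumed TAG memory depth

// Each CAM row compares its stored tag against the TAG portion; the
// decoder asserts exactly one row from the index portion; a hit
// requires both to agree on the same row.
bool tlbTagHit(const std::array<uint32_t, kRows>& storedTags,
               uint32_t tagPortion, unsigned indexPortion) {
    std::bitset<kRows> match, selected;
    for (int r = 0; r < kRows; ++r)
        match[r] = (storedTags[r] == tagPortion);  // per-row CAM compare
    selected[indexPortion % kRows] = true;         // one-hot decoder output
    return (match & selected).any();               // per-row AND, then OR
}
```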
-
Publication number: 20100185823
Abstract: Techniques for enabling high-performance computing are provided. The techniques include resizing a logical partition in a non-dedicated compute cluster server to enable high-performance computing, wherein a high-performance computing application is executed such that each of its one or more application threads completes execution at approximately the same time as the slowest thread in the cluster, and wherein the non-dedicated compute cluster comprises one or more servers and the logical partition is created by partitioning one or more server resources.
Type: Application
Filed: January 21, 2009
Publication date: July 22, 2010
Applicant: International Business Machines Corporation
Inventors: Pradipta De, Ravi Kothari, Vijay Mann, Rajiv Sachdev
-
Publication number: 20100122062
Abstract: In one embodiment, an input/output (I/O) memory management unit (IOMMU) comprises at least one memory and control logic coupled to the memory. The memory is configured to store translation data corresponding to one or more I/O translation tables stored in a memory system of a computer system that includes the IOMMU. The control logic is configured to translate an I/O device-generated memory request using the translation data. The translation data includes a type field indicating one or more attributes of the translation, and the control logic is configured to control the translation responsive to the type field.
Type: Application
Filed: January 11, 2010
Publication date: May 13, 2010
Inventors: Mark D. Hummel, Geoffrey S. Strongin, Andrew W. Lueck
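A type field gating translation behavior might look like the sketch below; the attribute meanings and the 4 KiB page size are illustrative assumptions, not the publication's encoding.

```cpp
#include <cstdint>
#include <optional>

// Translation data with a type field whose bits control handling of a
// device-generated request (field meanings assumed for illustration).
struct IoTranslation {
    uint64_t physBase;
    bool writable;      // attribute: device may write through this entry
    bool passthrough;   // attribute: forward the address untranslated
};

std::optional<uint64_t> translateIoRequest(const IoTranslation& t,
                                           uint64_t devAddr, bool isWrite) {
    if (t.passthrough) return devAddr;                 // identity mapping
    if (isWrite && !t.writable) return std::nullopt;   // fault the request
    return t.physBase + (devAddr & 0xFFF);             // assumed 4 KiB page
}
```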
-
Publication number: 20100100685
Abstract: An effective address cache memory includes a TLB effective page memory configured to retain entry data including an effective page tag of predetermined high-order bits of an effective address of a process, and to output a hit signal when the stored effective page tag matches the effective page tag from a processor; a data memory configured to retain cache data with the effective page tag or a page offset as a cache index; and a cache state memory configured to retain a cache state of the cache data stored in the data memory, in a manner corresponding to the cache index.
Type: Application
Filed: October 16, 2009
Publication date: April 22, 2010
Applicant: Kabushiki Kaisha Toshiba
Inventors: Yasuhiko Kurosawa, Shigeaki Iwasa, Seiji Maeda, Nobuhiro Yoshida, Mitsuo Saito, Hiroo Hayashi
-
Publication number: 20090172243
Abstract: In one embodiment, the present invention includes a translation lookaside buffer (TLB) to store entries each having a translation portion to store a virtual address (VA)-to-physical address (PA) translation and a second portion to store bits for a memory page associated with the VA-to-PA translation, where the bits indicate attributes of information in the memory page. Other embodiments are described and claimed.
Type: Application
Filed: December 28, 2007
Publication date: July 2, 2009
Inventors: David Champagne, Abhishek Tiwari, Wei Wu, Christopher J. Hughes, Sanjeev Kumar, Shih-Lien Lu
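Such an entry is essentially a translation plus a spare bit field, as in this sketch; the particular attributes queried are illustrative assumptions.

```cpp
#include <cstdint>

// A TLB entry carrying per-page attribute bits alongside the
// VA-to-PA translation.
struct TlbEntry {
    uint64_t virtPage;   // translation portion: virtual page number
    uint64_t physPage;   // translation portion: physical page number
    uint8_t  attrBits;   // second portion: attributes of the page's data
};

// Example queries on the attribute bits (assumed encodings).
inline bool isHotPage(const TlbEntry& e)       { return e.attrBits & 0x1; }
inline bool isMonitoredPage(const TlbEntry& e) { return e.attrBits & 0x2; }
```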
-
Publication number: 20090172292
Abstract: A method and apparatus for accelerating lookups in an address-based table is herein described. When an address and value pair is added to an address-based table, the value is privately stored at the address to allow quick and efficient local access to the value. In response to the private store, a cache line holding the value is transitioned to a private state to ensure the value is not made globally visible. Upon eviction of the privately held cache line, the information is not written back, preserving the locality of the value. In one embodiment, the address-based table includes a transactional write buffer to hold addresses corresponding to tentatively updated values during a transaction. Accesses to the tentative values during the transaction may be accelerated through the use of annotation bits and private stores, as discussed herein. Upon commit of the transaction, the values are copied to their locations to make the updates globally visible.
Type: Application
Filed: December 27, 2007
Publication date: July 2, 2009
Inventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Ethan Schuchman
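The write buffer's commit/abort behavior can be sketched with a private map that is never written back on its own; the map-based table stands in for the cache-line mechanics described above and is an illustrative assumption.

```cpp
#include <cstdint>
#include <unordered_map>

struct TxWriteBuffer {
    // addr -> tentative value; locally visible only during the transaction
    std::unordered_map<uint64_t, uint64_t> privateValues;

    void privateStore(uint64_t addr, uint64_t value) {
        privateValues[addr] = value;
    }

    // Commit copies every tentative update to the global table, making
    // the updates globally visible.
    void commit(std::unordered_map<uint64_t, uint64_t>& globalTable) {
        for (const auto& [addr, value] : privateValues)
            globalTable[addr] = value;
        privateValues.clear();
    }

    // Abort discards the private values; nothing is written back.
    void abort() { privateValues.clear(); }
};
```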
-
Publication number: 20080140956
Abstract: An advanced processor comprises a plurality of multithreaded processor cores, each having a data cache and an instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among them. A messaging network is coupled to each of the processor cores and to a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each processor core via its respective data cache, and the messaging network is coupled to each processor core via its respective message station. Advantages of the invention include the ability to provide high-bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
Type: Application
Filed: January 22, 2008
Publication date: June 12, 2008
Inventors: David T. Hass, Basab Mukherjee
-
Publication number: 20080133840
Abstract: A system and method for improved handling of large pages in a virtual memory system. A data memory management unit (DMMU) detects sequential access of a first sub-page and a second sub-page out of a set of sub-pages that comprise the same large page. When the DMMU then receives a request for the first sub-page, it instructs a pre-fetch engine to pre-fetch at least the second sub-page if the number of detected sequential accesses equals or exceeds a predetermined value.
Type: Application
Filed: January 17, 2008
Publication date: June 5, 2008
Inventors: Vaijayanthimala K. Anand, Sandra K. Johnson
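The detection logic reduces to a sequential-access counter per large page, sketched below; the threshold and sub-page numbering are illustrative assumptions.

```cpp
#include <cstdint>

// Count sequential sub-page accesses within one large page; once the
// count reaches the threshold, the caller should instruct the
// pre-fetch engine to fetch the next sub-page.
struct SubPageTracker {
    int64_t lastSubPage = -2;              // sentinel: no previous access
    int sequentialCount = 0;
    static constexpr int kThreshold = 2;   // assumed predetermined value

    // Returns true when the pre-fetch engine should fetch subPage + 1.
    bool onAccess(int64_t subPage) {
        sequentialCount =
            (subPage == lastSubPage + 1) ? sequentialCount + 1 : 0;
        lastSubPage = subPage;
        return sequentialCount >= kThreshold;
    }
};
```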