Directories And Tables (e.g., DLAT, TLB) Patents (Class 711/205)
  • Patent number: 9405700
    Abstract: Various methods and apparatus are described for communicating transactions between one or more initiator IP cores and one or more target IP cores coupled to an interconnect. A centralized Memory Management logic Unit (MMU) is located in the interconnect for virtualization and sharing of integrated circuit resources including target cores between the one or more initiator IP cores. A master translation look aside buffer (TLB) stores virtualization and sharing information in the entries of the master TLB. A set of two or more translation look aside buffers (TLBs) locally store virtualization and sharing information replicated from the master TLB. Logic in the MMU or other software updates the virtualization and sharing information replicated from the master TLB in the entries of one or more of the set of local TLBs.
    Type: Grant
    Filed: November 3, 2011
    Date of Patent: August 2, 2016
    Assignee: Sonics, Inc.
    Inventor: Drew E. Wingard
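The replication scheme in the preceding entry (patent 9405700) can be pictured in software as a master TLB holding the authoritative translations and sharing attributes, with per-initiator local TLBs caching copies that are refreshed when the master changes. The sketch below is a minimal illustration under assumed data shapes (a dictionary keyed by virtual page number); it is not the patented hardware and all class and field names are invented for the example.

```python
# Minimal model of a master TLB whose entries are replicated into local TLBs.
# Data shapes (vpn -> (ppn, sharing attributes)) are illustrative assumptions.

class MasterTLB:
    def __init__(self):
        self.entries = {}          # vpn -> (ppn, attrs)
        self.replicas = []         # local TLBs that mirror a subset of entries

    def register_replica(self, local_tlb):
        self.replicas.append(local_tlb)

    def update(self, vpn, ppn, attrs):
        """Install or change a translation and push it to every local copy."""
        self.entries[vpn] = (ppn, attrs)
        for local in self.replicas:
            local.refresh(vpn, ppn, attrs)

class LocalTLB:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}

    def refresh(self, vpn, ppn, attrs):
        # Only entries already cached locally are updated; others are
        # fetched on demand from the master on a miss.
        if vpn in self.entries:
            self.entries[vpn] = (ppn, attrs)

    def lookup(self, vpn, master):
        if vpn not in self.entries:
            if len(self.entries) >= self.capacity:
                self.entries.pop(next(iter(self.entries)))  # naive eviction
            self.entries[vpn] = master.entries[vpn]          # fill from master
        return self.entries[vpn]

master = MasterTLB()
cpu_tlb, dma_tlb = LocalTLB(), LocalTLB()
master.register_replica(cpu_tlb)
master.register_replica(dma_tlb)
master.update(vpn=0x10, ppn=0x7A, attrs={"shared": True})
print(cpu_tlb.lookup(0x10, master))   # filled from the master on the first miss
master.update(0x10, 0x7B, {"shared": True})
print(cpu_tlb.lookup(0x10, master))   # sees the refreshed translation
```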
  • Patent number: 9330020
    Abstract: Detailed herein are systems, apparatuses, and methods for transparent page level instruction translation. Exemplary embodiments include an instruction translation lookaside buffer (iTLB), wherein each iTLB entry includes a linear address of a page in memory, a physical address of the page in memory, and a remapping indicator.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: May 3, 2016
    Assignee: Intel Corporation
    Inventors: Paul Caprioli, Vedvyas Shanbhogue, Koichi Yamada
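As a rough illustration of the iTLB entry layout in the preceding Intel entry (a linear address, a physical address, and a remapping indicator), the sketch below models each entry as a small record and lets a lookup report whether the returned page is a remapped translation. Field names are assumptions made for the example, not the patented encoding.

```python
# Illustrative iTLB entry: linear page address, physical page address, and a
# remapping indicator. Field names are assumptions, not the patented encoding.
from dataclasses import dataclass

@dataclass
class ITLBEntry:
    linear_page: int
    physical_page: int
    remapped: bool      # set when instruction fetches are transparently redirected

class ITLB:
    def __init__(self):
        self.entries = {}

    def install(self, entry: ITLBEntry):
        self.entries[entry.linear_page] = entry

    def translate(self, linear_page: int):
        entry = self.entries.get(linear_page)
        if entry is None:
            return None                       # miss: a page walk would be needed
        return entry.physical_page, entry.remapped

itlb = ITLB()
itlb.install(ITLBEntry(linear_page=0x400, physical_page=0x9F2, remapped=True))
print(itlb.translate(0x400))                  # (0x9F2, True): fetch goes to the remapped page
```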
  • Patent number: 9323534
    Abstract: A method includes determining, for a first thread of execution, a first speculative decoded operands signal and determining, for a second thread of execution, a second speculative decoded operands signal. The method further includes determining, for the first thread of execution, a first constant and determining, for the second thread of execution, a second constant. The method further includes comparing the first speculative decoded operands signal to the second speculative decoded operands signal and using the first and second constants to detect a wordline collision for accessing the memory array.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: April 26, 2016
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Ravindraraj Ramaraju, Kathryn C. Stacer
  • Patent number: 9319491
    Abstract: Methods and systems for a more efficient transmission of network traffic are provided. According to one embodiment, payload data originated by a user process running on a host processor of a network device is fetched by an interface of the network device by performing direct virtual memory addressing of a user memory space of a system memory of the network device on behalf of a network interface unit of the network device. The direct virtual memory addressing maps physical addresses of various portions of the payload data to corresponding virtual addresses. The payload data is segmented by the network interface unit across one or more packets.
    Type: Grant
    Filed: January 14, 2016
    Date of Patent: April 19, 2016
    Assignee: Fortinet, Inc.
    Inventors: Xu Zhou, David Chen, Lin Huang, Guansong Zhang
  • Patent number: 9189417
    Abstract: A method includes performing a speculative tablewalk. The method includes performing a tablewalk to determine an address translation for a speculative operation and determining whether the speculative operation has been upgraded to a non-speculative operation concurrently with performing the tablewalk. An apparatus is provided that includes a load-store unit to maintain execution operations. The load-store unit includes a tablewalker to perform a tablewalk and includes an input indicative of the operation being speculative or non-speculative as well as a state machine to determine actions performed during the tablewalk based on the input. The apparatus also includes a translation look-aside buffer. Computer readable storage devices for performing the methods and adapting a fabrication facility to manufacture the apparatus are provided.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: November 17, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Kaplan, Stephen P. Thompson
  • Patent number: 9081706
    Abstract: The disclosed embodiments provide techniques for reducing address-translation latency and the serialization latency of combined TLB and data cache misses in a coherent shared-memory system. For instance, the last-level TLB structures of two or more multiprocessor nodes can be configured to act together as either a distributed shared last-level TLB or a directory-based shared last-level TLB. Such TLB-sharing techniques increase the total amount of useful translations that are cached by the system, thereby reducing the number of page-table walks and improving performance. Furthermore, a coherent shared-memory system with a shared last-level TLB can be further configured to fuse TLB and cache misses such that some of the latency of data coherence operations is overlapped with address translation and data cache access latencies, thereby further improving the performance of memory operations.
    Type: Grant
    Filed: May 10, 2012
    Date of Patent: July 14, 2015
    Assignee: Oracle International Corporation
    Inventors: Pranay Koka, Michael O. McCracken, Herbert D. Schwetman, Jr., David A. Munday
  • Patent number: 9069690
    Abstract: In an embodiment, a page miss handler includes paging caches and a first walker to receive a first linear address portion and to obtain a corresponding portion of a physical address from a paging structure, a second walker to operate concurrently with the first walker, and a logic to prevent the first walker from storing the obtained physical address portion in a paging cache responsive to the first linear address portion matching a corresponding linear address portion of a concurrent paging structure access by the second walker. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 13, 2012
    Date of Patent: June 30, 2015
    Assignee: Intel Corporation
    Inventors: Gur Hildesheim, Chang Kian Tan, Robert S. Chappell, Rohit Bhatia
  • Patent number: 9032145
    Abstract: A memory device includes an address protection system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The protection system is used to prevent at least some of a plurality of processors in a system from accessing addresses designated by one of the processors as a protected memory address. Until the processor releases the protection, only the designating processor can access the memory device at the protected address. If the memory device contains a cache memory, the protection system can alternatively or additionally be used to protect cache memory addresses.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: May 12, 2015
    Assignee: Micron Technology, Inc.
    Inventor: David Resnick
  • Patent number: 9015400
    Abstract: A computer system and a method are provided that reduce the amount of time and computing resources that are required to perform a hardware table walk (HWTW) in the event that a translation lookaside buffer (TLB) miss occurs. If a TLB miss occurs when performing a stage 2 (S2) HWTW to find the physical address (PA) at which a stage 1 (S1) page table is stored, the memory management unit (MMU) uses the intermediate physical address (IPA) to predict the corresponding PA, thereby avoiding the need to perform any of the S2 table lookups. This greatly reduces the number of lookups needed for these types of HWTW read transactions, which in turn reduces the processing overhead and performance penalties associated with them.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: April 21, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Thomas Zeng, Azzedine Touzni, Tzung Ren Tzeng, Phil J. Bostley
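A toy model of the shortcut in the preceding QUALCOMM entry: when a stage 2 TLB miss occurs while locating a stage 1 page table, the walker predicts the physical address directly from the intermediate physical address instead of walking the stage 2 tables. The flat IPA-to-PA prediction and the table contents below are assumptions used only to make the idea concrete.

```python
# Toy two-stage translation in which a stage 2 (S2) miss during a stage 1 (S1)
# table fetch is resolved by predicting PA from IPA rather than walking S2.
# The identity-style prediction heuristic is an illustrative assumption; a real
# design would verify and, if necessary, correct a mispredicted PA.

S2_TLB = {0x300: 0x880}                      # IPA page -> PA page
S2_PAGE_TABLE = {0x300: 0x880, 0x301: 0x881}

def predict_pa_from_ipa(ipa_page):
    return ipa_page                          # e.g. an identity/offset mapping heuristic

def s2_translate(ipa_page, stats):
    if ipa_page in S2_TLB:
        return S2_TLB[ipa_page]
    # Normally: full S2 page-table walk. For S1 table fetches the MMU instead
    # predicts the PA from the IPA and skips the S2 lookups entirely.
    stats["predicted"] += 1
    return predict_pa_from_ipa(ipa_page)

stats = {"predicted": 0}
print(hex(s2_translate(0x300, stats)))       # 0x880, S2 TLB hit
print(hex(s2_translate(0x301, stats)))       # 0x301, predicted instead of walked
print(stats)                                 # {'predicted': 1}
```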
  • Patent number: 9009446
    Abstract: The disclosed embodiments provide a system that uses broadcast-based TLB-sharing techniques to reduce address-translation latency in a shared-memory multiprocessor system with two or more nodes that are connected by an electrical interconnect. During operation, a first node receives a memory operation that includes a virtual address. Upon determining that one or more TLB levels of the first node will miss for the virtual address, the first node uses the electrical interconnect to broadcast a TLB request to one or more additional nodes of the shared-memory multiprocessor in parallel with scheduling a speculative page-table walk for the virtual address. If the first node receives a TLB entry from another node of the shared-memory multiprocessor via the electrical interconnect in response to the TLB request, the first node cancels the speculative page-table walk. Otherwise, if no response is received, the first node instead waits for the completion of the page-table walk.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: April 14, 2015
    Assignee: Oracle International Corporation
    Inventors: Pranay Koka, David A. Munday, Michael O. McCracken, Herbert D. Schwetman, Jr.
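The broadcast protocol in the preceding Oracle entry can be approximated sequentially: on a local TLB miss the node asks its peers for the translation while a speculative page walk is (notionally) scheduled, and a peer hit cancels the walk. The sketch below is a serialized simulation with invented names; it ignores timing and the real interconnect.

```python
# Serialized simulation of broadcast-based TLB sharing: on a local miss, query
# peer nodes in parallel with (notionally) scheduling a speculative page walk,
# and cancel the walk if any peer returns the translation.

class Node:
    def __init__(self, name):
        self.name = name
        self.tlb = {}                      # vpn -> ppn

    def translate(self, vpn, peers, page_table):
        if vpn in self.tlb:
            return self.tlb[vpn], "local TLB hit"
        walk_scheduled = True              # speculative page-table walk starts here
        for peer in peers:                 # broadcast over the interconnect
            if vpn in peer.tlb:
                walk_scheduled = False     # cancel the speculative walk
                self.tlb[vpn] = peer.tlb[vpn]
                return self.tlb[vpn], f"filled from {peer.name}, walk cancelled"
        # No peer responded: wait for the page-table walk to complete.
        assert walk_scheduled
        self.tlb[vpn] = page_table[vpn]
        return self.tlb[vpn], "page-table walk completed"

page_table = {0x1: 0xA1, 0x2: 0xB2}
n0, n1 = Node("node0"), Node("node1")
n1.tlb[0x1] = 0xA1
print(n0.translate(0x1, [n1], page_table))  # peer hit cancels the walk
print(n0.translate(0x2, [n1], page_table))  # falls back to the walk
```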
  • Patent number: 9009386
    Abstract: A system includes a memory device including a real memory and a tracking mechanism configured to track relationships between multiple virtual memory addresses and real memory. The system further includes a processor configured to perform the below method and/or execute the below computer program product. One method includes mapping a first virtual memory address to a real memory in a memory device and mapping a second virtual memory address to the real memory. Here, the first virtual memory address is authorized to modify data in the real memory and the second virtual memory address is not authorized to modify the data in the real memory. One computer storage medium includes a computer program product for performing the above method.
    Type: Grant
    Filed: December 13, 2010
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Brian D. Hatfield, Wenjeng Ko, Lei Liu
  • Patent number: 9003161
    Abstract: A first virtual memory address is mapped to a real memory in a memory device, and a second virtual memory address is mapped to the real memory. Here, the first virtual memory address is authorized to modify data in the real memory and the second virtual memory address is not authorized to modify the data in the real memory.
    Type: Grant
    Filed: June 11, 2012
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Brian D. Hatfield, Wenjeng Ko, Lei Liu
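The two preceding IBM entries describe mapping two virtual addresses to the same real memory with asymmetric write rights. A minimal software analogue is a page table whose entries carry a writable flag, with stores through the read-only alias rejected; the addresses, frame numbers, and class names below are assumptions for illustration only.

```python
# Two virtual pages aliased to one real frame; only the first mapping may write.
# A minimal analogue of the asymmetric-permission mapping, with assumed names.

class Memory:
    def __init__(self):
        self.frames = {0x50: bytearray(b"original")}
        self.page_table = {
            0x1000: (0x50, True),    # writable alias
            0x2000: (0x50, False),   # read-only alias of the same frame
        }

    def load(self, vaddr):
        frame, _ = self.page_table[vaddr]
        return bytes(self.frames[frame])

    def store(self, vaddr, data):
        frame, writable = self.page_table[vaddr]
        if not writable:
            raise PermissionError(f"write through read-only mapping {hex(vaddr)}")
        self.frames[frame][: len(data)] = data

mem = Memory()
mem.store(0x1000, b"updated!")
print(mem.load(0x2000))          # b'updated!': both aliases see the same frame
try:
    mem.store(0x2000, b"denied")
except PermissionError as e:
    print(e)
```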
  • Publication number: 20150082000
    Abstract: A memory management unit comprises an address translation unit that receives a memory access request as a virtual address and translates the virtual address to a physical address. A translation lookaside buffer stores page descriptors of a plurality of physical addresses, the address translation unit determining whether a page descriptor of a received virtual address is present in the translation lookaside buffer. A prefetch buffer stores page descriptors of the plurality of physical addresses. The address translation unit, in the event the page descriptor of the received virtual address is not present in the translation lookaside buffer, further determines whether the page descriptor of the received virtual address is present in the prefetch buffer; updates the translation lookaside buffer with the page descriptor in response to the determination; and performs a translation of the virtual address to a physical address using the page descriptor.
    Type: Application
    Filed: August 19, 2014
    Publication date: March 19, 2015
    Inventors: Sung-Min Hong, Sim Ji Lee, JaeYoung Hur, JiWoong Kwon, Il Park, Jong-Jin Lee, Jinyong Jung
  • Publication number: 20150058592
    Abstract: A chip multiprocessor includes a plurality of cores each having a translation lookaside buffer (TLB) and a prefetch buffer (PB). Each core is configured to determine a TLB miss on the core's TLB for a virtual page address and determine whether or not there is a PB hit on a PB entry in the PB for the virtual page address. If it is determined that there is a PB hit, the PB entry is added to the TLB. If it is determined that there is not a PB hit, the virtual page address is used to perform a page walk to determine a translation entry, the translation entry is added to the TLB and the translation entry is prefetched to each other one of the plurality of cores.
    Type: Application
    Filed: October 3, 2014
    Publication date: February 26, 2015
    Inventors: Abhishek Bhattacharjee, Margaret Martonosi
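The lookup sequence in the preceding entry (TLB miss, then prefetch buffer check, then page walk, with the walked translation pushed to the other cores' prefetch buffers) can be sketched as below. Core and table names are illustrative assumptions, and eviction policies are omitted.

```python
# Sketch of per-core TLB + prefetch buffer (PB): a PB hit is promoted into the
# TLB; a PB miss triggers a page walk whose result is also prefetched into the
# PBs of the other cores. Structure names are illustrative assumptions.

class Core:
    def __init__(self, cid):
        self.cid = cid
        self.tlb = {}
        self.pb = {}

    def translate(self, vpn, page_table, all_cores):
        if vpn in self.tlb:
            return self.tlb[vpn], "TLB hit"
        if vpn in self.pb:                       # PB hit: promote into the TLB
            self.tlb[vpn] = self.pb.pop(vpn)
            return self.tlb[vpn], "PB hit"
        entry = page_table[vpn]                  # page walk
        self.tlb[vpn] = entry
        for other in all_cores:                  # prefetch to every other core's PB
            if other is not self and vpn not in other.tlb:
                other.pb[vpn] = entry
        return entry, "page walk + cross-core prefetch"

page_table = {7: 0x70}
cores = [Core(0), Core(1), Core(2)]
print(cores[0].translate(7, page_table, cores))  # walk, prefetches to cores 1 and 2
print(cores[1].translate(7, page_table, cores))  # PB hit, no walk needed
```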
  • Patent number: 8966221
    Abstract: A lookup operation is performed in a translation look aside buffer based on a first translation request as current translation request, wherein a respective absolute address is returned to a corresponding requestor for the first translation request as translation result in case of a hit. A translation engine is activated to perform at least one translation table fetch in case the current translation request does not hit an entry in the translation look aside buffer, wherein the translation engine is idle waiting for the at least one translation table fetch to return data, reporting the idle state of the translation engine as lookup under miss condition and accepting a currently pending translation request as second translation request, wherein a lookup under miss sequence is performed in the translation look aside buffer based on said second translation request.
    Type: Grant
    Filed: June 21, 2011
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ute Gaertner, Thomas Koehler
  • Patent number: 8959302
    Abstract: An exemplary computer system includes a server module including a first processor and first memory, a storage module including a second processor, a second memory and a storage device, and a transfer module. The transfer module retrieves a first transfer list including an address of a first storage area, which is set on the first memory for a read command, from the server module. The transfer module retrieves a second transfer list including an address of a second storage area in the second memory, in which data corresponding to the read command read from the storage device is stored temporarily, from the storage module. The transfer module sends the data corresponding to the read command in the second storage area to the first storage area by controlling the data transfer between the second storage area and the first storage area based on the first and second transfer lists.
    Type: Grant
    Filed: August 7, 2014
    Date of Patent: February 17, 2015
    Assignee: Hitachi, Ltd.
    Inventors: Yuki Kondoh, Isao Ohara
  • Patent number: 8954435
    Abstract: A method for storage reclamation in a shared storage device. The method includes executing a distributed computer system having a plurality of file systems accessing storage on a shared storage device, and initiating a reclamation operation by using a reclamation agent that accesses the shared storage device. The method further includes reading the file system data structure that represent unallocated storage blocks of one of the plurality of file systems that will undergo a reclamation operation. A plurality of I/O resources that are used to provide I/O to the unallocated storage blocks are then interrupted. Storage from the unallocated storage blocks is then reclaimed, and normal operation of the I/O resources that are used to provide I/O to the unallocated storage blocks is resumed.
    Type: Grant
    Filed: April 22, 2011
    Date of Patent: February 10, 2015
    Assignee: Symantec Corporation
    Inventors: Kedar Shrikrishna Patwardhan, Anirban Mukherjee, Kirubakaran Kaliannan
  • Patent number: 8954697
    Abstract: A system configures page tables to cause an operating system to copy original page data in a data store when any one of the application processes makes a first write request for the original page data. The system detects a page fault from a memory management unit receiving a first write request from one of the application processes and creates the copy in physical memory to allow the application process to modify the page data copy. The other application processes have read access to the original page data. The system replaces the original page data in the data store with the page data copy in response to receiving a first synchronization request from the application process and updates a page table for one of the other application processes to configure access to the replaced page data in response to receiving a second synchronization request from the one other application process.
    Type: Grant
    Filed: August 5, 2010
    Date of Patent: February 10, 2015
    Assignee: Red Hat, Inc.
    Inventors: Neil R. T. Horman, Eric L. Paris, Jeffrey T. Layton
  • Patent number: 8954648
    Abstract: The invention provides a memory device. In one embodiment, the memory device comprises a flash memory, a memory, and a controller. The flash memory comprises a plurality of blocks for data storage. The memory stores an address mapping table recording relationships between logical addresses and physical addresses of the blocks therein. The controller divides the address mapping table stored in the memory into a plurality of mapping table units, updates relationships between the logical addresses and the physical addresses stored in the mapping table units, and determines whether data access performed to the flash memory fulfills the conditions of a first specific requirement; when the data access fulfills the conditions of the first requirement, the controller selects a target mapping table unit from the mapping table units and stores the target mapping table unit and a corresponding time stamp as mapping table unit data to the flash memory.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: February 10, 2015
    Assignee: Via Technologies, Inc.
    Inventors: Liang Chen, Chen Xiu
  • Patent number: 8938602
    Abstract: A first processing unit and a second processing unit can access a system memory that stores a common page table that is common to the first processing unit and the second processing unit. The common page table can store virtual memory addresses to physical memory addresses mapping for memory chunks accessed by a job of an application. A page entry, within the common page table, can include a first set of attribute bits that defines accessibility of the memory chunk by the first processing unit, a second set of attribute bits that defines accessibility of the same memory chunk by the second processing unit, and physical address bits that define a physical address of the memory chunk.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: January 20, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Colin Christopher Sharp, Thomas Andrew Sartorius
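One way to picture the shared page-table entry in the preceding QUALCOMM entry is a single word holding the physical address bits plus two independent attribute fields, one per processing unit (for example a CPU and a GPU). The bit positions and 2-bit read/write encoding below are purely an assumed example, not the patented format.

```python
# Illustrative packing of a shared page-table entry: physical address bits plus
# one attribute field per processing unit. The 2-bit R/W encoding and field
# positions are assumptions chosen for the example.

ATTR_READ, ATTR_WRITE = 0b01, 0b10

def pack_entry(phys_page, cpu_attrs, gpu_attrs):
    return (phys_page << 4) | (gpu_attrs << 2) | cpu_attrs

def unpack_entry(entry):
    return entry >> 4, (entry >> 2) & 0b11, entry & 0b11

entry = pack_entry(0x1234, cpu_attrs=ATTR_READ | ATTR_WRITE, gpu_attrs=ATTR_READ)
phys_page, gpu, cpu = unpack_entry(entry)
print(hex(phys_page))                                   # same memory chunk for both units
print(bool(cpu & ATTR_WRITE), bool(gpu & ATTR_WRITE))   # True False: asymmetric access
```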
  • Patent number: 8930649
    Abstract: A method begins by a dispersed storage (DS) processing module concurrently receiving a first data stream and a second data stream for transmission to a receiving entity. The method continues with the DS processing module segmenting each of the first and second data streams to produce a first plurality of data segments and a second plurality of data segments, dividing one of the first plurality of data segments into a first plurality of data blocks, and dividing one of the second plurality of data segments into a second plurality of data blocks. The method continues with the DS processing module creating a data matrix from the first and second plurality of data blocks and generating a coded matrix from the data matrix and an encoding matrix. The method continues with the DS processing module outputting one or more pairs of coded values of the coded matrix to the receiving entity.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: January 6, 2015
    Assignee: Cleversafe, Inc.
    Inventors: Gary W. Grube, Timothy W. Markison
  • Patent number: 8924684
    Abstract: Approaches are described for reducing the number of memory address cache (e.g. TLB) flushes that need to be performed during the course of performing virtualized I/O. A device driver residing in a host domain registers a CPU that will be used for I/O processing and requests the hypervisor to pre-allocate a number of slots in the page tables to map memory pages during I/O operations. Upon receiving an I/O operation, when memory needs to be mapped, the driver provides the hypervisor with information about the registered CPU. The hypervisor uses the pre-allocated page table slots to create the new mapping and flushes the TLB cache corresponding to the CPU that will perform the I/O. The TLB cache belonging to other CPUs may not need to be flushed. The host driver ensures that the mapped memory page is used exclusively on the CPU or performs additional TLB flushes.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: December 30, 2014
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
  • Patent number: 8914609
    Abstract: A computing device includes an interface, memory, and a processing module. The memory stores a directory and inode tables. The directory stores a file identifier and a corresponding inumber for each file that is stored in storage units. An inode table stores an inumber, metadata, and a DSN address for each file stored in a corresponding storage unit. The processing module is operable to monitor, for each of the inode tables, utilization of the memory. The processing module is further operable to monitor, for each of the storage units, utilization of memory of the storage units. The processing module is further operable to process, for the inode table and/or the corresponding storage unit, per inode table memory utilization data and per storage unit memory utilization data to adjust memory utilization of the inode table and/or memory utilization of the corresponding storage unit.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: December 16, 2014
    Assignee: Cleversafe, Inc.
    Inventors: Jason K. Resch, Gary W. Grube, S. Christopher Gladwin
  • Patent number: 8909851
    Abstract: A method of operation of a storage control system including: providing a memory controller; accessing a volatile memory table by the memory controller; writing a non-volatile semiconductor memory for persisting changes in the volatile memory table; and restoring a logical-to-physical table in the volatile memory table, after a power cycle, by restoring a random access memory with a logical-to-physical partition from a most recently used list.
    Type: Grant
    Filed: February 8, 2012
    Date of Patent: December 9, 2014
    Assignee: Smart Storage Systems, Inc.
    Inventors: Ryan Jones, Robert W. Ellis, Joseph Taylor
  • Patent number: 8904123
    Abstract: A virtual logical unit that stores learning metadata is allocated in a first storage server having a first plurality of clusters, wherein the learning metadata indicates a type of storage device in which selected data of the first plurality of clusters of the first storage server are stored. A copy services command is received to copy the selected data from the first storage server to a second storage server having a second plurality of clusters. The virtual logical unit that stores the learning metadata is copied, from the first storage server to the second storage server, via the copy services command. Selected logical units corresponding to the selected data are copied from the first storage server to the second storage server, and the learning metadata is used to place the selected data in the type of storage device indicated by the learning metadata.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: December 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Joshua J. Crawford, Benjamin J. Donie, Andreas B. Koster
  • Patent number: 8897573
    Abstract: A system and an article of manufacture for de-duplicating virtual machine image accesses include identifying one or more identical blocks in two or more images in a virtual machine image repository, generating a block map for mapping different blocks with identical content into a same block, deploying a virtual machine image by reconstituting an image from the block map and fetching any unique blocks remotely on-demand, and de-duplicating virtual machine image accesses by storing the deployed virtual machine image in a local disk cache.
    Type: Grant
    Filed: August 17, 2012
    Date of Patent: November 25, 2014
    Assignee: International Business Machines Corporation
    Inventors: Han Chen, Alexei A. Karve, Minkyong Kim, Andrzej P. Kochut, Hui Lei, Jayaram Kallapalayam Radhakrishnan, Zhiming Shen, Zhe Zhang
  • Patent number: 8898430
    Abstract: A data processing apparatus has a memory configured to store tables having virtual to physical address translations, a cache configured to store a subset of the virtual to physical address translations, and cache management circuitry configured to control transactions received from the processor requesting virtual address to physical address translations. The data processing apparatus identifies where a faulting transaction has occurred during execution of a context and whether the faulting transaction has a transaction stall or transaction terminate fault. The cache management circuitry is responsive to identification of the faulting transaction having a transaction terminate fault to invalidate all address translations in the cache that relate to the context of the faulting transaction, such that a valid bit associated with each entry in the cache is set to invalid for those address translations.
    Type: Grant
    Filed: December 5, 2012
    Date of Patent: November 25, 2014
    Assignee: ARM Limited
    Inventors: Viswanath Chakrala, Timothy Nicholas Hay, Stuart David Biles
  • Publication number: 20140331095
    Abstract: This document discusses, among other things, an example system and methods for memory expansion. An example embodiment includes receiving a memory request from a memory controller over a channel. Based on the memory request, the example embodiment includes selecting a location in memory to couple to a sub-channel of the channel and configuring a set of field effect transistors to couple the channel with the sub-channel. In the example embodiment, data may be allowed to flow between the memory controller and the location in the memory over the channel and the sub-channel.
    Type: Application
    Filed: July 21, 2014
    Publication date: November 6, 2014
    Inventors: Mario Mazzola, Satyanarayana Nishtala, Luca Cafiero, Philip Manela
  • Patent number: 8880784
    Abstract: Disclosed is a method for managing logical block write requests for a flash drive. The method includes receiving a logical block write request from a file system; assigning a category to the logical block; and generating at least three writes from the logical block write request, a first write writes the logical block to an Erasure Unit (EU) according to the category assigned to each logical block, a second write inserts a Block Mapping Table (BMT) update entry to a BMT update log, and a third write commits the BMT update entry to an on-disk BMT, wherein the first and second writes are performed synchronously and the third write is performed asynchronously and in a batched fashion.
    Type: Grant
    Filed: May 18, 2010
    Date of Patent: November 4, 2014
    Assignee: Rether Networks Inc.
    Inventors: Tzi-cker Chiueh, Maohua Lu, Pi-Yuan Cheng, Goutham Meruva
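The write path in the preceding entry (patent 8880784) separates a synchronous data write and log append from an asynchronous, batched commit of the Block Mapping Table. The sketch below mimics that split with in-memory structures; names such as bmt_update_log, the category scheme, and the batch size are assumptions for illustration.

```python
# Sketch of the three-write scheme: (1) write the logical block into an erasure
# unit chosen by its category, (2) synchronously append a BMT update entry to a
# log, (3) asynchronously commit logged entries to the on-disk BMT in batches.
# All structure names are illustrative assumptions.

class FlashDriveModel:
    def __init__(self, batch_size=4):
        self.erasure_units = {}        # category -> list of (lba, data)
        self.bmt_update_log = []       # pending BMT updates (write 2)
        self.on_disk_bmt = {}          # lba -> (category, slot)   (write 3)
        self.batch_size = batch_size

    def write_logical_block(self, lba, data, category):
        eu = self.erasure_units.setdefault(category, [])
        eu.append((lba, data))                                    # write 1: data into the EU
        self.bmt_update_log.append((lba, category, len(eu) - 1))  # write 2: log entry
        if len(self.bmt_update_log) >= self.batch_size:
            self.commit_bmt_updates()                             # write 3, batched

    def commit_bmt_updates(self):
        for lba, category, slot in self.bmt_update_log:
            self.on_disk_bmt[lba] = (category, slot)
        self.bmt_update_log.clear()

drive = FlashDriveModel(batch_size=2)
drive.write_logical_block(10, b"a", category="hot")
print(drive.on_disk_bmt)        # {} : commit still pending
drive.write_logical_block(11, b"b", category="cold")
print(drive.on_disk_bmt)        # both mappings committed in one batch
```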
  • Patent number: 8880844
    Abstract: A chip multiprocessor includes a plurality of cores each having a translation lookaside buffer (TLB) and a prefetch buffer (PB). Each core is configured to determine a TLB miss on the core's TLB for a virtual page address and determine whether or not there is a PB hit on a PB entry in the PB for the virtual page address. If it is determined that there is a PB hit, the PB entry is added to the TLB. If it is determined that there is not a PB hit, the virtual page address is used to perform a page walk to determine a translation entry, the translation entry is added to the TLB and the translation entry is prefetched to each other one of the plurality of cores.
    Type: Grant
    Filed: March 12, 2010
    Date of Patent: November 4, 2014
    Assignee: Trustees of Princeton University
    Inventors: Abhishek Bhattacharjee, Margaret Martonosi
  • Patent number: 8868822
    Abstract: A data-processing method is provided for a flash memory with a plurality of sectors. The method includes arranging first data, which is not updated in a first sector, at a leading portion of a second sector and adding a first identifier of the first data to the second sector by a memory control circuit when transferring data in the first sector to the second sector, the plurality of sectors including the first sector and the second sector.
    Type: Grant
    Filed: March 3, 2011
    Date of Patent: October 21, 2014
    Assignee: Spansion LLC
    Inventor: Hiroyuki Komori
  • Patent number: 8868865
    Abstract: An exemplary computer system includes a server module including a first processor and first memory, a storage module including a second processor, a second memory and a storage device, and a transfer module. The transfer module retrieves a first transfer list including an address of a first storage area, which is set on the first memory for a read command, from the server module. The transfer module retrieves a second transfer list including an address of a second storage area in the second memory, in which data corresponding to the read command read from the storage device is stored temporarily, from the storage module. The transfer module sends the data corresponding to the read command in the second storage area to the first storage area by controlling the data transfer between the second storage area and the first storage area based on the first and second transfer lists.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: October 21, 2014
    Assignee: Hitachi, Ltd.
    Inventors: Yuki Kondoh, Isao Ohara
  • Patent number: 8868863
    Abstract: Various embodiments provide a method and apparatus of providing a frugal cloud file system that efficiently uses the blocks of different types of storage devices with different properties for different purposes. The efficient use of the different types of available storage devices reduces the storage and bandwidth overhead. Advantageously, the reduction in storage and bandwidth overhead achieved using the frugal cloud file system reduces the economic costs of running the file system while maintaining high performance.
    Type: Grant
    Filed: January 12, 2012
    Date of Patent: October 21, 2014
    Assignee: Alcatel Lucent
    Inventors: Krishna P. Puttaswamy Naga, Thyagarajan Nandagopal
  • Patent number: 8856438
    Abstract: A disk drive is disclosed that utilizes an additional address mapping layer between logical addresses used by a host system and physical locations in the disk drive. Physical locations configured to store metadata information can be excluded from the additional address mapping layer. As a result, a reduced size translation table can be maintained by the disk drive. Improved performance, reduced costs, and improved security can thereby be attained.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: October 7, 2014
    Assignee: Western Digital Technologies, Inc.
    Inventors: Nicholas M. Warner, Marcus A. Carlson, David C. Pruett
  • Patent number: 8856435
    Abstract: A method, apparatus and computer program product for an external, self-initializing FIFO containing indexes of free CAM memory locations are presented. When data is sent to the CAM for a lookup, this external FIFO provides the CAM with the index of a free memory location within the CAM so that if the data word is not found in the CAM (i.e. a CAM miss), the data can be written to the designated available free entry in the CAM. Thus, if the same data word is searched in the CAM in the following cycle, it will result in a hit.
    Type: Grant
    Filed: October 25, 2007
    Date of Patent: October 7, 2014
    Assignee: Oracle America, Inc.
    Inventors: Milton H. Shih, Robert J. Weisenbach
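The preceding Oracle America entry pairs a CAM with an external FIFO of free indexes so a missing search key can be written immediately at a known free slot and hit on the next lookup. A minimal software analogue, with an assumed size and no free-list recycling, is sketched below.

```python
# Minimal analogue of a CAM backed by an external FIFO of free entry indexes:
# each lookup is accompanied by a free index, and on a miss the key is written
# into that index so the next search hits. Sizes and names are assumptions;
# free-list exhaustion and entry recycling are ignored in this sketch.
from collections import deque

class CAMWithFreeFIFO:
    def __init__(self, size=4):
        self.entries = [None] * size                 # index -> stored data word
        self.free_fifo = deque(range(size))          # self-initialized free indexes

    def search(self, word):
        if word in self.entries:
            return self.entries.index(word), "hit"
        index = self.free_fifo.popleft()             # free slot supplied with the lookup
        self.entries[index] = word                   # miss: install at the free index
        return index, "miss, installed"

cam = CAMWithFreeFIFO()
print(cam.search(0xDEAD))    # (0, 'miss, installed')
print(cam.search(0xDEAD))    # (0, 'hit') on the very next search
```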
  • Patent number: 8856425
    Abstract: A method for performing meta block management is provided. The method is applied to a controller of a Flash memory having multiple channels, where the Flash memory includes a plurality of blocks respectively corresponding to the channels. The method includes: utilizing a meta block mapping table to store block grouping relationships respectively corresponding to a plurality of meta blocks, where blocks in each meta block respectively correspond to the channels; and when it is detected that a specific block corresponding to a specific channel within a meta block does not have remaining space for programming, according to the meta block mapping table, utilizing at least one blank block corresponding to the specific channel within at least one other meta block as extension of the specific block, for use of further programming. An associated memory device and a controller thereof are also provided.
    Type: Grant
    Filed: July 12, 2011
    Date of Patent: October 7, 2014
    Assignee: Silicon Motion Inc.
    Inventor: Yang-Chih Shen
  • Patent number: 8850115
    Abstract: A memory package and methods for writing data to and reading data from the memory package are presented. The memory package includes a volatile memory and a high-density memory. Data is written to the memory package at a bandwidth and latency associated with the volatile memory. A directory map associates a volatile memory address with data in the high-density memory. A copy of the directory map is stored in the high-density memory. The methods allow writing to and reading from the memory package using a first memory read/write interface (e.g. DRAM interface, etc.), though data is stored in a device of a different memory type (e.g. FLASH, etc.).
    Type: Grant
    Filed: April 3, 2012
    Date of Patent: September 30, 2014
    Assignee: International Business Machines Corporation
    Inventor: Robert B. Tremaine
  • Publication number: 20140281352
    Abstract: A mechanism is described for facilitating dynamic and efficient binary translation-based translation lookaside buffer prefetching according to one embodiment. A method of embodiments, as described herein, includes translating code blocks into code translation blocks at a computing device. The code translation blocks are submitted for execution. The method may further include tracking, in runtime, dynamic system behavior of the code translation blocks, and inferring translation lookaside buffer (TLB) prefetching based on the analysis of the tracked dynamic system behavior.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Girish Venkatsubramanian, Ethan Schuchman
  • Publication number: 20140281351
    Abstract: A processing device implementing stride-based translation lookaside buffer (TLB) prefetching with adaptive offset is disclosed. A processing device of the disclosure includes a data prefetcher to generate a data prefetch address based on a linear address, a stride, or a prefetch distance, the data prefetch address associated with a data prefetch request, and a TLB prefetch address computation component to generate a TLB prefetch address based on the linear address, the stride, the prefetch distance, or an adaptive offset. The processing device also includes a cross page detection component to determine that the data prefetch address or the TLB prefetch address cross a page boundary associated with the linear address, and cause a TLB prefetch request to be written to a TLB request queue, the TLB prefetch request for translation of an address of a linear page number (LPN) based on the data prefetch address or the TLB prefetch address.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Inventors: Jaroslaw Topp, Pedro Lopez, Fernando Latorre, Demos Pavlou, Thang Vu
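To make the address arithmetic in the preceding entry concrete: the data prefetch address follows the detected stride and prefetch distance, the TLB prefetch address adds an adaptive offset, and a TLB prefetch request is queued only when one of those addresses crosses the current page. The page size, stride, and offset values below are assumptions chosen for the example.

```python
# Sketch of stride-based TLB prefetching with an adaptive offset: compute the
# data prefetch address and a further-ahead TLB prefetch address, and enqueue a
# TLB prefetch only when either one leaves the current 4 KiB page. The page
# size, stride, and offset values are illustrative assumptions.

PAGE_SHIFT = 12
tlb_request_queue = []

def maybe_prefetch_tlb(linear_addr, stride, prefetch_distance, adaptive_offset):
    data_prefetch_addr = linear_addr + stride * prefetch_distance
    tlb_prefetch_addr = data_prefetch_addr + adaptive_offset
    current_page = linear_addr >> PAGE_SHIFT
    for addr in (data_prefetch_addr, tlb_prefetch_addr):
        if addr >> PAGE_SHIFT != current_page:        # crosses the page boundary
            lpn = addr >> PAGE_SHIFT                  # linear page number to translate
            tlb_request_queue.append(lpn)
            return lpn
    return None                                       # still inside the current page

print(maybe_prefetch_tlb(0x1F00, stride=64, prefetch_distance=2, adaptive_offset=512))
print(maybe_prefetch_tlb(0x1000, stride=64, prefetch_distance=2, adaptive_offset=0))
print(tlb_request_queue)
```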
  • Patent number: 8838915
    Abstract: The present invention may provide a computer system including a plurality of tiles divided into multiple virtual domains. Each tile may include a router to communicate with others of said tiles, a private cache to store data, and a spill table to record pointers for data evicted from the private cache to a remote host, wherein the remote host and the respective tile are provided in the same virtual domain. The spill tables may allow for faster retrieval of previously evicted data because the home registry does not need to be referenced if requested data is listed in the spill table. Therefore, embodiments of the present invention may provide a distance-aware cache collaboration architecture without incurring extraneous overhead expenses.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: September 16, 2014
    Assignee: Intel Corporation
    Inventors: Ahmad Samih, Ren Wang, Christian Maciocco, Tsung-Yuan C. Tai
  • Patent number: 8838922
    Abstract: An exemplary computer system includes a server module including a first processor and first memory, a storage module including a second processor, a second memory and a storage device, and a transfer module. The transfer module retrieves a first transfer list including an address of a first storage area, which is set on the first memory for a read command, from the server module. The transfer module retrieves a second transfer list including an address of a second storage area in the second memory, in which data corresponding to the read command read from the storage device is stored temporarily, from the storage module. The transfer module sends the data corresponding to the read command in the second storage area to the first storage area by controlling the data transfer between the second storage area and the first storage area based on the first and second transfer lists.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: September 16, 2014
    Assignee: Hitachi, Ltd.
    Inventors: Yuki Kondoh, Isao Ohara
  • Patent number: 8832383
    Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.
    Type: Grant
    Filed: May 20, 2013
    Date of Patent: September 9, 2014
    Assignee: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
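The skip-limited policy in the preceding IBM entry needs only a little bookkeeping: the baseline victim is kept if it is a protected (e.g. large-page) entry that has not yet been skipped the configured number of times, and a second, unprotected candidate is evicted instead. The sketch below assumes an LRU-ordered candidate list as the generic replacement technique; class and field names are invented.

```python
# Sketch of replacement that protects valuable entries (e.g. large pages): the
# baseline (LRU) victim is skipped up to `skip_limit` times in favor of a less
# valuable entry. LRU ordering and field names are illustrative assumptions.

class ProtectingReplacer:
    def __init__(self, skip_limit=2):
        self.skip_limit = skip_limit
        self.skips = {}                     # entry id -> times it has been skipped

    def choose_victim(self, lru_ordered_entries):
        """Each entry is (entry_id, is_protected), least recently used first."""
        first_id, first_protected = lru_ordered_entries[0]
        if first_protected and self.skips.get(first_id, 0) < self.skip_limit:
            for second_id, second_protected in lru_ordered_entries[1:]:
                if not second_protected:    # replace an unprotected entry instead
                    self.skips[first_id] = self.skips.get(first_id, 0) + 1
                    return second_id
        self.skips.pop(first_id, None)      # limit reached (or nothing to skip to)
        return first_id

replacer = ProtectingReplacer(skip_limit=2)
entries = [("large_page", True), ("small_a", False), ("small_b", False)]
print(replacer.choose_victim(entries))   # small_a: large page skipped (1st time)
print(replacer.choose_victim(entries))   # small_a: large page skipped (2nd time)
print(replacer.choose_victim(entries))   # large_page: skip limit reached
```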
  • Publication number: 20140237212
    Abstract: A method, an apparatus, and a non-transitory computer readable medium for tracking prefetches generated by a stride prefetcher are presented. Responsive to a prefetcher table entry for an address stream locking on a stride, prefetch suppression logic is updated and prefetches from the prefetcher table entry are suppressed when suppression is enabled for that prefetcher table entry. A stride is a difference between consecutive addresses in the address stream. A prefetch request is issued from the prefetcher table entry when suppression is not enabled for that prefetcher table entry.
    Type: Application
    Filed: February 21, 2013
    Publication date: August 21, 2014
    Applicant: ADVANCED MICRO DEVICES, INC.
    Inventors: Alok Garg, Sharad Bade, John Kalamatianos
  • Patent number: 8788784
    Abstract: A method and device for storing and reading/writing a composite document are disclosed. The method includes: an initial storing area is pre-allocated for an inner controlling stream of the composite document, the initial storing area being contiguous sectors or sector clusters; the inner controlling stream is stored in the initial storing area. Fragmentation of the user data stream and the inner controlling stream in the composite document is reduced using the method or device. Correspondingly, pre-allocating the storing area increases the probability that the user data stream and the inner controlling stream of the composite document are stored contiguously. I/O can be optimized by introducing a strategy of read caching and batched writing, which improves the efficiency of reading and writing.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: July 22, 2014
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Libo Deng, Yi Chen
  • Patent number: 8782344
    Abstract: A cache layer leverages a logical address space and storage metadata of a storage layer to cache data of a backing store. The cache layer maintains access metadata to track data characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not in the cache. The access metadata may be separate and distinct from the storage metadata maintained by the storage layer. The cache layer determines whether to admit data into the cache using the access metadata. Data may be admitted into the cache when the data satisfies cache admission criteria, which may include an access threshold and/or a sequentiality metric. A time-ordered history of the access metadata is used to identify important/useful blocks in the logical address space of the backing store that would be beneficial to cache.
    Type: Grant
    Filed: January 12, 2012
    Date of Patent: July 15, 2014
    Assignee: Fusion-io, Inc.
    Inventors: Nisha Talagala, Swaminathan Sundararaman, Amar Mudrankit
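The admission decision in the preceding Fusion-io entry (an access threshold plus a sequentiality test over time-ordered access metadata) might look roughly like the sketch below. The threshold values, the history length, and the simple sequentiality heuristic are assumptions made only to illustrate the shape of the policy.

```python
# Sketch of cache admission based on access metadata: a logical block is only
# admitted once it has been touched at least `access_threshold` times and its
# recent accesses do not look like a sequential (streaming) scan. The threshold
# and the simple sequentiality heuristic are illustrative assumptions.
from collections import defaultdict, deque

class AdmissionPolicy:
    def __init__(self, access_threshold=2, history_len=4):
        self.access_threshold = access_threshold
        self.access_counts = defaultdict(int)         # per logical identifier
        self.recent_lids = deque(maxlen=history_len)  # time-ordered access history

    def record_access(self, lid):
        self.access_counts[lid] += 1
        self.recent_lids.append(lid)

    def should_admit(self, lid):
        if self.access_counts[lid] < self.access_threshold:
            return False
        history = list(self.recent_lids)
        sequential = all(b == a + 1 for a, b in zip(history, history[1:]))
        return not (sequential and len(history) == self.recent_lids.maxlen)

policy = AdmissionPolicy()
for lid in (100, 101, 102, 103):                      # streaming scan: never admitted
    policy.record_access(lid)
print(policy.should_admit(103))                       # False (below the access threshold)
policy.record_access(42); policy.record_access(42)    # a re-used block
print(policy.should_admit(42))                        # True: hot and non-sequential
```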
  • Patent number: 8782374
    Abstract: Methods and apparatus for inclusion of TLB (translation look-aside buffer) entries in processor micro-op caches are disclosed. Some embodiments for inclusion of TLB entries have micro-op cache inclusion fields, which are set responsive to accessing the TLB entry. Inclusion logic may then flush the micro-op cache or portions of the micro-op cache and clear corresponding inclusion fields responsive to a replacement or invalidation of a TLB entry whenever its associated inclusion field had been set. Front-end processor state may also be cleared and instructions refetched when the replacement resulted from a TLB miss.
    Type: Grant
    Filed: December 2, 2008
    Date of Patent: July 15, 2014
    Assignee: Intel Corporation
    Inventors: Lihu Rappoport, Chen Koren, Franck Sala, Oded Lempel, Ido Ouziel, Ron Gabor, Gregory Pribush, Lior Libis
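The inclusion mechanism in the preceding Intel entry ties micro-op cache contents to TLB entries: installing micro-ops sets an inclusion field on the TLB entry that translated them, and evicting or invalidating such an entry flushes the dependent micro-op cache lines. The model below is a rough, purely illustrative analogue with invented structure names.

```python
# Rough model of TLB inclusion fields for a micro-op cache: filling micro-ops
# from a page sets the inclusion bit on the TLB entry that translated it, and
# replacing/invalidating that TLB entry flushes the dependent micro-op cache
# lines. Structure names are illustrative assumptions.

class Frontend:
    def __init__(self):
        self.itlb = {}         # vpn -> {"ppn": ..., "inclusion": bool}
        self.uop_cache = {}    # vpn -> decoded micro-ops for that page

    def fill_uop_cache(self, vpn, uops):
        entry = self.itlb[vpn]
        entry["inclusion"] = True          # micro-ops from this page are now cached
        self.uop_cache[vpn] = uops

    def invalidate_tlb_entry(self, vpn):
        entry = self.itlb.pop(vpn)
        if entry["inclusion"]:             # flush dependent micro-op cache lines
            self.uop_cache.pop(vpn, None)

fe = Frontend()
fe.itlb[0x40] = {"ppn": 0x900, "inclusion": False}
fe.fill_uop_cache(0x40, ["uop_a", "uop_b"])
fe.invalidate_tlb_entry(0x40)
print(fe.uop_cache)                        # {}: micro-ops flushed with the TLB entry
```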
  • Publication number: 20140195771
    Abstract: In a particular embodiment, a method of anticipatorily loading a page of memory is provided. The method may include, during execution of first program code using a first page of memory, collecting data for at least one attribute of the first page of memory, including collecting data about at least one next page of memory that interacts with the first page of memory for a historical topology attribute of the first page of memory. The method may also include, during execution of second program code using the first page of memory, determining a second page of memory to anticipatorily load based on the historical topology attribute of the first page of memory.
    Type: Application
    Filed: January 4, 2013
    Publication date: July 10, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Shawn A. Adderly, Paul A Niekrewicz, Aydin Suren, Sebastian T. Ventrone
  • Publication number: 20140195772
    Abstract: Apparatuses, systems, and a method for providing a processor architecture with data prefetching are described. In one embodiment, a system includes one or more processing units that include a first type of in-order pipeline to receive at least one data prefetch instruction. The one or more processing units include a second type of in-order pipeline having issues slots to receive instructions and a data prefetch queue to receive the at least one data prefetch instruction. The data prefetch queue may issue the at least one data prefetch instruction to the second type of in-order pipeline based upon one or more factors (e.g., at least one execution slot of the second type of in-order pipeline being available, priority of the data prefetch instruction).
    Type: Application
    Filed: December 20, 2011
    Publication date: July 10, 2014
    Applicant: INTEL CORPORATION
    Inventor: James Earl McCormick, JR.
  • Patent number: 8775772
    Abstract: Methods and apparatus are provided for enhanced READ and WRITE operations in a FLASH-based solid state storage system that includes a logical to physical translation table, where the logical to physical translation table can include entries associating a logical block address with one or more data identifiers, and each data identifier is associated with a data string.
    Type: Grant
    Filed: December 21, 2009
    Date of Patent: July 8, 2014
    Assignee: International Business Machines Corporation
    Inventors: James A. Fuxa, Lance W. Shelton, Justin C. Haggard
  • Patent number: 8775776
    Abstract: A hash table method and structure comprises a processor that receives a plurality of access requests for access to a storage device. The processor performs a plurality of hash processes on the access requests to generate a first number of addresses for each access request. Such addresses are within a full address range. Hash table banks are operatively connected to the processor. The hash table banks form the storage device. Each of the hash table banks has a plurality of input ports. Specifically, each of the hash table banks has fewer input ports than the first number of addresses for each access request. The processor provides the addresses to the hash table banks, and each of the hash table banks stores pointers corresponding to a different limited range of addresses within the full address range (each such limited range is smaller than the full address range).
    Type: Grant
    Filed: January 18, 2012
    Date of Patent: July 8, 2014
    Assignee: International Business Machines Corporation
    Inventors: Bulent Abali, John J. Reilly
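The banked layout in the final entry can be pictured as k hash functions generating k candidate addresses per request, with each address routed to the one bank that owns its slice of the full address range. The sketch below uses SHA-256 purely as a stand-in for the hardware hash processes; the range, bank count, and number of hashes are assumptions chosen for the example.

```python
# Sketch of a banked hash table: each access request is hashed k ways into a
# full address range, and every resulting address is routed to the single bank
# responsible for that slice of the range. Hash choice and sizes are
# illustrative assumptions standing in for the hardware hash processes.
import hashlib

FULL_RANGE = 1 << 16
NUM_HASHES = 3
NUM_BANKS = 4
BANK_SPAN = FULL_RANGE // NUM_BANKS
banks = [dict() for _ in range(NUM_BANKS)]   # each bank covers a limited range

def hash_addresses(key: bytes):
    """Generate NUM_HASHES addresses within the full address range."""
    return [
        int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:2], "big")
        for i in range(NUM_HASHES)
    ]

def store_pointer(key: bytes, pointer):
    for addr in hash_addresses(key):
        bank = banks[addr // BANK_SPAN]      # route to the bank owning this slice
        bank[addr] = pointer                 # the bank only ever sees its own range

store_pointer(b"record-17", pointer=0xBEEF)
print([len(b) for b in banks])               # how the 3 addresses spread over 4 banks
```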