Directory Tables (e.g., DLAT, TLB) Patents (Class 711/207)
  • Patent number: 8612691
    Abstract: A mechanism for assigning memory to on-chip cache coherence domains assigns caches within a processing unit to coherence domains. The mechanism assigns chunks of memory to the coherence domains. The mechanism monitors applications running on cores within the processing unit to identify needs of the applications. The mechanism may then reassign memory chunks to the cache coherence domains based on the needs of the applications running in the coherence domains. When a memory controller receives a cache miss, the memory controller may look up the address in a lookup table that maps memory chunks to cache coherence domains. Snoop requests are sent to caches within the coherence domain. If a cache line is found in a cache within the coherence domain, the cache line is returned to the originating cache by the cache containing the cache line, either directly or through the memory controller.
    Type: Grant
    Filed: April 24, 2012
    Date of Patent: December 17, 2013
    Assignee: International Business Machines Corporation
    Inventors: William E. Speight, Lixin Zhang
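    Illustrative sketch: a minimal C model of the lookup table that maps memory chunks to coherence domains, as described in the abstract above. The chunk size, chunk count, and reassignment interface are assumptions chosen for illustration, not details taken from the patent.
      #include <stdint.h>
      #include <stdio.h>

      #define CHUNK_SHIFT   24            /* assumed 16 MB chunks            */
      #define NUM_CHUNKS    64            /* assumed 1 GB of managed memory  */

      /* One entry per memory chunk: which coherence domain currently owns it. */
      static uint8_t chunk_to_domain[NUM_CHUNKS];

      /* Memory controller side: on a cache miss, find the domain to snoop. */
      static unsigned domain_for_address(uint64_t phys_addr)
      {
          return chunk_to_domain[(phys_addr >> CHUNK_SHIFT) % NUM_CHUNKS];
      }

      /* Reassign a chunk when application monitoring indicates a new owner. */
      static void reassign_chunk(unsigned chunk, unsigned domain)
      {
          chunk_to_domain[chunk] = domain;
      }

      int main(void)
      {
          reassign_chunk(3, 1);                      /* chunk 3 now in domain 1 */
          uint64_t miss_addr = (3ull << CHUNK_SHIFT) | 0x1234;
          printf("snoop domain %u\n", domain_for_address(miss_addr));
          return 0;
      }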
  • Patent number: 8612719
    Abstract: Techniques for optimizing data movement in electronic storage devices are disclosed. In one particular exemplary embodiment, the techniques may be realized as a method for optimizing data movement in electronic storage devices comprising maintaining, on the electronic storage device, a data structure associating virtual memory addresses with physical memory addresses. Information can be provided regarding the data structure to a host which is in communication with the electronic storage device. Commands can be received from the host to modify the data structure on the electronic storage device, and the data structure can be modified in response to the received command.
    Type: Grant
    Filed: May 29, 2012
    Date of Patent: December 17, 2013
    Assignee: STEC, Inc.
    Inventors: Tony Digaleh Givargis, Mohammad Reza Sadri
  • Patent number: 8612720
    Abstract: A system and method for implementation of MMU assisted data breakpoints for any number of data structures within a program application are provided. For each data structure for which a data breakpoint is desired, two distinct MMU entries are created. One MMU entry has access attributes. The other entry has an interrupt triggering sub-entry. According to the preferred embodiment, access to the second MMU entry causes a page fault.
    Type: Grant
    Filed: February 9, 2007
    Date of Patent: December 17, 2013
    Assignee: Edgewater Computer Systems, Inc.
    Inventor: Alvin Sim
  • Patent number: 8607026
    Abstract: A translation lookaside buffer (TLB) formed using RAM and synthesisable logic circuits is disclosed. The synthesisable logic includes circuitry for paring down the number of memory locations that must be searched to find a translation to a physical address from a received virtual address. The logic provides a hashing circuit for hashing the received virtual address and uses the hashed virtual address to index the RAM to locate a line within the RAM that provides the translation.
    Type: Grant
    Filed: November 17, 2011
    Date of Patent: December 10, 2013
    Assignee: Nytell Software LLC
    Inventors: Paulus Stravers, Jan-Willem van de Waerdt
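    Illustrative sketch: a minimal C model of a hash-indexed, RAM-backed TLB in the spirit of the abstract above; the XOR-fold hash, table depth, and entry layout are assumptions for illustration only.
      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SHIFT 12
      #define TLB_LINES  256               /* assumed RAM depth */

      struct tlb_line {
          int      valid;
          uint64_t vpn;                    /* tag: full virtual page number */
          uint64_t pfn;                    /* translation: physical frame   */
      };

      static struct tlb_line ram[TLB_LINES];

      /* Assumed hash: XOR-fold the virtual page number into a RAM index. */
      static unsigned hash_vpn(uint64_t vpn)
      {
          return (unsigned)((vpn ^ (vpn >> 8) ^ (vpn >> 16)) & (TLB_LINES - 1));
      }

      /* Returns 1 and fills *pa on a hit, 0 on a miss (page walk needed). */
      static int tlb_lookup(uint64_t va, uint64_t *pa)
      {
          uint64_t vpn = va >> PAGE_SHIFT;
          struct tlb_line *line = &ram[hash_vpn(vpn)];
          if (line->valid && line->vpn == vpn) {
              *pa = (line->pfn << PAGE_SHIFT) | (va & ((1u << PAGE_SHIFT) - 1));
              return 1;
          }
          return 0;
      }

      int main(void)
      {
          uint64_t vpn = 0xABCDEu;
          ram[hash_vpn(vpn)] = (struct tlb_line){ 1, vpn, 0x55 };
          uint64_t pa;
          if (tlb_lookup((vpn << PAGE_SHIFT) | 0x7F, &pa))
              printf("pa = 0x%llx\n", (unsigned long long)pa);
          return 0;
      }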
  • Patent number: 8607024
    Abstract: An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to, when a rewrite to a TLB occurs, rewrite entry data; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to, when the rewrite to the TLB occurs, rewrite a held physical address; and a second physical address memory configured to, when the cache data is written to the data memory after the occurrence of the rewrite to the TLB, rewrite a held physical address.
    Type: Grant
    Filed: December 1, 2010
    Date of Patent: December 10, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Kenta Yasufuku, Shigeaki Iwasa, Yasuhiko Kurosawa, Hiroo Hayashi, Seiji Maeda, Mitsuo Saito
  • Patent number: 8601235
    Abstract: A shared memory management system and method are described. In one embodiment, a memory management system includes a memory management unit for concurrently managing memory access requests from a plurality of engines. The shared memory management system independently controls access to the context memory without interference from other engine activities. In one exemplary implementation, the memory management unit tracks an identifier for each of the plurality of engines making a memory access request. The memory management unit associates each of the plurality of engines with particular translation information respectively. This translation information is specified by a block bind operation. In one embodiment the translation information is stored in a portion of instance memory. A memory management unit can be non-blocking and can also permit a hit under miss.
    Type: Grant
    Filed: December 30, 2009
    Date of Patent: December 3, 2013
    Assignee: Nvidia Corporation
    Inventors: David B. Glasco, John S. Montrym, Lingfeng Yuan
  • Patent number: 8601234
    Abstract: The disclosure includes a method and system of configuring a translation lookaside buffer (TLB). In an embodiment, the TLB includes a first portion and a second portion. The first portion or the second portion may be selectively disabled in response to a value of a TLB configuration indicator.
    Type: Grant
    Filed: November 7, 2007
    Date of Patent: December 3, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Erich James Plondke, Ajay Anant Ingle, Lucian Codrescu, Paul Bassett
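    Illustrative sketch: a minimal C model of a TLB split into two portions, where a configuration indicator selectively disables the second portion, as in the abstract above; the portion sizes and lookup policy are illustrative assumptions.
      #include <stdint.h>
      #include <stdio.h>

      #define PORTION_ENTRIES 4

      struct tlb_entry { int valid; uint64_t vpn, pfn; };

      static struct tlb_entry portion[2][PORTION_ENTRIES];
      static int second_portion_enabled = 1;     /* the TLB configuration indicator */

      /* Search the first portion always, the second portion only when enabled. */
      static int lookup(uint64_t vpn, uint64_t *pfn)
      {
          int portions = second_portion_enabled ? 2 : 1;
          for (int p = 0; p < portions; p++)
              for (int i = 0; i < PORTION_ENTRIES; i++)
                  if (portion[p][i].valid && portion[p][i].vpn == vpn) {
                      *pfn = portion[p][i].pfn;
                      return 1;
                  }
          return 0;
      }

      int main(void)
      {
          portion[1][0] = (struct tlb_entry){ 1, 0x10, 0x99 };
          uint64_t pfn;
          printf("enabled:  hit=%d\n", lookup(0x10, &pfn));
          second_portion_enabled = 0;             /* e.g. to save power */
          printf("disabled: hit=%d\n", lookup(0x10, &pfn));
          return 0;
      }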
  • Patent number: 8601233
    Abstract: A processor including logic to execute an instruction to synchronize a mapping, stored in a translation lookaside buffer (TLB), from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system.
    Type: Grant
    Filed: October 23, 2012
    Date of Patent: December 3, 2013
    Assignee: Intel Corporation
    Inventors: Steven M. Bennett, Andrew V. Anderson, Gilbert Neiger, Richard A. Uhlig, Scott Dion Rodgers, Rajesh M. Sankaran, Camron B. Rust, Sebastian Schoenberg
  • Publication number: 20130318323
    Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
    Type: Application
    Filed: March 30, 2012
    Publication date: November 28, 2013
    Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen
  • Patent number: 8595465
    Abstract: Some of the embodiments of the present disclosure provide a method for predicting, for a first virtual address, a first descriptor based at least in part on the one or more past descriptors associated with one or more past virtual addresses; and determining, for the first virtual address, a first physical address based at least in part on the predicted first descriptor. Other embodiments are also described and claimed.
    Type: Grant
    Filed: September 8, 2010
    Date of Patent: November 26, 2013
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventor: Moshe Raz
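    Illustrative sketch: one simple way a descriptor could be predicted from past translations, assuming sequentially mapped pages so that the predicted frame extrapolates from the last observed one; the heuristic and the verification step it would require are assumptions, not the patented method.
      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SHIFT 12

      /* Last observed translation (the "past descriptor"). */
      static uint64_t last_vpn = 0x100, last_pfn = 0x2000;

      /* Predict the descriptor for a new virtual page by extrapolating from the
       * last one; a real design would verify the prediction against the page
       * tables before committing to it. */
      static uint64_t predict_pfn(uint64_t vpn)
      {
          return last_pfn + (vpn - last_vpn);    /* assumes contiguous mapping */
      }

      int main(void)
      {
          uint64_t va  = (0x101ull << PAGE_SHIFT) | 0x40;
          uint64_t vpn = va >> PAGE_SHIFT;
          uint64_t pa  = (predict_pfn(vpn) << PAGE_SHIFT) | (va & 0xFFF);
          printf("predicted pa = 0x%llx\n", (unsigned long long)pa);
          return 0;
      }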
  • Patent number: 8595464
    Abstract: Methods and mechanisms for operating a translation lookaside buffer (TLB). A translation lookaside buffer (TLB) includes a plurality of segments, each segment including one or more entries. A control unit is coupled to the TLB. The control unit is configured to determine utilization of segments, and dynamically disable segments in response to determining that segments are under-utilized. The control unit is also configured to dynamically enable segments responsive to determining a given number of segments are over-utilized.
    Type: Grant
    Filed: July 14, 2011
    Date of Patent: November 26, 2013
    Assignee: Oracle International Corporation
    Inventors: Gideon N. Levinsky, Manish K. Shah
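    Illustrative sketch: a minimal C model of per-segment utilization tracking with dynamic disable and re-enable, in the spirit of the abstract above; the segment count, thresholds, and sampling scheme are assumptions for illustration.
      #include <stdio.h>

      #define SEGMENTS 8

      static int enabled[SEGMENTS] = { 1, 1, 1, 1, 1, 1, 1, 1 };
      static unsigned hits[SEGMENTS];          /* per-segment utilization counters */

      /* Assumed thresholds per sampling interval. */
      #define UNDER_UTILIZED   4
      #define OVER_UTILIZED  100

      /* Periodically: disable cold segments, re-enable one when demand runs hot. */
      static void rebalance(void)
      {
          unsigned busy = 0;
          for (int s = 0; s < SEGMENTS; s++) {
              if (enabled[s] && hits[s] < UNDER_UTILIZED)
                  enabled[s] = 0;                       /* power down cold segment */
              if (hits[s] >= OVER_UTILIZED)
                  busy++;
              hits[s] = 0;                              /* start a new interval */
          }
          if (busy)                                     /* demand is high again */
              for (int s = 0; s < SEGMENTS; s++)
                  if (!enabled[s]) { enabled[s] = 1; break; }
      }

      int main(void)
      {
          for (int s = 0; s < SEGMENTS; s++) hits[s] = 50;
          hits[1] = 2;                                  /* segment 1 is cold */
          rebalance();
          printf("segment 1 enabled = %d\n", enabled[1]);
          return 0;
      }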
  • Patent number: 8589658
    Abstract: Extended translation look-aside buffers (eTLB) for converting virtual addresses into physical addresses are presented, the eTLB including a physical memory address storage having a number of physical addresses; a virtual memory address storage configured to store a number of virtual memory addresses corresponding with the physical addresses, the virtual memory address storage including a set associative memory (SAM) structure and a content addressable memory (CAM) structure; and comparison circuitry for determining whether a requested address is present in the virtual memory address storage, wherein the eTLB is configured to receive an index register for identifying the SAM structure and the CAM structure, and wherein the eTLB is configured to receive an entry register for providing a virtual page number corresponding with the virtual memory addresses.
    Type: Grant
    Filed: December 19, 2011
    Date of Patent: November 19, 2013
    Assignee: NetLogic Microsystems, Inc.
    Inventors: Gaurav Singh, Daniel Chen, Dave Hass
  • Patent number: 8589657
    Abstract: An approach is provided in a hypervised computer system where a page table request is received at an operating system running in the hypervised computer system. The operating system determines whether the page table request requires the hypervisor to process it. If the determination reveals that the page table request requires the hypervisor, then the hypervisor is used to handle the request. However, if the determination reveals that the page table request does not require the hypervisor, then an indicator included in a page table entry corresponding to the request is read to determine whether the page table entry is controlled by the operating system or the hypervisor. The operating system is able to update the page table entry if the indicator identifies the page table entry as being operating system controlled.
    Type: Grant
    Filed: January 4, 2011
    Date of Patent: November 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Bradly George Frey, Michal Ostrowski, Andrew Henry Wottreng
  • Patent number: 8583893
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, manage metadata for virtual volumes. In some implementations, a method and system include defining multiple metadata blocks in a persistent storage, including information that links a virtual address space to the storage system, where the defining includes, for at least one of the multiple metadata blocks, determining multiple output addresses corresponding to the storage system, and writing the multiple output addresses and an identifier corresponding to the multiple metadata blocks in a metadata block in the persistent storage. In some implementations, a method and system include reading the multiple metadata blocks into the memory from the persistent storage, including identifying the metadata block based on the identifier; receiving an input address of the virtual address space; and obtaining a corresponding output address to the storage system using the multiple metadata blocks in the memory.
    Type: Grant
    Filed: May 26, 2010
    Date of Patent: November 12, 2013
    Assignee: Marvell World Trade Ltd.
    Inventors: Arvind Pruthi, Shailesh P. Parulekar, Mayur Shardul
  • Patent number: 8578126
    Abstract: An alignment data structure is used to map a logical data block start address to a physical data block start address dynamically, to service a client data access request. A separate alignment data structure can be provided for each data object managed by the storage system. Each such alignment data structure can be stored in, or referenced by a pointer in, the inode of the corresponding data object. A consequence of the mapping is that certain physical storage medium regions are not mapped to any logical data blocks. These unmapped regions may be visible only to the file system layer and layers that reside between the file system layer and the mass storage subsystem. They can be used, if desired, to store system information, i.e., information that is not visible to any storage client.
    Type: Grant
    Filed: October 29, 2009
    Date of Patent: November 5, 2013
    Assignee: NetApp, Inc.
    Inventors: Shravan Gaonkar, Rahul Iyer, Deepak Kenchammana
  • Patent number: 8566564
    Abstract: A method for caching attribute data for matching attributes with physical addresses. The method includes storing a plurality of attribute entries in a memory, wherein the memory is configured to provide at least one attribute entry when accessed with a physical address, and wherein the attribute entry provided describes characteristics of the physical address.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: October 22, 2013
    Inventors: H. Peter Anvin, Guillermo J. Rozas, Alexander Klaiber, John P. Banning
  • Patent number: 8566479
    Abstract: Disclosed are a method of and a system for enabling a program running on a logical partition of a logically partitioned data processing system to access resources of the data processing system directly. The method comprises the steps of said program transforming a first address for a resource of a specific type on the data processing system to a second address within an address space allocated to said logical partition; and said program using said second address to access a resource of said specific type allocated to said logical partition. In this way, the present invention may be used to enable a program running within a partition's address space to access IO devices directly, thus avoiding the overhead of making a hypervisor call.
    Type: Grant
    Filed: October 20, 2005
    Date of Patent: October 22, 2013
    Assignee: International Business Machines Corporation
    Inventor: Antonisamy Arokkia Rajendran
  • Patent number: 8566511
    Abstract: A solid-state storage device with multi-level addressing is provided. The solid-state storage device includes a plurality of flash memory devices, a volatile memory, and a controller. The controller is configured to store data received from a host in the plurality of flash memory devices in response to a write command and to read the data stored in the plurality of flash memory devices in response to a read command. The controller is further configured to maintain a multi-level address table that maps logical addresses received from the host identifying the data stored in the plurality of flash memory devices to physical addresses in the plurality of flash memory devices containing the data. A first level of the multi-level address table is maintained by the controller in the volatile memory and second and third levels of the multi-level address table are maintained by the controller in the plurality of flash memory devices.
    Type: Grant
    Filed: July 23, 2010
    Date of Patent: October 22, 2013
    Assignee: STEC, Inc.
    Inventors: Mohammadali Tootoonchian, Mark Moshayedi
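    Illustrative sketch: a minimal C model of a multi-level logical-to-physical address table in which only the first level is held in volatile memory, simplified here to two levels; the table sizes and the heap-allocated stand-in for flash-resident tables are assumptions, not details from the patent.
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define L2_ENTRIES 256        /* assumed entries per second-level table */
      #define L1_ENTRIES 16         /* assumed size of the RAM-resident level */

      /* Second-level tables live "in flash"; here they are just heap blocks. */
      typedef struct { uint32_t phys[L2_ENTRIES]; } l2_table_t;

      /* First level, kept in volatile memory by the controller. */
      static l2_table_t *level1[L1_ENTRIES];

      static uint32_t lookup(uint32_t lba)
      {
          uint32_t i1 = lba / L2_ENTRIES, i2 = lba % L2_ENTRIES;
          if (i1 >= L1_ENTRIES || level1[i1] == NULL)
              return UINT32_MAX;                       /* unmapped logical block */
          return level1[i1]->phys[i2];                 /* one "flash" table read */
      }

      static void map(uint32_t lba, uint32_t pba)
      {
          uint32_t i1 = lba / L2_ENTRIES, i2 = lba % L2_ENTRIES;
          if (level1[i1] == NULL)
              level1[i1] = calloc(1, sizeof(l2_table_t));
          level1[i1]->phys[i2] = pba;
      }

      int main(void)
      {
          map(777, 0xBEEF);
          printf("lba 777 -> pba 0x%x\n", (unsigned)lookup(777));
          return 0;
      }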
  • Patent number: 8566558
    Abstract: The aim is to efficiently manage data that includes control information. A storage apparatus connected to a host requesting data writing includes one or a plurality of storage devices and a controller for allocating a storage area in page units to an area of a virtual volume in which the data is written, in response to a write request from the host. If the data the host requests to write includes control information, and the data excluding the control information is specified data, the controller releases the allocation of the page allocated to the area for writing the relevant data.
    Type: Grant
    Filed: July 22, 2011
    Date of Patent: October 22, 2013
    Assignee: Hitachi, Ltd.
    Inventors: Junichi Muto, Isamu Kurokawa, Ran Ogata, Kazue Jindo
  • Publication number: 20130275704
    Abstract: A remote processor is signaled to receive a remote machine memory address (RMMA) space that contains data to be transferred. The RMMA space is mapped to a free portion of a system memory address (SMA) space of the remote processor. Entries of a page table corresponding to the address space are created.
    Type: Application
    Filed: February 6, 2013
    Publication date: October 17, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
  • Publication number: 20130275716
    Abstract: A program execution device includes a program loader reading a machine language program including a machine language code and access frequency information; an address conversion table creator creating an address conversion table including entries, each of which indicates a relation between a logical address range and a physical address range; and a TLB register registering, in a TLB, an entry of the address conversion table storing a logical address range accessed according to the machine language code. When determining that the frequency of access to a logical address range is high based on the access frequency information, the address conversion table creator adjusts the size of an entry storing this logical address range to an appropriate size.
    Type: Application
    Filed: June 7, 2013
    Publication date: October 17, 2013
    Inventor: Yoshitaka NISHIDA
  • Patent number: 8555028
    Abstract: A method executed by an information processing apparatus, the method including: performing a management procedure to accept addresses of respective page tables generated for each of the operation modes from an operating system that manages the virtual address space, and to associate the addresses with the operating system for recording in page table correspondence information storage; executing a control procedure to set a second access right indicating a value lower than the first access right in accordance with the operation mode of the operating system; and performing a processing procedure to cause the memory management device to execute a flush of a translation look-aside buffer and to set the second access right to a value for validating the first access right, wherein the memory management device controls memory access while the second access right is prioritized over the first access right.
    Type: Grant
    Filed: August 29, 2011
    Date of Patent: October 8, 2013
    Assignee: Fujitsu Limited
    Inventors: Naoki Nishiguchi, Noboru Iwamatsu
  • Publication number: 20130262798
    Abstract: One embodiment of the present invention includes a method for maintaining a shadow page table in at least partial correspondence with guest page mappings of a guest computation. The method marks with a traced write indication at least those entries of the shadow page table that map physical memory locations which themselves encode the guest page mappings, the marking identifying, for a hardware facility, a subset of memory access targets for which updates are to be recorded in a guest write buffer accessible to the virtualization system. Responsive to a coherency-inducing operation of the guest computation, the method reads from the guest write buffer and introduces corresponding updates into the shadow page table.
    Type: Application
    Filed: May 14, 2013
    Publication date: October 3, 2013
    Applicant: VMware, Inc.
    Inventors: Keith ADAMS, Sahil RIHAN
  • Publication number: 20130262816
    Abstract: Some implementations disclosed herein provide techniques and arrangements for a specialized logic engine that includes a translation lookaside buffer to support multiple threads executing on multiple cores. The translation lookaside buffer enables the specialized logic engine to directly access a virtual address of a thread executing on one of the plurality of processing cores. For example, an acceleration compute engine may receive one or more instructions from a thread executed by a processing core. The acceleration compute engine may retrieve, based on an address space identifier associated with the one or more instructions, a physical address associated with the one or more instructions from the translation lookaside buffer to execute the one or more instructions using the physical address.
    Type: Application
    Filed: December 30, 2011
    Publication date: October 3, 2013
    Inventors: Ronny Ronen, Boris Ginzburg, Eliezer Weissmann, Karthikeyan Vaithianathan
  • Publication number: 20130262815
    Abstract: Embodiments of the invention relate to hybrid address translation. An aspect of the invention includes receiving a first address, the first address referencing a location in a first address space. The computer searches a segment lookaside buffer (SLB) for an SLB entry corresponding to the first address, the SLB entry comprising a type field and an address field, and determines whether a value of the type field in the SLB entry indicates a hashed page table (HPT) search or a radix tree search. Based on determining that the value of the type field indicates the HPT search, an HPT is searched to determine a second address, the second address comprising a translation of the first address into a second address space; and based on determining that the value of the type field indicates the radix tree search, a radix tree is searched to determine the second address.
    Type: Application
    Filed: March 4, 2013
    Publication date: October 3, 2013
    Applicant: International Business Machines Corporation
    Inventors: Anthony J. Bybell, Michael K. Gschwind
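    Illustrative sketch: a minimal C model of dispatching on an SLB entry type field to either a hashed-page-table search or a radix-tree walk, as described above; the entry layout and the stub search functions are assumptions for illustration.
      #include <stdint.h>
      #include <stdio.h>

      /* Per-segment translation mechanism, recorded in the SLB entry type field. */
      enum xlate_type { XLATE_HPT, XLATE_RADIX };

      struct slb_entry {
          uint64_t        esid;        /* effective segment id (the tag)       */
          enum xlate_type type;        /* which structure to walk for this one */
      };

      /* Stand-ins for the two translation walks; real ones consult memory. */
      static uint64_t hpt_search(uint64_t ea)   { return 0x1000 + (ea & 0xFFF); }
      static uint64_t radix_walk(uint64_t ea)   { return 0x2000 + (ea & 0xFFF); }

      static uint64_t translate(const struct slb_entry *e, uint64_t ea)
      {
          /* The SLB type field selects the second-level search: hashed page
           * table for some segments, radix tree for others. */
          return (e->type == XLATE_HPT) ? hpt_search(ea) : radix_walk(ea);
      }

      int main(void)
      {
          struct slb_entry seg = { 0x42, XLATE_RADIX };
          printf("0x%llx\n", (unsigned long long)translate(&seg, 0x42000123ULL));
          return 0;
      }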
  • Patent number: 8549211
    Abstract: A method for providing hardware support for memory protection and virtual memory address translation for a virtual machine. The method includes executing a host machine application within a host machine context and executing a virtual machine application within a virtual machine context. A plurality of TLB (translation look aside buffer) entries for the virtual machine context and the host machine context are stored within a TLB. Memory protection bits for the plurality of TLB entries are logically combined to enforce memory protection on the virtual machine application.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: October 1, 2013
    Inventor: H. Peter Anvin
  • Patent number: 8547453
    Abstract: An image processing apparatus includes a plurality of video processors, a video memory for recording video data, and a plurality of ports. Each of the ports is connected between a video processor and the video memory, for accessing the video memory in response to supplied information. The apparatus also includes a plurality of memory map table units, each of which includes at least one table that is set with management information for managing a memory area of the video memory in which video data is recorded, and each of which selectively supplies the management information set in the table to the corresponding ports. A processing unit has a function of setting the management information in the table of the memory map table unit.
    Type: Grant
    Filed: April 7, 2008
    Date of Patent: October 1, 2013
    Assignee: Sony Corporation
    Inventors: Ken Mabuchi, Kazunori Yamaguchi
  • Patent number: 8549254
    Abstract: Embodiments of an invention for using a translation lookaside buffer to manage protected micro-contexts are disclosed. In one embodiment, an apparatus includes an interface and memory management logic. The interface is to perform a transaction to fetch information from a memory. The memory management logic is to translate an untranslated address to a memory address. The memory management logic includes a storage location, a series of translation stages, determination logic, and a translation lookaside buffer. The storage location is to store an address of a data structure for the first translation stage. Each of the translation stages includes translation logic to find an entry in a data structure based on a portion of the untranslated address. Each entry is to store an address of a different data structure for the first translation stage, an address of a data structure for a successive translation stage, or the physical address.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: October 1, 2013
    Assignee: Intel Corporation
    Inventor: Uday R. Savagaonkar
  • Patent number: 8543770
    Abstract: A mechanism is provided for assigning memory to on-chip cache coherence domains. The mechanism assigns caches within a processing unit to coherence domains. The mechanism then assigns chunks of memory to the coherence domains. The mechanism monitors applications running on cores within the processing unit to identify needs of the applications. The mechanism may then reassign memory chunks to the cache coherence domains based on the needs of the applications running in the coherence domains. When a memory controller receives a cache miss, the memory controller may look up the address in a lookup table that maps memory chunks to cache coherence domains. Snoop requests are sent to caches within the coherence domain. If a cache line is found in a cache within the coherence domain, the cache line is returned to the originating cache by the cache containing the cache line, either directly or through the memory controller.
    Type: Grant
    Filed: May 26, 2010
    Date of Patent: September 24, 2013
    Assignee: International Business Machines Corporation
    Inventors: William E. Speight, Lixin Zhang
  • Patent number: 8543791
    Abstract: An apparatus for reducing a page fault rate in a virtual memory system includes a page table stored in a main storage unit which stores a reference address so as to read page information from the main storage unit; a buffer unit which stores a portion of the page table; and a processor which reads data from the main storage unit or which stores data in the main storage unit. When changing information for referring to a first page that exists in the page table, the processor performs a task invalidating information related to the first page in the buffer unit. A method of reducing the page fault rate includes resetting reference information stored in the page table; detecting whether the reference information exists in a buffer unit; and invalidating the reference information when the reference information exists in the buffer unit.
    Type: Grant
    Filed: October 25, 2006
    Date of Patent: September 24, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Il-hoon Shin, Ji-hyun In
  • Patent number: 8543792
    Abstract: A memory access technique, in accordance with one embodiment of the present invention, includes coalescing mappings between virtual memory and physical memory when a contiguous plurality of virtual pages map to a contiguous plurality of physical pages. Any of the coalesced mappings are sufficient to map all pages within the coalesced region. Accordingly, a memory subsystem can cache a single coalesced mapping and not all of them. The single cached coalesced mapping may be used to translate all of the virtual addresses to physical addresses for the corresponding contiguous memory space.
    Type: Grant
    Filed: September 19, 2006
    Date of Patent: September 24, 2013
    Assignee: Nvidia Corporation
    Inventors: David B. Glasco, Lingfeng Yuan
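    Illustrative sketch: a minimal C model of a single coalesced mapping that translates every page in a contiguous virtual-to-physical run, as described in the abstract above; the page size and run length are illustrative assumptions.
      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SHIFT 12

      /* One cached coalesced mapping: a run of contiguous virtual pages that
       * maps onto an equally contiguous run of physical pages. */
      struct coalesced_map {
          uint64_t base_vpn;
          uint64_t base_pfn;
          uint64_t npages;         /* length of the contiguous run */
      };

      /* Any address inside the run translates with the single cached entry. */
      static int translate(const struct coalesced_map *m, uint64_t va, uint64_t *pa)
      {
          uint64_t vpn = va >> PAGE_SHIFT;
          if (vpn < m->base_vpn || vpn >= m->base_vpn + m->npages)
              return 0;                                /* outside the coalesced run */
          *pa = ((m->base_pfn + (vpn - m->base_vpn)) << PAGE_SHIFT)
              | (va & ((1u << PAGE_SHIFT) - 1));
          return 1;
      }

      int main(void)
      {
          struct coalesced_map m = { 0x100, 0x8000, 16 };    /* 16 contiguous pages */
          uint64_t pa;
          if (translate(&m, (0x105ull << PAGE_SHIFT) | 0x2C, &pa))
              printf("pa = 0x%llx\n", (unsigned long long)pa);
          return 0;
      }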
  • Patent number: 8539175
    Abstract: A virtual logical unit that stores learning metadata is allocated in a first storage server having a first plurality of clusters, wherein the learning metadata indicates a type of storage device in which selected data of the first plurality of clusters of the first storage server are stored. A copy services command is received to copy the selected data from the first storage server to a second storage server having a second plurality of clusters. The virtual logical unit that stores the learning metadata is copied, from the first storage server to the second storage server, via the copy services command. Selected logical units corresponding to the selected data are copied from the first storage server to the second storage server, and the learning metadata is used to place the selected data in the type of storage device indicated by the learning metadata.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: September 17, 2013
    Assignee: International Business Machines Corporation
    Inventors: Joshua James Crawford, Benjamin Jay Donie, Andreas Bernardus Mattias Koster
  • Publication number: 20130238874
    Abstract: Systems and methods for accessing a unified translation lookaside buffer (TLB) are disclosed. A method includes receiving an indicator of a level one translation lookaside buffer (L1TLB) miss corresponding to a request for a virtual address to physical address translation, searching a cache that includes virtual addresses and page sizes that correspond to translation table entries (TTEs) that have been evicted from the L1TLB, where a page size is identified, and searching a second level TLB and identifying a physical address that is contained in the second level TLB. Access is provided to the identified physical address.
    Type: Application
    Filed: March 7, 2012
    Publication date: September 12, 2013
    Applicant: SOFT MACHINES, INC.
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
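    Illustrative sketch: a minimal C model of the two-step second-level lookup described above, where a small cache of evicted-entry virtual addresses and page sizes first identifies the page size, and the second-level TLB is then searched at that size; the table sizes and sample entries are assumptions for illustration.
      #include <stdint.h>
      #include <stdio.h>

      /* Cache of virtual addresses and page sizes of entries evicted from the
       * L1 TLB: it tells the second-level lookup which page size to use. */
      struct evicted_tte { uint64_t va; unsigned page_shift; };
      static struct evicted_tte evict_cache[4] = { { 0x7f0000000000ULL, 21 } };

      /* Assumed second-level TLB keyed by virtual page number and page size. */
      struct l2_entry { uint64_t vpn; unsigned page_shift; uint64_t pfn; };
      static struct l2_entry l2[4] = { { 0x7f0000000000ULL >> 21, 21, 0x1234 } };

      static int l2_lookup(uint64_t va, uint64_t *pa)
      {
          /* Step 1: the evicted-TTE cache identifies the page size for this VA. */
          for (int i = 0; i < 4; i++) {
              unsigned shift = evict_cache[i].page_shift;
              if ((evict_cache[i].va >> shift) != (va >> shift))
                  continue;
              /* Step 2: search the second-level TLB at that page size. */
              for (int j = 0; j < 4; j++)
                  if (l2[j].page_shift == shift && l2[j].vpn == (va >> shift)) {
                      *pa = (l2[j].pfn << shift) | (va & ((1ULL << shift) - 1));
                      return 1;
                  }
          }
          return 0;                    /* fall back to a page-table walk */
      }

      int main(void)
      {
          uint64_t pa;
          if (l2_lookup(0x7f0000000ABCULL, &pa))     /* after an L1 TLB miss */
              printf("pa = 0x%llx\n", (unsigned long long)pa);
          return 0;
      }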
  • Publication number: 20130238875
    Abstract: A memory management unit can receive an address associated with a page size that is unknown to the MMU. The MMU can concurrently determine whether a translation lookaside buffer data array stores a physical address associated with the address based on different portions of the address, where each of the different portions is associated with a different possible page size. This provides for efficient translation lookaside buffer data array access when different programs, employing different page sizes, are concurrently executed at a data processing device.
    Type: Application
    Filed: March 8, 2012
    Publication date: September 12, 2013
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Ravindraraj Ramaraju, Eric V. Fiene, Jogendra C. Sarker
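    Illustrative sketch: a minimal C model of probing the TLB once per candidate page size, using a different portion of the address as the tag for each size; in hardware the probes would run concurrently, and the candidate sizes here are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      /* Candidate page sizes the MMU probes in parallel (assumed: 4 KB, 64 KB, 2 MB). */
      static const unsigned page_shifts[] = { 12, 16, 21 };

      struct tlb_entry { int valid; unsigned page_shift; uint64_t vpn, pfn; };
      static struct tlb_entry tlb[8] = {
          { 1, 16, 0x7fff1234ULL >> 16, 0x80 },      /* a 64 KB page */
      };

      /* Probe once per candidate size; in hardware these probes run concurrently,
       * each using a different portion of the address as the tag. */
      static int lookup(uint64_t va, uint64_t *pa)
      {
          for (unsigned s = 0; s < 3; s++) {
              unsigned shift = page_shifts[s];
              for (int i = 0; i < 8; i++)
                  if (tlb[i].valid && tlb[i].page_shift == shift &&
                      tlb[i].vpn == (va >> shift)) {
                      *pa = (tlb[i].pfn << shift) | (va & ((1ULL << shift) - 1));
                      return 1;
                  }
          }
          return 0;
      }

      int main(void)
      {
          uint64_t pa;
          if (lookup(0x7fff1234ULL, &pa))
              printf("pa = 0x%llx\n", (unsigned long long)pa);
          return 0;
      }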
  • Publication number: 20130232316
    Abstract: In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining if the second address space exists in a list of address spaces; and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 5, 2013
    Inventors: Jason W. Brandt, Sanjoy K. Mondal, Richard A. Uhlig, Gilbert Neiger, Robert T. George
  • Patent number: 8527736
    Abstract: A computer system has a translation lookaside buffer (TLB) having a plurality of entries for mapping virtual memory addresses to physical memory addresses and logic configured to perform the following steps for an entry of the TLB: (a) selecting a TLB entry size for the entry; (b) determining whether a mapping for the entry is aligned with a boundary of a contiguous section of memory without overshooting an end of the contiguous section of memory, wherein the mapping is based on the TLB entry size and maps virtual memory addresses to physical memory addresses for a section of the memory consistent with the TLB entry size; (c) if the mapping is determined to be aligned with the boundary of the contiguous section of memory without overshooting the end of the contiguous section of memory, configuring the entry with the mapping written into the entry; and (d) repeating steps (a) through (c) until a mapping is found to be aligned with the boundary of the contiguous section of memory without overshooting the end of the contiguous section of memory.
    Type: Grant
    Filed: September 7, 2010
    Date of Patent: September 3, 2013
    Assignee: ADTRAN, Inc.
    Inventor: Coleman Bagwell
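    Illustrative sketch: a minimal C routine that picks the largest TLB entry size whose mapping is aligned on both the virtual and physical side and does not overshoot the contiguous region, in the spirit of steps (a) through (d) above; the candidate sizes are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      /* Candidate TLB entry sizes, largest first (assumed: 16 MB down to 4 KB). */
      static const uint64_t entry_sizes[] = {
          16u << 20, 1u << 20, 64u << 10, 4u << 10
      };

      /* Pick the largest entry size whose mapping starts aligned on both the
       * virtual and physical side and does not overshoot the end of the
       * contiguous region. */
      static uint64_t pick_entry_size(uint64_t va, uint64_t pa, uint64_t remaining)
      {
          for (unsigned i = 0; i < 4; i++) {
              uint64_t size = entry_sizes[i];
              if ((va % size) == 0 && (pa % size) == 0 && size <= remaining)
                  return size;
          }
          return 0;          /* no aligned size fits; fall back to a smaller map */
      }

      int main(void)
      {
          /* Region of 3 MB starting at addresses aligned to 1 MB. */
          uint64_t size = pick_entry_size(0x00100000, 0x7ff00000, 3u << 20);
          printf("chosen TLB entry size = %llu KB\n",
                 (unsigned long long)(size >> 10));
          return 0;
      }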
  • Publication number: 20130227245
    Abstract: Techniques are disclosed relating to integrated circuits that implement a virtual memory. In one embodiment, an integrated circuit is disclosed that includes a translation lookaside buffer configured to store non-prefetched translations and a translation table configured to store prefetched translations. In such an embodiment, the translation lookaside buffer and the translation table share table walk circuitry. In some embodiments, the table walk circuitry is configured to store a translation in the translation table in response to a prefetch request and without updating the translation lookaside buffer. In some embodiments, the translation lookaside buffer, the translation table, and table walk circuitry are included within a memory management unit configured to service memory requests received from a plurality of client circuits via a plurality of direct memory access (DMA) channels.
    Type: Application
    Filed: February 28, 2012
    Publication date: August 29, 2013
    Inventors: Rohit K. Gupta, Manu Gulati
  • Publication number: 20130227248
    Abstract: In a computer system having virtual machines, one or more unused bits of a guest virtual address range are allocated for aliasing so that multiple virtually addressed sub-pages can be mapped to a common memory page. When one bit is allocated for aliasing, sub-pages can be virtually addressed at a granularity that is one-half of a memory page. When M bits are allocated for aliasing, sub-pages can be virtually addressed at a granularity that is 1/(2^M) of a memory page. The granularity of page sizes can be selected according to particular use cases. In the case of copy-on-write (COW) optimization, page sizes can be set statically between 4 KB and 2 MB or configured dynamically among multiple page sizes.
    Type: Application
    Filed: February 27, 2012
    Publication date: August 29, 2013
    Applicant: VMWARE, INC.
    Inventors: Bhavesh MEHTA, Benjamin C. SEREBRIN
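    Illustrative sketch: a small C calculation of sub-page addressing with M alias bits, showing the 1/(2^M) granularity described above; the page size, alias-bit position, and offset layout are assumptions for illustration, not details from the application.
      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SHIFT  21                    /* assumed 2 MB backing pages */
      #define ALIAS_BITS   2                    /* M unused VA bits used for aliasing */
      #define ALIAS_SHIFT 48                    /* assumed position of the unused bits */

      /* With M alias bits, each alias addresses a sub-page that is 1/2^M of the
       * backing page: here 2 MB / 4 = 512 KB per virtually addressed sub-page. */
      static uint64_t sub_page_offset(uint64_t guest_va)
      {
          uint64_t alias    = (guest_va >> ALIAS_SHIFT) & ((1u << ALIAS_BITS) - 1);
          uint64_t sub_size = (1ull << PAGE_SHIFT) >> ALIAS_BITS;
          return alias * sub_size + (guest_va & (sub_size - 1));
      }

      int main(void)
      {
          uint64_t va = (2ull << ALIAS_SHIFT) | 0x123;    /* alias 2, offset 0x123 */
          printf("offset within backing page = 0x%llx\n",
                 (unsigned long long)sub_page_offset(va));
          return 0;
      }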
  • Patent number: 8516220
    Abstract: A page table entry dirty bit system may be utilized to record dirty information for a software distributed shared memory system. In some embodiments, this may improve performance without substantially increasing overhead because the dirty bit recording system is already available in certain processors. By providing extra bits, coherence can be obtained with respect to all the other uses of the existing page table entry dirty bits.
    Type: Grant
    Filed: May 11, 2010
    Date of Patent: August 20, 2013
    Assignee: Intel Corporation
    Inventors: Shoumeng Yan, Ying Gao, Xiaocheng Zhou, Hu Chen, Sai Luo, Bratin Saha
  • Patent number: 8516221
    Abstract: An apparatus and method for coalescing TLB entries on-the-fly at virtual address translation time is disclosed. A search is made for a requested virtual address translation in the virtual hash page table (VHPT). Further searching is performed for additional VHPT entries meeting certain coalescing and compatibility criteria. The compatible VHPT entries are coalesced and stored in the TLB as a single combined TLB entry.
    Type: Grant
    Filed: October 31, 2008
    Date of Patent: August 20, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Greg Thelen
  • Patent number: 8510532
    Abstract: A method for making memory more reliable involves accessing data stored in a removable storage device by translating a logical memory address provided by a host digital device to a physical memory address in the device. A logical memory address is received from the host digital device. The logical memory address corresponds to a location of data stored on the removable storage device. A physical memory address corresponding to the logical address is determined by accessing a lookup table corresponding to the logical zone.
    Type: Grant
    Filed: April 6, 2012
    Date of Patent: August 13, 2013
    Assignee: Imation Corp.
    Inventor: Arunprasad Ramiya Mothilal
  • Patent number: 8510507
    Abstract: There is provided a method and apparatus for implementing a virtual mirror of a primary storage device (106) on a secondary storage device (108). The method comprises providing a map (124a) for translating primary data storage locations on said primary storage device (106) to secondary data storage locations on said secondary storage device (106) and utilizing said map (124a) to enable data stored on said secondary storage device (108) to mirror data stored on said primary storage device (106). By providing such a method, the requirements of the primary and secondary disks (106, 108) can be decoupled such that a smaller secondary disk (108) could be used with a larger primary (106) which will not be filled to capacity. This reduces the unused capacity on the secondary disk (108) which would otherwise be wasted.
    Type: Grant
    Filed: January 21, 2010
    Date of Patent: August 13, 2013
    Assignee: Xyratex Technology Limited
    Inventor: Robert P. Rossi
  • Publication number: 20130205114
    Abstract: The method includes receiving an object operation from an application at a hardware device manager. The object operation includes an object identifier. The method includes performing the object operation directly on a storage device. A physical address for the object corresponding to the object identifier is mapped directly to the object identifier in an index managed by the hardware device manager.
    Type: Application
    Filed: March 15, 2013
    Publication date: August 8, 2013
    Applicant: FUSION-IO
  • Patent number: 8504797
    Abstract: In one embodiment, a method of operating block-based thin provisioning disk volumes in a system including a first storage system which is connected via a network to a second storage system comprises, in response to a volume creation request to create a thin provisioning disk volume in the first storage system, recording in the first storage system attribute information of the block-based thin provisioning disk volume; specifying a directory path for the block-based thin provisioning disk volume in a file system in the second storage system; and creating a directory for the block-based thin provisioning disk volume under the specified directory path.
    Type: Grant
    Filed: June 2, 2009
    Date of Patent: August 6, 2013
    Assignee: Hitachi, Ltd.
    Inventor: Yasuyuki Mimatsu
  • Patent number: 8504794
    Abstract: A memory management system and method are described. In one embodiment, a memory management system includes a memory management unit for virtualizing context memory storage and independently controlling access to the context memory without interference from other engine activities. The shared resource management unit overrides a stream of access denials (e.g., NACKs) associated with an access problem. The memory management system and method facilitate access to memory while controlling translation between virtual and physical memory “spaces”. In one embodiment the memory management system includes a translation lookaside buffer and a fill component. The translation lookaside buffer tracks information associating a virtual memory space with a physical memory space. The fill component tracks the status of an access request progress from a plurality of engines independently and faults that occur in attempting to access a memory space.
    Type: Grant
    Filed: November 1, 2006
    Date of Patent: August 6, 2013
    Assignee: Nvidia Corporation
    Inventors: David B. Glasco, John S. Montrym, Lingfeng Yuan, Robert C. Keller
  • Patent number: 8499117
    Abstract: A method for writing and reading data in memory cells comprises the steps of: defining a virtual memory, defining write commands and read commands of data (DT) in the virtual memory, providing a first nonvolatile physical memory zone (A1), providing a second nonvolatile physical memory zone (A2), and, in response to a command to write an initial data item, searching for a first erased location in the first memory zone, writing the initial data (DT1a) in the first location (PB1(DPP0)), and writing, in the metadata (DSC0), information (DS(PB1)) allowing the first location to be found and information (LPA, DS(PB1)) forming a link between the first location and the location of the data in the virtual memory.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: July 30, 2013
    Assignee: STMicroelectronics (Rousset) SAS
    Inventor: Hubert Rousseau
  • Publication number: 20130191594
    Abstract: A storage device is provided for direct memory access. A controller of the storage device performs a mapping of a window of memory addresses to a logical block addressing (LBA) range of the storage device. Responsive to receiving from a host a write request specifying a write address within the window of memory addresses, the controller initializes a first memory buffer in the storage device and associates the first memory buffer with a first address range within the window of memory addresses such that the write address of the request is within the first address range. The controller writes to the first memory buffer based on the write address. Responsive to the buffer being full, the controller persists the contents of the first memory buffer to the storage device using logical block addressing based on the mapping.
    Type: Application
    Filed: March 13, 2013
    Publication date: July 25, 2013
    Applicant: International Business Machines Corporation
  • Patent number: 8495313
    Abstract: A virtual logical unit that stores learning metadata is allocated in a first storage server having a first plurality of clusters, wherein the learning metadata indicates a type of storage device in which selected data of the first plurality of clusters of the first storage server are stored. A copy services command is received to copy the selected data from the first storage server to a second storage server having a second plurality of clusters. The virtual logical unit that stores the learning metadata is copied, from the first storage server to the second storage server, via the copy services command. Selected logical units corresponding to the selected data are copied from the first storage server to the second storage server, and the learning metadata is used to place the selected data in the type of storage device indicated by the learning metadata.
    Type: Grant
    Filed: May 3, 2012
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Joshua James Crawford, Benjamin Jay Donie, Andreas Bernardus Mattius Koster
  • Patent number: 8495334
    Abstract: A pattern matching accelerator (PMA) for assisting software threads to find the presence and location of strings in an input data stream that match a given pattern. The patterns are defined using regular expressions that are compiled into a data structure comprised of rules subsequently processed by the PMA. The patterns to be searched in the input stream are defined by the user as a set of regular expressions. The patterns to be searched are grouped in pattern context sets. The sets of regular expressions which define the pattern context sets are compiled to generate a rules structure used by the PMA hardware. The rules are compiled before search run time and stored in main memory, in rule cache memory within the PMA or a combination thereof. For each input character, the PMA executes the search and returns the search results.
    Type: Grant
    Filed: February 6, 2011
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Giora Biran, Christoph Hagleitner, Timothy Hume Heil, Jan Van Lunteren
  • Patent number: 8495318
    Abstract: Memory page management in a tiered memory system including a system that includes at least one page table for storing a plurality of entries, each entry associated with a page of memory and each entry including an address of the page and a memory tier of the page. The system also includes a control program configured for allocating pages associated with the entries to a software module, the allocated pages from at least two different memory tiers. The system further includes an agent of the control program capable of operating independently of the control program, the agent configured for receiving an authorization key to the allocated pages, and for migrating the allocated pages between the different memory tiers responsive to the authorization key.
    Type: Grant
    Filed: July 26, 2010
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Robert B. Tremaine, Robert W. Wisniewski