Directories And Tables (e.g., Dlat, Tlb) Patents (Class 711/205)
  • Patent number: 10482029
    Abstract: Techniques for obtaining metadata may include: receiving, by a director, an I/O operation directed to a target offset of a logical device, wherein the director is located on a board including a local page table used by components on the board; querying the local page table for a global memory address of first metadata for the target offset of the logical device; and responsive to the local page table not having the global memory address of the first metadata for the target offset of the logical device, using at least a first indirection layer to obtain the global memory address of the first metadata. The global memory may be a distributed global memory including memory segments from multiple different boards each including its own local page table. Compare and swap operations may be used to perform atomic operations to ensure synchronized access when updating the distributed global memory.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: November 19, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Andrew Chanler, Kevin Tobin
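    A minimal C sketch of the lookup-then-fallback flow this abstract describes: the board-local page table is queried first, a miss falls back to a single indirection layer to obtain the global memory address, and the result is installed with a compare-and-swap so concurrent updaters stay synchronized. Structure names, table sizes, the one-level indirection, and the hashing are illustrative assumptions, not details from the patent.
    ```c
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stddef.h>

    #define LOCAL_PT_SLOTS  1024    /* size of the per-board local page table (assumed) */
    #define GM_ADDR_INVALID 0       /* sentinel: no cached translation                  */

    /* One slot of the board-local page table: maps (device, offset) to the
     * global-memory address of the first metadata. Field layout is illustrative. */
    typedef struct {
        uint64_t         dev_and_offset;   /* key: logical device + target offset */
        _Atomic uint64_t gm_addr;          /* value: global memory address, or 0  */
    } local_pt_entry;

    static local_pt_entry local_pt[LOCAL_PT_SLOTS];

    /* One indirection layer: a table of global-memory addresses indexed by a hash
     * of the key. A real director would fetch this over the fabric. */
    static uint64_t indirection_layer[LOCAL_PT_SLOTS];

    static size_t slot_of(uint64_t key) { return (key * 0x9E3779B97F4A7C15ull) % LOCAL_PT_SLOTS; }

    /* Resolve the global-memory address of metadata for (device, offset). */
    uint64_t resolve_metadata_addr(uint64_t dev_and_offset)
    {
        local_pt_entry *e = &local_pt[slot_of(dev_and_offset)];

        /* Fast path: the local page table already holds the translation. */
        uint64_t cached = atomic_load(&e->gm_addr);
        if (e->dev_and_offset == dev_and_offset && cached != GM_ADDR_INVALID)
            return cached;

        /* Miss: consult the indirection layer for the global address. */
        uint64_t gm = indirection_layer[slot_of(dev_and_offset)];

        /* Install with compare-and-swap so directors on the same board do not
         * clobber each other; a production design would publish key and value
         * atomically together, which this sketch omits for brevity. */
        e->dev_and_offset = dev_and_offset;
        atomic_compare_exchange_strong(&e->gm_addr, &cached, gm);
        return gm;
    }
    ```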
  • Patent number: 10460419
    Abstract: A hierarchical acceleration structure may be built for graphics processing using a 32 bit format. In one embodiment, the acceleration structure may be a k-d tree, but other acceleration structures may be used as well. 64 bit offsets are only used when 64 bit offsets are needed.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: October 29, 2019
    Assignee: Intel Corporation
    Inventors: Alexey Soupikov, Maxim Yurevich Shevtsov, Alexander Reshetov
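    The 32-bit-with-escape layout the abstract implies can be sketched as a node reference whose top bit selects a side table of 64-bit offsets; the flag bit and the side table are assumptions for illustration, not the patent's encoding.
    ```c
    #include <stdint.h>

    #define WIDE_OFFSET_FLAG 0x80000000u   /* top bit marks a reference that needs 64 bits */

    /* Compact child reference in the acceleration structure (e.g. a k-d tree node). */
    typedef struct {
        uint32_t child;    /* 31-bit offset, or an index into wide_offsets if flagged */
    } node_ref;

    /* Side table used only for the rare children that lie beyond the 32-bit range. */
    static uint64_t wide_offsets[4096];

    /* Resolve a child reference to a byte offset into the tree buffer. */
    uint64_t child_offset(node_ref r)
    {
        if (r.child & WIDE_OFFSET_FLAG)                    /* rare: fall back to 64 bits */
            return wide_offsets[r.child & ~WIDE_OFFSET_FLAG];
        return r.child;                                    /* common: 32-bit offset      */
    }
    ```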
  • Patent number: 10452566
    Abstract: One embodiment of the present invention includes a memory management unit (MMU) that is configured to efficiently process requests to access memory that includes protected regions. Upon receiving an initial request via a virtual address (VA), the MMU translates the VA to a physical address (PA) based on page table entries (PTEs) and gates the response based on page-specific secure state information. To thwart software-based attempts to illicitly access the protected regions, the secure state information is not stored in page tables. However, to expedite subsequent requests, after the MMU identifies the PTE and the corresponding secure state information, the MMU stores both the PTE and the secure state information as a cache line in a translation lookaside buffer.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: October 22, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: Steven E. Molnar, James Leroy Deming, Michael A. Woodmansee
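    A rough C sketch of a TLB line that carries both the PTE and the page's secure state, so that accesses can be gated without the secure bit ever living in the page tables; the field layout and the secure-state lookup are assumptions, not the hardware's actual format.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t vpn;       /* virtual page number (tag)                        */
        uint64_t pte;       /* page table entry fetched from the page tables    */
        bool     secure;    /* secure-state bit, NOT stored in the page tables  */
        bool     valid;
    } tlb_line;

    /* Stub standing in for the separate, software-invisible secure-state store. */
    static bool lookup_secure_state(uint64_t phys_page) { (void)phys_page; return false; }

    /* On a miss, fill the line with the translation and the secure bit so later
     * requests are gated without consulting the secure-state store again. */
    void tlb_fill(tlb_line *line, uint64_t vpn, uint64_t pte)
    {
        line->vpn    = vpn;
        line->pte    = pte;
        line->secure = lookup_secure_state(pte >> 12);   /* assume 4 KiB pages */
        line->valid  = true;
    }

    /* Gate the response: a non-secure requester may not touch a secure page. */
    bool access_allowed(const tlb_line *line, bool requester_secure)
    {
        return line->valid && (requester_secure || !line->secure);
    }
    ```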
  • Patent number: 10452290
    Abstract: In one implementation, a method includes maintaining a list of available allocation units across a plurality of flash devices of a flash storage system, wherein the flash devices map erase blocks as directly addressable storage, and wherein erase blocks are categorized by the flash storage system as available for use, in use, or unusable, and wherein at least a portion of an erase block can be assigned as an allocation unit. The method further includes receiving data from a plurality of sources, wherein the data is associated with processing a dataset, the dataset comprising multiple file systems and associated metadata. The method further includes determining a plurality of subsets of the data such that each subset is capable of being written in parallel with the remaining subsets, mapping each subset of the plurality of subsets to an available allocation unit, and writing the plurality of subsets in parallel.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: October 22, 2019
    Assignee: Pure Storage, Inc.
    Inventors: Peter E. Kirkpatrick, Ronald Karr
  • Patent number: 10423804
    Abstract: Techniques are disclosed relating to securely storing data in a computing device. In one embodiment, a computing device includes a secure circuit configured to maintain key bags for a plurality of users, each associated with a respective one of the plurality of users and including a first set of keys usable to decrypt a second set of encrypted keys for decrypting data associated with the respective user. The secure circuit is configured to receive an indication that an encrypted file of a first of the plurality of users is to be accessed and use a key in a key bag associated with the first user to decrypt an encrypted key of the second set of encrypted keys. The secure circuit is further configured to convey the decrypted key to a memory controller configured to decrypt the encrypted file upon retrieval from a memory.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: September 24, 2019
    Assignee: Apple Inc.
    Inventors: Wade Benson, Conrad Sauerwald, Mitchell D. Adler, Michael Brouwer, Timothee Geoghegan, Andrew R. Whalley, David P. Finkelstein, Yannick L. Sierra
  • Patent number: 10338962
    Abstract: A system and method of using metrics to control throttling and swapping in a message processing system is provided. A workload status of a message processing system is determined, and the system polls for a new message according to the workload status. The message processing system identifies a blocked instance and calculates an expected idle time for the blocked instance. The system dehydrates the blocked instance if the expected idle time exceeds a predetermined threshold.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: July 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yossi Levanoni, Sanjib Saha, Bimal Kumar Mehta, Paul Maybee, Lee B. Graber, Balasubramanian Sriram, Eldar Azerovich Musayev, Kevin Bowen Smith
  • Patent number: 10255197
    Abstract: A system for generating predictions for a hardware table walk to find a map of a given virtual address to a corresponding physical address is disclosed. The system includes a plurality of memories, each of which includes a respective plurality of entries, each of which includes a prediction of a particular one of a plurality of buffers which includes a portion of a virtual to physical address translation map. A first circuit may generate a plurality of hash values to retrieve a plurality of predictions from the plurality of memories, where each hash value depends on a respective address and information associated with a respective thread. A second circuit may select a particular prediction of the retrieved predictions to use based on a history of previous predictions.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: April 9, 2019
    Assignee: Oracle International Corporation
    Inventors: John Pape, Manish Shah, Gideon Levinsky, Jared Smolens
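    A small C sketch of the two-part predictor the abstract describes: each memory is indexed by a different hash of the address and thread, and a per-thread history picks which retrieved prediction to trust. The table sizes, hash functions, and training rule are placeholders, not the patented design.
    ```c
    #include <stdint.h>
    #include <stddef.h>

    #define PRED_TABLES 2
    #define PRED_SLOTS  256
    #define MAX_THREADS 64

    /* Each slot predicts which buffer holds the needed piece of the translation map. */
    static uint8_t pred[PRED_TABLES][PRED_SLOTS];

    /* Per-thread record of which table predicted correctly most recently. */
    static uint8_t last_good_table[MAX_THREADS];

    static size_t hash0(uint64_t va, uint16_t tid) { return (size_t)((va >> 12) ^ tid) % PRED_SLOTS; }
    static size_t hash1(uint64_t va, uint16_t tid) { return (size_t)(((va >> 21) * 31u) ^ (tid * 7u)) % PRED_SLOTS; }

    /* Retrieve one prediction per memory and choose based on recent history. */
    uint8_t predict_buffer(uint64_t va, uint16_t tid)
    {
        uint8_t p0 = pred[0][hash0(va, tid)];
        uint8_t p1 = pred[1][hash1(va, tid)];
        return last_good_table[tid % MAX_THREADS] == 0 ? p0 : p1;
    }

    /* After the walk resolves, train both memories and note which one was right. */
    void train(uint64_t va, uint16_t tid, uint8_t actual_buffer)
    {
        if (pred[0][hash0(va, tid)] == actual_buffer)      last_good_table[tid % MAX_THREADS] = 0;
        else if (pred[1][hash1(va, tid)] == actual_buffer) last_good_table[tid % MAX_THREADS] = 1;
        pred[0][hash0(va, tid)] = actual_buffer;
        pred[1][hash1(va, tid)] = actual_buffer;
    }
    ```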
  • Patent number: 10216644
    Abstract: According to one embodiment, a memory system includes a nonvolatile first memory, a second memory which has a buffer, and a memory controller. The memory controller manages a plurality of pieces of translation information. In a case where the plurality of pieces of translation information include a first plurality of pieces of translation information, the memory controller caches first translation information among the first plurality of pieces of translation information and does not cache second translation information among the first plurality of pieces of translation information. The first plurality of pieces of translation information linearly correlates a plurality of continuous physical addresses with a plurality of continuous logical addresses.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: February 26, 2019
    Assignee: Toshiba Memory Corporation
    Inventors: Shunichi Igahara, Toshikatsu Hida, Mitsunori Tadokoro
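    The "linearly correlates" case can be illustrated with a single cached run descriptor: when consecutive logical addresses map to consecutive physical addresses, one descriptor serves the whole run and the remaining translation entries need not be cached. The field names and run representation are assumptions for illustration.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    /* One cached descriptor for a linear run of translations. */
    typedef struct {
        uint32_t lba_start;   /* first logical cluster of the run  */
        uint32_t pba_start;   /* first physical cluster of the run */
        uint32_t length;      /* clusters in the run               */
    } linear_region;

    /* Translate inside the run without per-cluster cached entries. */
    bool translate_linear(const linear_region *r, uint32_t lba, uint32_t *pba_out)
    {
        if (lba < r->lba_start || lba - r->lba_start >= r->length)
            return false;                 /* outside the run: needs a full table lookup */
        *pba_out = r->pba_start + (lba - r->lba_start);
        return true;
    }
    ```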
  • Patent number: 10198463
    Abstract: In accordance with embodiments, there are provided mechanisms and methods for appending data to large data volumes in a multi-tenant store. These mechanisms and methods for appending data to large data volumes can enable embodiments to provide more reliable and faster maintenance of changing data. In an embodiment and by way of example, a method for appending data to large data volumes is provided. The method embodiment includes receiving new data for a database. The new data is written to a temporary log. The size of the log is compared to a threshold. Then the log is written to a data store, if the size of the log is greater than the threshold.
    Type: Grant
    Filed: December 7, 2010
    Date of Patent: February 5, 2019
    Assignee: salesforce.com, inc.
    Inventors: Bill C. Eidson, Simon Z. Fell
  • Patent number: 10169039
    Abstract: A computer processor that implements pre-translation of virtual addresses is disclosed. The computer processor may include a register file comprising one or more registers. The computer processor may include processing logic. The processing logic may receive a value to store in a register of one or more registers. The processing logic may store the value in the register. The processing logic may designate the received value as a virtual address, the virtual address having a corresponding virtual base page number. The processing logic may translate the virtual base page number to a corresponding real base page number and zero or more real page numbers corresponding to zero or more virtual page numbers adjacent to the virtual base page number. The processing logic may further store in the register of the one or more registers the real base page number and the zero or more real page numbers.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: January 1, 2019
    Assignee: OPTIMUM SEMICONDUCTOR TECHNOLOGIES, INC.
    Inventors: Mayan Moudgill, Gary Nacer, C. John Glossner, A. Joseph Hoane, Paul Hurtley, Murugappan Senthilvelan, Pablo Balzola
  • Patent number: 10163187
    Abstract: A hierarchical acceleration structure may be built for graphics processing using a 32 bit format. In one embodiment, the acceleration structure may be a k-d tree, but other acceleration structures may be used as well. 64 bit offsets are only used when 64 bit offsets are needed.
    Type: Grant
    Filed: October 30, 2009
    Date of Patent: December 25, 2018
    Assignee: Intel Corporation
    Inventors: Alexei Soupikov, Maxim Y. Shevtsov, Alexander V. Reshetov
  • Patent number: 10108550
    Abstract: Methods, systems, and apparatus for receiving a request to access, from a main memory, data contained in a first portion of a first page of data, the first page of data having a first page size; initiating a page fault based on determining that the first page of data is not stored in the main memory; allocating a portion of the main memory equivalent to the first page size; transferring the first portion of the first page of data from the secondary memory to the allocated portion of the main memory without transferring the entire first page of data; and updating a first page table entry associated with the first portion of the first page of data to point to a location of the allocated portion of the main memory to which the first portion of the first page of data is transferred.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: October 23, 2018
    Assignee: Google LLC
    Inventors: Joel Dylan Coburn, Albert Borchers, Christopher Lyle Johnson, Robert S. Sprinkle
  • Patent number: 10102143
    Abstract: A data processing system 2 includes an address translation cache 12 to store a plurality of address translation entries. Eviction control circuitry 10 selects a victim entry for eviction from address translation cache 12 using an eviction control parameter. The address translation cache 12 can store multiple different types of entry corresponding to respective different levels of address translation within a multiple-level page table walk. The different types of entry have different eviction control parameters assigned at the time of allocation. Eviction from the address translation cache is dependent upon the entry type, as well as the subsequent accesses to the entry concerned and the other entries within the address translation cache.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: October 16, 2018
    Assignee: ARM Limited
    Inventors: Barry Duane Williamson, Michael Filippo, Abhishek Raja, Adrian Montero, Miles Robert Dooley
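    A compact C sketch of type-dependent eviction control: entries allocated for different walk levels are seeded with different weights, hits strengthen an entry, and the victim is the weakest entry in the set. The weight values and replacement rule are assumptions, not ARM's actual policy.
    ```c
    #include <stdint.h>
    #include <stddef.h>

    /* Entry types correspond to levels of the multiple-level page table walk. */
    typedef enum { ENTRY_FINAL = 0, ENTRY_L3, ENTRY_L2, ENTRY_L1 } entry_type;

    typedef struct {
        uint64_t   tag;
        entry_type type;
        uint8_t    weight;    /* eviction control parameter */
        uint8_t    valid;
    } xlat_entry;

    /* Walk-level entries get larger initial weights so they survive longer. */
    static const uint8_t initial_weight[4] = { 1, 2, 3, 4 };

    void allocate_entry(xlat_entry *e, uint64_t tag, entry_type t)
    {
        e->tag    = tag;
        e->type   = t;
        e->weight = initial_weight[t];
        e->valid  = 1;
    }

    void on_hit(xlat_entry *e) { if (e->weight < 7) e->weight++; }

    /* Victim selection: a free slot if any, otherwise the lowest-weight entry. */
    size_t pick_victim(const xlat_entry *set, size_t n)
    {
        size_t victim = 0;
        for (size_t i = 0; i < n; i++) {
            if (!set[i].valid) return i;
            if (set[i].weight < set[victim].weight) victim = i;
        }
        return victim;
    }
    ```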
  • Patent number: 10061775
    Abstract: A method, system and a computer program product for managing file system memory includes a module configured to implement a separate replacement policy and a separate index for a persistent second level adaptive replacement cache (L2ARC) logically part of a first level ARC. The system also includes a module configured to cluster compressed chunks of data on multiple physical devices via aligning the clusters of data chunks on a byte boundary basis on each of the devices. The method additionally includes a module configured to create a storage pool allocator (SPA) to track the compressed and packed chunks on the multiple devices via an attached active page and attached multiple closed pages. The method further includes re-adding an evicted data from the L2ARC to an active page to be written again thereto based on a configurable threshold number of hits to data in the L2ARC via an L2ARC hit counter.
    Type: Grant
    Filed: June 17, 2017
    Date of Patent: August 28, 2018
    Assignee: HGST, Inc.
    Inventors: Shailendra Tripathi, Daniel McGregor, Enyew Tan
  • Patent number: 9959044
    Abstract: A memory device includes a first storage unit storing an address mapping table, and a control unit coupled to the first storage unit and including a second storage unit storing a risky mapping table and a cached mapping table. The control unit is configured to: write data into the first storage unit; update mapping information associated with the data in the risky mapping table; and store mapping information in the cached mapping table into the address mapping table.
    Type: Grant
    Filed: May 3, 2016
    Date of Patent: May 1, 2018
    Assignee: Macronix International Co., Ltd.
    Inventors: Ting-Yu Liu, Nai-Ping Kuo, Yi-Chun Liu, Jian-Shing Liu
  • Patent number: 9921918
    Abstract: Systems and methods are provided to manage a storage object in a data backup storage mechanism, which stores multiple versions of a data file received from a data source. To efficiently manage storage in the storage object, determinations may be made as to whether a number of free data blocks (i.e., data blocks available for re-use) of the storage object exceeds a threshold and whether a data block(s) of the data file corresponding to a valid data block(s) of the storage object has not been modified in at least a number of previous versions of the data file. Responsive to a result of one or both of these determinations, data in the valid data block(s) may be copied to unused data block(s) in another storage object, and the status of the valid data block(s) is updated to free data block(s) such that all blocks in the storage object are free data blocks.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: March 20, 2018
    Assignee: CA, Inc.
    Inventors: Venkata Subrahmanya Sarma Yellapragada, Vijaya Kumar Pothireddy, Umasankar Raju Yallamraju, Avi Khinvasara
  • Patent number: 9824023
    Abstract: A management method of a virtual-to-physical address translation system includes the following steps: providing a first storage space, wherein the first storage space includes a plurality of buffer entries; providing a second storage space, wherein the second storage space includes a plurality of translation entries, and the translation entries correspond to a plurality of translation indices; and when receiving a write instruction to write a first virtual-to-physical address translation into a specific buffer entry of the buffer entries, storing the first virtual-to-physical address translation in a write translation entry of the translation entries according to a first part of bits of a first virtual address corresponding to the first virtual-to-physical address translation, and storing the first virtual address and a write translation index corresponding to the write translation entry in the specific buffer entry.
    Type: Grant
    Filed: November 18, 2014
    Date of Patent: November 21, 2017
    Assignee: Realtek Semiconductor Corp.
    Inventor: Yen-Ju Lu
  • Patent number: 9785574
    Abstract: A system may include a memory that includes a plurality of pages, a processor, and a translation lookaside buffer (TLB) that includes a plurality of entries. The processor may be configured to access data from a subset of the plurality of pages dependent upon a first virtual address. The TLB may be configured to compare the first virtual address to respective address information included in each entry of the plurality of entries. The TLB may be further configured to add a new entry to the plurality of entries in response to a determination that the first virtual address fails to match the respective address information included in each entry of the plurality of entries. The new entry may include address information corresponding to at least two pages of the subset of the plurality of pages.
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: October 10, 2017
    Assignee: Oracle International Corporation
    Inventor: Yuan Chou
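    The multi-page entry can be sketched as a coalesced TLB entry holding a base virtual page, a base physical page, and a span; a new entry covers two pages when the neighboring translation happens to be physically contiguous. The field names and the two-page limit are illustrative assumptions.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t vpn_base;   /* first virtual page number covered  */
        uint64_t ppn_base;   /* first physical page number covered */
        uint8_t  span;       /* consecutive pages covered          */
        bool     valid;
    } coalesced_tlb_entry;

    bool tlb_lookup(const coalesced_tlb_entry *e, uint64_t vpn, uint64_t *ppn_out)
    {
        if (!e->valid || vpn < e->vpn_base || vpn >= e->vpn_base + e->span)
            return false;
        *ppn_out = e->ppn_base + (vpn - e->vpn_base);
        return true;
    }

    /* On a miss, install an entry that also covers the next page when its
     * translation is contiguous with the missing page's translation. */
    void tlb_install(coalesced_tlb_entry *e, uint64_t vpn, uint64_t ppn,
                     uint64_t next_ppn /* translation of vpn + 1, if known */)
    {
        e->vpn_base = vpn;
        e->ppn_base = ppn;
        e->span     = (next_ppn == ppn + 1) ? 2 : 1;
        e->valid    = true;
    }
    ```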
  • Patent number: 9767043
    Abstract: A method, a system and a computer-readable medium for writing to a cache memory are provided. The method comprises maintaining a write count associated with a set, the set containing a memory block associated with a physical block address. A mapping from a logical address to the physical address of the block is also maintained. The method shifts the mapping based on the value of the write count and writes data to the block based on the mapping.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: September 19, 2017
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Zhe Wang, Yuan Xie, Yi Xu, Junli Gu, Ting Cao
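    A short C sketch of a write-count-driven mapping shift (a wear-leveling style rotation of logical-to-physical block placement within a set); the interval, set geometry, and the omitted data migration are assumptions, not the patented mechanism.
    ```c
    #include <stdint.h>

    #define WAYS           8
    #define SHIFT_INTERVAL 1024   /* writes per set before the mapping rotates (assumed) */

    /* Per-set state: a write counter and the current rotation of the mapping. */
    typedef struct {
        uint32_t write_count;
        uint8_t  shift;
    } set_state;

    /* Map a logical block within the set to a physical way using the current shift. */
    uint8_t physical_way(const set_state *s, uint8_t logical_way)
    {
        return (uint8_t)((logical_way + s->shift) % WAYS);
    }

    /* Record a write; rotate the mapping once the interval is reached so writes
     * spread across the ways. A real design must also migrate or invalidate the
     * blocks whose placement changed, which this sketch omits. */
    void on_write(set_state *s)
    {
        if (++s->write_count >= SHIFT_INTERVAL) {
            s->write_count = 0;
            s->shift = (uint8_t)((s->shift + 1) % WAYS);
        }
    }
    ```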
  • Patent number: 9740597
    Abstract: Approaches for more efficiently executing calls to native code from within a managed execution environment are described. The techniques involve attempting to execute a native call, such as a call to a C function from within Java code, using a single hardware transaction. Not only is the native code executed in a hardware transaction, but also various transitional operations needed for transitioning between managed execution mode and native execution mode. If the hardware transaction is successful, at least some of the operations that would normally be performed during transitions between modes may be omitted or simplified. If the hardware transaction is unsuccessful, the native calls may be performed as they normally would, outside of hardware transactions.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: August 22, 2017
    Assignee: Oracle International Corporation
    Inventors: John R. Rose, Victor Luchangco, David Dice
  • Patent number: 9575881
    Abstract: Systems, methods, and computer programs are disclosed for allocating memory in a portable computing device having a non-uniform memory architecture. One embodiment of a method comprises: receiving from a process executing on a first system on chip (SoC) a request for a virtual memory page, the first SoC electrically coupled to a second SoC via an interchip interface, the first SoC electrically coupled to a first local volatile memory device via a first high-performance bus and the second SoC electrically coupled to a second local volatile memory device via a second high-performance bus; determining a free physical page pair comprising a same physical address available on the first and second local volatile memory devices; and mapping the free physical page pair to a single virtual page address.
    Type: Grant
    Filed: December 4, 2014
    Date of Patent: February 21, 2017
    Assignee: QUALCOMM INCORPORATED
    Inventors: Stephen Arthur Molloy, Dexter Tamio Chun
  • Patent number: 9569322
    Abstract: A method for memory migration between addressing schemes, including: receiving a first request to access a first memory address and a second request to access a second memory address; comparing the first memory address and the second memory address with a barrier pointer referencing a barrier address and separating migrated addresses and un-migrated addresses; tagging the first request with a first tag indicative of the first addressing scheme in response to the first memory address being on an un-migrated side of the barrier address; tagging the second request with a second tag indicative of the second addressing scheme in response to the second memory address being on a migrated side of the barrier address; and sending the first request to a first memory controller unit (MCU) and the second request to a second MCU.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: February 14, 2017
    Assignee: Oracle International Corporation
    Inventors: Ali Vahidsafa, Connie Wai Mun Cheung
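    The barrier comparison and tagging step reduces to a few lines of C; the migration direction (addresses below the barrier already migrated) and the MCU routing are assumptions for illustration.
    ```c
    #include <stdint.h>

    typedef enum { SCHEME_OLD = 0, SCHEME_NEW = 1 } addr_scheme;

    typedef struct {
        uint64_t    addr;
        addr_scheme scheme;   /* tag attached before routing to an MCU */
    } tagged_request;

    /* Barrier pointer separating migrated addresses from un-migrated ones. */
    static uint64_t barrier;

    /* Tag an incoming request by comparing its address with the barrier. */
    tagged_request tag_request(uint64_t addr)
    {
        tagged_request r = { addr, addr < barrier ? SCHEME_NEW : SCHEME_OLD };
        return r;
    }

    /* Route the request to the memory controller unit handling its scheme. */
    int route_to_mcu(tagged_request r)
    {
        return r.scheme == SCHEME_NEW ? 1 : 0;   /* MCU ids are placeholders */
    }
    ```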
  • Patent number: 9558119
    Abstract: Main memory operation in a symmetric multiprocessing computer, the computer comprising one or more processors operatively coupled through a cache controller to at least one cache of main memory, the main memory shared among the processors, the computer further comprising input/output (‘I/O’) resources, including receiving, in the cache controller from an issuing resource, a memory instruction for a memory address, the memory instruction requiring writing data to main memory; locking by the cache controller the memory address against further memory operations for the memory address; advising the issuing resource of completion of the memory instruction before the memory instruction completes in main memory; issuing by the cache controller the memory instruction to main memory; and unlocking the memory address only after completion of the memory instruction in main memory.
    Type: Grant
    Filed: June 23, 2010
    Date of Patent: January 31, 2017
    Assignee: International Business Machines Corporation
    Inventors: Garrett M. Drapala, Pak-Kin Mak, Arthur J. O'Neill, Jr., Craig R. Walters
  • Patent number: 9547602
    Abstract: Presented systems and methods can facilitate efficient information storage and tracking operations, including translation look aside buffer operations. In one embodiment, the systems and methods effectively allow the caching of invalid entries (with the attendant benefits, e.g., regarding power, resource usage, stalls, etc.), while maintaining the illusion that the TLBs do not in fact cache invalid entries (e.g., act in compliance with architectural rules). In one exemplary implementation, an "unreal" TLB entry effectively serves as a hint that the linear address in question currently has no valid mapping. In one exemplary implementation, speculative operations that hit an unreal entry are discarded; architectural operations that hit an unreal entry discard the entry and perform a normal page walk, either obtaining a valid entry, or raising an architectural fault.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: January 17, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Alexander Klaiber, Guillermo Juan Rozas
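    A C sketch of the "unreal" entry behavior: a speculative hit on an unreal entry is discarded, while an architectural hit drops the entry and performs a normal walk, either refilling the entry or caching the negative result again. The entry layout and the stub walker are assumptions; only the hit-handling rules follow the abstract.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t vpn;
        uint64_t ppn;
        bool     valid;
        bool     unreal;   /* hint: this linear address currently has no valid mapping */
    } tlb_entry;

    typedef enum { ACCESS_SPECULATIVE, ACCESS_ARCHITECTURAL } access_kind;
    typedef enum { RESULT_TRANSLATED, RESULT_DISCARDED, RESULT_FAULT } lookup_result;

    /* Stub standing in for the hardware page walker (pretends an identity mapping). */
    static bool page_walk(uint64_t vpn, uint64_t *ppn) { *ppn = vpn; return true; }

    lookup_result tlb_access(tlb_entry *e, uint64_t vpn, access_kind kind, uint64_t *ppn_out)
    {
        if (e->valid && e->vpn == vpn && !e->unreal) {
            *ppn_out = e->ppn;
            return RESULT_TRANSLATED;                /* ordinary hit */
        }

        if (e->valid && e->vpn == vpn && e->unreal) {
            if (kind == ACCESS_SPECULATIVE)
                return RESULT_DISCARDED;             /* speculative op is simply dropped */
            e->valid = false;                        /* architectural op discards the hint... */
        }

        /* ...and performs a normal walk, either obtaining a valid entry or faulting. */
        if (page_walk(vpn, &e->ppn)) {
            e->vpn = vpn; e->valid = true; e->unreal = false;
            *ppn_out = e->ppn;
            return RESULT_TRANSLATED;
        }
        e->vpn = vpn; e->valid = true; e->unreal = true;  /* cache the negative result */
        return RESULT_FAULT;                              /* caller raises the fault   */
    }
    ```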
  • Patent number: 9405700
    Abstract: Various methods and apparatus are described for communicating transactions between one or more initiator IP cores and one or more target IP cores coupled to an interconnect. A centralized Memory Management logic Unit (MMU) is located in the interconnect for virtualization and sharing of integrated circuit resources including target cores between the one or more initiator IP cores. A master translation look aside buffer (TLB) stores virtualization and sharing information in the entries of the master TLB. A set of two or more translation look aside buffers (TLBs) locally store virtualization and sharing information replicated from the master TLB. Logic in the MMU or other software updates the virtualization and sharing information replicated from the master TLB in the entries of one or more of the set of local TLBs.
    Type: Grant
    Filed: November 3, 2011
    Date of Patent: August 2, 2016
    Assignee: Sonics, Inc.
    Inventor: Drew E. Wingard
  • Patent number: 9405713
    Abstract: The functional circuitry of a network flow processor is partitioned into a number of rectangular islands. The islands are disposed in rows. A configurable mesh data bus extends through the islands. A first island includes a first memory and a first data bus interface. A second island includes a processor, a second memory, and a second data bus interface. The processor can issue a command for a target memory to do an action. If a field in the command has a first value then the target memory is the first memory, whereas if the field has a second value then the target memory is in the second memory. The command format is the same, regardless of whether the target memory is local or remote. If the target memory is remote, then a data bus bridge adds destination information before putting the command onto the global configurable mesh data bus.
    Type: Grant
    Filed: February 17, 2012
    Date of Patent: August 2, 2016
    Assignee: Netronome Systems, Inc.
    Inventors: Gavin J. Stark, Steven W. Zagorianakos
  • Patent number: 9330020
    Abstract: Detailed herein are systems, apparatuses, and methods for transparent page level instruction translation. Exemplary embodiments include an instruction translation lookaside buffer (iTLB), wherein each iTLB entry includes a linear address of a page in memory, a physical address of the page in memory, and a remapping indicator.
    Type: Grant
    Filed: December 27, 2013
    Date of Patent: May 3, 2016
    Assignee: Intel Corporation
    Inventors: Paul Caprioli, Vedvyas Shanbhogue, Koichi Yamada
  • Patent number: 9323534
    Abstract: A method includes determining, for a first thread of execution, a first speculative decoded operands signal and determining, for a second thread of execution, a second speculative decoded operands signal. The method further includes determining, for the first thread of execution, a first constant and determining, for the second thread of execution, a second constant. The method further compares the first speculative decoded operands signal to the second speculative decoded operands signal and uses the first and second constant to detect a wordline collision for accessing the memory array.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: April 26, 2016
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Ravindraraj Ramaraju, Kathryn C. Stacer
  • Patent number: 9319491
    Abstract: Methods and systems for a more efficient transmission of network traffic are provided. According to one embodiment, payload data originated by a user process running on a host processor of a network device is fetched by an interface of the network device by performing direct virtual memory addressing of a user memory space of a system memory of the network device on behalf of a network interface unit of the network device. The direct virtual memory addressing maps physical addresses of various portions of the payload data to corresponding virtual addresses. The payload data is segmented by the network interface unit across one or more packets.
    Type: Grant
    Filed: January 14, 2016
    Date of Patent: April 19, 2016
    Assignee: Fortinet, Inc.
    Inventors: Xu Zhou, David Chen, Lin Huang, Guansong Zhang
  • Patent number: 9189417
    Abstract: A method includes performing a speculative tablewalk. The method includes performing a tablewalk to determine an address translation for a speculative operation and determining whether the speculative operation has been upgraded to a non-speculative operation concurrently with performing the tablewalk. An apparatus is provided that includes a load-store unit to maintain execution operations. The load-store unit includes a tablewalker to perform a tablewalk and includes an input indicative of the operation being speculative or non-speculative as well as a state machine to determine actions performed during the tablewalk based on the input. The apparatus also includes a translation look-aside buffer. Computer readable storage devices for performing the methods and adapting a fabrication facility to manufacture the apparatus are provided.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: November 17, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Kaplan, Stephen P. Thompson
  • Patent number: 9081706
    Abstract: The disclosed embodiments provide techniques for reducing address-translation latency and the serialization latency of combined TLB and data cache misses in a coherent shared-memory system. For instance, the last-level TLB structures of two or more multiprocessor nodes can be configured to act together as either a distributed shared last-level TLB or a directory-based shared last-level TLB. Such TLB-sharing techniques increase the total amount of useful translations that are cached by the system, thereby reducing the number of page-table walks and improving performance. Furthermore, a coherent shared-memory system with a shared last-level TLB can be further configured to fuse TLB and cache misses such that some of the latency of data coherence operations is overlapped with address translation and data cache access latencies, thereby further improving the performance of memory operations.
    Type: Grant
    Filed: May 10, 2012
    Date of Patent: July 14, 2015
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Pranay Koka, Michael O. McCracken, Herbert D. Schwetman, Jr., David A. Munday
  • Patent number: 9069690
    Abstract: In an embodiment, a page miss handler includes paging caches and a first walker to receive a first linear address portion and to obtain a corresponding portion of a physical address from a paging structure, a second walker to operate concurrently with the first walker, and a logic to prevent the first walker from storing the obtained physical address portion in a paging cache responsive to the first linear address portion matching a corresponding linear address portion of a concurrent paging structure access by the second walker. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 13, 2012
    Date of Patent: June 30, 2015
    Assignee: Intel Corporation
    Inventors: Gur Hildesheim, Chang Kian Tan, Robert S. Chappell, Rohit Bhatia
  • Patent number: 9032145
    Abstract: A memory device includes an address protection system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The protection system is used to prevent at least some of a plurality of processors in a system from accessing addresses designated by one of the processors as a protected memory address. Until the processor releases the protection, only the designating processor can access the memory device at the protected address. If the memory device contains a cache memory, the protection system can alternatively or additionally be used to protect cache memory addresses.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: May 12, 2015
    Assignee: Micron Technology, Inc.
    Inventor: David Resnick
  • Patent number: 9015400
    Abstract: A computer system and a method are provided that reduce the amount of time and computing resources that are required to perform a hardware table walk (HWTW) in the event that a translation lookaside buffer (TLB) miss occurs. If a TLB miss occurs when performing a stage 2 (S2) HWTW to find the PA at which a stage 1 (S1) page table is stored, the MMU uses the IPA to predict the corresponding PA, thereby avoiding the need to perform any of the S2 table lookups. This greatly reduces the number of lookups that need to be performed when performing these types of HWTW read transactions, which greatly reduces processing overhead and performance penalties associated with performing these types of transactions.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: April 21, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Thomas Zeng, Azzedine Touzni, Tzung Ren Tzeng, Phil J. Bostley
  • Patent number: 9009386
    Abstract: A system includes a memory device including a real memory and a tracking mechanism configured to track relationships between multiple virtual memory addresses and real memory. The system further includes a processor configured to perform the below method and/or execute the below computer program product. One method includes mapping a first virtual memory address to a real memory in a memory device and mapping a second virtual memory address to the real memory. Here, the first virtual memory address is authorized to modify data in the real memory and the second virtual memory address is not authorized to modify the data in the real memory. One computer storage medium includes a computer program product for performing the above method.
    Type: Grant
    Filed: December 13, 2010
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Brian D. Hatfield, Wenjeng Ko, Lei Liu
  • Patent number: 9009446
    Abstract: The disclosed embodiments provide a system that uses broadcast-based TLB-sharing techniques to reduce address-translation latency in a shared-memory multiprocessor system with two or more nodes that are connected by an electrical interconnect. During operation, a first node receives a memory operation that includes a virtual address. Upon determining that one or more TLB levels of the first node will miss for the virtual address, the first node uses the electrical interconnect to broadcast a TLB request to one or more additional nodes of the shared-memory multiprocessor in parallel with scheduling a speculative page-table walk for the virtual address. If the first node receives a TLB entry from another node of the shared-memory multiprocessor via the electrical interconnect in response to the TLB request, the first node cancels the speculative page-table walk. Otherwise, if no response is received, the first node instead waits for the completion of the page-table walk.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: April 14, 2015
    Assignee: Oracle International Corporation
    Inventors: Pranay Koka, David A. Munday, Michael O. McCracken, Herbert D. Schwetman, Jr.
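    The race between the broadcast and the speculative walk can be sketched as a simple poll loop; the fabric and walker interfaces below are stubs invented for illustration, not Oracle's interconnect API.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    /* Stubs standing in for the electrical interconnect and the local walker. */
    static void broadcast_tlb_request(uint64_t vpn)            { (void)vpn; }
    static void start_speculative_walk(uint64_t vpn)           { (void)vpn; }
    static void cancel_walk(uint64_t vpn)                      { (void)vpn; }
    static bool poll_remote_reply(uint64_t vpn, uint64_t *ppn) { (void)vpn; (void)ppn; return false; }
    static bool poll_walk_done(uint64_t vpn, uint64_t *ppn)    { *ppn = vpn; return true; }

    /* On a last-level TLB miss: ask the other nodes and start the walk at the
     * same time; the first answer wins, and the walk is cancelled when a remote
     * node already held the translation. */
    uint64_t resolve_miss(uint64_t vpn)
    {
        uint64_t ppn;

        broadcast_tlb_request(vpn);
        start_speculative_walk(vpn);

        for (;;) {
            if (poll_remote_reply(vpn, &ppn)) {   /* remote TLB entry arrived first */
                cancel_walk(vpn);
                return ppn;
            }
            if (poll_walk_done(vpn, &ppn))        /* no response: walk completed    */
                return ppn;
        }
    }
    ```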
  • Patent number: 9003161
    Abstract: A first virtual memory address is mapped to a real memory in a memory device, and a second virtual memory address is mapped to the real memory. Here, the first virtual memory address is authorized to modify data in the real memory and the second virtual memory address is not authorized to modify the data in the real memory.
    Type: Grant
    Filed: June 11, 2012
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Brian D. Hatfield, Wenjeng Ko, Lei Liu
  • Publication number: 20150082000
    Abstract: A memory management unit comprises an address translation unit that receives a memory access request as a virtual address and translates the virtual address to a physical address. A translation lookaside buffer stores page descriptors of a plurality of physical addresses, the address translation unit determining whether a page descriptor of a received virtual address is present in the translation lookaside buffer. A prefetch buffer stores page descriptors of the plurality of physical addresses. The address translation unit, in the event the page descriptor of the received virtual address is not present in the translation lookaside buffer, further determines whether the page descriptor of the received virtual address is present in the prefetch buffer; updates the translation lookaside buffer with the page descriptor in response to the determination; and performs a translation of the virtual address to a physical address using the page descriptor.
    Type: Application
    Filed: August 19, 2014
    Publication date: March 19, 2015
    Inventors: Sung-Min Hong, Sim Ji Lee, JaeYoung Hur, JiWoong Kwon, Il Park, Jong-Jin Lee, Jinyong Jung
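    The three-step lookup path in this abstract (translation lookaside buffer, then prefetch buffer, then a full table walk, promoting the descriptor into the TLB on a prefetch-buffer hit) fits in a few lines of C; the helper functions are stubs invented for illustration.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    /* Stubs standing in for the TLB, the prefetch buffer, and the table walker. */
    static bool     tlb_find(uint64_t vpn, uint64_t *desc)  { (void)vpn; (void)desc; return false; }
    static bool     pb_find(uint64_t vpn, uint64_t *desc)   { (void)vpn; (void)desc; return false; }
    static uint64_t table_walk(uint64_t vpn)                { return vpn << 12; }
    static void     tlb_insert(uint64_t vpn, uint64_t desc) { (void)vpn; (void)desc; }

    /* Return the page descriptor for a virtual page number. */
    uint64_t get_page_descriptor(uint64_t vpn)
    {
        uint64_t desc;

        if (tlb_find(vpn, &desc))
            return desc;                    /* TLB hit */

        if (pb_find(vpn, &desc)) {
            tlb_insert(vpn, desc);          /* promote the prefetched descriptor */
            return desc;
        }

        desc = table_walk(vpn);             /* slow path */
        tlb_insert(vpn, desc);
        return desc;
    }
    ```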
  • Publication number: 20150058592
    Abstract: A chip multiprocessor includes a plurality of cores each having a translation lookaside buffer (TLB) and a prefetch buffer (PB). Each core is configured to determine a TLB miss on the core's TLB for a virtual page address and determine whether or not there is a PB hit on a PB entry in the PB for the virtual page address. If it is determined that there is a PB hit, the PB entry is added to the TLB. If it is determined that there is not a PB hit, the virtual page address is used to perform a page walk to determine a translation entry, the translation entry is added to the TLB and the translation entry is prefetched to each other one of the plurality of cores.
    Type: Application
    Filed: October 3, 2014
    Publication date: February 26, 2015
    Inventors: Abhishek Bhattacharjee, Margaret Martonosi
  • Patent number: 8966221
    Abstract: A lookup operation is performed in a translation look aside buffer based on a first translation request as current translation request, wherein a respective absolute address is returned to a corresponding requestor for the first translation request as translation result in case of a hit. A translation engine is activated to perform at least one translation table fetch in case the current translation request does not hit an entry in the translation look aside buffer, wherein the translation engine is idle waiting for the at least one translation table fetch to return data, reporting the idle state of the translation engine as lookup under miss condition and accepting a currently pending translation request as second translation request, wherein a lookup under miss sequence is performed in the translation look aside buffer based on said second translation request.
    Type: Grant
    Filed: June 21, 2011
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ute Gaertner, Thomas Koehler
  • Patent number: 8959302
    Abstract: An exemplary computer system includes a server module including a first processor and first memory, a storage module including a second processor, a second memory and a storage device, and a transfer module. The transfer module retrieves a first transfer list including an address of a first storage area, which is set on the first memory for a read command, from the server module. The transfer module retrieves a second transfer list including an address of a second storage area in the second memory, in which data corresponding to the read command read from the storage device is stored temporarily, from the storage module. The transfer module sends the data corresponding to the read command in the second storage area to the first storage area by controlling the data transfer between the second storage area and the first storage area based on the first and second transfer lists.
    Type: Grant
    Filed: August 7, 2014
    Date of Patent: February 17, 2015
    Assignee: Hitachi, Ltd.
    Inventors: Yuki Kondoh, Isao Ohara
  • Patent number: 8954435
    Abstract: A method for storage reclamation in a shared storage device. The method includes executing a distributed computer system having a plurality of file systems accessing storage on a shared storage device, and initiating a reclamation operation by using a reclamation agent that accesses the shared storage device. The method further includes reading the file system data structure that represent unallocated storage blocks of one of the plurality of file systems that will undergo a reclamation operation. A plurality of I/O resources that are used to provide I/O to the unallocated storage blocks are then interrupted. Storage from the unallocated storage blocks is then reclaimed, and normal operation of the I/O resources that are used to provide I/O to the unallocated storage blocks is resumed.
    Type: Grant
    Filed: April 22, 2011
    Date of Patent: February 10, 2015
    Assignee: Symantec Corporation
    Inventors: Kedar Shrikrishna Patwardhan, Anirban Mukherjee, Kirubakaran Kaliannan
  • Patent number: 8954697
    Abstract: A system configures page tables to cause an operating system to copy original page data in a data store when any one of the application processes makes a first write request for the original page data. The system detects a page fault from a memory management unit receiving a first write request from one of the application processes and creates the copy in physical memory to allow the application process to modify the page data copy. The other application processes have read access to the original page data. The system replaces the original page data in the data store with the page data copy in response to receiving a first synchronization request from the application process and updates a page table for one of the other application processes to configure access to the replaced page data in response to receiving a second synchronization request from the one other application process.
    Type: Grant
    Filed: August 5, 2010
    Date of Patent: February 10, 2015
    Assignee: Red Hat, Inc.
    Inventors: Neil R. T. Horman, Eric L. Paris, Jeffrey T. Layton
  • Patent number: 8954648
    Abstract: The invention provides a memory device. In one embodiment, the memory device comprises a flash memory, a memory, and a controller. The flash memory comprises a plurality of blocks for data storage. The memory stores an address mapping table recording relationships between logical addresses and physical addresses of the blocks therein. The controller divides the address mapping table stored in the memory to a plurality of mapping table units, updates relationships between the logical addresses and the physical addresses stored in the mapping table units, determines whether data access performed to the flash memory fulfills the conditions of a first specific requirement, and when the data access fulfills the conditions of the first requirement, the controller selects a target mapping table unit from the mapping table units, and stores the target mapping table unit and a corresponding time stamp as a mapping table unit data to the flash memory.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: February 10, 2015
    Assignee: Via Technologies, Inc.
    Inventors: Liang Chen, Chen Xiu
  • Patent number: 8938602
    Abstract: A first processing unit and a second processing unit can access a system memory that stores a common page table that is common to the first processing unit and the second processing unit. The common page table can store virtual memory addresses to physical memory addresses mapping for memory chunks accessed by a job of an application. A page entry, within the common page table, can include a first set of attribute bits that defines accessibility of the memory chunk by the first processing unit, a second set of attribute bits that defines accessibility of the same memory chunk by the second processing unit, and physical address bits that define a physical address of the memory chunk.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: January 20, 2015
    Assignee: QUALCOMM Incorporated
    Inventors: Colin Christopher Sharp, Thomas Andrew Sartorius
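    A minimal sketch of a shared page-table entry carrying per-processing-unit attribute bits alongside the physical address; the bit positions and the two-unit split are assumptions for illustration, not the patented encoding.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    /* Attribute bits for each processing unit plus the physical address bits. */
    #define PTE_CPU_READ   (1ull << 0)
    #define PTE_CPU_WRITE  (1ull << 1)
    #define PTE_GPU_READ   (1ull << 2)
    #define PTE_GPU_WRITE  (1ull << 3)
    #define PTE_ADDR_MASK  (~0xFFFull)    /* 4 KiB pages assumed */

    typedef enum { UNIT_CPU, UNIT_GPU } unit_id;

    /* Check whether the given processing unit may perform the access. */
    bool pte_allows(uint64_t pte, unit_id unit, bool write)
    {
        uint64_t read_bit  = (unit == UNIT_CPU) ? PTE_CPU_READ  : PTE_GPU_READ;
        uint64_t write_bit = (unit == UNIT_CPU) ? PTE_CPU_WRITE : PTE_GPU_WRITE;
        return (pte & (write ? write_bit : read_bit)) != 0;
    }

    /* Extract the physical address of the memory chunk from the entry. */
    uint64_t pte_phys_addr(uint64_t pte) { return pte & PTE_ADDR_MASK; }
    ```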
  • Patent number: 8930649
    Abstract: A method begins by a dispersed storage (DS) processing module concurrently receiving a first data stream and a second data stream for transmission to a receiving entity. The method continues with the DS processing module segmenting each of the first and second data streams to produce a first plurality of data segments and a second plurality of data segments, dividing one of the first plurality of data segments into a first plurality of data blocks, and dividing one of the second plurality of data segments into a second plurality of data blocks. The method continues with the DS processing module creating a data matrix from the first and second plurality of data blocks and generating a coded matrix from the data matrix and an encoding matrix. The method continues with the DS processing module outputting one or more pairs of coded values of the coded matrix to the receiving entity.
    Type: Grant
    Filed: August 2, 2012
    Date of Patent: January 6, 2015
    Assignee: Cleversafe, Inc.
    Inventors: Gary W. Grube, Timothy W. Markison
  • Patent number: 8924684
    Abstract: Approaches are described for reducing the number of memory address cache (e.g. TLB) flushes that need to be performed during the course of performing virtualized I/O. A device driver residing in a host domain registers a CPU that will be used for I/O processing and requests the hypervisor to pre-allocate a number of slots in the page tables to map memory pages during I/O operations. Upon receiving an I/O operation, when memory needs to be mapped, the driver provides the hypervisor with information about the registered CPU. The hypervisor uses the pre-allocated page table slots to create the new mapping and flushes the TLB cache corresponding to the CPU that will perform the I/O. The TLB cache belonging to other CPUs may not need to be flushed. The host driver ensures that the mapped memory page is used exclusively on the CPU or performs additional TLB flushes.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: December 30, 2014
    Assignee: Amazon Technologies, Inc.
    Inventor: Pradeep Vincent
  • Patent number: 8914609
    Abstract: A computing device includes an interface, memory, and a processing module. The memory stores a directory and inode tables. The directory stores a file identifier and a corresponding inumber for each file that is stored in storage units. An inode table stores an inumber, metadata, and a DSN address for each file stored in a corresponding storage unit. The processing module is operable to monitor, for each of the inode tables, utilization of the memory. The processing module is further operable to monitor, for each of the storage units, utilization of memory of the storage units. The processing module is further operable to process, for the inode table and/or the corresponding storage unit, per inode table memory utilization data and per storage unit memory utilization data to adjust memory utilization of the inode table and/or memory utilization of the corresponding storage unit.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: December 16, 2014
    Assignee: Cleversafe, Inc.
    Inventors: Jason K. Resch, Gary W. Grube, S. Christopher Gladwin
  • Patent number: 8909851
    Abstract: A method of operation of a storage control system including: providing a memory controller; accessing a volatile memory table by the memory controller; writing a non-volatile semiconductor memory for persisting changes in the volatile memory table; and restoring a logical-to-physical table in the volatile memory table, after a power cycle, by restoring a random access memory with a logical-to-physical partition from a most recently used list.
    Type: Grant
    Filed: February 8, 2012
    Date of Patent: December 9, 2014
    Assignee: Smart Storage Systems, Inc.
    Inventors: Ryan Jones, Robert W. Ellis, Joseph Taylor
  • Patent number: 8904123
    Abstract: A virtual logical unit that stores learning metadata is allocated in a first storage server having a first plurality of clusters, wherein the learning metadata indicates a type of storage device in which selected data of the first plurality of clusters of the first storage server are stored. A copy services command is received to copy the selected data from the first storage server to a second storage server having a second plurality of clusters. The virtual logical unit that stores the learning metadata is copied, from the first storage server to the second storage server, via the copy services command. Selected logical units corresponding to the selected data are copied from the first storage server to the second storage server, and the learning metadata is used to place the selected data in the type of storage device indicated by the learning metadata.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: December 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Joshua J. Crawford, Benjamin J. Donie, Andreas B. Koster