Directory Tables (e.g., DLAT, TLB) Patents (Class 711/207)
  • Patent number: 10409726
    Abstract: Disclosed in some examples are methods, systems, and machine readable mediums that dynamically adjust the size of an L2P cache in a memory device in response to observed operational conditions. The L2P cache may borrow memory space from a donor memory location, such as a read or write buffer. For example, if the system notices a high volume of read requests, the system may increase the size of the L2P cache at the expense of the write buffer (which may be decreased). Likewise, if the system notices a high volume of write requests, the system may increase the size of the L2P cache at the expense of the read buffer (which may be decreased).
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: September 10, 2019
    Assignee: Micron Technology, Inc.
    Inventor: Sebastien Andre Jean
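
A minimal C sketch of the resizing policy this abstract describes: grow the L2P cache from whichever donor buffer serves the less frequent request type. The structure, threshold, and step size are illustrative assumptions, not details from the patent.

```c
/* Minimal sketch of the dynamic L2P-cache resizing policy described above.
 * The threshold, step size, and structure names are illustrative assumptions. */
#include <stdio.h>

struct buffers {
    unsigned l2p_cache_kb;   /* logical-to-physical mapping cache */
    unsigned read_buf_kb;    /* donor region when writes dominate */
    unsigned write_buf_kb;   /* donor region when reads dominate  */
};

/* Grow the L2P cache at the expense of whichever donor buffer serves
 * the less frequent request type, mirroring the behaviour in the abstract. */
static void rebalance(struct buffers *b, unsigned reads, unsigned writes, unsigned step_kb)
{
    if (reads > 2 * writes && b->write_buf_kb >= step_kb) {
        b->write_buf_kb -= step_kb;
        b->l2p_cache_kb += step_kb;
    } else if (writes > 2 * reads && b->read_buf_kb >= step_kb) {
        b->read_buf_kb -= step_kb;
        b->l2p_cache_kb += step_kb;
    }
}

int main(void)
{
    struct buffers b = { .l2p_cache_kb = 256, .read_buf_kb = 128, .write_buf_kb = 128 };
    rebalance(&b, /*reads=*/900, /*writes=*/100, /*step_kb=*/32);
    printf("L2P=%uKB read=%uKB write=%uKB\n", b.l2p_cache_kb, b.read_buf_kb, b.write_buf_kb);
    return 0;
}
```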
  • Patent number: 10409500
    Abstract: One embodiment provides a memory controller. The memory controller includes logical block address (LBA) section defining logic to define a plurality of LBA sections for memory device circuitry, each section including a range of LBAs, and each section including a unique indirection-unit (IU) granularity; wherein the IU granularity defines a physical region size of the memory device. The LBA section defining logic also to generate a plurality of logical-to-physical (L2P) tables to map a plurality of LBAs to physical locations of the memory device, each L2P table corresponding to an LBA section. The memory controller also includes LBA section notification logic to notify a file system of the plurality of LBA sections to enable the file system to issue a read and/or write command having an LBA based on an IU granularity associated with an LBA section.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: September 10, 2019
    Assignee: Intel Corporation
    Inventors: Sanjeev N. Trika, Peng Li, Jawad B. Khan
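
A hedged C sketch of per-section indirection-unit (IU) granularity: each LBA range carries its own IU size and indexes its own L2P table. The section layout and sizes are invented example values, not taken from the patent.

```c
/* Sketch of per-section IU granularity: every LBA section has its own IU size
 * and its own L2P table; the section boundaries below are made-up examples. */
#include <stdint.h>
#include <stdio.h>

struct lba_section {
    uint64_t first_lba, last_lba;  /* inclusive LBA range of this section     */
    uint32_t iu_lbas;              /* LBAs per indirection unit (granularity) */
};

static const struct lba_section sections[] = {
    { 0,        999999,  1  },     /* fine-grained: 1 LBA per IU   */
    { 1000000, 9999999,  32 },     /* coarse-grained: 32 LBAs per IU */
};

/* Return the index into the section's private L2P table, or -1 if unmapped. */
static int64_t l2p_index(uint64_t lba, int *section_out)
{
    for (unsigned i = 0; i < sizeof sections / sizeof sections[0]; i++) {
        if (lba >= sections[i].first_lba && lba <= sections[i].last_lba) {
            *section_out = (int)i;
            return (int64_t)((lba - sections[i].first_lba) / sections[i].iu_lbas);
        }
    }
    return -1;
}

int main(void)
{
    int s;
    int64_t idx = l2p_index(1000123, &s);
    printf("section %d, L2P entry %lld\n", s, (long long)idx);
    return 0;
}
```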
  • Patent number: 10409745
    Abstract: Interruption facility for adjunct processor queues. In response to a queue transitioning from a no replies pending state to a reply pending state, an interruption is initiated. This interruption signals to a processor that a reply to a request is waiting on the queue. In order for the queue to take advantage of the interruption capability, it is enabled for interruptions.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: September 10, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles W. Gainey, Jr., Klaus Meissner, Damian L. Osisek, Klaus Werner
  • Patent number: 10394558
    Abstract: Technical solutions are described for out-of-order (OoO) execution of one or more instructions by a processing unit. The execution includes receiving, by a load-store unit (LSU) of the processing unit, an OoO window of instructions including a plurality of instructions to be executed OoO, and issuing, by the LSU, instructions from the OoO window. The issuing includes selecting an instruction from the OoO window, the instruction using an effective address. Further, in response to the instruction being a load instruction, it is determined whether the effective address is present in an effective address directory (EAD). In response to the effective address being present in the EAD, the load instruction is issued using the effective address. Further, in response to the instruction being a store instruction, a real address mapped to the effective address is determined from an effective-real translation (ERT) table, and the store instruction is issued using the real address.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: August 27, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christopher Gonzalez, Bryan Lloyd, Balaram Sinharoy
  • Patent number: 10387326
    Abstract: A computer-implemented method includes associating an initial use order with a plurality of target sets of a translation lookaside buffer (TLB), where the initial use order indicates an order of use of the plurality of target sets. The plurality of target sets are associated with an initial least-recently-used (LRU) state based on the initial use order. A new use order for the plurality of target sets is generated. Generating the new use order includes moving a first target set to a least-recently-used position, responsive to a purge of the first target set. The LRU state of the plurality of target sets is updated based on the new use order, responsive to the purge of the first target set. The first target set is identified as eligible for replacement according to an LRU replacement policy of the TLB, based at least in part on the purge of the first target set.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: August 20, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Uwe Brandt, Markus Helms, Thomas Köhler, Frank Lehnert
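
An illustrative C sketch (not the patented hardware) of the purge handling described above: the purged target set is moved to the least-recently-used position so the LRU replacement policy picks it first.

```c
/* Sketch: a purged target set is moved to the LRU position of the use order,
 * making it the next candidate for replacement. Sizes are example values. */
#include <stdio.h>

#define NSETS 4

/* use_order[0] is the LRU position, use_order[NSETS-1] the MRU position. */
static int use_order[NSETS] = { 0, 1, 2, 3 };

static void purge_set(int set)
{
    int i, j;
    for (i = 0; i < NSETS; i++)
        if (use_order[i] == set)
            break;
    /* Shift everything below the purged set up, then place it at the LRU slot. */
    for (j = i; j > 0; j--)
        use_order[j] = use_order[j - 1];
    use_order[0] = set;
}

int main(void)
{
    purge_set(2);             /* set 2 becomes the next replacement victim */
    for (int i = 0; i < NSETS; i++)
        printf("%d ", use_order[i]);
    printf("\n");             /* prints: 2 0 1 3 */
    return 0;
}
```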
  • Patent number: 10380015
    Abstract: Apparatus, systems, methods, and computer program products are disclosed for logical address range mapping for storage devices. A system includes a set of non-volatile memory elements accessible using a set of physical addresses. A system includes a controller for a set of non-volatile memory elements. A controller is configured to maintain a hierarchical data structure comprising a plurality of levels for mapping logical addresses to a set of physical addresses. A controller is configured to receive an input/output (I/O) request. A controller is configured to translate a logical address for an I/O request to a physical address utilizing a largest mapped logical address range that includes the logical address in a hierarchical data structure. A level includes one or more mappings between logical address ranges and physical address ranges at a range size for the level.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: August 13, 2019
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Igor Genshaft, Marina Frid
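
A C sketch of the hierarchical lookup the abstract describes: levels are checked from the largest range size down, and the first (largest) mapped range covering the logical address supplies the translation. The level sizes and table contents are invented for illustration.

```c
/* Sketch of range-based logical-to-physical translation: walk levels from the
 * largest range size down and use the first mapping that covers the address. */
#include <stdint.h>
#include <stdio.h>

struct range_map { uint64_t log_start, phys_start, len; };

struct level { uint64_t range_size; const struct range_map *maps; unsigned n; };

static const struct range_map big[]   = { { 0x000000, 0x800000, 0x100000 } };
static const struct range_map small[] = { { 0x100000, 0x900000, 0x001000 } };

static const struct level levels[] = {
    { 0x100000, big,   1 },   /* 1 MiB ranges: checked first (largest) */
    { 0x001000, small, 1 },   /* 4 KiB ranges: fallback                */
};

static int translate(uint64_t la, uint64_t *pa)
{
    for (unsigned l = 0; l < 2; l++)
        for (unsigned i = 0; i < levels[l].n; i++) {
            const struct range_map *m = &levels[l].maps[i];
            if (la >= m->log_start && la < m->log_start + m->len) {
                *pa = m->phys_start + (la - m->log_start);
                return 0;
            }
        }
    return -1;
}

int main(void)
{
    uint64_t pa;
    if (translate(0x0ABCD, &pa) == 0)
        printf("0x0ABCD -> 0x%llx\n", (unsigned long long)pa);
    return 0;
}
```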
  • Patent number: 10380031
    Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU) including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
  • Patent number: 10372618
    Abstract: An apparatus and method are provided for maintaining address translation data within an address translation cache. The address translation cache has a plurality of entries, where each entry is used to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. Control circuitry is used to perform an allocation process to determine the address translation data to be stored in each entry. The address translation cache is used to store address translation data of a plurality of different types representing address translation data specified at respective different levels of address translation within a multiple-level page table walk. The plurality of different types comprises a final level type of address translation data that identifies a full translation from the virtual address to the physical address, and at least one intermediate level type of address translation data that identifies a partial translation of the virtual address.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: August 6, 2019
    Assignee: ARM Limited
    Inventors: Miles Robert Dooley, Abhishek Raja, Barry Duane Williamson, Huzefa Moiz Sanjeliwala
  • Patent number: 10360383
    Abstract: Systems and methods are disclosed for detecting high-level functionality of an application executing on a computing device. One method includes storing, in a secure memory, an application-specific virtual address mapping table for an application. The application-specific virtual address mapping table comprises a plurality of virtual address offsets in the application binary code mapped to corresponding target application functionalities. In response to launching the application, a process-specific virtual address mapping table is generated for an instance of an application process to be executed. The process-specific virtual address mapping table defines actual virtual addresses corresponding to the target application functionalities using the virtual address offsets in the application-specific virtual address mapping table.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: July 23, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Subrato Kumar De, Sajo Sunder George
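
A C sketch of resolving process-specific virtual addresses from the application-specific offset table: at launch, each stored offset is added to the actual load base of the process instance. The functionality names, offsets, and load base are illustrative assumptions.

```c
/* Sketch: actual VA = per-instance load base + stored virtual address offset.
 * The target functionalities and numbers below are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

struct func_offset { const char *functionality; uint64_t offset; };

static const struct func_offset app_table[] = {   /* application-specific table */
    { "send_message", 0x1a40 },
    { "open_camera",  0x2f80 },
};

int main(void)
{
    uint64_t load_base = 0x560000000000ULL;        /* load base of this instance */

    /* Build the process-specific table: actual VA = load base + stored offset. */
    for (unsigned i = 0; i < 2; i++)
        printf("%-12s -> 0x%llx\n", app_table[i].functionality,
               (unsigned long long)(load_base + app_table[i].offset));
    return 0;
}
```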
  • Patent number: 10353831
    Abstract: Systems, apparatuses and methods may provide for verifying, from outside a trusted computing base of a computing system, an identity of an enclave instance prior to the enclave instance being launched in the trusted computing base, determining a memory location of the enclave instance and confirming that the memory location is local to the computing system. In one example, the enclave instance is a proxy enclave instance, wherein communications are conducted with one or more additional enclave instances in the trusted computing base via the proxy enclave instance and an unencrypted channel.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: July 16, 2019
    Assignee: Intel Corporation
    Inventors: Scott H. Robinson, Ravi L. Sahita, Mark W. Shanahan, Karanvir S. Grewal, Nitin V. Sarangdhar, Carlos V. Rozas, Bo Zhang, Shanwei Cen
  • Patent number: 10331581
    Abstract: A high-performance computing system, method, and storage medium manage accesses to multiple memory modules of a computing node, the modules having different access latencies. The node allocates its resources into pools according to pre-determined memory access criteria. When another computing node requests a memory access, the node determines whether the request satisfies any of the criteria. If so, the associated pool of resources is selected for servicing the request; if not, a default pool is selected. The node then services the request if the pool of resources is sufficient. Otherwise, various error handling processes are performed. Each memory access criterion may relate to a memory address range assigned to a memory module, a type of request, a relationship between the nodes, a configuration of the requesting node, or a combination of these.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: June 25, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Michael E. Malewicki
  • Patent number: 10331883
    Abstract: A method, computer program product, and system for managing container security, the method including consuming a recipe queue on a first checker container, wherein the first checker container is on a first host of a computer system, and the recipe queue comprises a predefined set of rules, storing the first checker container recipe queue result in the first checker container, comparing the first checker container recipe queue result with an expected result of the recipe queue, wherein the expected result is stored in the first checker container, and following a first fail procedure from a plurality of fail procedures, based on the first checker container recipe queue result not matching the expected result.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: June 25, 2019
    Assignee: International Business Machines Corporation
    Inventors: Rafael Camarda Silva Folco, Breno Henrique Leitão, Rafael Peria de Sene
  • Patent number: 10324858
    Abstract: Access control circuitry comprises: a detector to detect a memory address translation between a virtual memory address in a virtual memory address space and a physical memory address in a physical memory address space, provided in response to a translation request by further circuitry; an address translation memory, to store data representing a set of physical memory addresses previously provided to the further circuitry in response to translation requests by the further circuitry; an interface to receive a physical memory address from the further circuitry for a memory access by the further circuitry; a comparator to compare a physical memory address received from the further circuitry with the set of physical addresses stored by the address translation memory, and to permit access, by the further circuitry, to a physical address included in the set of one or more physical memory addresses.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: June 18, 2019
    Assignee: ARM Limited
    Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Matthew Lucien Evans, Paul Gilbert Meyer, Andrew Brookfield Swaine
  • Patent number: 10318172
    Abstract: Cache operation in a multi-threaded processor uses a small memory structure referred to as a way enable table that stores an index to an n-way set associative cache. The way enable table includes one entry for each entry in the n-way set associative cache and each entry in the way enable table is arranged to store a thread ID. The thread ID in an entry in the way enable table is the ID of the thread associated with a data item stored in the corresponding entry in the n-way set associative cache. Prior to reading entries from the n-way set associative cache identified by an index parameter, the ways in the cache are selectively enabled based on a comparison of the current thread ID and the thread IDs stored in entries in the way enable table which are identified by the same index parameter.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: June 11, 2019
    Assignee: MIPS Tech, LLC
    Inventor: Philip Day
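
A C sketch of the way-enable idea: before probing an n-way set, only the ways whose way-enable-table entry recorded the current thread ID are enabled. The way/set counts and thread IDs are example values.

```c
/* Sketch: enable only the cache ways whose way-enable-table entry matches the
 * current thread ID, so the other ways need not be read for this access. */
#include <stdio.h>

#define NWAYS 4
#define NSETS 8

static int way_thread_id[NSETS][NWAYS];   /* thread that filled each cache entry */

/* Return a bitmask of ways worth reading for this set and thread. */
static unsigned ways_to_enable(int set, int thread_id)
{
    unsigned mask = 0;
    for (int w = 0; w < NWAYS; w++)
        if (way_thread_id[set][w] == thread_id)
            mask |= 1u << w;
    return mask;
}

int main(void)
{
    way_thread_id[3][0] = 1;
    way_thread_id[3][1] = 2;
    way_thread_id[3][2] = 1;
    way_thread_id[3][3] = 2;
    printf("thread 1, set 3: enable mask 0x%x\n", ways_to_enable(3, 1)); /* 0x5 */
    return 0;
}
```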
  • Patent number: 10318435
    Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU) including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
  • Patent number: 10310974
    Abstract: Disclosed herein are systems and methods for isolating input/output computing resources. In some embodiments, a host device may include a processor and logic coupled with the processor, to identify a tag identifier (Tag ID) for a process or container of the host device. The Tag ID may identify a queue pair of a hardware device of the host device for an outbound transaction from the processor to the hardware device, to be conducted by the process or container. Logic may further map the Tag ID to a Process Address Space Identifier (PASID) associated with an inbound transaction from the hardware device to the processor that used the identified queue pair. The process or container may use the PASID to conduct the outbound transaction via the identified queue pair. Other embodiments may be disclosed and/or claimed.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: June 4, 2019
    Assignee: Intel Corporation
    Inventors: Cunming Liang, Edwin Verplank, David E. Cohen, Danny Zhou
  • Patent number: 10296465
    Abstract: A processor architecture utilizing a L3 translation lookaside buffer (TLB) to reduce page walks. The processor includes multiple cores, where each core includes a L1 TLB and a L2 TLB. The processor further includes a L3 TLB that is shared across the processor cores, where the L3 TLB is implemented in off-chip or die-stack dynamic random-access memory. Furthermore, the processor includes a page table connected to the L3 TLB, where the page table stores a mapping between virtual addresses and physical addresses. In such an architecture, by having the L3 TLB with a very large capacity, performance, such as execution time, may be improved by eliminating page walks, which require multiple data accesses.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: May 21, 2019
    Assignee: Board of Regents, The University of Texas System
    Inventors: Lizy K. John, Jee Ho Ryoo, Nagendra Gulur
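
A C sketch of the lookup order implied above: L1 TLB, then L2, then the large DRAM-resident L3 TLB, with a page walk only on a full miss. The lookup functions and the walk are stand-ins, not the patented design.

```c
/* Sketch: probe L1, L2, then the shared L3 TLB; fall back to a page walk only
 * when all levels miss. The lookup and walk bodies are placeholders. */
#include <stdint.h>
#include <stdio.h>

typedef int (*tlb_lookup_fn)(uint64_t vpn, uint64_t *pfn);

static int l1_lookup(uint64_t vpn, uint64_t *pfn) { (void)vpn; (void)pfn; return 0; }
static int l2_lookup(uint64_t vpn, uint64_t *pfn) { (void)vpn; (void)pfn; return 0; }
static int l3_lookup(uint64_t vpn, uint64_t *pfn) { *pfn = vpn + 0x1000; return 1; }

static uint64_t page_walk(uint64_t vpn) { return vpn + 0x1000; }  /* slow path */

static uint64_t translate(uint64_t vpn)
{
    tlb_lookup_fn levels[] = { l1_lookup, l2_lookup, l3_lookup };
    uint64_t pfn;
    for (unsigned i = 0; i < 3; i++)
        if (levels[i](vpn, &pfn))
            return pfn;              /* hit: no page walk needed */
    return page_walk(vpn);           /* miss in all TLB levels   */
}

int main(void)
{
    printf("VPN 0x42 -> PFN 0x%llx\n", (unsigned long long)translate(0x42));
    return 0;
}
```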
  • Patent number: 10289562
    Abstract: A computer-implemented method includes associating an initial use order with a plurality of target sets of a translation lookaside buffer (TLB), where the initial use order indicates an order of use of the plurality of target sets. The plurality of target sets are associated with an initial least-recently-used (LRU) state based on the initial use order. A new use order for the plurality of target sets is generated. Generating the new use order includes moving a first target set to a least-recently-used position, responsive to a purge of the first target set. The LRU state of the plurality of target sets is updated based on the new use order, responsive to the purge of the first target set. The first target set is identified as eligible for replacement according to an LRU replacement policy of the TLB, based at least in part on the purge of the first target set.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: May 14, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Uwe Brandt, Markus Helms, Thomas Köhler, Frank Lehnert
  • Patent number: 10255202
    Abstract: Various embodiments are generally directed to providing mutual authentication and secure distributed processing of multi-party data. In particular, an experiment may be submitted to include the distributed processing of private data owned by multiple distrustful entities. Private data providers may authorize the experiment and securely transfer the private data for processing by trusted computing nodes in a pool of trusted computing nodes.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: April 9, 2019
    Assignee: INTEL CORPORATION
    Inventors: Hormuzd M. Khosravi, Baiju V. Patel
  • Patent number: 10255209
    Abstract: Embodiments of the present invention disclose a method, computer program product, and system for determining statistics corresponding to data transfer operations. In one embodiment, the computer implemented method includes the steps of receiving a request from an input/output (I/O) device to perform a data transfer operation between the I/O device and a memory, generating an entry in an input/output memory management unit (IOMMU) corresponding to the data transfer operation, wherein the entry in the IOMMU includes at least an indication of a processor chip that corresponds to the memory of the data transfer operation, monitoring the data transfer operation between the I/O device and the memory, determining statistics corresponding to the monitored data transfer operation, wherein the determined statistics include at least: the I/O device that performed the data transfer operation, the processor chip that corresponds to the memory of the data transfer operation, and an amount of data transferred.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: April 9, 2019
    Assignee: International Business Machines Corporation
    Inventors: Srinivas Kotta, Mehulkumar J. Patel, Venkatesh Sainath, Vaidyanathan Srinivasan
  • Patent number: 10248575
    Abstract: Disclosed herein is a method for operating translation look-aside buffers (TLBs) in a multiprocessor system. A purge request is received for purging one or more entries in the TLB. When the thread does not require access to the entries to be purged, execution of the purge request at the TLB may start. When an address translation request is rejected due to the TLB purge, a suspension time window may be set. During the suspension time window, the execution of the purge is suspended and address translation requests of the thread are executed. After the suspension window ends, the purge execution may be resumed. When the thread requires access to the entries to be purged, it may be blocked to prevent it from sending address translation requests to the TLB; upon ending the purge request execution, the thread may be unblocked and the address translation requests may be executed.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: April 2, 2019
    Assignee: International Business Machines Corporation
    Inventors: Uwe Brandt, Ute Gaertner, Lisa C. Heller, Markus Helms, Thomas Köhler, Frank Lehnert, Jennifer A. Navarro, Rebecca S. Wisniewski
  • Patent number: 10241924
    Abstract: A marking capability is used to provide an indication of whether a block of memory is being used by a guest control program to back an address translation structure. The marking capability includes setting an indicator in one or more locations associated with the block of memory. In a further aspect, the marking capability includes a purging capability that limits the purging of translation look-aside buffers and other such structures based on the marking.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: March 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Christian Jacobi, Anthony Saporito
  • Patent number: 10241925
    Abstract: Systems, apparatuses, and methods for selecting default page sizes in a variable page size translation lookaside buffer (TLB) are disclosed. In one embodiment, a system includes at least one processor, a memory subsystem, and a first TLB. The first TLB is configured to allocate a first entry for a first request responsive to detecting a miss for the first request in the first TLB. Prior to determining a page size targeted by the first request, the first TLB specifies, in the first entry, that the first request targets a page of a first page size. Responsive to determining that the first request actually targets a second page size, the first TLB reissues the first request with an indication that the first request targets the second page size. On the reissue, the first TLB allocates a second entry and specifies the second page size for the first request.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: March 26, 2019
    Assignee: ATI Technologies ULC
    Inventors: Jimshed Mirza, Anthony Chan, Edwin Chi Yeung Pang
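
A C sketch of the default-size-then-reissue flow: a miss is first allocated with a default page size, and if the walk later reveals a different size the request is reissued with the corrected size. Sizes, names, and the walk result are illustrative.

```c
/* Sketch: allocate a TLB miss under a default page size, then reissue the
 * request if the page walk reports a different size. Values are examples. */
#include <stdio.h>

enum page_size { PG_4K, PG_2M };

static enum page_size page_walk_size(unsigned long va)
{
    return (va >= 0x40000000UL) ? PG_2M : PG_4K;   /* pretend walk result */
}

static void handle_miss(unsigned long va)
{
    enum page_size assumed = PG_4K;                /* default before the walk */
    enum page_size actual  = page_walk_size(va);

    printf("allocate entry for 0x%lx as %s\n", va, assumed == PG_4K ? "4K" : "2M");
    if (actual != assumed)
        printf("reissue 0x%lx with corrected size %s\n",
               va, actual == PG_4K ? "4K" : "2M");
}

int main(void)
{
    handle_miss(0x1000);        /* default size was right        */
    handle_miss(0x40001000);    /* reissued with the 2M size     */
    return 0;
}
```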
  • Patent number: 10224111
    Abstract: A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined rank modulation scheme that is optional and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: March 5, 2019
    Assignee: California Institute of Technology
    Inventors: Yue Li, Jehoshua Bruck
  • Patent number: 10210103
    Abstract: A method and device for checking validity of memory access are provided. A cache is established and initialization is performed; a total cache position index is calculated; when a program performs memory access, a graded cache unit is addressed according to the total cache position index, and it is determined whether address information of the memory block is able to be read from the graded cache unit; when the address information is able to be read, it is determined whether an instrumentation-based memory checking tool is needed for checking the validity of the current memory access; when the address information is not able to be read, the validity of the current memory access is checked by an instrumentation-based memory checking tool, and the address information of the memory block is filled into the graded cache unit when the current memory access is determined to be valid.
    Type: Grant
    Filed: October 23, 2014
    Date of Patent: February 19, 2019
    Assignee: XI'AN ZHONGXING NEW SOFTWARE CO. LTD.
    Inventor: Shilong Wang
  • Patent number: 10210096
    Abstract: Providing for address translation in a virtualized system environment is disclosed herein. By way of example, a memory management apparatus is provided that comprises a shared translation look-aside buffer (TLB) that includes a plurality of translation types, each supporting a plurality of page sizes, one or more processors, and a memory management controller configured to work with the one or more processors. The memory management controller includes logic configured for caching virtual address to physical address translations and intermediate physical address to physical address translations in the shared TLB, logic configured to receive a virtual address for translation from a requester, logic configured to conduct a table walk of a translation table in the shared TLB to determine a translated physical address in accordance with the virtual address, and logic configured to transmit the translated physical address to the requester.
    Type: Grant
    Filed: December 10, 2013
    Date of Patent: February 19, 2019
    Assignee: AMPERE COMPUTING LLC
    Inventor: Amos Ben-Meir
  • Patent number: 10191861
    Abstract: A technique implements memory views using a virtualization layer of a virtualization architecture executing on a node of a network environment. The virtualization layer may include a user mode portion having hyper-processes and a kernel portion having a micro-hypervisor that cooperate to virtualize a guest operating system kernel within a virtual machine (VM) of the node. The micro-hypervisor may further cooperate with the hyper-processes, such as a guest monitor, of the virtualization layer to implement one or more memory views of the VM. As used herein, a memory view is illustratively a hardware resource (i.e., a set of nested page tables) used as a container (i.e., to constrain access to memory of the node) for one or more guest processes of the guest operating system kernel.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: January 29, 2019
    Assignee: FireEye, Inc.
    Inventors: Udo Steinberg, Osman Abdoul Ismael
  • Patent number: 10191853
    Abstract: An apparatus and method are provided for maintaining address translation data within an address translation cache. Each entry of the address translation cache is arranged to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. Control circuitry is used to perform an allocation process to determine the address translation data to be stored in each entry. When performing the allocation process for a selected entry, the control circuitry is arranged to perform a page table walk process using a virtual address in order to obtain from a page table a plurality of descriptors including a descriptor identified using the virtual address. The control circuitry then determines whether predetermined criteria are met by the plurality of descriptors, the predetermined criteria comprising page alignment criteria and attribute match criteria.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: January 29, 2019
    Assignee: ARM Limited
    Inventor: Abhishek Raja
  • Patent number: 10185501
    Abstract: An apparatus is described. The apparatus includes a memory controller to interface with a multi-level system memory. The memory controller includes a pinning engine to pin a memory page into a first level of the system memory that is at a higher level than a second level of the system memory.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: January 22, 2019
    Assignee: Intel Corporation
    Inventors: Aravindh V. Anantaraman, Blaise Fanning
  • Patent number: 10180911
    Abstract: A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: January 15, 2019
    Assignee: Intel Corporation
    Inventors: Steven M. Bennett, Andrew V. Anderson, Gilbert Neiger, Richard Uhlig, Scott Dion Rodgers, Rajesh M. Sankaran, Camron Rust, Sebastian Schoenberg
  • Patent number: 10176111
    Abstract: A marking capability is used to provide an indication of whether a block of memory is being used by a guest control program to back an address translation structure. The marking capability includes setting an indicator in one or more locations associated with the block of memory. In a further aspect, the marking capability includes an invalidation facility based on the setting of the indicators.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: January 8, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Lisa Cranton Heller, Christian Jacobi, Damian L. Osisek, Anthony Saporito
  • Patent number: 10176109
    Abstract: When performing non-sequential accesses to large data sets, hot spots may be avoided by permuting the memory locations being accessed to more evenly spread those accesses across the memory and across multiple memory channels. A permutation step may be used when accessing data, such as to improve the distribution of those memory accesses within the system. Instead of accessing one memory address, that address may be permuted so that another memory address is accessed. Non-sequential accesses to an array may be modified such that each index to the array is permuted to another index in the array. Collisions between pre- and post-translation addresses may be prevented and one-to-one mappings may be used. Permutation mechanisms may be implemented in software, hardware, or a combination of both, with or without the knowledge of the process performing the memory accesses.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: January 8, 2019
    Assignee: Oracle International Corporation
    Inventors: Timothy L. Harris, David Dice
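
A C sketch of one possible one-to-one permutation of the kind the abstract describes: multiplying an index by an odd constant modulo a power-of-two array size is a bijection, so clustered indices are spread without collisions between pre- and post-translation positions. The size and multiplier are arbitrary example choices, not taken from the patent.

```c
/* Sketch of a one-to-one index permutation used to spread clustered,
 * non-sequential accesses: an odd multiplier modulo a power-of-two size is
 * invertible, so no two indices map to the same slot. */
#include <stdio.h>

#define N    1024u            /* array size: must be a power of two here  */
#define MULT 0x9E5u           /* any odd constant gives a one-to-one map  */

static unsigned permute(unsigned index)
{
    return (index * MULT) & (N - 1);   /* post-translation index */
}

int main(void)
{
    /* Neighbouring (hot) indices land far apart after permutation. */
    for (unsigned i = 0; i < 4; i++)
        printf("access %u -> slot %u\n", i, permute(i));
    return 0;
}
```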
  • Patent number: 10169279
    Abstract: An input/output control device is connected to an input/output switch which transfers a received input/output instruction to an input/output device whose local address is specified in the input/output instruction. The input/output control device includes a memory and circuitry. The memory stores specific information about a processor and a conversion table for converting a logical address of the input/output device into the local address, with the specific information and the conversion table each being associated with a device group that includes the processor and the input/output device. The circuitry identifies the device group based on the specific information obtained about the sender processor. The circuitry converts the logical address included in the input/output instruction into the local address which is obtained from the conversion table for the identified device group, and then sends the input/output instruction to the input/output switch.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: January 1, 2019
    Assignee: NEC CORPORATION
    Inventor: Hiroki Yokoyama
  • Patent number: 10162762
    Abstract: A data processing system 4 includes a translation lookaside buffer 6 storing mapping data entries 10 indicative of virtual-to-physical address mappings for different regions of physical addresses. A hint generator 20 coupled to the translation lookaside buffer 6 generates hint data in dependence upon the storage of mapping data entries within the translation lookaside buffer 6. The hint generator 20 tracks the loading of mapping data entries and the eviction of mapping data entries from the translation lookaside buffer 6. The hint data is supplied to a memory controller 8 which controls how data corresponding to respective different regions of physical addresses is stored within a heterogeneous memory system, e.g. the power state of different portions of the memories storing different regions, which type of memory is used to store different regions.
    Type: Grant
    Filed: April 22, 2015
    Date of Patent: December 25, 2018
    Assignee: ARM LIMITED
    Inventors: Geoffrey Blake, Ali Ghassan Saidi, Mitchell Hayenga
  • Patent number: 10162617
    Abstract: Systems and methods for binary translation are disclosed. In some implementations, guest software to run in a Native Client environment is received. The guest software is configured to execute at a specified guest hardware architecture and not within the Native Client environment. A binary translation of the guest software into Native Client compatible machine code is provided using emulation software. The Native Client compatible machine code executes within a sandbox for the Native Client environment. The Native Client compatible machine code is executable within an application. Providing the binary translation of the guest software into the Native Client compatible machine code for execution within the sandbox occurs just in time, during a runtime of the emulated guest software, and without porting or recompiling the guest software. Providing the binary translation interleaves with execution of the emulated guest software.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: December 25, 2018
    Assignee: Google LLC
    Inventors: Evgeny Eltsin, Nikolay Igotti, Andrey Khalyavin, Dmitry Polukhin
  • Patent number: 10146698
    Abstract: A method and apparatus for reducing dynamic power consumption in a multi-thread content-addressable memory (CAM) is described. The disclosed apparatus includes a first input configured to receive a first virtual address corresponding to a first thread, a second input configured to receive a second virtual address corresponding to a second thread, a register bank including a plurality of registers each configured to store a binary word mapped to one of a plurality of physical addresses, a first comparator bank including a first plurality of comparators each coupled to one of the plurality of registers in a fully-associative configuration and configured to determine whether a first match is present, and a second comparator bank including a second plurality of comparators each coupled to one of the plurality of registers in a fully-associative configuration and configured to determine whether a second match is present.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: December 4, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Anthony J. Bybell
  • Patent number: 10146687
    Abstract: Embodiments of techniques and systems for execution of code with multiple page tables are described. In embodiments, a heterogeneous system utilizing multiple processors may use multiple page tables to selectively execute appropriate ones of different versions of executable code. The system may be configured to support use of function pointers to virtual memory addresses. In embodiments, a virtual memory address may be mapped, such as during a code fetch. In embodiments, when a processor seeks to perform a code fetch using the function pointer, a page table associated with the processor may be used to translate the virtual memory address to a physical memory address where code executable by the processor may be found. Usage of multiple page tables may allow the system to support function pointers while utilizing only one virtual memory address for each function that is pointed to. Other embodiments may be described and claimed.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: December 4, 2018
    Assignee: Intel Corporation
    Inventor: Mike B. Macpherson
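
A C sketch of the multiple-page-table idea: the same virtual page number translates, through the page table selected for each processor, to the physical page holding that processor's version of the code, so a single function pointer serves both. The tables and frame numbers are invented for illustration.

```c
/* Sketch: one virtual page number, two per-processor page tables, two
 * different physical pages holding the processor-specific code versions. */
#include <stdint.h>
#include <stdio.h>

#define FUNC_VPN 0x20u

/* pretend per-CPU page tables indexed by virtual page number */
static uint64_t cpu_page_table[2][64];

static uint64_t code_fetch(int cpu, unsigned vpn)
{
    return cpu_page_table[cpu][vpn];     /* translation used during the fetch */
}

int main(void)
{
    cpu_page_table[0][FUNC_VPN] = 0x1000;   /* CPU-A build of the function        */
    cpu_page_table[1][FUNC_VPN] = 0x2000;   /* CPU-B build of the same function   */

    printf("same VPN 0x%x -> PFN 0x%llx on CPU0, 0x%llx on CPU1\n", FUNC_VPN,
           (unsigned long long)code_fetch(0, FUNC_VPN),
           (unsigned long long)code_fetch(1, FUNC_VPN));
    return 0;
}
```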
  • Patent number: 10140217
    Abstract: The present disclosure relates to a method of operating a hierarchical translation lookaside buffer (TLB). The TLB comprises at least two TLB levels, wherein a given entry of the upper level TLB comprises a portion of bits for indicating related entries in the lower level TLB. The method comprises the following when a TLB miss is encountered for a requested first virtual address. A first table walk is performed to obtain the absolute memory address for the first virtual address. A logical tag is stored. The logical tag comprises the portion of bits that has been identified in association with the first table walk. In response to determining that a second table walk, concurrent with the ongoing first table walk and having a second virtual address that addresses the same entry in the upper-level TLB as the first virtual address, is writing to the TLB, the stored logical tag may be incremented. The incremented logical tag and the obtained absolute memory address may then be stored in the TLB.
    Type: Grant
    Filed: December 17, 2017
    Date of Patent: November 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Uwe Brandt, Frank S. Lehnert, Thomas G. Koehler, Markus M. Helms, Martin Recktenwald
  • Patent number: 10140216
    Abstract: An apparatus includes processing circuitry to process instructions, some of which may require addresses to be translated. The apparatus also includes address translation circuitry to translate addresses in response to instructions processed by the processing circuitry. Furthermore, the apparatus also includes translation latency measuring circuitry to measure a latency of at least part of an address translation process performed by the address translation circuitry in response to a given instruction.
    Type: Grant
    Filed: January 21, 2016
    Date of Patent: November 27, 2018
    Assignee: ARM LIMITED
    Inventors: Michael John Williams, Michael Filippo, Hazim Shafi
  • Patent number: 10127159
    Abstract: The present disclosure relates to a method of operating a hierarchical translation lookaside buffer (TLB). The TLB comprises at least two TLB levels, wherein a given entry of the upper level TLB comprises a portion of bits for indicating related entries in the lower level TLB. The method comprises the following when a TLB miss is encountered for a requested first virtual address. A first table walk is performed to obtain the absolute memory address for the first virtual address. A logical tag is stored. The logical tag comprises the portion of bits that has been identified in association with the first table walk. In response to determining that a second table walk, concurrent with the ongoing first table walk and having a second virtual address that addresses the same entry in the upper-level TLB as the first virtual address, is writing to the TLB, the stored logical tag may be incremented. The incremented logical tag and the obtained absolute memory address may then be stored in the TLB.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: November 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Uwe Brandt, Frank S. Lehnert, Thomas G. Koehler, Markus M. Helms, Martin Recktenwald
  • Patent number: 10127071
    Abstract: The invention concerns a multi-core processing system comprising: a first input/output interface (312) configured to transmit data over a first network (313) based on a first network protocol; a second input/output interface (314) configured to transmit data over a second network (315) based on a second network protocol; a plurality of processing cores; and one or more memory devices storing software enabling virtual processing resources of the plurality of processing cores and virtual memory to be assigned to support: a first compartment (303) implementing one or more first virtual machines; a second compartment (304) implementing one or more second virtual machines; and a programmable virtual switch (302) configured to provide an interface between the first and second virtual machines and the first and second input/output interfaces (312, 314).
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: November 13, 2018
    Assignee: Virtual Open Systems
    Inventors: Michele Paolino, Kevin Chappuis, Salvatore Daniele Raho
  • Patent number: 10095498
    Abstract: A provisioning system to automatically determine the appropriate components to install or make available for installation on a target computer system. An example system may comprise: receiving data indicative of a bit-size and a virtual machine extension support of a processing device of the client device; determining that the processing device supports a plurality of bit-size versions of a software component; querying the client device to select a preferred version of the software component associated with the virtual machine extension support; determining that the version of a software component associated with the first bit-size is unavailable; provisioning the version of the software component associated with the virtual machine extension support and the second bit-size to the client device in view of the determination; and notifying the client device when the version of the software component associated with the first bit-size and the virtual machine extension support is available for installation.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: October 9, 2018
    Assignee: Red Hat, Inc.
    Inventors: Miroslav Suchy, Milan Zazrivec
  • Patent number: 10089220
    Abstract: Methods and apparatus for saving state information resulting from non-idempotent operations are described. A computer system includes a system memory coupled to one or more processors. The system memory comprises at least a non-volatile portion. Elements of state information associated with an executable component that are to be stored within the non-volatile portion are identified. In response to detecting an occurrence of a particular non-idempotent operation that results in the generation of state information, selected elements of information are stored in the non-volatile portion of the system memory. In response to a request subsequent to a failure event, wherein the failure event resulted in a loss of data stored in a volatile portion of the system memory, the state information is read from the non-volatile portion.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: October 2, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Samuel James McKelvie, Anurag Windlass Gupta
  • Patent number: 10084613
    Abstract: A self adapting driver for controlling datapath hardware elements uses a generic driver and a configuration library to create a set of data structures and methods to map information provided by applications to physical tables. A set of virtual tables is implemented as an interface between the applications and the generic driver. The generic driver uses the configuration library to determine a mapping from the virtual tables to the physical tables. A virtual table schema definition is parsed to create the configuration library, such that changes to the physical infrastructure may be implemented as changes to the virtual table schema definition without adjusting the driver code. Thus, automated creation of generic packet forwarding drivers can be implemented through the use of a configuration language that defines the meaning of the information stored in the virtual tables.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: September 25, 2018
    Assignee: Extreme Networks, Inc.
    Inventor: Hamid Assarpour
  • Patent number: 10073620
    Abstract: Memory management is provided within a data processing system 2 which includes a memory protection unit 8 and defines memory regions within the memory address space which extend between base addresses and limit addresses and have respective attributes associated therewith. When a hit occurs within a memory region which is a valid hit, then block data is generated comprising a mask value and a TAG value (derived from the original query address) which may then be used to identify subsequent hits within at least a portion of that region using a bitwise AND. In another embodiment a micro-translation lookaside buffer is reused by the memory protection unit to store page data identifying pages which fall validly within memory regions and may be used to return attribute data for those pages upon subsequent accesses rather than performing the comparison with the base address and the limit addresses.
    Type: Grant
    Filed: October 29, 2015
    Date of Patent: September 11, 2018
    Assignee: ARM Limited
    Inventor: Simon John Craske
  • Patent number: 10067873
    Abstract: A method for operating a data storage device includes: dividing a cache into a plurality of cache areas; grouping a plurality of logical addresses into a plurality of logical address groups; allocating indexes to the respective logical address groups; and matching a read-requested first logical address set, a first cache area where data corresponding to the first logical address set are cached and an empty size of the first cache area, to an index corresponding to a logical address group to which the first logical address set belongs.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: September 4, 2018
    Assignee: SK Hynix Inc.
    Inventors: Seok Hoon Jung, Ji Hoon Lee
  • Patent number: 10031856
    Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: July 24, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Jerome F. Duluk, Jr., Chenghuan Jia, John Mashey, Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove
  • Patent number: 10025726
    Abstract: A memory management unit (MMU) may manage address translations. The MMU may obtain a first intermediate physical address (IPA) based on a first virtual address (VA) relating to a first memory access request. The MMU may identify, based on the first IPA, a first memory page entry in a second address translation table. The MMU may store, in a second cache memory, a first IPA-to-PA translation based on the identified first memory page entry. The MMU may store, in the second cache memory and in response to the identification of the first memory page entry, one or more additional IPA-to-PA translations that are based on corresponding one or more additional memory page entries in the second address translation table. The one or more additional memory page entries may be contiguous to the first memory page entry.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: July 17, 2018
    Assignee: STMicroelectronics International N.V.
    Inventors: Herve Sibert, Loic Pallardy
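
A C sketch of the contiguous-prefetch behaviour described above: when the second-stage entry for one IPA is fetched, its neighbouring entries are cached as well. Table contents, sizes, and the prefetch width are example assumptions.

```c
/* Sketch: on a miss, cache the requested IPA-to-PA entry plus the contiguous
 * entries that follow it in the second-stage translation table. */
#include <stdint.h>
#include <stdio.h>

#define TABLE_ENTRIES 16
#define PREFETCH      4          /* entries pulled in alongside the target */

static uint64_t stage2_table[TABLE_ENTRIES];   /* IPA page -> PA page            */
static uint64_t tlb_cache[TABLE_ENTRIES];      /* stand-in for the second cache  */
static int      tlb_valid[TABLE_ENTRIES];

static uint64_t translate_ipa(unsigned ipa_page)
{
    if (!tlb_valid[ipa_page]) {
        /* Miss: fill the requested entry plus its contiguous neighbours. */
        for (unsigned i = ipa_page; i < ipa_page + PREFETCH && i < TABLE_ENTRIES; i++) {
            tlb_cache[i] = stage2_table[i];
            tlb_valid[i] = 1;
        }
    }
    return tlb_cache[ipa_page];
}

int main(void)
{
    for (unsigned i = 0; i < TABLE_ENTRIES; i++)
        stage2_table[i] = 0x8000 + i;
    printf("IPA page 5 -> PA page 0x%llx (pages 5-8 now cached)\n",
           (unsigned long long)translate_ipa(5));
    return 0;
}
```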
  • Patent number: 10019379
    Abstract: Example devices are disclosed. For example, a device may include a processor, a plurality of translation lookaside buffers, a plurality of switches, and a memory management unit. Each of the translation lookaside buffers may be assigned to a different process of the processor, each of the plurality of switches may include a register for storing a different process identifier, and each of the plurality of switches may be associated with a different one of the translation lookaside buffers. The memory management unit may be for receiving a virtual memory address and a process identifier from the processor and forwarding the process identifier to the plurality of switches. Each of the plurality of switches may be for connecting the memory management unit to a translation lookaside buffer associated with the switch when there is a match between the process identifier and the different process identifier stored by the register of the switch.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: July 10, 2018
    Assignees: AT&T Mobility II LLC, AT&T Intellectual Property I, L.P.
    Inventors: Sheldon Kent Meredith, Brandon B. Hilliard, William Cottrill
  • Patent number: 10002084
    Abstract: An example method of memory management in a virtualized computing system includes: generating a page table hierarchy that includes address translations to first pages of memory that store kernel software and second pages of the memory that store user software; configuring a processor to: 1) implement a first address translation scheme, which uses a first virtual address width, for a hypervisor privilege level; 2) implement a second address translation scheme, which uses a second virtual address width, for supervisor and user privilege levels, where the first virtual address width is larger than the second virtual address width; and 3) use the page table hierarchy for each of the first and second address translation schemes; and executing the kernel software at the hypervisor privilege level and the user software at the user privilege level.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: June 19, 2018
    Assignee: VMware, Inc.
    Inventors: Andrei Warkentin, Cyprien Laplace, Ye Li