Directory Tables (e.g., Dlat, Tlb) Patents (Class 711/207)
  • Patent number: 10776113
    Abstract: Technical solutions are described for out-of-order (OoO) execution of one or more instructions by a processing unit. The solutions include receiving, by a load-store unit (LSU) of the processing unit, an OoO window of instructions including a plurality of instructions to be executed OoO, and issuing, by the LSU, instructions from the OoO window. The issuing includes selecting an instruction from the OoO window, the instruction using an effective address. Further, in response to the instruction being a load instruction, it is determined whether the effective address is present in an effective address directory (EAD). In response to the effective address being present in the EAD, the load instruction is issued using the effective address. Further, in response to the instruction being a store instruction, a real address mapped to the effective address is determined from an effective-real translation (ERT) table, and the store instruction is issued using the real address.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: September 15, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christopher Gonzalez, Bryan Lloyd, Balaram Sinharoy
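A minimal sketch of the load/store issue path described in the abstract of patent 10776113 above, assuming illustrative table layouts (the structure names, sizes, and sample contents are not taken from the patent): a load issues with its effective address on an EAD hit, while a store first maps the effective address to a real address through the ERT.

```c
/* Illustrative only: load issues with its effective address (EA) on an
 * effective address directory (EAD) hit; store maps EA to a real address
 * (RA) through an effective-real translation (ERT) table before issue. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { bool valid; uint64_t ea; } ead_entry;       /* hypothetical EAD entry */
typedef struct { bool valid; uint64_t ea, ra; } ert_entry;   /* hypothetical ERT entry */

#define N 4
static ead_entry ead[N] = { { true, 0x1000 } };
static ert_entry ert[N] = { { true, 0x1000, 0x9000 } };

static bool ead_hit(uint64_t ea) {
    for (int i = 0; i < N; i++)
        if (ead[i].valid && ead[i].ea == ea) return true;
    return false;
}

static bool ert_translate(uint64_t ea, uint64_t *ra) {
    for (int i = 0; i < N; i++)
        if (ert[i].valid && ert[i].ea == ea) { *ra = ert[i].ra; return true; }
    return false;
}

int main(void) {
    uint64_t ea = 0x1000, ra;
    if (ead_hit(ea))                 /* load path: issue using the EA itself  */
        printf("load  issued with EA 0x%llx\n", (unsigned long long)ea);
    if (ert_translate(ea, &ra))      /* store path: issue using the mapped RA */
        printf("store issued with RA 0x%llx\n", (unsigned long long)ra);
    return 0;
}
```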
  • Patent number: 10754790
    Abstract: A memory management unit (MMU) is disclosed. The MMU is configured to receive a translation request from a processing system, wherein the translation request specifies a virtual address to be translated, search a page table stored in a physical memory system for a page table entry that specifies the virtual address, receive a translation lookaside buffer invalidation (TLBI) signal from the processing system, wherein the TLBI signal specifies the virtual address, in response to receiving the TLBI signal specifying the virtual address, invalidate a translation lookaside buffer (TLB) entry in a TLB, wherein the invalidated TLB entry specifies the virtual address and restart the search of the page table for the page table entry that specifies the virtual address.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: August 25, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Jason Norman, Piyush Patel, Rakesh Anigundi, Sadayan Ghows Ghani Sadayan Ebramsah Mo Abdul
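The invalidate-and-restart behavior in the abstract of patent 10754790 can be sketched as follows; the single-level table, structure names, and sample mappings are assumptions for illustration only, not the patented MMU design.

```c
/* Illustrative only: when a TLBI arrives for a virtual address whose page
 * walk is in flight, the matching TLB entry is invalidated and the page
 * table search for that address is restarted from the beginning. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { bool valid; uint64_t va, pa; } tlb_entry;   /* hypothetical TLB entry  */
typedef struct { uint64_t va, pa; } pte;                     /* hypothetical page table entry */

#define TLB_SIZE 4
static tlb_entry tlb[TLB_SIZE];
static pte page_table[] = { { 0x2000, 0x8000 }, { 0x3000, 0x9000 } };

static void tlbi(uint64_t va) {                 /* invalidate entries matching va */
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].va == va) tlb[i].valid = false;
}

static bool walk(uint64_t va, uint64_t *pa) {   /* linear page table search */
    for (size_t i = 0; i < sizeof page_table / sizeof page_table[0]; i++)
        if (page_table[i].va == va) { *pa = page_table[i].pa; return true; }
    return false;
}

int main(void) {
    uint64_t va = 0x2000, pa;
    tlbi(va);                                   /* TLBI observed for this address... */
    if (walk(va, &pa))                          /* ...so the walk is restarted        */
        printf("restarted walk: VA 0x%llx -> PA 0x%llx\n",
               (unsigned long long)va, (unsigned long long)pa);
    return 0;
}
```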
  • Patent number: 10754787
    Abstract: The present disclosure includes apparatuses and methods related to virtual address tables. An example method comprises generating an object file that comprises: an instruction comprising a number of arguments; and an address table comprising a number of indexed address elements. Each one of the number of indexed address elements can correspond to a virtual address of a respective one of the number of arguments, wherein the address table can serve as a target for the number of arguments. The method can include storing the object file in a memory.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: August 25, 2020
    Assignee: Micron Technology, Inc.
    Inventors: John D. Leidel, Kyle B. Wheeler
  • Patent number: 10747683
    Abstract: Example devices are disclosed. For example, a device may include a processor, a plurality of translation lookaside buffers, a plurality of switches, and a memory management unit. Each of the translation lookaside buffers may be assigned to a different process of the processor, each of the plurality of switches may include a register for storing a different process identifier, and each of the plurality of switches may be associated with a different one of the translation lookaside buffers. The memory management unit may be for receiving a virtual memory address and a process identifier from the processor and forwarding the process identifier to the plurality of switches. Each of the plurality of switches may be for connecting the memory management unit to the translation lookaside buffer associated with the switch when there is a match between the process identifier and the different process identifier stored by the register of the switch.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: August 18, 2020
    Assignees: AT&T Mobility II LLC, AT&T Intellectual Property I, L.P.
    Inventors: Sheldon Kent Meredith, Brandon B. Hilliard, William Cottrill
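A hedged sketch of the switch-per-process arrangement described for patent 10747683; the one-entry TLBs, structure names, and matching loop are illustrative assumptions rather than the patented hardware.

```c
/* Illustrative only: each switch holds one process ID in a register; the MMU
 * broadcasts the requesting PID and only the matching switch connects the
 * MMU to its translation lookaside buffer. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t pid; uint64_t va, pa; } tlb;        /* one-entry TLB per process */
typedef struct { uint32_t pid_reg; tlb *buffer; } pid_switch; /* switch with PID register  */

int main(void) {
    tlb tlbs[2] = { { 7, 0x1000, 0xA000 }, { 9, 0x1000, 0xB000 } };
    pid_switch switches[2] = { { 7, &tlbs[0] }, { 9, &tlbs[1] } };

    uint32_t pid = 9;            /* PID forwarded by the MMU with the request */
    uint64_t va  = 0x1000;

    for (size_t i = 0; i < 2; i++) {
        /* only the switch whose register matches the PID forwards the lookup */
        if (switches[i].pid_reg == pid && switches[i].buffer->va == va)
            printf("PID %u: VA 0x%llx -> PA 0x%llx via switch %zu\n",
                   (unsigned)pid, (unsigned long long)va,
                   (unsigned long long)switches[i].buffer->pa, i);
    }
    return 0;
}
```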
  • Patent number: 10747682
    Abstract: A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: August 18, 2020
    Assignee: Intel Corporation
    Inventors: Steven M. Bennett, Andrew V. Anderson, Gilbert Neiger, Richard Uhlig, Scott Dion Rodgers, Rajesh M. Sankaran, Camron Rust, Sebastian Schoenberg
  • Patent number: 10725930
    Abstract: Aspects of the present disclosure configure a memory sub-system to map logical memory addresses to physical memory addresses using a tree data structure in the memory sub-system. For example, a memory sub-system controller of the memory sub-system can generate a tree data structure on cache memory to cache, from non-volatile memory, at least one portion of mapping data, where the non-volatile memory is implemented by a set of memory components separate from the cache memory. The mapping data, stored on the non-volatile memory, can map a set of logical memory addresses to a corresponding set of physical memory addresses of the non-volatile memory, and a node of the tree data structure can comprise node data that describes a memory area of the non-volatile memory where data is written across a sequence of contiguous physical memory addresses.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: July 28, 2020
    Assignee: Micron Technology, Inc.
    Inventor: David Aaron Palmer
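A minimal sketch of the logical-to-physical lookup implied by patent 10725930's node data, assuming a flat array of range nodes stands in for the actual tree; names and contents are illustrative.

```c
/* Illustrative only: each node describes one contiguously written region, so
 * a logical address inside the region resolves by offset arithmetic. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t lba_start;   /* first logical address of the region  */
    uint64_t pba_start;   /* first physical address of the region */
    uint64_t length;      /* number of contiguously written units */
} range_node;

static const range_node nodes[] = {   /* would be tree nodes cached from non-volatile memory */
    { 0,  1000, 64 },
    { 64, 2048, 32 },
};

static bool lookup(uint64_t lba, uint64_t *pba) {
    for (size_t i = 0; i < sizeof nodes / sizeof nodes[0]; i++)
        if (lba >= nodes[i].lba_start && lba < nodes[i].lba_start + nodes[i].length) {
            *pba = nodes[i].pba_start + (lba - nodes[i].lba_start);
            return true;
        }
    return false;
}

int main(void) {
    uint64_t pba;
    if (lookup(70, &pba))
        printf("LBA 70 -> PBA %llu\n", (unsigned long long)pba);
    return 0;
}
```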
  • Patent number: 10725932
    Abstract: Systems, methods, and computer programs are disclosed for optimizing headless virtual memory management in a system on chip (SoC) with global translation lookaside buffer shootdown. The SoC comprises an application processor configured to execute a headful virtual machine and one or more SoC processing devices configured to execute a corresponding headless virtual machine. The method comprises issuing a virtual machine mapping command with a headless virtual machine having a first virtual machine identifier. In response to the virtual machine mapping command, a current value stored in a hardware register in the application processor is saved. The first virtual machine identifier associated with the headless virtual machine is loaded into the hardware register. A translation lookaside buffer (TLB) invalidate command is issued while the first virtual machine identifier is loaded in the hardware register.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: July 28, 2020
    Assignee: Qualcomm Incorporated
    Inventors: Thomas Zeng, Samar Asbe, Adam Openshaw
  • Patent number: 10725928
    Abstract: A system and method for efficiently performing maintenance on a cache. In various embodiments, control logic in a cache controller or elsewhere receives an indication for invalidating a range of virtual-to-physical mappings in a given translation lookaside buffer (TLB). The logic determines a first latency to invalidate entries of the TLB based on a number of addresses in the range and a number of supported page sizes simultaneously stored in the TLB. The logic determines a second latency based on a number of entries in the TLB. If the first latency is greater, then the logic traverses through each TLB entry and invalidates TLB entries storing a virtual address within the range. If the first latency is smaller, then the logic traverses through each address in the range and invalidates TLB entries storing a virtual address within the range.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: July 28, 2020
    Assignee: Apple Inc.
    Inventors: Brian R. Mestan, Pradeep Kanapathipillai, Joshua William Smith
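The latency comparison described for patent 10725928 reduces to choosing the cheaper of two traversals. The sketch below uses an assumed cost model (one probe per address per supported page size versus one visit per TLB entry); the numbers and names are illustrative only.

```c
/* Illustrative only: estimate both costs and pick the cheaper traversal for
 * invalidating all TLB entries whose virtual address falls in the range. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t num_addresses   = 512;    /* pages covered by the invalidation range   */
    uint64_t num_page_sizes  = 3;      /* page sizes simultaneously held in the TLB */
    uint64_t num_tlb_entries = 1024;

    uint64_t latency_by_address = num_addresses * num_page_sizes; /* probe per address */
    uint64_t latency_by_entry   = num_tlb_entries;                /* scan per entry    */

    if (latency_by_address > latency_by_entry)
        printf("walk all %llu TLB entries, invalidating those in the range\n",
               (unsigned long long)num_tlb_entries);
    else
        printf("walk all %llu addresses in the range, probing the TLB for each\n",
               (unsigned long long)num_addresses);
    return 0;
}
```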
  • Patent number: 10713095
    Abstract: A method of controlling a multi-core processor includes allocating at least one core of the multi-core processor to at least one process for execution; generating a translation table with respect to the at least one process to translate a logical ID of the at least one core allocated to the at least one process to a physical ID; and controlling the at least one process based on the translation table generated with respect to the at least one process.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: July 14, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Donghoon Yoo, Bernhard Egger
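A minimal sketch of the per-process core ID translation table described for patent 10713095; the table size and contents are illustrative assumptions.

```c
/* Illustrative only: a per-process table translating logical core IDs to the
 * physical IDs of the cores allocated to that process. */
#include <stdio.h>

#define CORES_PER_PROCESS 4

int main(void) {
    /* this process was allocated physical cores 2, 3, 6, 7 */
    int translation_table[CORES_PER_PROCESS] = { 2, 3, 6, 7 };

    int logical_id  = 1;                              /* ID used inside the process */
    int physical_id = translation_table[logical_id];  /* ID used to control the core */

    printf("logical core %d -> physical core %d\n", logical_id, physical_id);
    return 0;
}
```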
  • Patent number: 10698768
    Abstract: The present invention relates generally to backups and more specifically to virtual machine (VM) backups including file exclusion. Aspects of the present invention relate to using a specialized buffer to identify a file for exclusion. In embodiments, a file system used by the VM can be used to search for the specialized buffer. In embodiments, when the specialized buffer is located, offsets are noted related to the file associated with the specialized buffer. In embodiments, the offsets are used to zero out blocks associated with the offsets. Thus, the file can be effectively excluded from the backup.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: June 30, 2020
    Assignee: Druva, Inc.
    Inventor: Sudeep Jathar
  • Patent number: 10698836
    Abstract: Disclosed herein is a virtual cache and method in a processor for supporting multiple threads on the same cache line. The processor is configured to support virtual memory and multiple threads. The virtual cache directory includes a plurality of directory entries, each entry is associated with a cache line. Each cache line has a corresponding tag. The tag includes a logical address, an address space identifier, a real address bit indicator, and a per thread validity bit for each thread that accesses the cache line. When a subsequent thread determines that the cache line is valid for that thread the validity bit for that thread is set, while not invalidating any validity bits for other threads.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: June 30, 2020
    Assignee: International Business Machines Corporation
    Inventors: Markus Helms, Christian Jacobi, Ulrich Mayer, Martin Recktenwald, Johannes C. Reichart, Anthony Saporito, Aaron Tsai
  • Patent number: 10684957
    Abstract: An apparatus and method perform neighborhood-aware virtual-to-physical address translations. A coalescing opportunity for a first virtual address is determined, based on completing a memory access corresponding to a page walk for a second virtual address. Metadata corresponding to the first virtual address is provided to a page table walk buffer based on the coalescing opportunity and a page walk for the first virtual address is performed based on the metadata corresponding to the first virtual address.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: June 16, 2020
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael W. Lebeane, Seunghee Shin
  • Patent number: 10678476
    Abstract: A memory system and an operating method thereof are provided. The memory system includes a controller buffer memory, a host interface configured to receive non-linear host physical addresses and write data from a host, a host address translation section configured to map the non-linear host physical addresses to linear virtual addresses, and a host control section configured to buffer the write data in the controller buffer memory according to the linear virtual addresses.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: June 9, 2020
    Assignee: SK hynix Inc.
    Inventor: Dong Sop Lee
  • Patent number: 10664409
    Abstract: A data storage apparatus includes a nonvolatile memory device including block groups, a random access memory including a sequential map table that stores a sequential map entry for consecutive sequential write logical addresses, among write addresses received from a host apparatus, greater than or equal to a predetermined threshold number, and a processor configured to determine whether or not first sequential write logical addresses are present among logical addresses corresponding to physical addresses for a first region of a first block group when a write operation for the first region of the first block group in response to a write request received from the host apparatus is completed, generate a first sequential map entry for the first sequential write logical addresses when the first sequential write logical addresses are present, and store the first sequential map entry in the sequential map table.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: May 26, 2020
    Assignee: SK hynix Inc.
    Inventors: In Jung, Byeong Gyu Park, Young Ick Cho
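A hedged sketch of building a sequential map entry as described for patent 10664409: after a write completes, the logical addresses for the written region are checked for a consecutive run at least as long as a threshold, and a single (start, count) entry is recorded if so. The threshold, names, and layout are assumptions.

```c
/* Illustrative only: collapse a run of consecutive write logical addresses
 * into one sequential map entry when the run meets the threshold. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define SEQ_THRESHOLD 4   /* assumed predetermined threshold */

typedef struct { uint64_t start_lba; uint32_t count; } seq_map_entry;

static bool make_seq_entry(const uint64_t *lbas, size_t n, seq_map_entry *out) {
    size_t run = 1;
    for (size_t i = 1; i < n && lbas[i] == lbas[i - 1] + 1; i++) run++;
    if (run >= SEQ_THRESHOLD) {
        out->start_lba = lbas[0];
        out->count = (uint32_t)run;
        return true;            /* store this entry in the sequential map table */
    }
    return false;               /* not sequential enough; keep per-page mapping */
}

int main(void) {
    uint64_t written[] = { 100, 101, 102, 103, 104 };
    seq_map_entry e;
    if (make_seq_entry(written, 5, &e))
        printf("sequential map entry: LBA %llu, %u pages\n",
               (unsigned long long)e.start_lba, e.count);
    return 0;
}
```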
  • Patent number: 10664400
    Abstract: An apparatus has an address translation cache with entries for storing address translation data. Partition configuration storage circuitry stores multiple sets of programmable configuration data each corresponding to a partition identifier identifying a corresponding software execution environment or master device and specifying a corresponding subset of entries of the cache. In response to a translation lookup request specifying a target address and a requesting partition identifier, control circuitry triggers a lookup operation to identify whether the target address hits or misses in the corresponding subset of entries specified by the set of partition configuration data for the requesting partition identifier.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: May 26, 2020
    Assignee: ARM Limited
    Inventor: Andrew Brookfield Swaine
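A minimal sketch of the partition-limited lookup described for patent 10664400, assuming each partition's programmable configuration is a contiguous index range of cache entries; the layout and names are illustrative.

```c
/* Illustrative only: each partition ID selects a programmable subset of cache
 * entries, and the lookup for a request only probes that subset. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_ENTRIES 8

typedef struct { bool valid; uint64_t va, pa; } atc_entry;
typedef struct { unsigned first, count; } partition_cfg;   /* subset of entries */

static atc_entry cache[CACHE_ENTRIES] = {
    [0] = { true, 0x1000, 0x8000 },     /* belongs to partition 0's subset */
    [5] = { true, 0x1000, 0x9000 },     /* belongs to partition 1's subset */
};
static partition_cfg cfg[2] = { { 0, 4 }, { 4, 4 } };   /* programmable per partition */

static bool lookup(unsigned part_id, uint64_t va, uint64_t *pa) {
    partition_cfg c = cfg[part_id];
    for (unsigned i = c.first; i < c.first + c.count; i++)
        if (cache[i].valid && cache[i].va == va) { *pa = cache[i].pa; return true; }
    return false;    /* hit or miss is decided within this partition's subset only */
}

int main(void) {
    uint64_t pa;
    if (lookup(1, 0x1000, &pa))
        printf("partition 1: VA 0x1000 -> PA 0x%llx\n", (unsigned long long)pa);
    return 0;
}
```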
  • Patent number: 10659556
    Abstract: Provided is a system and method for managing a progressive hybrid web application by storing web content in a local cache. In an example, the method includes receiving a HTTP request from a web application executing on the user device, determining whether requested web content included in the HTTP request is stored in a local cache storage of the user device, and in response to determining the web content associated with the HTTP request is stored in the local cache storage, fetching the web content from the local cache storage and transferring the fetched web content to the web application. According to various aspects, the web content can be provided to the web application executing on the user device via the local cache even in a situation where the user device is not connected to the remote host server of the web application.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: May 19, 2020
    Assignee: SAP SE
    Inventors: Nathan Wang, Walter Mak, Michael Tsz Hong Sung, Edward Chao
  • Patent number: 10657085
    Abstract: Interruption facility for adjunct processor queues. In response to a queue transitioning from a no replies pending state to a reply pending state, an interruption is initiated. This interruption signals to a processor that a reply to a request is waiting on the queue. In order for the queue to take advantage of the interruption capability, it is enabled for interruptions.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles W. Gainey, Jr., Klaus Meissner, Damian L. Osisek, Klaus Werner
  • Patent number: 10628158
    Abstract: Technical solutions are described for out-of-order (OoO) execution of one or more instructions by a processing unit. The solutions include receiving, by a load-store unit (LSU) of the processing unit, an OoO window of instructions including a plurality of instructions to be executed OoO, and issuing, by the LSU, instructions from the OoO window. The issuing includes selecting an instruction from the OoO window, the instruction using an effective address. Further, in response to the instruction being a load instruction, it is determined whether the effective address is present in an effective address directory (EAD). In response to the effective address being present in the EAD, the load instruction is issued using the effective address. Further, in response to the instruction being a store instruction, a real address mapped to the effective address is determined from an effective-real translation (ERT) table, and the store instruction is issued using the real address.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: April 21, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christopher Gonzalez, Bryan Lloyd, Balaram Sinharoy
  • Patent number: 10613860
    Abstract: A tagged memory is organized into memory chunks. Each memory chunk has a data field, a type field and an owner address field. The type field indicates the type of data stored in the data field. The owner address field indicates which objects own which memory chunks. A memory manager has exclusive ability to allocate the memory chunks, deallocate the memory chunks, write to the memory chunks and read the memory chunks.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: April 7, 2020
    Assignee: ARM Limited
    Inventor: Vincent Roger Maurice Belliard
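One possible in-memory layout for the tagged chunks described in patent 10613860; the field types, the type enumeration, and the mm_write helper are illustrative assumptions, with the abstract's constraint that only the memory manager touches chunks expressed as a comment.

```c
/* Illustrative only: a memory chunk carries a data field, a type tag, and the
 * address of its owning object. */
#include <stdint.h>
#include <stdio.h>

typedef enum { CHUNK_FREE, CHUNK_INT, CHUNK_POINTER, CHUNK_OBJECT_HEADER } chunk_type;

typedef struct {
    uint64_t   data;      /* data field                              */
    chunk_type type;      /* what kind of data the chunk holds       */
    uint64_t   owner;     /* address of the object owning this chunk */
} memory_chunk;

/* Only the memory manager calls this; in the scheme described by the abstract,
 * application code never allocates, writes, or reads chunks directly. */
static void mm_write(memory_chunk *c, uint64_t data, chunk_type t, uint64_t owner) {
    c->data = data;
    c->type = t;
    c->owner = owner;
}

int main(void) {
    memory_chunk heap[4] = { { 0, CHUNK_FREE, 0 } };
    mm_write(&heap[0], 42, CHUNK_INT, /*owner=*/0x5000);
    printf("chunk 0: data=%llu type=%d owner=0x%llx\n",
           (unsigned long long)heap[0].data, heap[0].type,
           (unsigned long long)heap[0].owner);
    return 0;
}
```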
  • Patent number: 10613989
    Abstract: A host machine uses a range-based address translation system rather than a conventional page-based system. This enables address translation to be performed with improved efficiency, particularly when nested virtual machines are used. A data processing system utilizes range-based address translation to provide fast address translation for virtual machines that use a virtual address space.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: April 7, 2020
    Assignee: Arm Limited
    Inventors: Roxana Rusitoru, Jonathan Curtis Beard, Curtis Glenn Dunham
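A hedged sketch of range-based translation as described for patent 10613989: one table entry covers an entire virtual range, so translation is a bounds check plus an offset addition rather than a page-granular lookup. The entry format and sample range are assumptions.

```c
/* Illustrative only: a range table entry maps a whole virtual range with a
 * single base/extent/offset triple. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t va_base;   /* start of the virtual range         */
    uint64_t extent;    /* size of the range in bytes         */
    int64_t  offset;    /* added to the VA to form the output */
} range_entry;

static const range_entry ranges[] = {
    { 0x100000, 0x40000, 0x700000 },   /* 256 KiB range mapped with one entry */
};

static bool translate(uint64_t va, uint64_t *out) {
    for (size_t i = 0; i < sizeof ranges / sizeof ranges[0]; i++)
        if (va >= ranges[i].va_base && va < ranges[i].va_base + ranges[i].extent) {
            *out = va + (uint64_t)ranges[i].offset;
            return true;
        }
    return false;
}

int main(void) {
    uint64_t out;
    if (translate(0x100200, &out))
        printf("VA 0x100200 -> 0x%llx\n", (unsigned long long)out);
    return 0;
}
```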
  • Patent number: 10606751
    Abstract: An input/output (I/O) device arranged to receive an information element including a payload, determine control information from the information element, classify the information element based on the control information, and issue a write to one of a plurality of computer-readable media based on the classification of the information element, the write to cause the payload to be written to the one of the plurality of computer-readable media.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: March 31, 2020
    Assignee: INTEL CORPORATION
    Inventors: Andrew Cunningham, Mark D. Gray, Alexander Leckey, Chris MacNamara, Stephen T. Palermo, Pierre Laurent, Niall D. McDonnell, Tomasz Kantecki, Patrick Fleming
  • Patent number: 10599569
    Abstract: A technique for operating a memory management unit (MMU) of a processor includes the MMU detecting that one or more address translation invalidation requests are indicated for an accelerator unit (AU). In response to detecting that the invalidation requests are indicated, the MMU issues a raise barrier request for the AU. In response to detecting a raise barrier response from the AU to the raise barrier request the MMU issues the invalidation requests to the AU. In response to detecting an address translation invalidation response from the AU to each of the invalidation requests, the MMU issues a lower barrier request to the AU. In response to detecting a lower barrier response from the AU to the lower barrier request, the MMU resumes handling address translation check-in and check-out requests received from the AU.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: March 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Robert D. Herzl, Jody B. Joyner
  • Patent number: 10599835
    Abstract: Embodiments are disclosed to mitigate the Meltdown vulnerability by selectively using page table isolation. Page table isolation is enabled for 64-bit applications, so that unprivileged areas in the kernel address space cannot be accessed in user mode due to speculative execution by the processor. On the other hand, page table isolation is disabled for 32-bit applications, thereby providing mapping into unprivileged areas in the kernel address space. However, speculative execution is limited to a 32-bit address space in a 32-bit application, and so access to unprivileged areas in the kernel address space can be inhibited.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: March 24, 2020
    Assignee: VMWARE, INC.
    Inventors: Nadav Amit, Dan Tsafrir, Michael Wei
  • Patent number: 10579410
    Abstract: A method for a virtual machine executed by a hypervisor including receiving a first request to execute a first guest application in the virtual machine, and locating, in view of a first memory context tag associated with the first guest application, an intermediate guest page table including a first mapping of a first range of guest virtual addresses (GVAs) to a first range of guest intermediate addresses (GIAs). The first range of GIAs is allocated for the first guest application in a guest physical address space separate from other addresses in the guest physical address space mapped to random access memory (RAM) for the virtual machine. The method also includes executing the first guest application using the first mapping of the first range of GVAs to the first range of GIAs allocated for the first guest application.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: March 3, 2020
    Assignee: Red Hat, Inc.
    Inventors: Michael Tsirkin, Andrea Arcangeli
  • Patent number: 10552937
    Abstract: Embodiments are generally directed to a scalable memory interface for a graphical processor unit. An embodiment of an apparatus includes a graphical processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation-lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the plurality of autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: February 4, 2020
    Assignee: INTEL CORPORATION
    Inventors: Niranjan Cooray, Nicolas Kacevas, Altug Koker, Parth Damani, Satyanarayana Nekkalapu
  • Patent number: 10552338
    Abstract: An apparatus and method are provided for making efficient use of address translation cache resources. The apparatus has an address translation cache having a plurality of entries, where each entry is used to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. Each item of address translation data has a page size indication for a page within the memory system that is associated with that address translation data. Allocation circuitry performs an allocation process to determine the address translation data to be stored in each entry. Further, mode control circuitry is used to switch a mode of operation of the apparatus between a non-skewed mode and at least one skewed mode, dependent on a page size analysis operation.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: February 4, 2020
    Assignee: ARM Limited
    Inventors: Abhishek Raja, Michael Filippo
  • Patent number: 10552340
    Abstract: A method and apparatus for performing memory access operations during a memory relocation in a computing system are disclosed. In response to initiating a relocation operation from a source region of memory to a destination region of memory, copying one or more lines of the source region to the destination region, and activating a mirror operation mode in a communication circuit coupled to one or more devices included in the computing system. In response to receiving an access request from a device, reading previously stored data from the source region, and in response to determining the access request includes a write request, storing new data included in the write request to locations in both the source and destination regions.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: February 4, 2020
    Assignee: Oracle International Corporation
    Inventors: John Feehrer, Patrick Stabile, Gregory Onufer, John Johnson
  • Patent number: 10545877
    Abstract: An apparatus and method are provided for accessing an address translation cache. The address translation cache has a plurality of entries, where each entry is used to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. The virtual address is generated from a plurality of source values. Allocation circuitry is responsive to received address translation data, to allocate an entry within the address translation cache to store the received address translation data. A hash value indication is associated with the allocated entry, where the hash value indication is computed from the plurality of source values used to generate a virtual address associated with the received address translation data.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: January 28, 2020
    Assignee: Arm Limited
    Inventors: Abhishek Raja, Michael Filippo
  • Patent number: 10534718
    Abstract: An example apparatus for memory addressing can include an array of memory cells. The apparatus can include a memory cache configured to store at least a portion of an address mapping table. The address mapping table can include a number of regions corresponding to respective amounts of logical address space of the array. The address mapping table can map translation units (TUs) to physical locations in the array. Each one of the number of regions can include a first table. The first table can include entries corresponding to respective TU logical addresses of the respective amounts of logical address space, respective pointers, and respective offsets. Each one of the number of regions can include a second table. The second table can include entries corresponding to respective physical address ranges of the array. The entries of the second table can include respective physical address fields and corresponding respective count fields.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: January 14, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Jonathan M. Haswell
  • Patent number: 10534719
    Abstract: A data processing network includes a network of devices addressable via a system address space, the network including a computing device configured to execute an application in a virtual address space. A virtual-to-system address translation circuit is configured to translate a virtual address to a system address. A memory node controller has a first interface to a data resource addressable via a physical address space, a second interface to the computing device, and a system-to-physical address translation circuit, configured to translate a system address in the system address space to a corresponding physical address in the physical address space of the data resource. The virtual-to-system mapping may be a range table buffer configured to retrieve a range table entry comprising an offset address of a range together with a virtual address base and an indicator of the extent of the range.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: January 14, 2020
    Assignee: Arm Limited
    Inventors: Jonathan Curtis Beard, Roxana Rusitoru, Curtis Glenn Dunham
  • Patent number: 10534715
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more page walk caches, where operation includes: receiving, at a load/store slice, an instruction to be issued; determining, at the load/store slice, a process type indicating a source of the instruction to be a host process or a guest process; and determining, in accordance with an allocation policy and in dependence upon the process type, an allocation of an entry of the page walk cache, wherein the page walk cache comprises one or more entries for both host processes and guest processes.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: January 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dwain A. Hicks, Jonathan H. Raymond, George W. Rohrbaugh, III, Shih-Hsiung S. Tung
  • Patent number: 10534703
    Abstract: A memory system may include a nonvolatile memory device including a plurality of blocks each including a plurality of pages, and a controller that selects a mapping block from the plurality of blocks, stores address information corresponding to each of other blocks, except for the mapping block and a free block among the plurality of blocks, in each of the plurality of pages, searches for a block including no valid page among the other blocks, and invalidates a page of the mapping block storing the address information corresponding to the searched block.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: January 14, 2020
    Assignee: SK hynix Inc.
    Inventor: Jong-Min Lee
  • Patent number: 10528476
    Abstract: A page size hint may be encoded into an unused and reserved field in an effective or virtual address for use by a software page fault handler when handling a page fault associated with the effective or virtual address to enable an application to communicate to an operating system or other software-based translation functionality page size preferences for the allocation of pages of memory and/or to accelerate the search for page table entries in a hardware page table.
    Type: Grant
    Filed: May 24, 2016
    Date of Patent: January 7, 2020
    Assignee: International Business Machines Corporation
    Inventor: Shakti Kapoor
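A minimal sketch of the page size hint encoding described for patent 10528476, assuming the hint occupies the top byte of a 64-bit effective address; the bit positions, hint values, and helper names are illustrative, not the patent's encoding.

```c
/* Illustrative only: pack a page size hint into otherwise-unused high bits of
 * a 64-bit effective address so a software page fault handler can read it. */
#include <stdint.h>
#include <stdio.h>

#define HINT_SHIFT 56                      /* assumed unused/reserved bits */
#define HINT_MASK  (0xFFull << HINT_SHIFT)

enum { HINT_NONE = 0, HINT_4K = 1, HINT_64K = 2, HINT_16M = 3 };

static uint64_t encode_hint(uint64_t ea, uint64_t hint) {
    return (ea & ~HINT_MASK) | (hint << HINT_SHIFT);
}
static uint64_t decode_hint(uint64_t ea) {
    return (ea & HINT_MASK) >> HINT_SHIFT;  /* read by the page fault handler */
}

int main(void) {
    uint64_t ea = encode_hint(0x00007f0000401000ull, HINT_64K);
    printf("EA 0x%llx carries page-size hint %llu\n",
           (unsigned long long)(ea & ~HINT_MASK),
           (unsigned long long)decode_hint(ea));
    return 0;
}
```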
  • Patent number: 10528480
    Abstract: An apparatus and method are provided for efficient utilisation of an address translation cache. The apparatus has an address translation cache with a plurality of entries, where each entry stores address translation data used when converting a virtual address into a corresponding physical address of a memory system. Each entry identifies whether the address translation data stored therein is coalesced or non-coalesced address translation data, and also identifies a page size for a page within the memory system that is associated with that address translation data. Control circuitry is responsive to a virtual address, to perform a lookup operation within the address translation cache to produce, for each page size supported by the address translation cache, a hit indication to indicate whether a hit has been detected for an entry storing address translation data of the associated page size.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: January 7, 2020
    Assignee: ARM Limited
    Inventors: Rakesh Shaji Lal, Miles Robert Dooley
  • Patent number: 10521351
    Abstract: Processing of a storage operand request identified as restrained is selectively, temporarily suppressed. The processing includes determining whether a storage operand request to a common storage location shared by multiple processing units of a computing environment is restrained, and based on determining that the storage operand request is restrained, then temporarily suppressing requesting access to the common storage location pursuant to the storage operand request. The processing unit performing the processing may proceed with processing of the restrained storage operand request, without performing the suppressing, where the processing can be accomplished using cache private to the processing unit. Otherwise the suppressing may continue until an instruction, or operation of an instruction, associated with the storage operand request is next to complete.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: December 31, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bruce C. Giamei, Christian Jacobi, Daniel V. Rosa, Anthony Saporito, Donald W. Schmidt, Chung-Lung K. Shum
  • Patent number: 10521355
    Abstract: Disclosed is a system, method and/or computer product that includes generating translation requests that are identical but have different expected results, transmitting the translation requests from a MMU tester to a non-core MMU disposed on a processor chip, where the non-core MMU is external to a processing core of the processor chip, and where the MMU tester is disposed on a computing component external to the processor chip. The method also includes receiving memory translation results from the non-core MMU at the MMU tester, comparing the results to determine if there is a flaw in the non-core MMU.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: December 31, 2019
    Assignee: International Business Machines Corporation
    Inventors: Manoj Dusanapudi, Shakti Kapoor, Nelson Wu
  • Patent number: 10523786
    Abstract: I/O bandwidth reduction using storage-level common page information is implemented by a storage server, in response to receiving a request from a client for a page stored at a first virtual address, determining that the first virtual address maps to a page that is a duplicate of a page stored at a second virtual address or that the first and second virtual addresses map to a deduplicated page within a storage system, and transmitting metadata to the client mapping the first virtual address to a second virtual address that also maps to the deduplicated page. For one embodiment, the metadata is transmitted in anticipation of a request for the redundant/deduplicated page via the second virtual address. For an alternate embodiment, the metadata is sent in response to a determination that a page that maps to the second virtual address was previously sent to the client.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: December 31, 2019
    Assignee: NetApp Inc.
    Inventors: Deepak Raghu Kenchammana-Hosekote, Michael R. Eisler, Arthur F. Lent, Rahul Iyer, Shravan Gaonkar
  • Patent number: 10515023
    Abstract: This disclosure is directed to a system for address mapping and translation protection. In one embodiment, processing circuitry may include a virtual machine manager (VMM) to control specific guest linear address (GLA) translations. Control may be implemented in a performance sensitive and secure manner, and may be capable of improving performance for critical linear address page walks over legacy operation by removing some or all of the cost of page walking extended page tables (EPTs) for critical mappings. Alone or in combination with the above, certain portions of a page table structure may be selectively made immutable by a VMM or early boot process using a sub-page policy (SPP). For example, SPP may enable non-volatile kernel and/or user space code and data virtual-to-physical memory mappings to be made immutable (e.g., non-writable) while allowing for modifications to non-protected portions of the OS paging structures and particularly the user space.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: December 24, 2019
    Assignee: Intel Corporation
    Inventors: Ravi L. Sahita, Gilbert Neiger, Vedvyas Shanbhogue, David M. Durham, Andrew V. Anderson, David A. Koufaty, Asit K. Mallick, Arumugam Thiyagarajah, Barry E. Huntley, Deepak K. Gupta, Michael Lemay, Joseph F. Cihula, Baiju V. Patel
  • Patent number: 10503664
    Abstract: This disclosure is directed to a system for address mapping and translation protection. In one embodiment, processing circuitry may include a virtual machine manager (VMM) to control specific guest linear address (GLA) translations. Control may be implemented in a performance sensitive and secure manner, and may be capable of improving performance for critical linear address page walks over legacy operation by removing some or all of the cost of page walking extended page tables (EPTs) for critical mappings. Alone or in combination with the above, certain portions of a page table structure may be selectively made immutable by a VMM or early boot process using a sub-page policy (SPP). For example, SPP may enable non-volatile kernel and/or user space code and data virtual-to-physical memory mappings to be made immutable (e.g., non-writable) while allowing for modifications to non-protected portions of the OS paging structures and particularly the user space.
    Type: Grant
    Filed: June 7, 2016
    Date of Patent: December 10, 2019
    Assignee: INTEL CORPORATION
    Inventors: David M. Durham, Ravi L. Sahita, Gilbert Neiger, Vedvyas Shanbhogue, Andrew V. Anderson, Michael Lemay, Joseph F. Cihula, Arumugam Thiyagarajah, Asit K. Mallick, Barry E. Huntley, David A. Koufaty, Deepak K. Gupta, Baiju V. Patel
  • Patent number: 10491662
    Abstract: Pieces of hardware on which pieces of software are executed are configured to organize computing resources from different computing resource providers so as to facilitate their discovery. A catalog, which stores instances of cloud computing resources and their providers, and a knowledge base, which stores types of computing resources including rules which reveal their discovery, are formed by the software. A curating method is performed to enable semantic search including searching for cloud computing resources that in combination cooperate to satisfy a workload or a task in addition to having a simple computational function. Semantic indexing is performed to facilitate the semantic search.
    Type: Grant
    Filed: January 11, 2012
    Date of Patent: November 26, 2019
    Assignee: COMPUTENEXT, INC.
    Inventors: Munirathnam Srikanth, Sundar Kannan, Kevin Dougan, Steve Jamieson, Sriram Subramanian
  • Patent number: 10467012
    Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: November 5, 2019
    Assignee: Intel Corporation
    Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen
  • Patent number: 10467159
    Abstract: A memory node controller for a node of a data processing network, the network including at least one computing device and at least one data resource, each data resource addressed by a physical address. The node is configured to couple the at least one computing device with the at least one data resource. Elements of the data processing network are addressed via a system address space. The memory node controller includes a first interface to the at least one data resource, a second interface to the at least one computing device, and a system to physical address translator cache configured to translate a system address in the system address space to a physical address in the physical address space of the at least one data resource.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: November 5, 2019
    Assignee: Arm Limited
    Inventors: Jonathan Curtis Beard, Roxana Rusitoru, Curtis Glenn Dunham
  • Patent number: 10459852
    Abstract: Memory management systems and methods are provided in which n-bit translation counters are included within page table entry (PTE) data structures to count of number of times that translations are performed using the PTEs of pages. For example, a method for managing memory includes: receiving a virtual address from an executing process, wherein the virtual address references a virtual page frame number (VPFN) in a virtual address space associated with the executing process; accessing a PTE for translating the VPFN to a page frame number (PFN) in physical memory; incrementing a n-bit translation counter within the accessed PTE in response to the translating; and accessing a memory location within the PFN in the physical memory, which corresponds to the virtual address.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: October 29, 2019
    Assignee: EMC IP Holding Company LLC
    Inventor: Adrian Michaud
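A hedged sketch of the n-bit translation counter described for patent 10459852, using a C bit field and a saturating increment on each translation; the field widths, the 4 KiB page assumption, and the names are illustrative.

```c
/* Illustrative only: a PTE with an n-bit counter incremented (with
 * saturation) each time the entry is used for a translation. */
#include <stdint.h>
#include <stdio.h>

#define COUNTER_BITS 4
#define COUNTER_MAX  ((1u << COUNTER_BITS) - 1)

typedef struct {
    uint64_t pfn;                            /* page frame number         */
    unsigned present     : 1;
    unsigned xlate_count : COUNTER_BITS;     /* n-bit translation counter */
} pte_t;

static uint64_t translate(pte_t *pte, uint64_t page_offset) {
    if (pte->xlate_count < COUNTER_MAX)      /* saturating increment per use */
        pte->xlate_count++;
    return (pte->pfn << 12) | page_offset;   /* assumed 4 KiB pages          */
}

int main(void) {
    pte_t pte = { .pfn = 0x1234, .present = 1, .xlate_count = 0 };
    for (int i = 0; i < 20; i++) translate(&pte, 0x10);
    printf("translations counted: %u (saturates at %u)\n",
           (unsigned)pte.xlate_count, COUNTER_MAX);
    return 0;
}
```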
  • Patent number: 10452558
    Abstract: Apparatuses, systems, methods, and computer program products are disclosed for address range mapping for memory devices. A system includes a set of non-volatile memory elements accessible using a set of physical addresses and a controller for the set of non-volatile memory elements. A controller is configured to maintain a hierarchical data structure for mapping logical addresses to a set of physical addresses. A hierarchical data structure comprises a plurality of levels with hashed mappings of ranges of logical addresses at range sizes selected based on a relative position of an associated level within the plurality of levels. A controller is configured to receive an I/O request for data of at least one logical address. A controller is configured to satisfy an I/O request using a hashed mapping having a largest available range size to map at least one logical address of the I/O request to one or more physical addresses.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: October 22, 2019
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Igor Genshaft, Marina Frid
  • Patent number: 10452557
    Abstract: The processor provides a host computer with a logical volume based on a physical storage device. Based on a command from the host computer, the control device writes, into a memory, address information that associates a logical address in the logical volume with a device address in the physical storage device. The control device receives a command from the host computer and, if it is determined that the command is a read command, identifies a first logical address designated by the command and determines whether or not the first logical address is included in the address information. If the first logical address is included in the address information, the control device specifies a first device address corresponding to the first logical address, reads read data stored in an area indicated by the first device address, and transmits the read data to the host computer.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: October 22, 2019
    Assignee: Hitachi, Ltd.
    Inventors: Hirotoshi Akaike, Norio Shimozono, Kazushi Nakagawa
  • Patent number: 10452267
    Abstract: A storage scheme allocates portions of a logical volume to storage nodes in excess of the capacity of the storage nodes. Slices of the storage nodes and segments of slices are allocated in response to write requests such that actual allocation on the storage nodes is only in response to usage. Segments are identified with virtual segment identifiers that are retained when segments are moved to a different storage node. Logical volumes may therefore be moved seamlessly to different storage nodes to ensure sufficient storage capacity. Data is written to new locations in segments having space and a block map tracks the last segment to which data for a given address is written. Garbage collection is performed to free segments that contain invalid data, i.e. data for addresses that have been subsequently written to.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: October 22, 2019
    Assignee: ROBIN SYSTEMS, INC.
    Inventors: Gurmeet Singh, Dhanashankar Venkatesan, Partha Sarathi Seetala
  • Patent number: 10445247
    Abstract: Methods, systems, and computer program products are included for switching from a first guest virtual address (GVA)-to-host physical address (HPA) translation mode to a second GVA-to-HPA translation mode. A method includes comparing, by a hypervisor, a number of translation lookaside buffer (TLB) misses to a miss threshold, the hypervisor being in a first GVA-to-HPA translation mode. The method includes switching from the first GVA-to-HPA translation mode to a second GVA-to-HPA translation mode if the number of TLB misses satisfies the miss threshold.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: October 15, 2019
    Assignee: Red Hat, Inc.
    Inventor: Bandan Souryakanta Das
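A minimal sketch of the mode switch described for patent 10445247: the hypervisor compares a TLB miss count against a threshold and changes the GVA-to-HPA translation mode when the threshold is satisfied. The mode names, threshold, and counts are illustrative assumptions.

```c
/* Illustrative only: switch from the first to the second GVA-to-HPA
 * translation mode once the TLB miss count satisfies the miss threshold. */
#include <stdint.h>
#include <stdio.h>

typedef enum { MODE_FIRST, MODE_SECOND } xlate_mode;   /* hypothetical mode labels */

int main(void) {
    uint64_t tlb_misses     = 150000;   /* measured over some interval */
    uint64_t miss_threshold = 100000;
    xlate_mode mode = MODE_FIRST;       /* first GVA-to-HPA translation mode */

    if (tlb_misses >= miss_threshold)   /* threshold satisfied... */
        mode = MODE_SECOND;             /* ...switch modes         */

    printf("translation mode = %s\n", mode == MODE_FIRST ? "first" : "second");
    return 0;
}
```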
  • Patent number: 10437729
    Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: October 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
  • Patent number: 10423339
    Abstract: A method may include writing data to a hard drive. In some examples, the method may include receiving, by an extent allocator module, a command to write data. The command may include data and a logical block address (LBA) specified by the host. The method may also include mapping, by the extent allocator module, the LBA specified by the host to a drive LBA. The method may further include sending, from the extent allocator module, a command to write the data at the drive LBA.
    Type: Grant
    Filed: October 6, 2015
    Date of Patent: September 24, 2019
    Assignee: Western Digital Technologies, Inc.
    Inventors: Zvonimir Z. Bandic, Cyril Guyot, Adam C. Manzanares, Noah Watkins
  • Patent number: 10423349
    Abstract: A method that uses a reduced logical and physical address field size for storing data having steps of receiving a set of data to write to a solid state drive, determining a logical address to the set of data, setting a logical offset of the set of data to be equal to a physical block offset modulo of the data and writing the set of data to the solid state drive in locations on solid state drive that accept a size of the address of the set of data is disclosed.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: September 24, 2019
    Assignee: Western Digital Technologies, Inc.
    Inventor: Nicholas James Thomas