Directory Tables (e.g., DLAT, TLB) Patents (Class 711/207)
-
Patent number: 10534703
Abstract: A memory system may include a nonvolatile memory device including a plurality of blocks each including a plurality of pages, and a controller that selects a mapping block from the plurality of blocks, stores address information corresponding to each of the other blocks, except for the mapping block and a free block among the plurality of blocks, in each of the plurality of pages, searches for a block including no valid page among the other blocks, and invalidates a page of the mapping block storing the address information corresponding to the searched block.
Type: Grant
Filed: March 4, 2016
Date of Patent: January 14, 2020
Assignee: SK hynix Inc.
Inventor: Jong-Min Lee
-
Patent number: 10534715
Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more page walk caches, where operation includes: receiving, at a load/store slice, an instruction to be issued; determining, at the load/store slice, a process type indicating a source of the instruction to be a host process or a guest process; and determining, in accordance with an allocation policy and in dependence upon the process type, an allocation of an entry of the page walk cache, wherein the page walk cache comprises one or more entries for both host processes and guest processes.
Type: Grant
Filed: April 22, 2016
Date of Patent: January 14, 2020
Assignee: International Business Machines Corporation
Inventors: Dwain A. Hicks, Jonathan H. Raymond, George W. Rohrbaugh, III, Shih-Hsiung S. Tung
-
Patent number: 10534718
Abstract: An example apparatus for memory addressing can include an array of memory cells. The apparatus can include a memory cache configured to store at least a portion of an address mapping table. The address mapping table can include a number of regions corresponding to respective amounts of logical address space of the array. The address mapping table can map translation units (TUs) to physical locations in the array. Each one of the number of regions can include a first table. The first table can include entries corresponding to respective TU logical addresses of the respective amounts of logical address space, respective pointers, and respective offsets. Each one of the number of regions can include a second table. The second table can include entries corresponding to respective physical address ranges of the array. The entries of the second table can include respective physical address fields and corresponding respective count fields.
Type: Grant
Filed: July 31, 2017
Date of Patent: January 14, 2020
Assignee: Micron Technology, Inc.
Inventor: Jonathan M. Haswell
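The two-table region scheme described above can be sketched as follows. The table layouts and names are illustrative assumptions, not the patent's actual data structures: the first table's entry points into the second table, and the stored offset is added to that range's physical base.

```python
def tu_to_physical(first_table, second_table, tu):
    """Resolve a translation unit (TU) logical address to a physical location.

    first_table:  TU logical address -> (pointer into second table, offset)
    second_table: list of (physical_base, valid_count) range entries
    (hypothetical layout for illustration)
    """
    pointer, offset = first_table[tu]
    physical_base, valid_count = second_table[pointer]
    return physical_base + offset
```

In this reading, the count field tracks how many TUs in a physical range are still valid, which would let the device reclaim a range once its count drops to zero.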
-
Patent number: 10528476
Abstract: A page size hint may be encoded into an unused and reserved field in an effective or virtual address for use by a software page fault handler when handling a page fault associated with the effective or virtual address to enable an application to communicate to an operating system or other software-based translation functionality page size preferences for the allocation of pages of memory and/or to accelerate the search for page table entries in a hardware page table.
Type: Grant
Filed: May 24, 2016
Date of Patent: January 7, 2020
Assignee: International Business Machines Corporation
Inventor: Shakti Kapoor
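Encoding a hint in unused high bits of a 64-bit address might look like the sketch below; the bit positions and page-size table are assumptions for illustration, not values from the patent.

```python
HINT_SHIFT = 62          # assumed reserved bits 62-63 of the virtual address
HINT_MASK = 0x3
PAGE_SIZES = {0: 4 << 10, 1: 64 << 10, 2: 2 << 20, 3: 1 << 30}  # hypothetical

def encode_hint(vaddr, hint):
    # Application places its page-size preference in the reserved field.
    return (vaddr & ~(HINT_MASK << HINT_SHIFT)) | ((hint & HINT_MASK) << HINT_SHIFT)

def decode_hint(vaddr):
    # Fault handler extracts the preference and strips it before translating.
    hint = (vaddr >> HINT_SHIFT) & HINT_MASK
    return vaddr & ~(HINT_MASK << HINT_SHIFT), PAGE_SIZES[hint]
```

Because the field is architecturally ignored by translation, the hint travels with the faulting address at no extra cost to the application.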
-
Patent number: 10528480
Abstract: An apparatus and method are provided for efficient utilisation of an address translation cache. The apparatus has an address translation cache with a plurality of entries, where each entry stores address translation data used when converting a virtual address into a corresponding physical address of a memory system. Each entry identifies whether the address translation data stored therein is coalesced or non-coalesced address translation data, and also identifies a page size for a page within the memory system that is associated with that address translation data. Control circuitry is responsive to a virtual address, to perform a lookup operation within the address translation cache to produce, for each page size supported by the address translation cache, a hit indication to indicate whether a hit has been detected for an entry storing address translation data of the associated page size.
Type: Grant
Filed: August 24, 2017
Date of Patent: January 7, 2020
Assignee: ARM Limited
Inventors: Rakesh Shaji Lal, Miles Robert Dooley
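A simplified model of producing one hit indication per supported page size: the lookup masks the virtual address with each candidate size and checks for a matching entry. The tagging scheme below is an assumption standing in for the hardware lookup.

```python
def lookup(tlb, vaddr, page_sizes):
    """tlb: dict mapping (page_base, page_size) -> translation data.

    Returns a per-page-size hit indication, mirroring a cache that
    probes once per supported page size.
    """
    return {size: (vaddr & ~(size - 1), size) in tlb for size in page_sizes}
```

With entries tagged by page size, one structure can hold 4 KiB and 64 KiB (or coalesced) translations side by side, at the cost of one masked compare per supported size.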
-
Patent number: 10521355
Abstract: Disclosed is a system, method and/or computer product that includes generating translation requests that are identical but have different expected results, and transmitting the translation requests from an MMU tester to a non-core MMU disposed on a processor chip, where the non-core MMU is external to a processing core of the processor chip, and where the MMU tester is disposed on a computing component external to the processor chip. The method also includes receiving memory translation results from the non-core MMU at the MMU tester and comparing the results to determine whether there is a flaw in the non-core MMU.
Type: Grant
Filed: December 15, 2017
Date of Patent: December 31, 2019
Assignee: International Business Machines Corporation
Inventors: Manoj Dusanapudi, Shakti Kapoor, Nelson Wu
-
Patent number: 10523786
Abstract: I/O bandwidth reduction using storage-level common page information is implemented by a storage server, in response to receiving a request from a client for a page stored at a first virtual address, determining that the first virtual address maps to a page that is a duplicate of a page stored at a second virtual address or that the first and second virtual addresses map to a deduplicated page within a storage system, and transmitting metadata to the client mapping the first virtual address to a second virtual address that also maps to the deduplicated page. For one embodiment, the metadata is transmitted in anticipation of a request for the redundant/deduplicated page via the second virtual address. For an alternate embodiment, the metadata is sent in response to a determination that a page that maps to the second virtual address was previously sent to the client.
Type: Grant
Filed: June 22, 2018
Date of Patent: December 31, 2019
Assignee: NetApp Inc.
Inventors: Deepak Raghu Kenchammana-Hosekote, Michael R. Eisler, Arthur F. Lent, Rahul Iyer, Shravan Gaonkar
-
Patent number: 10521351
Abstract: Processing of a storage operand request identified as restrained is selectively, temporarily suppressed. The processing includes determining whether a storage operand request to a common storage location shared by multiple processing units of a computing environment is restrained, and based on determining that the storage operand request is restrained, then temporarily suppressing requesting access to the common storage location pursuant to the storage operand request. The processing unit performing the processing may proceed with processing of the restrained storage operand request, without performing the suppressing, where the processing can be accomplished using cache private to the processing unit. Otherwise the suppressing may continue until an instruction, or operation of an instruction, associated with the storage operand request is next to complete.
Type: Grant
Filed: January 12, 2017
Date of Patent: December 31, 2019
Assignee: International Business Machines Corporation
Inventors: Bruce C. Giamei, Christian Jacobi, Daniel V. Rosa, Anthony Saporito, Donald W. Schmidt, Chung-Lung K. Shum
-
Patent number: 10515023
Abstract: This disclosure is directed to a system for address mapping and translation protection. In one embodiment, processing circuitry may include a virtual machine manager (VMM) to control specific guest linear address (GLA) translations. Control may be implemented in a performance sensitive and secure manner, and may be capable of improving performance for critical linear address page walks over legacy operation by removing some or all of the cost of page walking extended page tables (EPTs) for critical mappings. Alone or in combination with the above, certain portions of a page table structure may be selectively made immutable by a VMM or early boot process using a sub-page policy (SPP). For example, SPP may enable non-volatile kernel and/or user space code and data virtual-to-physical memory mappings to be made immutable (e.g., non-writable) while allowing for modifications to non-protected portions of the OS paging structures and particularly the user space.
Type: Grant
Filed: April 1, 2016
Date of Patent: December 24, 2019
Assignee: Intel Corporation
Inventors: Ravi L. Sahita, Gilbert Neiger, Vedvyas Shanbhogue, David M. Durham, Andrew V. Anderson, David A. Koufaty, Asit K. Mallick, Arumugam Thiyagarajah, Barry E. Huntley, Deepak K. Gupta, Michael Lemay, Joseph F. Cihula, Baiju V. Patel
-
Patent number: 10503664
Abstract: This disclosure is directed to a system for address mapping and translation protection. In one embodiment, processing circuitry may include a virtual machine manager (VMM) to control specific guest linear address (GLA) translations. Control may be implemented in a performance sensitive and secure manner, and may be capable of improving performance for critical linear address page walks over legacy operation by removing some or all of the cost of page walking extended page tables (EPTs) for critical mappings. Alone or in combination with the above, certain portions of a page table structure may be selectively made immutable by a VMM or early boot process using a sub-page policy (SPP). For example, SPP may enable non-volatile kernel and/or user space code and data virtual-to-physical memory mappings to be made immutable (e.g., non-writable) while allowing for modifications to non-protected portions of the OS paging structures and particularly the user space.
Type: Grant
Filed: June 7, 2016
Date of Patent: December 10, 2019
Assignee: Intel Corporation
Inventors: David M. Durham, Ravi L. Sahita, Gilbert Neiger, Vedvyas Shanbhogue, Andrew V. Anderson, Michael Lemay, Joseph F. Cihula, Arumugam Thiyagarajah, Asit K. Mallick, Barry E. Huntley, David A. Koufaty, Deepak K. Gupta, Baiju V. Patel
-
Patent number: 10491662
Abstract: Pieces of hardware on which pieces of software are executed are configured to organize computing resources from different computing resource providers so as to facilitate their discovery. A catalog, which stores instances of cloud computing resources and their providers, and a knowledge base, which stores types of computing resources including rules which reveal their discovery, are formed by the software. A curating method is performed to enable semantic search, including searching for cloud computing resources that in combination cooperate to satisfy a workload or a task in addition to having a simple computational function. Semantic indexing is performed to facilitate the semantic search.
Type: Grant
Filed: January 11, 2012
Date of Patent: November 26, 2019
Assignee: COMPUTENEXT, INC.
Inventors: Munirathnam Srikanth, Sundar Kannan, Kevin Dougan, Steve Jamieson, Sriram Subramanian
-
Patent number: 10467012
Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
Type: Grant
Filed: December 29, 2016
Date of Patent: November 5, 2019
Assignee: Intel Corporation
Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen
-
Patent number: 10467159
Abstract: A memory node controller for a node of a data processing network, the network including at least one computing device and at least one data resource, each data resource addressed by a physical address. The node is configured to couple the at least one computing device with the at least one data resource. Elements of the data processing network are addressed via a system address space. The memory node controller includes a first interface to the at least one data resource, a second interface to the at least one computing device, and a system-to-physical address translator cache configured to translate a system address in the system address space to a physical address in the physical address space of the at least one data resource.
Type: Grant
Filed: July 14, 2017
Date of Patent: November 5, 2019
Assignee: Arm Limited
Inventors: Jonathan Curtis Beard, Roxana Rusitoru, Curtis Glenn Dunham
-
Patent number: 10459852
Abstract: Memory management systems and methods are provided in which n-bit translation counters are included within page table entry (PTE) data structures to count the number of times that translations are performed using the PTEs of pages. For example, a method for managing memory includes: receiving a virtual address from an executing process, wherein the virtual address references a virtual page frame number (VPFN) in a virtual address space associated with the executing process; accessing a PTE for translating the VPFN to a page frame number (PFN) in physical memory; incrementing an n-bit translation counter within the accessed PTE in response to the translating; and accessing a memory location within the PFN in the physical memory, which corresponds to the virtual address.
Type: Grant
Filed: July 27, 2017
Date of Patent: October 29, 2019
Assignee: EMC IP Holding Company LLC
Inventor: Adrian Michaud
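A minimal sketch of the saturating n-bit translation counter, with the PTE modeled as a dictionary; the field names and counter width are illustrative assumptions.

```python
COUNTER_BITS = 4                      # assumed n; an n-bit field cannot wrap
COUNTER_MAX = (1 << COUNTER_BITS) - 1

def translate(pte):
    """Return the PFN for a PTE, bumping its saturating use counter."""
    if pte["counter"] < COUNTER_MAX:  # saturate rather than overflow
        pte["counter"] += 1
    return pte["pfn"]
```

The counter gives the memory manager a per-page translation frequency signal, e.g. for deciding which pages are hot enough to keep in faster memory.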
-
Patent number: 10452557
Abstract: The processor provides a host computer with a logical volume based on a physical storage device. Based on a command from the host computer, the control device writes, into a memory, address information that associates a logical address in the logical volume with a device address in the physical storage device. The control device receives a command from the host computer and, if it is determined that the command is a read command, identifies a first logical address designated by the command and determines whether or not the first logical address is included in the address information. If the first address is included in the address information, the control device specifies a first device address corresponding to the first logical address, reads read data stored in an area indicated by the first device address, and transmits the read data to the host computer.
Type: Grant
Filed: January 28, 2015
Date of Patent: October 22, 2019
Assignee: Hitachi, Ltd.
Inventors: Hirotoshi Akaike, Norio Shimozono, Kazushi Nakagawa
-
Patent number: 10452558
Abstract: Apparatuses, systems, methods, and computer program products are disclosed for address range mapping for memory devices. A system includes a set of non-volatile memory elements accessible using a set of physical addresses and a controller for the set of non-volatile memory elements. A controller is configured to maintain a hierarchical data structure for mapping logical addresses to a set of physical addresses. A hierarchical data structure comprises a plurality of levels with hashed mappings of ranges of logical addresses at range sizes selected based on a relative position of an associated level within the plurality of levels. A controller is configured to receive an I/O request for data of at least one logical address. A controller is configured to satisfy an I/O request using a hashed mapping having a largest available range size to map at least one logical address of the I/O request to one or more physical addresses.
Type: Grant
Filed: October 5, 2018
Date of Patent: October 22, 2019
Assignee: Western Digital Technologies, Inc.
Inventors: Igor Genshaft, Marina Frid
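One way to read the "largest available range size" rule is the sketch below, with levels ordered from coarse to fine; the flat dict-per-level layout stands in for the hashed mappings and is an assumption.

```python
def lookup(levels, lba):
    """levels: list of (range_size, table), largest range size first.

    Each table maps a range-aligned base LBA to a physical base; the
    first (largest-range) level that contains the LBA's range wins,
    so one entry can cover a whole sequentially-written region.
    """
    for range_size, table in levels:
        base = (lba // range_size) * range_size
        if base in table:
            return table[base] + (lba - base)
    return None  # unmapped
```

Large sequential extents collapse into a single coarse-level entry, while randomly written LBAs fall through to fine-grained levels.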
-
Patent number: 10452267
Abstract: A storage scheme allocates portions of a logical volume to storage nodes in excess of the capacity of the storage nodes. Slices of the storage nodes and segments of slices are allocated in response to write requests such that actual allocation on the storage nodes is only in response to usage. Segments are identified with virtual segment identifiers that are retained when segments are moved to a different storage node. Logical volumes may therefore be moved seamlessly to different storage nodes to ensure sufficient storage capacity. Data is written to new locations in segments having space, and a block map tracks the last segment to which data for a given address is written. Garbage collection is performed to free segments that contain invalid data, i.e., data for addresses that have been subsequently written to.
Type: Grant
Filed: September 13, 2017
Date of Patent: October 22, 2019
Assignee: ROBIN SYSTEMS, INC.
Inventors: Gurmeet Singh, Dhanashankar Venkatesan, Partha Sarathi Seetala
-
Patent number: 10445247
Abstract: Methods, systems, and computer program products are included for switching from a first guest virtual address (GVA)-to-host physical address (HPA) translation mode to a second GVA-to-HPA translation mode. A method includes comparing, by a hypervisor, a number of translation lookaside buffer (TLB) misses to a miss threshold, the hypervisor being in a first GVA-to-HPA translation mode. The method includes switching from the first GVA-to-HPA translation mode to a second GVA-to-HPA translation mode if the number of TLB misses satisfies the miss threshold.
Type: Grant
Filed: June 20, 2017
Date of Patent: October 15, 2019
Assignee: Red Hat, Inc.
Inventor: Bandan Souryakanta Das
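The decision rule reduces to a threshold comparison. The two mode names below (shadow page tables vs. hardware-assisted nested paging) are a common pairing of GVA-to-HPA techniques but an assumption here, not named by the abstract.

```python
def next_mode(current_mode, tlb_misses, miss_threshold):
    """Pick the hypervisor's GVA-to-HPA translation mode for the next
    interval: switch once observed TLB misses satisfy the threshold."""
    if tlb_misses >= miss_threshold:
        return "nested" if current_mode == "shadow" else "shadow"
    return current_mode
```

The intuition is that the two modes trade off differently: nested paging makes a TLB miss expensive (a two-dimensional walk) but avoids shadow-table maintenance, so a miss-heavy workload may run better in the other mode.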
-
Patent number: 10437729
Abstract: In an approach for purging an address range from a cache, a processor quiesces a computing system. Cache logic issues a command to purge a section of a cache to higher level memory, wherein the command comprises a starting storage address and a range of storage addresses to be purged. Responsive to each cache of the computing system activating the command, cache logic ends the quiesce of the computing system. Subsequent to ending the quiesce of the computing system, cache logic purges storage addresses from the cache, based on the command, to the higher level memory.
Type: Grant
Filed: April 19, 2017
Date of Patent: October 8, 2019
Assignee: International Business Machines Corporation
Inventors: Ekaterina M. Ambroladze, Deanna P. D. Berger, Michael A. Blake, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy, Chad G. Wilson
-
Patent number: 10423349
Abstract: A method that uses a reduced logical and physical address field size for storing data, having steps of receiving a set of data to write to a solid state drive, determining a logical address to the set of data, setting a logical offset of the set of data to be equal to a physical block offset modulo of the data, and writing the set of data to the solid state drive in locations on the solid state drive that accept a size of the address of the set of data, is disclosed.
Type: Grant
Filed: August 23, 2017
Date of Patent: September 24, 2019
Assignee: Western Digital Technologies, Inc.
Inventor: Nicholas James Thomas
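One reading of constraining placement so that low-order address bits need not be stored is sketched below; the block size is an assumed parameter and the helper is hypothetical.

```python
BLOCK = 256  # assumed offset granularity (addresses per block)

def allowed_locations(logical_addr, free_physical):
    """Restrict placement to physical locations whose in-block offset
    equals the logical offset, so the mapping table can omit those
    low-order offset bits and store a smaller address field."""
    offset = logical_addr % BLOCK
    return [p for p in free_physical if p % BLOCK == offset]
```

Because the offset is implied by the logical address, both the logical and physical address fields in the mapping shrink by log2(BLOCK) bits.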
-
Patent number: 10423339
Abstract: A method may include writing data to a hard drive. In some examples, the method may include receiving, by an extent allocator module, a command to write data. The command may include data and a logical block address (LBA) specified by the host. The method may also include mapping, by the extent allocator module, the LBA specified by the host to a drive LBA. The method may further include sending, from the extent allocator module, a command to write the data at the drive LBA.
Type: Grant
Filed: October 6, 2015
Date of Patent: September 24, 2019
Assignee: Western Digital Technologies, Inc.
Inventors: Zvonimir Z. Bandic, Cyril Guyot, Adam C. Manzanares, Noah Watkins
-
Patent number: 10409500
Abstract: One embodiment provides a memory controller. The memory controller includes logical block address (LBA) section defining logic to define a plurality of LBA sections for a memory device circuitry, each section including a range of LBAs, and each section including a unique indirection-unit (IU) granularity; wherein the IU granularity defines a physical region size of the memory device. The LBA section defining logic also generates a plurality of logical-to-physical (L2P) tables to map a plurality of LBAs to physical locations of the memory device, each L2P table corresponding to an LBA section. The memory controller also includes LBA section notification logic to notify a file system of the plurality of LBA sections to enable the file system to issue a read and/or write command having an LBA based on an IU granularity associated with an LBA section.
Type: Grant
Filed: September 8, 2017
Date of Patent: September 10, 2019
Assignee: Intel Corporation
Inventors: Sanjeev N. Trika, Peng Li, Jawad B. Khan
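Per-section indirection-unit granularity might be modeled as in the sketch below; the section tuple layout and return shape are assumptions for illustration.

```python
def l2p_index(sections, lba):
    """sections: list of (start_lba, end_lba, iu_granularity).

    Returns (section number, index into that section's L2P table).
    A coarser IU granularity means fewer L2P entries per LBA range.
    """
    for n, (start, end, iu) in enumerate(sections):
        if start <= lba < end:
            return n, (lba - start) // iu
    raise ValueError("LBA outside every section")
```

A file system told about the sections can align large-file I/O to a coarse-IU section and metadata to a fine-IU section, shrinking the total L2P footprint.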
-
Patent number: 10409726
Abstract: Disclosed in some examples are methods, systems, and machine readable mediums that dynamically adjust the size of an L2P cache in a memory device in response to observed operational conditions. The L2P cache may borrow memory space from a donor memory location, such as a read or write buffer. For example, if the system notices a high amount of read requests, the system may increase the size of the L2P cache at the expense of the write buffer (which may be decreased). Likewise, if the system notices a high amount of write requests, the system may increase the size of the L2P cache at the expense of the read buffer (which may be decreased).
Type: Grant
Filed: October 30, 2017
Date of Patent: September 10, 2019
Assignee: Micron Technology, Inc.
Inventor: Sebastien Andre Jean
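The borrow-from-a-donor-buffer policy reduces to comparing recent read and write demand; the step size and return shape below are assumptions.

```python
def rebalance(l2p_cache, read_buf, write_buf, reads, writes, step=16):
    """Grow the L2P cache (sizes in pages) at the expense of the buffer
    serving the lighter direction of the observed workload."""
    if reads > writes and write_buf >= step:   # read-heavy: shrink write buffer
        return l2p_cache + step, read_buf, write_buf - step
    if writes > reads and read_buf >= step:    # write-heavy: shrink read buffer
        return l2p_cache + step, read_buf - step, write_buf
    return l2p_cache, read_buf, write_buf      # balanced or donor exhausted
```

A larger L2P cache cuts translation misses for whichever request stream dominates, while the donor buffer that is under lighter load gives up the space.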
-
Patent number: 10409745
Abstract: Interruption facility for adjunct processor queues. In response to a queue transitioning from a no replies pending state to a reply pending state, an interruption is initiated. This interruption signals to a processor that a reply to a request is waiting on the queue. In order for the queue to take advantage of the interruption capability, it is enabled for interruptions.
Type: Grant
Filed: June 26, 2018
Date of Patent: September 10, 2019
Assignee: International Business Machines Corporation
Inventors: Charles W. Gainey, Jr., Klaus Meissner, Damian L. Osisek, Klaus Werner
-
Patent number: 10394558
Abstract: Technical solutions are described for out-of-order (OoO) execution of one or more instructions by a processing unit, including receiving, by a load-store unit (LSU) of the processing unit, an OoO window of instructions including a plurality of instructions to be executed OoO, and issuing, by the LSU, instructions from the OoO window. The issuing includes selecting an instruction from the OoO window, the instruction using an effective address. Further, in response to the instruction being a load instruction, it is determined whether the effective address is present in an effective address directory (EAD). In response to the effective address being present in the EAD, the load instruction is issued using the effective address. Further, in response to the instruction being a store instruction, a real address mapped to the effective address is determined from an effective-real translation (ERT) table, and the store instruction is issued using the real address.
Type: Grant
Filed: October 6, 2017
Date of Patent: August 27, 2019
Assignee: International Business Machines Corporation
Inventors: Christopher Gonzalez, Bryan Lloyd, Balaram Sinharoy
-
Patent number: 10387326
Abstract: A computer-implemented method includes associating an initial use order with a plurality of target sets of a translation lookaside buffer (TLB), where the initial use order indicates an order of use of the plurality of target sets. The plurality of target sets are associated with an initial least-recently-used (LRU) state based on the initial use order. A new use order for the plurality of target sets is generated. Generating the new use order includes moving a first target set to a least-recently-used position, responsive to a purge of the first target set. The LRU state of the plurality of target sets is updated based on the new use order, responsive to the purge of the first target set. The first target set is identified as eligible for replacement according to an LRU replacement policy of the TLB, based at least in part on the purge of the first target set.
Type: Grant
Filed: November 14, 2017
Date of Patent: August 20, 2019
Assignee: International Business Machines Corporation
Inventors: Uwe Brandt, Markus Helms, Thomas Köhler, Frank Lehnert
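The use-order update on a purge can be sketched with an ordered list; representing LRU state as a Python list is an assumption (hardware would track LRU bits), with the least-recently-used position at the front.

```python
def purge_update(use_order, target_set):
    """use_order: target sets from least- to most-recently-used.

    A purged set holds no useful translations, so it moves to the LRU
    position and becomes the next replacement victim."""
    use_order.remove(target_set)
    use_order.insert(0, target_set)
    return use_order
```

Without this step, a freshly purged (and therefore empty) set could sit in a recently-used position while the replacement policy evicts a set that still holds live translations.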
-
Patent number: 10380031
Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU), including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU, including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
Type: Grant
Filed: November 27, 2017
Date of Patent: August 13, 2019
Assignee: International Business Machines Corporation
Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
-
Patent number: 10380015
Abstract: Apparatus, systems, methods, and computer program products are disclosed for logical address range mapping for storage devices. A system includes a set of non-volatile memory elements accessible using a set of physical addresses. A system includes a controller for a set of non-volatile memory elements. A controller is configured to maintain a hierarchical data structure comprising a plurality of levels for mapping logical addresses to a set of physical addresses. A controller is configured to receive an input/output (I/O) request. A controller is configured to translate a logical address for an I/O request to a physical address utilizing the largest mapped logical address range that includes the logical address in a hierarchical data structure. A level includes one or more mappings between logical address ranges and physical address ranges at a range size for the level.
Type: Grant
Filed: June 30, 2017
Date of Patent: August 13, 2019
Assignee: Western Digital Technologies, Inc.
Inventors: Igor Genshaft, Marina Frid
-
Patent number: 10372618
Abstract: An apparatus and method are provided for maintaining address translation data within an address translation cache. The address translation cache has a plurality of entries, where each entry is used to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. Control circuitry is used to perform an allocation process to determine the address translation data to be stored in each entry. The address translation cache is used to store address translation data of a plurality of different types representing address translation data specified at respective different levels of address translation within a multiple-level page table walk. The plurality of different types comprises a final level type of address translation data that identifies a full translation from the virtual address to the physical address, and at least one intermediate level type of address translation data that identifies a partial translation of the virtual address.
Type: Grant
Filed: October 14, 2016
Date of Patent: August 6, 2019
Assignee: ARM Limited
Inventors: Miles Robert Dooley, Abhishek Raja, Barry Duane Williamson, Huzefa Moiz Sanjeliwala
-
Patent number: 10360383
Abstract: Systems and methods are disclosed for detecting high-level functionality of an application executing on a computing device. One method includes storing, in a secure memory, an application-specific virtual address mapping table for an application. The application-specific virtual address mapping table comprises a plurality of virtual address offsets in the application binary code mapped to corresponding target application functionalities. In response to launching the application, a process-specific virtual address mapping table is generated for an instance of an application process to be executed. The process-specific virtual address mapping table defines actual virtual addresses corresponding to the target application functionalities using the virtual address offsets in the application-specific virtual address mapping table.
Type: Grant
Filed: March 21, 2017
Date of Patent: July 23, 2019
Assignee: QUALCOMM Incorporated
Inventors: Subrato Kumar De, Sajo Sunder George
-
Patent number: 10353831
Abstract: Systems, apparatuses and methods may provide for verifying, from outside a trusted computing base of a computing system, an identity of an enclave instance prior to the enclave instance being launched in the trusted computing base, determining a memory location of the enclave instance and confirming that the memory location is local to the computing system. In one example, the enclave instance is a proxy enclave instance, wherein communications are conducted with one or more additional enclave instances in the trusted computing base via the proxy enclave instance and an unencrypted channel.
Type: Grant
Filed: December 24, 2015
Date of Patent: July 16, 2019
Assignee: Intel Corporation
Inventors: Scott H. Robinson, Ravi L. Sahita, Mark W. Shanahan, Karanvir S. Grewal, Nitin V. Sarangdhar, Carlos V. Rozas, Bo Zhang, Shanwei Cen
-
Patent number: 10331581
Abstract: A high-performance computing system, method, and storage medium manage accesses to multiple memory modules of a computing node, the modules having different access latencies. The node allocates its resources into pools according to pre-determined memory access criteria. When another computing node requests a memory access, the node determines whether the request satisfies any of the criteria. If so, the associated pool of resources is selected for servicing the request; if not, a default pool is selected. The node then services the request if the pool of resources is sufficient. Otherwise, various error handling processes are performed. Each memory access criterion may relate to a memory address range assigned to a memory module, a type of request, a relationship between the nodes, a configuration of the requesting node, or a combination of these.
Type: Grant
Filed: April 10, 2017
Date of Patent: June 25, 2019
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Frank R. Dropps, Michael E. Malewicki
-
Patent number: 10331883
Abstract: A method, computer program product, and system for managing container security, the method including consuming a recipe queue on a first checker container, wherein the first checker container is on a first host of a computer system, and the recipe queue comprises a predefined set of rules, storing the first checker container recipe queue result in the first checker container, comparing the first checker container recipe queue result with an expected result of the recipe queue, wherein the expected result is stored in the first checker container, and following a first fail procedure from a plurality of fail procedures, based on the first checker container recipe queue result not matching the expected result.
Type: Grant
Filed: September 28, 2016
Date of Patent: June 25, 2019
Assignee: International Business Machines Corporation
Inventors: Rafael Camarda Silva Folco, Breno Henrique Leitão, Rafael Peria de Sene
-
Patent number: 10324858
Abstract: Access control circuitry comprises: a detector to detect a memory address translation between a virtual memory address in a virtual memory address space and a physical memory address in a physical memory address space, provided in response to a translation request by further circuitry; an address translation memory, to store data representing a set of physical memory addresses previously provided to the further circuitry in response to translation requests by the further circuitry; an interface to receive a physical memory address from the further circuitry for a memory access by the further circuitry; a comparator to compare a physical memory address received from the further circuitry with the set of physical addresses stored by the address translation memory, and to permit access, by the further circuitry, to a physical address included in the set of physical memory addresses.
Type: Grant
Filed: June 12, 2017
Date of Patent: June 18, 2019
Assignee: ARM Limited
Inventors: Bruce James Mathewson, Phanindra Kumar Mannava, Matthew Lucien Evans, Paul Gilbert Meyer, Andrew Brookfield Swaine
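The detector/comparator pair above amounts to a filter that remembers which physical addresses the translator has handed out and later permits only those. A minimal sketch, with all names (`TranslationFilter`, `record_translation`, `permit`) invented for illustration:

```python
class TranslationFilter:
    """Illustrative model, not the patented circuit: remember the physical
    addresses delivered by the address translator, and permit later accesses
    only to remembered addresses."""
    def __init__(self):
        self._granted = set()

    def record_translation(self, phys_addr):
        # detector: a VA -> PA translation was delivered to the device
        self._granted.add(phys_addr)

    def permit(self, phys_addr):
        # comparator: allow the access only if the PA was previously granted
        return phys_addr in self._granted
```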
-
Patent number: 10318172
Abstract: Cache operation in a multi-threaded processor uses a small memory structure referred to as a way enable table that stores an index to an n-way set associative cache. The way enable table includes one entry for each entry in the n-way set associative cache, and each entry in the way enable table is arranged to store a thread ID. The thread ID in an entry in the way enable table is the ID of the thread associated with the data item stored in the corresponding entry in the n-way set associative cache. Prior to reading entries from the n-way set associative cache identified by an index parameter, the ways in the cache are selectively enabled based on a comparison of the current thread ID and the thread IDs stored in entries in the way enable table which are identified by the same index parameter.
Type: Grant
Filed: October 1, 2015
Date of Patent: June 11, 2019
Assignee: MIPS Tech, LLC
Inventor: Philip Day
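The selective way enable described above can be modeled in a few lines. This is a sketch under assumed names (`NUM_WAYS`, `enabled_ways`, a per-set list of recorded thread IDs), not the patented hardware:

```python
# Toy model of a way enable table: one recorded thread ID per cache entry.
# Before a set is read, only the ways whose recorded thread ID matches the
# requesting thread are enabled, saving the energy of reading other ways.
NUM_WAYS = 4

def enabled_ways(way_enable_table, set_index, thread_id):
    """Return the list of ways worth reading for this thread and set."""
    return [w for w in range(NUM_WAYS)
            if way_enable_table[set_index][w] == thread_id]
```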
-
Patent number: 10318435
Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU) including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
Type: Grant
Filed: August 22, 2017
Date of Patent: June 11, 2019
Assignee: International Business Machines Corporation
Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
-
Patent number: 10310974
Abstract: Disclosed herein are systems and methods for isolating input/output computing resources. In some embodiments, a host device may include a processor and logic coupled with the processor, to identify a tag identifier (Tag ID) for a process or container of the host device. The Tag ID may identify a queue pair of a hardware device of the host device for an outbound transaction from the processor to the hardware device, to be conducted by the process or container. Logic may further map the Tag ID to a Process Address Space Identifier (PASID) associated with an inbound transaction from the hardware device to the processor that used the identified queue pair. The process or container may use the PASID to conduct the outbound transaction via the identified queue pair. Other embodiments may be disclosed and/or claimed.
Type: Grant
Filed: September 25, 2015
Date of Patent: June 4, 2019
Assignee: Intel Corporation
Inventors: Cunming Liang, Edwin Verplank, David E. Cohen, Danny Zhou
-
Patent number: 10296465
Abstract: A processor architecture utilizing a L3 translation lookaside buffer (TLB) to reduce page walks. The processor includes multiple cores, where each core includes a L1 TLB and a L2 TLB. The processor further includes a L3 TLB that is shared across the processor cores, where the L3 TLB is implemented in off-chip or die-stacked dynamic random-access memory. Furthermore, the processor includes a page table connected to the L3 TLB, where the page table stores a mapping between virtual addresses and physical addresses. In such an architecture, the very large capacity of the L3 TLB may improve performance, such as execution time, by eliminating page walks, which require multiple data accesses.
Type: Grant
Filed: July 20, 2017
Date of Patent: May 21, 2019
Assignee: Board of Regents, The University of Texas System
Inventors: Lizy K. John, Jee Ho Ryoo, Nagendra Gulur
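The lookup path implied by the abstract (L1 miss falls to L2, L2 miss falls to the large shared L3, and only an L3 miss pays for a page walk) can be sketched as below. The dictionary-based TLB levels and the `translate` signature are illustrative assumptions:

```python
# Hedged sketch of a three-level TLB hierarchy in front of a page table.
# Each level is modeled as a vaddr -> paddr mapping; an L3 miss triggers
# the expensive page walk, whose result is filled into the shared L3.
def translate(vaddr, l1, l2, l3, page_table):
    for tlb in (l1, l2, l3):
        if vaddr in tlb:
            return tlb[vaddr], "tlb-hit"
    paddr = page_table[vaddr]   # page walk: several memory accesses
    l3[vaddr] = paddr           # fill the large shared L3 on a walk
    return paddr, "page-walk"
```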
-
Patent number: 10289562
Abstract: A computer-implemented method includes associating an initial use order with a plurality of target sets of a translation lookaside buffer (TLB), where the initial use order indicates an order of use of the plurality of target sets. The plurality of target sets are associated with an initial least-recently-used (LRU) state based on the initial use order. A new use order for the plurality of target sets is generated. Generating the new use order includes moving a first target set to a least-recently-used position, responsive to a purge of the first target set. The LRU state of the plurality of target sets is updated based on the new use order, responsive to the purge of the first target set. The first target set is identified as eligible for replacement according to an LRU replacement policy of the TLB, based at least in part on the purge of the first target set.
Type: Grant
Filed: June 15, 2017
Date of Patent: May 14, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Uwe Brandt, Markus Helms, Thomas Köhler, Frank Lehnert
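The key move above is that a purged target set jumps to the least-recently-used position, making it the preferred victim. A minimal sketch, with the class and method names (`PurgeAwareLRU`, `touch`, `purge`, `victim`) invented for illustration:

```python
# Sketch of purge-aware LRU ordering: leftmost = least recently used.
from collections import deque

class PurgeAwareLRU:
    def __init__(self, target_sets):
        self.order = deque(target_sets)

    def touch(self, s):
        self.order.remove(s)
        self.order.append(s)       # now most recently used

    def purge(self, s):
        self.order.remove(s)
        self.order.appendleft(s)   # purged: next in line for replacement

    def victim(self):
        return self.order[0]       # LRU replacement policy picks this set
```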
-
Patent number: 10255209
Abstract: Embodiments of the present invention disclose a method, computer program product, and system for determining statistics corresponding to data transfer operations. In one embodiment, the computer implemented method includes the steps of receiving a request from an input/output (I/O) device to perform a data transfer operation between the I/O device and a memory, generating an entry in an input/output memory management unit (IOMMU) corresponding to the data transfer operation, wherein the entry in the IOMMU includes at least an indication of a processor chip that corresponds to the memory of the data transfer operation, monitoring the data transfer operation between the I/O device and the memory, determining statistics corresponding to the monitored data transfer operation, wherein the determined statistics include at least: the I/O device that performed the data transfer operation, the processor chip that corresponds to the memory of the data transfer operation, and an amount of data transferred.
Type: Grant
Filed: February 7, 2017
Date of Patent: April 9, 2019
Assignee: International Business Machines Corporation
Inventors: Srinivas Kotta, Mehulkumar J. Patel, Venkatesh Sainath, Vaidyanathan Srinivasan
-
Patent number: 10255202
Abstract: Various embodiments are generally directed to providing mutual authentication and secure distributed processing of multi-party data. In particular, an experiment may be submitted to include the distributed processing of private data owned by multiple distrustful entities. Private data providers may authorize the experiment and securely transfer the private data for processing by trusted computing nodes in a pool of trusted computing nodes.
Type: Grant
Filed: September 30, 2016
Date of Patent: April 9, 2019
Assignee: INTEL CORPORATION
Inventors: Hormuzd M. Khosravi, Baiju V. Patel
-
Patent number: 10248575
Abstract: Disclosed herein is a method for operating translation look-aside buffers, TLBs, in a multiprocessor system. A purge request is received for purging one or more entries in the TLB. When the thread does not require access to the entries to be purged, execution of the purge request at the TLB may start. When an address translation request is rejected due to the TLB purge, a suspension time window may be set. During the suspension time window, the execution of the purge is suspended and address translation requests of the thread are executed. After the suspension window ends, execution of the purge may be resumed. When the thread requires access to the entries to be purged, it may be blocked to prevent it from sending address translation requests to the TLB; upon completion of the purge request, the thread may be unblocked and its address translation requests may be executed.
Type: Grant
Filed: February 20, 2018
Date of Patent: April 2, 2019
Assignee: International Business Machines Corporation
Inventors: Uwe Brandt, Ute Gaertner, Lisa C. Heller, Markus Helms, Thomas Köhler, Frank Lehnert, Jennifer A. Navarro, Rebecca S. Wisniewski
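One reading of the suspend/resume arbitration above is: the purge proceeds entry by entry, and whenever translation requests are waiting, the purge pauses for a fixed window during which translations are serviced. The toy model below reflects that reading only; the function name, the `window` parameter, and the interleaving order are all assumptions, not the patented logic:

```python
# Toy interleaving of a TLB purge with address translation requests:
# after each purged entry, if requests are pending, open a suspension
# window and service up to `window` translations before resuming.
def arbitrate(purge_work, requests, window=2):
    log = []
    purge_work = list(purge_work)
    requests = list(requests)
    while purge_work:
        log.append(("purge", purge_work.pop(0)))
        if requests:  # a request was rejected by the purge: open a window
            for _ in range(min(window, len(requests))):
                log.append(("translate", requests.pop(0)))
    return log
```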
-
Patent number: 10241925
Abstract: Systems, apparatuses, and methods for selecting default page sizes in a variable page size translation lookaside buffer (TLB) are disclosed. In one embodiment, a system includes at least one processor, a memory subsystem, and a first TLB. The first TLB is configured to allocate a first entry for a first request responsive to detecting a miss for the first request in the first TLB. Prior to determining a page size targeted by the first request, the first TLB specifies, in the first entry, that the first request targets a page of a first page size. Responsive to determining that the first request actually targets a second page size, the first TLB reissues the first request with an indication that the first request targets the second page size. On the reissue, the first TLB allocates a second entry and specifies the second page size for the first request.
Type: Grant
Filed: February 15, 2017
Date of Patent: March 26, 2019
Assignee: ATI Technologies ULC
Inventors: Jimshed Mirza, Anthony Chan, Edwin Chi Yeung Pang
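The optimistic-default-then-reissue behavior can be sketched briefly. The names (`DEFAULT_PAGE_SIZE`, `allocate_on_miss`, the size-lookup callback) are illustrative, and the dictionary stands in for TLB entries:

```python
# Sketch of default-page-size allocation: on a miss, allocate assuming a
# default size; if the translation reveals a different size, reissue and
# record the correct size instead.
DEFAULT_PAGE_SIZE = 4096

def allocate_on_miss(tlb, vaddr, actual_size_lookup):
    tlb[vaddr] = DEFAULT_PAGE_SIZE        # optimistic first allocation
    actual = actual_size_lookup(vaddr)
    if actual != DEFAULT_PAGE_SIZE:
        tlb[vaddr] = actual               # reissue with the real page size
        return "reissued"
    return "default-ok"
```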
-
Patent number: 10241924
Abstract: A marking capability is used to provide an indication of whether a block of memory is being used by a guest control program to back an address translation structure. The marking capability includes setting an indicator in one or more locations associated with the block of memory. In a further aspect, the marking capability includes a purging capability that limits the purging of translation look-aside buffers and other such structures based on the marking.
Type: Grant
Filed: July 18, 2016
Date of Patent: March 26, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan D. Bradbury, Christian Jacobi, Anthony Saporito
-
Patent number: 10224111
Abstract: A data device includes a memory having a plurality of memory cells configured to store data values in accordance with a predetermined rank modulation scheme that is optional, and a memory controller that receives a current error count from an error decoder of the data device for one or more data operations of the flash memory device and selects an operating mode for data scrubbing in accordance with the received error count and a program cycles count.
Type: Grant
Filed: December 20, 2017
Date of Patent: March 5, 2019
Assignee: California Institute of Technology
Inventors: Yue Li, Jehoshua Bruck
-
Patent number: 10210103
Abstract: A method and device for checking validity of memory access are provided. A cache is established and initialization is performed; a total cache position index is calculated; when a program performs memory access, a graded cache unit is addressed according to the total cache position index, and it is determined whether address information of the memory block is able to be read from the graded cache unit; when the address information is able to be read, it is determined whether an instrumentation-based memory checking tool is needed for checking the validity of the current memory access; when the address information is not able to be read, the validity of the current memory access is checked by an instrumentation-based memory checking tool, and the address information of the memory block is filled into the graded cache unit when the current memory access is determined to be valid.
Type: Grant
Filed: October 23, 2014
Date of Patent: February 19, 2019
Assignee: XI'AN ZHONGXING NEW SOFTWARE CO. LTD.
Inventor: Shilong Wang
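The fast-path/slow-path split above (consult a cache of known-valid block addresses first, fall back to the expensive instrumentation-based checker on a miss, and fill the cache on success) can be modeled in a few lines. All names here are invented for illustration:

```python
# Rough model of cached validity checking: the set stands in for the graded
# cache unit, and slow_checker for the instrumentation-based tool.
def check_access(addr, valid_cache, slow_checker):
    if addr in valid_cache:      # fast path: address info found in the cache
        return True
    ok = slow_checker(addr)      # slow path: instrumentation-based check
    if ok:
        valid_cache.add(addr)    # fill the cache for subsequent accesses
    return ok
```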
-
Patent number: 10210096
Abstract: Providing for address translation in a virtualized system environment is disclosed herein. By way of example, a memory management apparatus is provided that comprises a shared translation look-aside buffer (TLB) that includes a plurality of translation types, each supporting a plurality of page sizes, one or more processors, and a memory management controller configured to work with the one or more processors. The memory management controller includes logic configured for caching virtual address to physical address translations and intermediate physical address to physical address translations in the shared TLB, logic configured to receive a virtual address for translation from a requester, logic configured to conduct a table walk of a translation table in the shared TLB to determine a translated physical address in accordance with the virtual address, and logic configured to transmit the translated physical address to the requester.
Type: Grant
Filed: December 10, 2013
Date of Patent: February 19, 2019
Assignee: AMPERE COMPUTING LLC
Inventor: Amos Ben-Meir
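The two translation types the shared TLB caches correspond to the two stages of virtualized address translation: stage 1 maps a virtual address to an intermediate physical address (IPA), and stage 2 maps the IPA to a final physical address. A minimal sketch, with dictionaries standing in for the stage tables:

```python
# Illustrative two-stage (nested) translation: VA -> IPA -> PA.
def nested_translate(vaddr, stage1, stage2):
    ipa = stage1[vaddr]   # stage 1: guest translation, VA -> IPA
    return stage2[ipa]    # stage 2: hypervisor translation, IPA -> PA
```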
-
Patent number: 10191853
Abstract: An apparatus and method are provided for maintaining address translation data within an address translation cache. Each entry of the address translation cache is arranged to store address translation data used when converting a virtual address into a corresponding physical address of a memory system. Control circuitry is used to perform an allocation process to determine the address translation data to be stored in each entry. When performing the allocation process for a selected entry, the control circuitry is arranged to perform a page table walk process using a virtual address in order to obtain from a page table a plurality of descriptors including a descriptor identified using the virtual address. The control circuitry then determines whether predetermined criteria are met by the plurality of descriptors, the predetermined criteria comprising page alignment criteria and attribute match criteria.
Type: Grant
Filed: October 11, 2016
Date of Patent: January 29, 2019
Assignee: ARM Limited
Inventor: Abhishek Raja
-
Patent number: 10191861
Abstract: A technique implements memory views using a virtualization layer of a virtualization architecture executing on a node of a network environment. The virtualization layer may include a user mode portion having hyper-processes and a kernel portion having a micro-hypervisor that cooperate to virtualize a guest operating system kernel within a virtual machine (VM) of the node. The micro-hypervisor may further cooperate with the hyper-processes, such as a guest monitor, of the virtualization layer to implement one or more memory views of the VM. As used herein, a memory view is illustratively a hardware resource (i.e., a set of nested page tables) used as a container (i.e., to constrain access to memory of the node) for one or more guest processes of the guest operating system kernel.
Type: Grant
Filed: September 6, 2016
Date of Patent: January 29, 2019
Assignee: FireEye, Inc.
Inventors: Udo Steinberg, Osman Abdoul Ismael
-
Patent number: 10185501
Abstract: An apparatus is described. The apparatus includes a memory controller to interface with a multi-level system memory. The memory controller includes a pinning engine to pin a memory page into a first level of the system memory that is at a higher level than a second level of the system memory.
Type: Grant
Filed: September 25, 2015
Date of Patent: January 22, 2019
Assignee: Intel Corporation
Inventors: Aravindh V. Anantaraman, Blaise Fanning