Directory Tables (e.g., DLAT, TLB) Patents (Class 711/207)
  • Patent number: 11256633
    Abstract: A processing system includes at least one core, a plurality of accelerator function units (AFUs) and a memory access unit. The memory access unit includes at least one pipeline resource and an arbitrator. The core develops a plurality of tasks. Each of the AFUs is used to execute at least one of the tasks, each task corresponding to several memory access requests. The arbitrator selects one of the AFUs using a round-robin method at each clock period to transmit a corresponding memory access request of the selected AFU to the pipeline resource, so that the selected AFU executes the memory access request through the pipeline resource to read or write data related to the task.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: February 22, 2022
    Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
    Inventors: Xiaoyang Li, Chen Chen, Zongpu Qi, Tao Li, Xuehua Han, Wei Zhao, Dongxue Gao
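    Illustrative sketch (not part of the patent text): a minimal Python model of the round-robin arbitration described in the abstract above, assuming each AFU exposes a simple queue of pending memory access requests; the class name, queue interface, and grant() method are invented for the example only.
      from collections import deque

      class RoundRobinArbiter:
          """Toy model: once per clock period, grant the next AFU (in round-robin
          order) that has a pending memory access request, handing one request
          to the shared pipeline resource."""

          def __init__(self, num_afus):
              self.queues = [deque() for _ in range(num_afus)]  # per-AFU pending requests
              self.next_afu = 0                                 # round-robin pointer

          def submit(self, afu_id, request):
              self.queues[afu_id].append(request)

          def grant(self):
              """Called once per clock period; returns (afu_id, request) or None."""
              n = len(self.queues)
              for i in range(n):
                  afu_id = (self.next_afu + i) % n
                  if self.queues[afu_id]:
                      self.next_afu = (afu_id + 1) % n          # advance past the winner
                      return afu_id, self.queues[afu_id].popleft()
              return None

      # usage
      arb = RoundRobinArbiter(num_afus=3)
      arb.submit(0, "read 0x1000")
      arb.submit(2, "write 0x2000")
      print(arb.grant())   # (0, 'read 0x1000')
      print(arb.grant())   # (2, 'write 0x2000')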
  • Patent number: 11243801
    Abstract: Systems and methods for memory management for virtual machines. An example method may comprise determining that a first memory page and a second memory page are mapped to respective guest addresses that are contiguous in a guest address space of a virtual machine running on a host computer system, wherein the first memory page is mapped to a first guest address, determining that the first memory page and the second memory page are mapped to respective host addresses that are not contiguous in a host address space of the host computer system, tracking modifications of the first memory page, causing the virtual machine to copy the first memory page to a third memory page, such that the third memory page and the second memory page are mapped to respective contiguous host addresses, and in response to determining that the first memory page has not been modified, mapping the first guest address to the third memory page.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: February 8, 2022
    Assignee: Red Hat, Inc.
    Inventors: Michael Tsirkin, Andrea Arcangeli
  • Patent number: 11243864
    Abstract: An instruction may be associated with a memory address. During execution of the instruction, the memory address may be translated to a next level memory address. The instruction may also be marked for address tracing. If the instruction is marked for address tracing, then during execution of the instruction, the memory address and the next level memory address may be recorded.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: February 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryant Cockcroft, John A. Schumann, Debapriya Chatterjee, Larry Leitner, Kevin Barnett, Karen Yokum
  • Patent number: 11232536
    Abstract: An apparatus to facilitate data prefetching is disclosed. The apparatus includes a memory, one or more execution units (EUs) to execute a plurality of processing threads and prefetch logic to prefetch pages of data from the memory to assist in the execution of the plurality of processing threads.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: January 25, 2022
    Assignee: INTEL CORPORATION
    Inventors: Adam T. Lake, Guei-Yuan Lueh, Balaji Vembu, Murali Ramadoss, Prasoonkumar Surti, Abhishek R. Appu, Altug Koker, Subramaniam M. Maiyuran, Eric C. Samson, David J. Cowperthwaite, Zhi Wang, Kun Tian, David Puffer, Brian T. Lewis
  • Patent number: 11231930
    Abstract: The present disclosure provides methods, systems, and non-transitory computer readable media for fetching data for an accelerator. The methods include detecting an attempt to access a first page of data that is not stored on a primary storage unit of the accelerator; and responsive to detecting the attempt to access the first page of data: assessing activity of the accelerator; determining, based on the assessed activity of the accelerator, a prefetch granularity size; and transferring a chunk of contiguous pages of data of the prefetch granularity size from a memory system connected to the accelerator into the primary storage unit, wherein the transferred chunk of contiguous pages of data includes the first page of data.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: January 25, 2022
    Assignee: Alibaba Group Holding Limited
    Inventors: Yongbin Gu, Pengcheng Li, Tao Zhang
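    Illustrative sketch (not part of the patent text): a small Python model of the activity-dependent prefetch described above; the activity metric, the threshold values, the granularity choices, and the dict-based memories are all assumptions made for the example.
      PAGE_SIZE = 4096

      def choose_prefetch_granularity(activity):
          """Assumed policy (not from the patent): the busier the accelerator,
          the smaller the transfer, so prefetching does not stall in-flight work."""
          if activity > 0.75:
              return 2          # pages
          if activity > 0.25:
              return 8
          return 32

      def on_page_miss(first_page, activity, primary_storage, memory_system):
          """On an access to a page not resident in the accelerator's primary
          storage, transfer a contiguous chunk (sized by current activity) that
          contains the missing page."""
          granularity = choose_prefetch_granularity(activity)
          start = (first_page // granularity) * granularity
          for page in range(start, start + granularity):
              primary_storage[page] = memory_system.get(page, b"\x00" * PAGE_SIZE)

      # usage: the connected memory system and the primary storage modeled as dicts
      memory_system = {n: bytes([n % 256]) * PAGE_SIZE for n in range(64)}
      primary_storage = {}
      on_page_miss(first_page=13, activity=0.9,
                   primary_storage=primary_storage, memory_system=memory_system)
      print(sorted(primary_storage))   # [12, 13]: high activity -> 2-page chunk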
  • Patent number: 11232042
    Abstract: Process dedicated in-memory translation lookaside buffers (TLBs) (mTLBs) for augmenting a memory management unit (MMU) TLB for translating virtual addresses (VAs) to physical addresses (PAs) in a processor-based system is disclosed. In disclosed examples, a dedicated in-memory TLB is supported in system memory for each process so that one process's cached page table entries do not displace another process's cached page table entries. When a process is scheduled to execute in a central processing unit (CPU), the in-memory TLB address stored for that process can be used by a page table walker circuit in the CPU MMU to access the dedicated in-memory TLB for the executing process to perform VA to PA translations in the event of a TLB miss to the MMU TLB. If a TLB miss occurs to the in-memory TLB, the page table walker circuit can walk the page table in the MMU.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: January 25, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Madhavan Thirukkurungudi Venkataraman, Thomas Philip Speier
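    Illustrative sketch (not part of the patent text): a Python model of the lookup order implied by the abstract above, with dicts standing in for the MMU TLB, each process's dedicated in-memory TLB, and the page table; refill behavior and eviction are simplified and all names are invented for the example.
      class Process:
          def __init__(self, pid, page_table):
              self.pid = pid
              self.page_table = page_table    # full VA-page -> PA-page mapping in memory
              self.in_memory_tlb = {}         # this process's dedicated in-memory TLB

      def translate(va_page, process, mmu_tlb):
          """Lookup order: MMU TLB, then the scheduled process's dedicated
          in-memory TLB, then a full page table walk. Because each process has
          its own in-memory TLB, one process's cached entries never displace
          another's."""
          key = (process.pid, va_page)
          if key in mmu_tlb:                              # hit in the CPU MMU TLB
              return mmu_tlb[key]
          if va_page in process.in_memory_tlb:            # hit in the per-process mTLB
              pa_page = process.in_memory_tlb[va_page]
          else:                                           # miss in both: walk the page table
              pa_page = process.page_table[va_page]
              process.in_memory_tlb[va_page] = pa_page    # fill only this process's mTLB
          mmu_tlb[key] = pa_page                          # refill the MMU TLB
          return pa_page

      # usage
      mmu_tlb = {}
      p = Process(pid=7, page_table={0x400: 0x9A})
      print(hex(translate(0x400, p, mmu_tlb)))            # 0x9a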
  • Patent number: 11231927
    Abstract: In one embodiment, an apparatus includes: an accelerator to execute instructions; an accelerator request decoder coupled to the accelerator to perform a first level decode of requests from the accelerator and direct the requests based on the first level decode, the accelerator request decoder including a memory map to identify a first address range associated with a local memory and a second address range associated with a system memory; and a non-coherent request router coupled to the accelerator request decoder to receive non-coherent requests from the accelerator request decoder and perform a second level decode of the non-coherent requests, the non-coherent request router to route first non-coherent requests to a sideband router of a first die and to direct second non-coherent requests to a computing die. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: January 25, 2022
    Assignee: Intel Corporation
    Inventors: Lakshminarayana Pappu, Robert D. Adler, Amit Kumar Srivastava, Aravindh Anantaraman
  • Patent number: 11210308
    Abstract: A metadata table manager receives a request for time series data associated with a device, where the request comprises a device identifier associated with the device, and where the time series data comprises a most recently received data element associated with the device. The metadata table manager determines a metadata table that associates the device identifier with one or more time periods during which data associated with the device has been received, and accesses a metadata table entry for the device identifier that includes an indication of a number of data elements received at the most recent time period of the one or more time periods. The metadata table manager queries a time series data store for the time series data based on the most recent time period, and outputs a portion of the time series data, wherein the portion at least comprises the most recently received data element.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: December 28, 2021
    Assignee: Ayla Networks, Inc.
    Inventors: Pankaj Gupta, Haoqing Geng, Sudha Sundaresan
  • Patent number: 11200175
    Abstract: There is provided a data processing apparatus that includes memory circuitry that provides a physical address space, which is logically divided into a plurality of memory segments and stores a plurality of accessors with associated validity indicators. Each of the accessors controls access to a region of the physical address space in dependence on at least its associated validity indicator. Tracking circuitry tracks which of the memory segments contain the accessors and invalidation circuitry responds to a request to invalidate an accessor by determining a set of equivalent accessors with reference to the tracking circuitry, and invalidating the accessor and the equivalent accessors by setting the associated validity indicator of each of the accessor and the equivalent accessors to indicate that the accessor and the equivalent accessors are invalid.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: December 14, 2021
    Assignee: Arm Limited
    Inventor: Matthias Lothar Boettcher
  • Patent number: 11194583
    Abstract: Speculative execution using a page-level tracked load order queue includes: determining that a first load instruction targets a determined memory region; and in response to the first load instruction targeting the determined memory region, adding an entry to a page-level tracked load order queue instead of a load order queue, where the entry indicates a page address of a target of the first load instruction.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: December 7, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Krishnan V. Ramani
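    Illustrative sketch (not part of the patent text): a Python model of the dispatch decision described above, assuming 4 KiB pages and list-based queues; how the "determined memory region" is chosen is not modeled, and the names are invented for the example.
      PAGE_SHIFT = 12   # assumed 4 KiB pages

      def track_load(load_addr, tracked_regions, load_order_queue, page_level_queue):
          """A load that targets a tracked (determined) memory region is recorded
          in the page-level tracked load order queue by page address only; any
          other load gets a regular load order queue entry."""
          if any(lo <= load_addr < hi for lo, hi in tracked_regions):
              page_level_queue.append(load_addr >> PAGE_SHIFT)   # page address of the target
          else:
              load_order_queue.append(load_addr)                 # full target address

      # usage
      tracked = [(0x10000, 0x20000)]       # assumed "determined memory region"
      loq, page_loq = [], []
      track_load(0x10FF8, tracked, loq, page_loq)
      track_load(0x40020, tracked, loq, page_loq)
      print(page_loq, loq)                 # [16] [262176]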
  • Patent number: 11194497
    Abstract: A computer-implemented method for providing tenant aware, variable length, deduplication of data stored on a non-transitory computer readable storage medium. The method is performed at least in part by circuitry and the data comprises a plurality of data items. Each of the plurality of data items is associated with a particular tenant of a group of tenants that store data on the storage medium.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: December 7, 2021
    Assignee: Bottomline Technologies, Inc.
    Inventors: Zenon Buratta, Andy Dobbels
  • Patent number: 11194736
    Abstract: A memory controller may include a map cache configured to store one or more of a plurality of map data sub-segments respectively corresponding to a plurality of sub-areas included in each of a plurality of areas of a memory device, and a map data manager configured to generate information about a map data sub-segment to be provided to a host, which is determined based on a read count for the memory device, and generate information about a map data segment to be deleted from the host, which is determined based on the read count for the memory device and a memory of the host.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: December 7, 2021
    Assignee: SK hynix Inc.
    Inventors: Hye Mi Kang, Eu Joon Byun
  • Patent number: 11194618
    Abstract: This accelerator control device includes: a decision unit which, in processing flow information representing a flow by which a task generated according to a program executed by an accelerator processes data, decides, from among the data, temporary data temporarily generated during execution of a program; a determination unit which, on the basis of an execution status of the task by the accelerator and the processing flow information, determines, for every task using the decided data among the temporary data, whether or not execution of the task has been completed; and a deletion unit which deletes the decided data stored in a memory of the accelerator when execution of every task using the decided data has been completed, whereby degradation of processing performance by the accelerator, which occurs when the size of the data to be processed by the accelerator is large, is avoided.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: December 7, 2021
    Assignee: NEC CORPORATION
    Inventors: Jun Suzuki, Yuki Hayashi
  • Patent number: 11194651
    Abstract: A method, apparatus, and system for handling a failure of a hardware cryptography/compression accelerator is disclosed. The operations comprise: detecting that a hardware cryptography/compression accelerator at a first data storage system has failed; determining one or more failed cryptography and/or compression operation tasks that were submitted to the hardware cryptography/compression accelerator but were not completed due to the failure of the hardware cryptography/compression accelerator; and performing a remedial operation in response to the hardware cryptography/compression accelerator failure to prevent a systemic failure.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: December 7, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Wei Lin, Yujuan Li, Tao Chen, Yong Zou, Rahul Ugale
  • Patent number: 11194718
    Abstract: A data processing apparatus is provided, which includes a cache to store operations produced by decoding instructions fetched from memory. The cache is indexed by virtual addresses of the instructions in the memory. Receiving circuitry receives an incoming invalidation request that references a physical address in the memory. Invalidation circuitry invalidates entries in the cache where the virtual address corresponds with the physical address. Coherency is thereby achieved when using a cache that is indexed using virtual addresses.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: December 7, 2021
    Assignee: Arm Limited
    Inventors: Yasuo Ishii, Matthew Andrew Rafacz
  • Patent number: 11169930
    Abstract: Systems, methods and apparatuses for fine grain data migration in Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before accessing a virtual memory address in a sub-region, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to perform the memory access. Otherwise, the sub-region is cached from the borrowed memory region to the local memory before the physical memory address is used.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: November 9, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
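    Illustrative sketch (not part of the patent text): a Python model of the access path described above, with dicts standing in for the memory status map, the local and borrowed (remote) memories, and the page mapping; the cache-line granularity and the borrowing protocol details are assumptions for the example.
      CACHE_LINE = 64
      PAGE_SIZE = 4096

      def access(va, status_map, local_memory, remote_memory, page_map):
          """Check the memory status map before touching a sub-region of a
          borrowed page: if the cache-line-sized sub-region is not available
          locally, cache it from the borrowed (remote) region first, then use
          the locally mapped physical address."""
          line = va // CACHE_LINE
          if not status_map.get(line, False):
              local_memory[line] = remote_memory[line]     # migrate just this sub-region
              status_map[line] = True
          pa = page_map[va // PAGE_SIZE] * PAGE_SIZE + (va % PAGE_SIZE)
          return pa, local_memory[line]

      # usage
      remote = {2: b"remote line"}          # sub-region 2 lives in borrowed memory
      local, status = {}, {}
      page_map = {0: 5}                     # virtual page 0 -> local physical page 5
      print(access(128, status, local, remote, page_map))   # (20608, b'remote line')
      print(status)                                          # {2: True}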
  • Patent number: 11169721
    Abstract: A memory system may include a memory device including a plurality of memory blocks; and a controller suitable for controlling the memory device. The controller may include a monitor suitable for monitoring valid data ratios of a first area and a second area and a processor suitable for comparing a first valid data ratio of the first area to a first threshold value, comparing a second valid data ratio of the second area to a second threshold value, and reallocating a target reserved memory block, which is allocated to the second area, to the first area according to the two comparison results.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: November 9, 2021
    Assignee: SK hynix Inc.
    Inventor: Eu-Joon Byun
  • Patent number: 11163695
    Abstract: An information handling system and method for translating virtual addresses to real addresses including a processor for processing data; memory devices for storing the data; and a memory controller configured to control accesses to the memory devices, where the processor is configured, in response to a request to translate a first virtual address to a second physical address, to send from the processor to the memory controller a page directory base and a plurality of memory offsets. The memory controller is configured to: read from the memory devices a first level page directory table using the page directory base and a first level memory offset; combine the first level page directory table with a second level memory offset; and read from the memory devices a second level page directory table using the first level page directory table and the second level memory offset.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Mohit Karve, Brian W. Thompto
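    Illustrative sketch (not part of the patent text): a Python model of the two-level walk performed inside the memory controller, with a dict standing in for the memory devices; the table format (a raw pointer per entry) and the addresses are simplifications for the example.
      def controller_translate(memory, page_directory_base, offsets):
          """The processor sends only the page directory base and the per-level
          offsets; the memory controller chases the pointers itself. `memory`
          is a dict from address to the entry stored there."""
          level1_entry = memory[page_directory_base + offsets[0]]   # read level-1 table
          level2_entry = memory[level1_entry + offsets[1]]          # combine with level-2 offset
          return level2_entry                                       # translated physical address

      # usage: a tiny fake memory image holding a two-level table
      memory = {
          0x1000 + 0x8: 0x2000,      # level-1 entry points at the level-2 table
          0x2000 + 0x10: 0x9F000,    # level-2 entry holds the physical address
      }
      print(hex(controller_translate(memory, page_directory_base=0x1000, offsets=[0x8, 0x10])))   # 0x9f000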
  • Patent number: 11151042
    Abstract: A memory device for storing data comprises a memory bank comprising a plurality of addressable memory cells, wherein the memory bank is divided into a plurality of segments. The memory device also comprises a cache memory operable for storing a second plurality of data words, wherein each data word of the second plurality of data words is either awaiting write verification associated with the memory bank or is to be re-written into the memory bank. Also, the cache memory is divided into a plurality of segments, wherein each segment of the cache memory is direct mapped to a corresponding segment of the memory bank, wherein an address of each of the second plurality of data words is mapped to a corresponding segment in the cache memory, and wherein data words from a particular segment of the memory bank only get stored in a corresponding direct mapped segment of the cache memory.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: October 19, 2021
    Assignee: Integrated Silicon Solution, (Cayman) Inc.
    Inventors: Benjamin Louie, Neal Berger, Lester Crudele
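    Illustrative sketch (not part of the patent text): a Python model of the segment-to-segment direct mapping described above; the bank size, segment count, and verification workflow are assumptions for the example.
      class SegmentedWriteCache:
          """The memory bank is split into segments and the cache into the same
          number of segments, each direct-mapped to one bank segment; a data
          word awaiting write verification (or re-write) is only ever held in
          the cache segment that corresponds to its bank segment."""

          def __init__(self, bank_words=1024, num_segments=4):
              self.segment_span = bank_words // num_segments
              self.segments = [{} for _ in range(num_segments)]   # addr -> pending word

          def segment_of(self, address):
              return address // self.segment_span

          def hold_for_verification(self, address, word):
              self.segments[self.segment_of(address)][address] = word

          def complete_verification(self, address):
              self.segments[self.segment_of(address)].pop(address, None)

      # usage
      cache = SegmentedWriteCache()
      cache.hold_for_verification(700, "word A")     # 700 // 256 -> segment 2
      print([len(s) for s in cache.segments])        # [0, 0, 1, 0]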
  • Patent number: 11144479
    Abstract: This disclosure is directed to a system for address mapping and translation protection. In one embodiment, processing circuitry may include a virtual machine manager (VMM) to control specific guest linear address (GLA) translations. Control may be implemented in a performance sensitive and secure manner, and may be capable of improving performance for critical linear address page walks over legacy operation by removing some or all of the cost of page walking extended page tables (EPTs) for critical mappings. Alone or in combination with the above, certain portions of a page table structure may be selectively made immutable by a VMM or early boot process using a sub-page policy (SPP). For example, SPP may enable non-volatile kernel and/or user space code and data virtual-to-physical memory mappings to be made immutable (e.g., non-writable) while allowing for modifications to non-protected portions of the OS paging structures and particularly the user space.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: October 12, 2021
    Assignee: Intel Corporation
    Inventors: Ravi L. Sahita, Gilbert Neiger, Vedvyas Shanbhogue, David M. Durham, Andrew V. Anderson, David A. Koufaty, Asit K. Mallick, Arumugam Thiyagarajah, Barry E. Huntley, Deepak K. Gupta, Michael Lemay, Joseph F. Cihula, Baiju V. Patel
  • Patent number: 11144249
    Abstract: A storage system includes a processor configured to request a write operation of first data corresponding to a first logical address and to request a write operation of second data corresponding to a second logical address, a memory module including a nonvolatile memory device configured to store the first data and the second data, and a controller configured to convert the first logical address into a first device logical address and to convert the second logical address into a second device logical address based on the first device logical address and a size of the first data, and a storage device configured to store the first data based on the first device logical address and to store the second data based on the second device logical address.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: October 12, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seung-Woo Lim, Kyu-Min Park, Do-Han Kim
  • Patent number: 11138133
    Abstract: Various embodiments are generally directed to the providing for mutual authentication and secure distributed processing of multi-party data. In particular, an experiment may be submitted to include the distributed processing of private data owned by multiple distrustful entities. Private data providers may authorize the experiment and securely transfer the private data for processing by trusted computing nodes in a pool of trusted computing nodes.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: October 5, 2021
    Assignee: INTEL CORPORATION
    Inventors: Hormuzd M. Khosravi, Baiju V. Patel
  • Patent number: 11113203
    Abstract: Provided herein may be a controller and a method of operating the same. The controller for controlling an operation of a semiconductor memory device may include a request analyzer, a map cache controller, and a command generator. The request analyzer receives a first request from a host. The map cache controller generates a first mapping segment including a plurality of mapping entries and a flag bit based on the first request, and sets a value of the flag bit depending on whether data corresponding to the first mapping segment is random data or sequential data. The command generator generates a program command for programming the mapping segment.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: September 7, 2021
    Assignee: SK hynix Inc.
    Inventor: Eu Joon Byun
  • Patent number: 11113209
    Abstract: An apparatus has a translation cache (100) comprising a number of entries for specifying address translation data. Each entry (260) also specifies a translation context identifier (254) associated with the address translation data and a realm identifier (270) identifying one of a number of realms. Each realm corresponds to at least a portion of at least one software process executed by processing circuitry (8). In response to a memory access a lookup of the translation cache (100) is triggered. When the lookup misses in the cache (100), control circuitry (280) prevents allocation of address translation data to the cache when the current realm is excluded from accessing the target memory region by an owner realm specified for the target memory region. In the lookup, whether a given entry (260) matches the memory access depends on both a translation context identifier comparison and a realm identifier comparison.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: September 7, 2021
    Assignee: Arm Limited
    Inventors: Matthew Lucien Evans, Jason Parker, Gareth Rhys Stockwell, Martin Weidmann
  • Patent number: 11106600
    Abstract: A processing system adjusts a cache replacement priority of cache lines at a cache based on evictions of entries mapping virtual-to-physical address translations from a translation lookaside buffer (TLB). Upon eviction of a TLB entry, the processing system identifies cache lines corresponding to the physical addresses of the evicted TLB entry and evicts the cache lines or adjusts the cache replacement priority of the cache lines so that their eviction from the cache will be accelerated.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: August 31, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Gabriel H. Loh, Paul Moyer
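    Illustrative sketch (not part of the patent text): a Python model of the TLB-eviction-driven demotion described above, with a dict mapping cache-line physical addresses to replacement priorities; the priority encoding and the 4 KiB page size are assumptions for the example.
      PAGE_SIZE = 4096

      def on_tlb_eviction(evicted_pa_page, cache):
          """When a TLB entry is evicted, find cache lines whose physical
          addresses fall in the page of the evicted translation and drop their
          replacement priority so they leave the cache sooner. `cache` maps a
          line's physical address to its priority (lower = evicted first)."""
          page_base = evicted_pa_page * PAGE_SIZE
          for line_addr in cache:
              if page_base <= line_addr < page_base + PAGE_SIZE:
                  cache[line_addr] = 0      # or delete the line to evict it outright

      # usage
      cache = {0x3000: 7, 0x3040: 5, 0x8000: 6}
      on_tlb_eviction(evicted_pa_page=3, cache=cache)   # physical page 3 = 0x3000-0x3FFF
      print(cache)   # lines 0x3000 and 0x3040 demoted to priority 0; 0x8000 untouched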
  • Patent number: 11080203
    Abstract: A high-performance data storage device is disclosed, which has a memory controller dynamically updating mapping information in temporary storage to manage physical space information mapped to a logical address recognized by a host. The memory controller uses a first bit to an Nth bit of the physical space information to indicate a physical space of the non-volatile memory or a cache address of the data cache space, without using additional bits to map the physical space information to the non-volatile memory or the data cache space, where N is a number greater than one. Among the numbers formed by the first to the Nth bit, the memory controller uses numbers corresponding to non-existent physical space of the non-volatile memory to indicate that the physical space information maps to the data cache space rather than the non-volatile memory.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: August 3, 2021
    Assignee: SILICON MOTION, INC.
    Inventors: Yang-Chih Shen, Shih-Chang Chang
  • Patent number: 11061598
    Abstract: The present disclosure generally relates to relocating data in a storage device and updating a compressed logical to physical (L2P) table in response without invalidating cache entries of the L2P table. After relocating data from a first memory block associated with a first physical address to a second memory block associated with a second physical address, a version indicator of a cache entry corresponding to the first physical address in the L2P table is incremented. One or more cache entries are then added to the L2P table associating the relocated data with the second memory block without invalidating the cache entry corresponding to the first physical address. When a command to read or write the relocated data is received, the storage device searches the L2P table and reads the data from either the first memory block or the second memory block.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: July 13, 2021
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Alexander Bazarsky, Tomer Eliash, Yuval Grossman
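    Illustrative sketch (not part of the patent text): a Python model of the version-indicator bookkeeping described above; the list-of-entries layout and the lookup policy are simplifications invented for the example, not the patented compressed L2P format.
      class L2PCache:
          """After a relocation, the entry for the old physical address is kept
          but its version indicator is incremented, and a new entry for the new
          physical address is added, so nothing is invalidated and either copy
          can satisfy a read until the old block is reclaimed."""

          def __init__(self):
              self.entries = {}    # logical address -> list of {"pa": ..., "version": ...}

          def map(self, lba, pa):
              self.entries.setdefault(lba, []).append({"pa": pa, "version": 0})

          def relocate(self, lba, old_pa, new_pa):
              for entry in self.entries[lba]:
                  if entry["pa"] == old_pa:
                      entry["version"] += 1        # superseded, but still a valid copy
              self.entries[lba].append({"pa": new_pa, "version": 0})

          def lookup(self, lba):
              # Prefer a current (version 0) mapping; older copies remain readable.
              return min(self.entries[lba], key=lambda e: e["version"])["pa"]

      # usage
      l2p = L2PCache()
      l2p.map(lba=42, pa=0x100)
      l2p.relocate(lba=42, old_pa=0x100, new_pa=0x380)
      print(hex(l2p.lookup(42)))   # 0x380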
  • Patent number: 11061572
    Abstract: Described are a method and processing apparatus to tag and track objects related to memory allocation calls. An application or software adds a tag to a memory allocation call to enable object level tracking. An entry is made into an object tracking table, which stores the tag and a variety of statistics related to the object and associated memory devices. The object statistics may be queried by the application to tune power/performance characteristics either by the application making runtime placement decisions, or by off-line code tuning based on a previous run. The application may add a tag to a memory allocation call to specify the type of memory characteristics requested based on the object statistics.
    Type: Grant
    Filed: April 22, 2016
    Date of Patent: July 13, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: David A. Roberts, Michael Ignatowski
  • Patent number: 11055232
    Abstract: A processor includes a translation lookaside buffer (TLB) to store a TLB entry, wherein the TLB entry comprises a first set of valid bits to identify if the TLB entry corresponds to a virtual address from a memory access request, wherein the valid bits are set based on a first page size associated with the TLB entry from a first set of different page sizes assigned to a first probe group; and a control circuit to probe the TLB for each page size of the first set of different page sizes assigned to the first probe group in a single probe cycle to determine if the TLB entry corresponds to the virtual address from the memory access request.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: July 6, 2021
    Assignee: Intel Corporation
    Inventors: David Pardo Keppel, Binh Pham
  • Patent number: 11042317
    Abstract: A memory system includes a memory device including a first memory block and a second memory block; and a controller suitable for controlling the memory device, wherein the controller includes a sequential index calculator suitable for calculating a sequential index based on first logical block address (LBA) information and second LBA information that are written in the first memory block; an internal operation determining component suitable for determining whether an internal operation is to be performed on the first memory block, by comparing the sequential index of the first memory block with a threshold value; and an internal operation performing component suitable for migrating pieces of LBA information stored in the first memory block to the second memory block to rearrange the pieces of LBA information, when it is determined that the internal operation is to be performed on the first memory block.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: June 22, 2021
    Assignee: SK hynix Inc.
    Inventor: Eu-Joon Byun
  • Patent number: 11030012
    Abstract: Methods, apparatus, systems, and articles of manufacture for allocating a workload to an accelerator using machine learning are disclosed. An example apparatus includes a workload attribute determiner to identify a first attribute of a first workload and a second attribute of a second workload. An accelerator selection processor causes at least a portion of the first workload to be executed by at least two accelerators, accesses respective performance metrics corresponding to execution of the first workload by the at least two accelerators, and selects a first accelerator of the at least two accelerators based on the performance metrics. A neural network trainer trains a machine learning model based on an association between the first accelerator and the first attribute of the first workload. A neural network processor processes, using the machine learning model, the second attribute to select one of the at least two accelerators to execute the second workload.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 8, 2021
    Assignee: Intel Corporation
    Inventors: Divya Vijayaraghavan, Denica Larsen, Kooi Chi Ooi, Lady Nataly Pinilla Pico, Min Suet Lim
  • Patent number: 11003584
    Abstract: A data processing system includes support for sub-page granular memory tags. The data processing system comprises at least one core, a memory controller responsive to the core, random access memory (RAM) responsive to the memory controller, and a memory protection module in the memory controller. The memory protection module enables the memory controller to use a memory tag value supplied as part of a memory address to protect data stored at a location that is based on a location value supplied as another part of the memory address. The data processing system also comprises an operating system (OS) which, when executed in the data processing system, manages swapping a page of data out of the RAM to non-volatile storage (NVS) by using a memory tag map (MTM) to apply memory tags to respective subpages within the page being swapped out. Other embodiments are described and claimed.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: May 11, 2021
    Assignee: Intel Corporation
    Inventors: Kai Cong, Karanvir Grewal, Siddhartha Chhabra, Sergej Deutsch, David Michael Durham
  • Patent number: 10997071
    Abstract: Devices and techniques for enhanced flush transfer efficiency in a storage device are described herein. A flush trigger for a user data write can be identified. Here, user data corresponds to the user data write and was stored in a buffer. The size of the user data stored in the buffer is smaller than a write width for a storage device subject to the write. The difference between the user data size in the buffer and the write width is the buffer free space. Additional data can be marshalled in response to the identification of the flush trigger. Here, the additional data size is less than or equal to the buffer free space. The user data and the additional data can then be written to the storage device.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: May 4, 2021
    Assignee: Micron Technology, Inc.
    Inventor: David Aaron Palmer
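    Illustrative sketch (not part of the patent text): a Python model of filling the buffer free space at flush time; the write width, the choice of additional data, and the 0xFF padding are assumptions for the example.
      WRITE_WIDTH = 32 * 1024    # assumed device write width

      def flush(user_data, candidate_extra_data):
          """On a flush trigger, compute the buffer free space (write width minus
          buffered user data), marshal whatever additional data fits into that
          space, and write everything as a single full-width transfer."""
          free_space = WRITE_WIDTH - len(user_data)
          extra = bytearray()
          for item in candidate_extra_data:               # e.g., deferred internal writes
              if len(extra) + len(item) <= free_space:
                  extra += item
          payload = user_data + bytes(extra)
          return payload + b"\xff" * (WRITE_WIDTH - len(payload))   # pad any remainder

      # usage
      page = flush(user_data=b"A" * 20_000,
                   candidate_extra_data=[b"B" * 8_000, b"C" * 8_000])
      print(len(page), page.count(b"B"), page.count(b"C"))   # 32768 8000 0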
  • Patent number: 10997019
    Abstract: The system receives a request to write a first piece of data to a non-volatile memory. The system encodes, based on an error correction code (ECC), the first piece of data to obtain a first ECC codeword which includes a plurality of ordered parts and a first parity. The system writes the plurality of ordered parts in multiple rows. The system writes the first parity to the same row in which the starting ordered part is written. The system updates, in a data structure, entries associated with the ordered parts. A respective entry indicates: a virtual address associated with a respective ordered part, a physical address at which the respective ordered part is written, and an index corresponding to a virtual address associated with a next ordered part. A first entry associated with the starting ordered part further indicates a physical address at which the first parity is written.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: May 4, 2021
    Assignee: Alibaba Group Holding Limited
    Inventor: Shu Li
  • Patent number: 10990538
    Abstract: A TLB receives an access request with respect to a first address and access authorization assigned to the request from an arithmetic operation control unit, translates the first address to a second address, determines the suitability of the access authorization, and outputs the access request with respect to the first address when the access authorization is not suitable. An MMU receives the access request with respect to the first address output from the TLB, translates the first address to the second address, determines the suitability of the access authorization, and outputs a notification of access prohibition to the arithmetic operation control unit when the access authorization is not suitable.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: April 27, 2021
    Assignee: FUJITSU LIMITED
    Inventor: Masakazu Tanomoto
  • Patent number: 10977192
    Abstract: Disclosed herein is an apparatus configured to log transactions of a translation lookaside buffer (TLB) into a software-accessible buffer. The apparatus includes a memory management unit (MMU) configured to translate a logical memory address to a physical memory address for accessing a physical memory. The apparatus also includes a TLB configured to store a plurality of entries, where each entry includes a logical memory page address and an associated physical memory page address. The apparatus further includes a software-accessible buffer and a TLB event logging circuit configured to detect an event associated with an entry of the TLB and store information regarding the detected event in the software-accessible buffer.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: April 13, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Nafea Bshara
  • Patent number: 10956341
    Abstract: An address translation facility is provided for multiple virtualization levels, where a guest virtual address may be translated to a guest non-virtual address, the guest non-virtual address corresponding without translation to a host virtual address, and the host virtual address may be translated to a host non-virtual address, where translation within a virtualization level may be specified as a sequence of accesses to address translation tables. The address translation facility may include a first translation engine and a second translation engine, where the first and second translation engines each have capacity to perform address translation within a single virtualization level of the multiple virtualization levels.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: March 23, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Uwe Brandt, Markus Helms, Christian Jacobi, Markus Kaltenbach, Thomas Koehler, Frank Lehnert
  • Patent number: 10956327
    Abstract: Disclosed embodiments relate to systems and methods structured to mitigate cache conflicts through hardware assisted redirection of pages. In one example, a processor includes a translation cache to store a physical to slice mapping in response to a cache conflict mitigation request corresponding to a page; and a cache controller to determine whether the translation cache comprises the physical to slice mapping; determine whether one of a plurality of slices in a translation table comprises the physical to slice mapping if the translation cache does not comprise the physical to slice mapping, the translation table communicably coupled to a non-volatile memory; and if the translation table does not comprise the physical to slice mapping, redirect the cache conflict mitigation request to the non-volatile memory; and allocate a new physical to slice mapping for the page to one of the plurality of slices in the translation table.
    Type: Grant
    Filed: June 29, 2019
    Date of Patent: March 23, 2021
    Assignee: Intel Corporation
    Inventors: Adithya Nallan Chakravarthi, Anant Vithal Nori, Jayesh Gaur, Sreenivas Subramoney
  • Patent number: 10936507
    Abstract: In one embodiment, an apparatus includes: a page table circuit to receive a virtual address and to generate at least a portion of a physical address therefrom; and a mapping rule table coupled to the page table circuit, the mapping rule table to receive mapping metadata of a page of a system memory and, based on the mapping metadata, output a mapping rule for the page. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: March 2, 2021
    Assignee: Intel Corporation
    Inventors: Vivek Kozhikkottu, Esha Choukse, Shankar Ganesh Ramasubramanian, Melin Dadual, Suresh Chittor
  • Patent number: 10915456
    Abstract: A method and an information handling system having a plurality of processors connected by a cross-processor network, where each of the plurality of processors preferably has a filter construct having an outgoing filter list that identifies logical partition identifications (LPIDs) that are exclusively assigned to that processor and/or an incoming filter list that identifies LPIDs on that processor and at least one additional processor in the system. In operation, if the LPID of the outgoing translation invalidation instruction is on the outgoing filter list, the address translation invalidation instruction is acknowledged on behalf of the system. If the LPID of the incoming invalidation instruction does not match any LPID on the incoming filter list, then the translation invalidation instruction is acknowledged, and if the LPID of the incoming invalidation instruction matches any LPID on the incoming filter list, then the invalidation instruction is sent into the respective processor.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Debapriya Chatterjee, Bryant Cockcroft, Larry Leitner, John A. Schumann, Karen Yokum
  • Patent number: 10915459
    Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform a lookup in either one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other one of the first TLB array and the second TLB array.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks
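    Illustrative sketch (not part of the patent text): a Python model of the concurrent two-page-size lookup described above, with dicts standing in for the two TLB arrays; the 4 KiB and 2 MiB page sizes are assumptions, since the abstract only requires two different sizes.
      SMALL_SHIFT = 12    # assumed 4 KiB pages
      LARGE_SHIFT = 21    # assumed 2 MiB pages

      def tlb_lookup(va, small_array, large_array):
          """One portion of the virtual address (the 4 KiB virtual page number)
          probes one TLB array while another portion (the 2 MiB virtual page
          number) probes the other, so both page sizes are checked concurrently."""
          hit_small = small_array.get(va >> SMALL_SHIFT)   # both probes issue in the same cycle
          hit_large = large_array.get(va >> LARGE_SHIFT)
          if hit_small is not None:
              return (hit_small << SMALL_SHIFT) | (va & ((1 << SMALL_SHIFT) - 1))
          if hit_large is not None:
              return (hit_large << LARGE_SHIFT) | (va & ((1 << LARGE_SHIFT) - 1))
          return None                                      # miss in both -> page table walk

      # usage
      small = {0x12345: 0x700}     # a 4 KiB translation
      large = {0x1F: 0xA}          # a 2 MiB translation covering VAs 0x3E00000-0x3FFFFFF
      print(hex(tlb_lookup(0x12345ABC, small, large)))     # 0x700abc
      print(hex(tlb_lookup(0x3E01234, small, large)))      # served by the large-page array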
  • Patent number: 10901632
    Abstract: The subject technology provides for managing a data storage system. A host write command to write host data associated with a logical address to a non-volatile memory is received. A first physical address in the non-volatile memory mapped to the logical address in an address mapping table is determined. An indicator of whether the first physical address is bad is checked. If the first physical address is indicated as bad, a valid count associated with a first set of physical addresses is maintained at its current value. The first set of physical addresses comprises the first physical address. If the first physical address is not indicated as bad, the first physical address is marked as invalid. The valid count associated with the first set of physical addresses is decremented.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: January 26, 2021
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Kai-Lung Cheng, Yun-Tzuo Lai, Eugene Lisitsyn, Jerry Lo, Subhash Balakrishna Pillai
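    Illustrative sketch (not part of the patent text): a Python model of the valid-count decision described above; the set size, the dict-based mapping table, and the bad-address set are assumptions for the example.
      def handle_host_write(lba, l2p_table, bad_addresses, valid_counts, set_size=64):
          """When a host write remaps a logical address, the previously mapped
          physical address would normally be marked invalid and the valid count
          of its set decremented; if that physical address is flagged bad, the
          valid count is simply kept at its current value."""
          old_pa = l2p_table.get(lba)
          if old_pa is None:
              return
          set_index = old_pa // set_size
          if old_pa not in bad_addresses:
              valid_counts[set_index] -= 1    # old copy now invalid (eligible for GC)
          # else: leave the valid count unchanged for the bad physical address

      # usage
      l2p = {10: 130}
      counts = {2: 40}                        # set 2 covers physical addresses 128-191
      handle_host_write(10, l2p, bad_addresses={130}, valid_counts=counts)
      print(counts)                           # {2: 40} -- unchanged, PA 130 is bad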
  • Patent number: 10860231
    Abstract: An operating method of a memory system which includes a controller including one or more processors and a memory device including a plurality of memory blocks, the operating method comprises receiving a first write command; checking whether there is available storage space in a zeroth map segment, using a location of first logical block address (LBA) information written to the zeroth map segment; determining a pattern of the first LBA information and second LBA information corresponding to a first write command when there is no storage space in the zeroth map segment; increasing a sequential count for the second LBA information when the pattern of the first and second LBA information is determined to be a sequential pattern; and performing a map updating operation on a memory block of the memory device by variably adjusting a size of the zeroth map segment based on one or more pieces of LBA information.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: December 8, 2020
    Assignee: SK hynix Inc.
    Inventor: Eu-Joon Byun
  • Patent number: 10853262
    Abstract: Memory address translation apparatus comprises page table access circuitry to access a page table to retrieve translation data; a translation data buffer to store one or more instances of the translation data, comprising: an array of storage locations arranged in rows and columns; a row buffer comprising a plurality of entries and comparison circuitry responsive to a key value dependent upon at least the initial memory address, to compare the key value with information stored in each of at least one key entry and an associated value entry for storing at least a representation of a corresponding output memory address, and to identify which of the at least one key entry, if any, is a matching key entry storing information matching the key value; and output circuitry to output, when there is a matching key entry, at least the representation of the output memory address.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: December 1, 2020
    Assignee: ARM Limited
    Inventors: Nikos Nikoleris, Andreas Lars Sandberg, Prakash S. Ramrakhyani, Stephan Diestelhorst
  • Patent number: 10848458
    Abstract: A method including providing: a switching device including a main mapping unit configured to provide a main mapping which maps virtual addresses to direct addresses; management logic configured to store a connection tracking table stored in memory and configured for storing a plurality of connection mappings each including a virtual-to-direct mapping from a virtual address to a direct address; and a migrated connection table stored in memory and configured for storing a plurality of migrated connection mappings each including a virtual-to-migrated-direct mapping from a virtual address to a migrated direct address.
    Type: Grant
    Filed: November 18, 2018
    Date of Patent: November 24, 2020
    Assignee: MELLANOX TECHNOLOGIES TLV LTD.
    Inventors: Alan Lo, Matty Kadosh, Otniel Van Handel, Yonatan Piasetzky, Marian Pritsak, Omer Shabtai
  • Patent number: 10810082
    Abstract: A system and method for improved redundancy in storage devices are disclosed. The method includes receiving a first data block for writing to a storage device; writing the first block to a journal connected with the storage device; associating a first logical address of a group of logical addresses of the journal with a first physical address of the storage; associating the first physical address to an additional second logical address of the storage device, the second logical address not of the group of logical addresses of the journal; and disassociating the first physical address from the first logical address, in response to associating the first physical address with the additional second logical address.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: October 20, 2020
    Assignee: Excelero Storage Ltd.
    Inventors: Yaniv Romem, Ofer Oshri, Omri Mann, Kirill Shoikhet, Daniel Herman Shmulyan, James Jackson
  • Patent number: 10810144
    Abstract: A method includes: providing a DDR interface between a host memory controller and a memory module; and providing a message interface between the host memory controller and the memory module. The memory module includes a non-volatile memory and a DRAM configured as a DRAM cache of the non-volatile memory. Data stored in the non-volatile memory of the memory module is asynchronously accessible by a non-volatile memory controller of the memory module, and data stored in the DRAM cache is directly and synchronously accessible by the host memory controller.
    Type: Grant
    Filed: October 4, 2016
    Date of Patent: October 20, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sun Young Lim, Mu-Tien Chang, Dimin Niu, Hongzhong Zheng, Indong Kim
  • Patent number: 10802936
    Abstract: In one example a system includes a memory, and at least one memory controller to: detect a failed first memory location of the memory, remap the failed first location of the memory to a spare second location of the memory based on a pointer stored at the failed first memory location, and wear-level the memory. To wear-level the memory, the memory controller may copy data from the spare second location of the memory to a third location of the memory, and keep the pointer in the failed first memory location.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: October 13, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Gregg B Lesartre, Ryan Akkerman, Joseph F Orth
  • Patent number: 10783083
    Abstract: A cache memory is organized into a plurality of ways and a plurality of address lines. In response to a miss, the cache memory selects a way of the plurality of ways based on a first control variable indicating a way of the plurality of ways and a set of second control variables associated with the address line and with respective ways. Data associated with the miss is written to the selected way. Second control variables associated with other ways are reset if all of the second control variables indicate the associated way was recently replaced. The second control variable associated with the selected way is set to indicate the selected way was recently replaced. The first control variable is set to indicate the selected way. Current values of the first control variable and of the set of second control variables are maintained in the event of a hit.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: September 22, 2020
    Assignee: STMICROELECTRONICS (BEIJING) RESEARCH & DEVELOPMENT CO. LTD
    Inventor: Xiao Kang Jiao
  • Patent number: 10782918
    Abstract: Methods, systems, and devices for near-memory data-dependent gathering and packing of data stored in a memory. A processing device extracts a function, a memory source address, and a memory destination address from a near-memory data-dependent gathering and packing primitive. A signal to perform gathering and packing operations based on the primitive is sent to near-memory processing circuitry of a memory device. The near-memory processing circuitry receives the signal, gathers data from the memory device based on the function and the memory source address, and packs the gathered data into the memory device based on the memory destination address.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: September 22, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Shaizeen Aga, Nuwan Jayasena
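    Illustrative sketch (not part of the patent text): a Python model of the gather-and-pack operation described above, with a dict standing in for memory and a predicate standing in for the extracted function; the parameter list (source, destination, count, stride) is an assumption for the example.
      def near_memory_gather_pack(memory, predicate, src, dst, count, stride=1):
          """Logic beside the memory reads `count` elements starting at `src`,
          keeps those selected by `predicate` (standing in for the extracted
          function), and packs them contiguously starting at `dst`."""
          gathered = [memory[src + i * stride]
                      for i in range(count) if predicate(memory[src + i * stride])]
          for offset, value in enumerate(gathered):
              memory[dst + offset] = value
          return len(gathered)                # number of packed elements

      # usage: pack the non-zero elements of a sparse region into a dense one
      memory = {100 + i: v for i, v in enumerate([0, 7, 0, 0, 3, 9, 0, 2])}
      n = near_memory_gather_pack(memory, predicate=lambda x: x != 0,
                                  src=100, dst=200, count=8)
      print(n, [memory[200 + i] for i in range(n)])   # 4 [7, 3, 9, 2]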