Directory Tables (e.g., DLAT, TLB) Patents (Class 711/207)
  • Patent number: 12235795
    Abstract: In some examples, a system receives workload information of a workload collection, and applies a machine learning model on the workload information, the machine learning model trained using training information including features of different types of workloads. The system produces, by the machine learning model, an identification of a first file system from among different types of file systems, the machine learning model producing an output value corresponding to the first file system that is a candidate for use in storing files of the workload collection.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: February 25, 2025
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sagar Venkappa Nyamagouda, Smitha Jayaram, Hiro Rameshlal Lalwani, Rachit Gupta, Sherine Jacob, Anand Andaneppa Ganjihal
  • Patent number: 12222857
    Abstract: According to one embodiment, a memory system includes a non-volatile memory and a data map configured to manage validity of data written in the non-volatile memory. The data map includes a plurality of first fragment tables corresponding to a first hierarchy and a second fragment table corresponding to a second hierarchy higher than the first hierarchy. Each of the first fragment tables is used to manage the validity of each data having a predetermined size written in a range of physical address in the non-volatile memory allocated to the first fragment table. The second fragment table is used for each of the first fragment tables to manage reference destination information for referencing the first fragment table.
    Type: Grant
    Filed: August 29, 2023
    Date of Patent: February 11, 2025
    Assignee: KIOXIA CORPORATION
    Inventors: Yuki Sasaki, Shinichi Kanno, Takahiro Kurita
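The two-hierarchy data map in the abstract above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; all names, the fragment size, and the unit size are assumptions.

```python
FRAGMENT_UNITS = 8   # data units covered by one first-level fragment table (assumed)
UNIT_SIZE = 4096     # bytes per managed data unit (assumed)

class FragmentTable:
    """First hierarchy: validity bitmap for one physical-address range."""
    def __init__(self, base_addr):
        self.base_addr = base_addr
        self.valid = [False] * FRAGMENT_UNITS

    def set_valid(self, phys_addr, valid):
        idx = (phys_addr - self.base_addr) // UNIT_SIZE
        self.valid[idx] = valid

class DataMap:
    """Second hierarchy: holds the reference destination for each first-level table."""
    def __init__(self, num_fragments):
        self.second_level = [
            FragmentTable(i * FRAGMENT_UNITS * UNIT_SIZE)
            for i in range(num_fragments)
        ]

    def _fragment_for(self, phys_addr):
        # The second-level table references the first-level table that
        # covers this physical address range.
        return self.second_level[phys_addr // (FRAGMENT_UNITS * UNIT_SIZE)]

    def mark(self, phys_addr, valid=True):
        self._fragment_for(phys_addr).set_valid(phys_addr, valid)

    def is_valid(self, phys_addr):
        frag = self._fragment_for(phys_addr)
        return frag.valid[(phys_addr - frag.base_addr) // UNIT_SIZE]
```

The point of the split is that validity bookkeeping stays local to small first-level tables, while the second-level table only stores how to reach each of them.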
  • Patent number: 12182576
    Abstract: A processor core includes a storage device which stores a composite very large instruction word (VLIW) instruction, an instruction unit which obtains the composite VLIW instruction from the storage device and decodes the composite VLIW instruction to determine an operation to perform, and a composite VLIW instruction execution unit which executes the composite VLIW instruction to perform the operation.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: December 31, 2024
    Assignee: International Business Machines Corporation
    Inventors: Bruce M. Fleischer, Thomas Winters Fox, Arpith C. Jacob, Hans Mikael Jacobson, Ravi Nair, Kevin John Patrick O'Brien, Daniel Arthur Prener
  • Patent number: 12174739
    Abstract: Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
    Type: Grant
    Filed: December 21, 2023
    Date of Patent: December 24, 2024
    Assignee: Intel Corporation
    Inventors: Yao Zu Dong, Kun Tian, Fengguang Wu, Jingqi Liu
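The access-count-driven migration described above can be modeled in a few lines. This sketch (all names are illustrative; the VM pause and the actual page copy are elided) picks the guest page with the most page-table-entry accesses in the slower byte-addressable memory and moves it to the faster tier.

```python
SLOW, FAST = "slow", "fast"

class Migrator:
    def __init__(self):
        self.tier = {}      # guest page -> memory tier
        self.accesses = {}  # guest page -> PTE access count

    def track(self, page, tier):
        self.tier[page] = tier
        self.accesses[page] = 0

    def touch(self, page):
        # One access to the page-table entry for this guest page.
        self.accesses[page] += 1

    def migrate_hottest(self):
        """Pause the VM (elided), then move the most-accessed slow page."""
        slow_pages = [p for p, t in self.tier.items() if t == SLOW]
        if not slow_pages:
            return None
        hottest = max(slow_pages, key=lambda p: self.accesses[p])
        self.tier[hottest] = FAST   # copy to a target page in fast memory
        return hottest
```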
  • Patent number: 12141075
    Abstract: In one example of the present technology, an input/output memory management unit (IOMMU) of a computing device is configured to: receive a prefetch message including a virtual address from a central processing unit (CPU) core of a processor of the computing device; perform a page walk on the virtual address through a page table stored in a main memory of the computing device to obtain a prefetched translation of the virtual address to a physical address; and store the prefetched translation of the virtual address to the physical address in a translation lookaside buffer (TLB) of the IOMMU.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: November 12, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ramakrishna Huggahalli, Shachar Raindel
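The prefetch flow in the abstract above reduces to: receive a virtual address, walk the in-memory page table, and install the result in the IOMMU TLB ahead of use. A minimal sketch, with assumed names and a flat one-level page table standing in for a real multi-level walk:

```python
PAGE = 4096

class Iommu:
    def __init__(self, page_table):
        self.page_table = page_table  # VPN -> PFN; the page table in main memory
        self.tlb = {}

    def prefetch(self, vaddr):
        """Handle a prefetch message from a CPU core."""
        vpn = vaddr // PAGE
        pfn = self.page_table[vpn]    # page walk through main memory
        self.tlb[vpn] = pfn           # store the prefetched translation in the TLB

    def translate(self, vaddr):
        vpn, off = divmod(vaddr, PAGE)
        if vpn in self.tlb:           # hit: no page walk needed at access time
            return self.tlb[vpn] * PAGE + off
        raise KeyError("TLB miss")
```

The benefit is that the page walk's latency is paid at prefetch time rather than on the critical path of a device access.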
  • Patent number: 12131063
    Abstract: Methods and apparatus offload tiered memories management. The method includes obtaining a pointer to a stored memory management structure associated with tiered memories, where the memory management structure includes a plurality of memory management entries and each memory management entry of the plurality of memory management entries includes information for a memory section in one of the tiered memories. In some instances, the method includes scanning at least a part of the plurality of memory management entries. In certain instances, the method includes generating a memory profile list, where the memory profile list includes a plurality of profile entries and each profile entry of the plurality of profile entries corresponding to a scanned memory management entry in the memory management structure.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: October 29, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Kevin M. Lepak
  • Patent number: 12130738
    Abstract: An embodiment of an integrated circuit may comprise, coupled to a core, a hardware decompression accelerator, a compressed cache, a processor and communicatively coupled to the hardware decompression accelerator and the compressed cache, and memory and communicatively coupled to the processor, wherein the memory stores microcode instructions which when executed by the processor causes the processor to store a first address to a decompression work descriptor, retrieve a second address where a compressed page is stored in the compressed cache from the decompression work descriptor at the first address in response to an indication of a page fault, and send instructions to the hardware decompression accelerator to decompress the compressed page at the second address. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: October 29, 2024
    Assignee: Intel Corporation
    Inventors: Vedvyas Shanbhogue, Jayesh Gaur, Wajdi K. Feghali, Vinodh Gopal, Utkarsh Kakaiya
  • Patent number: 12124381
    Abstract: A processing system includes a hardware translation lookaside buffer (TLB) retry loop that retries virtual memory address to physical memory address translation requests from a software client independent of a command from the software client. In response to a retry response notification at the TLB, a controller of the TLB waits for a programmable delay period and then retries the request without involvement from the software client. After a retry results in a hit at the TLB, the controller notifies the software client of the hit. Alternatively, if a retry results in an error at the TLB, the controller notifies the software client of the error and the software client initiates error handling.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: October 22, 2024
    Assignee: ATI Technologies ULC
    Inventor: Edwin Pang
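The hardware retry loop above can be sketched as a function that hides retries from the software client: on a retry response it waits a programmable delay and tries again on its own, surfacing only the final hit or error. Names and the response encoding are assumptions for illustration.

```python
import time

def translate_with_retry(lookup, vaddr, delay_s=0.001, max_retries=8):
    """lookup(vaddr) returns ('hit', paddr), ('retry', None), or ('error', None)."""
    for _ in range(max_retries + 1):
        status, paddr = lookup(vaddr)
        if status == "hit":
            return paddr              # notify the software client of the hit
        if status == "error":
            # The software client initiates error handling.
            raise RuntimeError("translation error")
        time.sleep(delay_s)           # programmable delay before retrying
    raise RuntimeError("retries exhausted")
```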
  • Patent number: 12124709
    Abstract: The present application discloses a computing system and an associated method. The computing system includes a first host, a second host, a first memory extension device and a second memory extension device. The first host includes a first memory, and the second host includes a second memory. The first host has a plurality of first memory addresses corresponding to a plurality of memory spaces of the first memory, and a plurality of second memory addresses corresponding to a plurality of memory spaces of the second memory. The first memory extension device is coupled to the first host. The second memory extension device is coupled to the second host and the first memory extension device. The first host accesses the plurality of memory spaces of the second memory through the first memory extension device and the second memory extension device.
    Type: Grant
    Filed: December 12, 2022
    Date of Patent: October 22, 2024
    Assignee: ALIBABA (CHINA) CO., LTD.
    Inventors: Tianchan Guan, Yijin Guan, Dimin Niu, Hongzhong Zheng
  • Patent number: 12105634
    Abstract: A processing system includes a translation lookaside buffer (TLB). The TLB includes a plurality of TLB entries that are configured to store requested page size indications. The TLB is configured to be indexed via the requested page size indications such that a plurality of TLB requests that each indicate a same virtual address, but different respective requested page sizes are allocated respective TLB entries. As a result, in response to a TLB request that indicates a requested page size and has a virtual address that corresponds to multiple TLB entries, only a single TLB entry is identified as a TLB hit.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: October 1, 2024
    Assignee: ATI TECHNOLOGIES ULC
    Inventors: Edwin Pang, Jimshed Mirza
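Indexing by requested page size, as the abstract above describes, means the same virtual address can coexist in the TLB under different page sizes without ambiguity: a lookup supplies both the address and the size, so at most one entry can hit. A minimal sketch with assumed names:

```python
class SizeIndexedTlb:
    def __init__(self):
        # Key includes the requested page size, so requests for the same
        # virtual address with different sizes get distinct entries.
        self.entries = {}  # (page-aligned vaddr, page_size) -> physical base

    def fill(self, vaddr, page_size, paddr):
        self.entries[(vaddr & ~(page_size - 1), page_size)] = paddr

    def lookup(self, vaddr, page_size):
        key = (vaddr & ~(page_size - 1), page_size)
        if key in self.entries:
            return self.entries[key] + (vaddr & (page_size - 1))
        return None  # miss
```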
  • Patent number: 12093186
    Abstract: Process dedicated in-memory translation lookaside buffers (TLBs) (mTLBs) for augmenting a memory management unit (MMU) TLB for translating virtual addresses (VAs) to physical addresses (PA) in a processor-based system is disclosed. In disclosed examples, a dedicated in-memory TLB is supported in system memory for each process so that one process's cached page table entries do not displace another process's cached page table entries. When a process is scheduled to execute in a central processing unit (CPU), the in-memory TLB address stored for such process can be used by page table walker circuit in the CPU MMU to access the dedicated in-memory TLB for executing the process to perform VA to PA translations in the event of a TLB miss to the MMU TLB. If a TLB miss occurs to the in-memory TLB, the page table walker circuit can walk the page table in the MMU.
    Type: Grant
    Filed: September 27, 2023
    Date of Patent: September 17, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Madhavan Thirukkurungudi Venkataraman, Thomas Philip Speier
  • Patent number: 12056045
    Abstract: Aspects of a data storage device are provided that optimize utilization of a scratchpad memory. The data storage device includes an NVM and a controller which allocates a memory location of the NVM as scratchpad memory for a host. The controller receives a command including data from a submission queue associated with the scratchpad memory, stores the data in the scratchpad memory, and disables first updates to the L2P mapping table for the data in the scratchpad memory across power cycles. The controller also receives commands from other submission queues for other memory locations than the scratchpad memory, stores data in the other memory locations, and stores second updates to a L2P mapping table. The first and second updates may include different data lengths. Thus, the device accounts for differences between scratchpad memory and NVM in at least data alignment, L2P granularity, and response, resulting in efficient scratchpad memory management.
    Type: Grant
    Filed: June 15, 2022
    Date of Patent: August 6, 2024
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Ramanathan Muthiah
  • Patent number: 12045660
    Abstract: Method and apparatus for process accelerator (PA) using configurable hardware accelerators is provided. The PA can include a plurality of processing elements (PEs). The PEs of the PA can be used to accelerate a process and/or one or more threads. PEs can include PE local memory which due to the memories' close physical proximity to the PE can result in reduced energy consumption. The plurality of PEs can be daisy-chain connected or DMA mode can be used to write the result of a PE directly into the PE local memory of another PE for further processing.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: July 23, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Chang Hoon Lee, Paul Alepin, John Edward Vincent
  • Patent number: 12047201
    Abstract: A binding and configuration method for a bus adapter and a channel, a bus adapter, and a connection system are provided. The method includes: configuring a mapping table of a mapping manager; associating a logical channel with a corresponding hardware channel based on the mapping table; and connecting the logical channel to the corresponding hardware channel for data communication. A common architecture for the application program to access bus adapter resources is realized. The application programs using this architecture can arbitrarily configure the bus adapter model and the hardware channel that need to be connected, and the mapping relationship takes effect immediately after each configuration change without modifying the user's software.
    Type: Grant
    Filed: July 18, 2023
    Date of Patent: July 23, 2024
    Assignee: Shanghai TOSUN Technology Ltd.
    Inventors: Chu Liu, Yueyin Xie, Mang Mo
  • Patent number: 12047200
    Abstract: A configuration method for a mapping table includes: configuring an application program name, a channel type, a logical channel index, a hardware type, a hardware index and a hardware channel index in the mapping table. The application program name is a string name of an application program using the mapping manager; the channel type is a type of the logical channel that the mapping manager is responsible for mapping; the logical channel index is a logical channel resource number of the application program; the hardware type is a type of a manufacturer of a corresponding bus adapter and a model of the bus adapter; the hardware index is a number distinguishing hardware of the same type after the corresponding bus adapter is inserted; and the hardware channel index is a number of hardware channels of the same type in the corresponding bus adapter.
    Type: Grant
    Filed: July 18, 2023
    Date of Patent: July 23, 2024
    Assignee: Shanghai TOSUN Technology Ltd.
    Inventors: Chu Liu, Yueyin Xie, Mang Mo
  • Patent number: 12046319
    Abstract: A redundancy managing method and apparatus for semiconductor memories is disclosed. The redundancy managing method for semiconductor memories utilizes bitmap type storage by defining an appropriate storage space according to the type of a fault.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: July 23, 2024
    Assignees: SK hynix Inc., Korea University Research and Business Foundation
    Inventors: Jong Sun Park, Kwan Ho Bae, Jun Hyun Song
  • Patent number: 12001370
    Abstract: A device in an interconnect network is provided. The device comprises an end point processor comprising end point memory and an interconnect network link in communication with an interconnect network switch. The device is configured to issue, by the end point processor, a request to send data from the end point memory to other end point memory of another end point processor of another device in the interconnect network and provide, to the interconnect network switch, the request using memory addresses from a global memory address map which comprises a first global memory address range for the end point processor and a second global memory address range for the other end point processor.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: June 4, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Brock A. Taylor
  • Patent number: 11940912
    Abstract: A logical-to-physical (L2P) table is maintained, wherein a plurality of sections of the L2P table is cached in a volatile memory device. A total dirty count for the L2P table is maintained, wherein the total dirty count reflects a total number of updates to the L2P table. Respective section dirty counts for the plurality of sections are maintained, wherein each respective section dirty count reflects a total number of updates to a corresponding section. It is determined that the total dirty count for the L2P table satisfies a threshold criterion. In response to determining that the total dirty count for the L2P table satisfies the threshold criterion, a first section of the plurality of sections is identified based on the respective section dirty counts. The first section of the L2P table is written to a non-volatile memory device.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: March 26, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Byron Harris, Daniel Boals, Abedon Madril
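The dirty-count bookkeeping above can be sketched directly: every L2P update bumps a total counter and the counter of the section it lands in, and once the total crosses the threshold the dirtiest cached section is written back to non-volatile memory. All names and the threshold value are illustrative assumptions.

```python
class L2pCache:
    def __init__(self, num_sections, threshold):
        self.threshold = threshold
        self.total_dirty = 0
        self.section_dirty = [0] * num_sections
        self.flushed = []   # order in which sections were written to NVM

    def update(self, section):
        self.total_dirty += 1
        self.section_dirty[section] += 1
        if self.total_dirty >= self.threshold:
            self._flush_dirtiest()

    def _flush_dirtiest(self):
        # Identify the section with the highest dirty count and write it back.
        victim = max(range(len(self.section_dirty)),
                     key=self.section_dirty.__getitem__)
        self.flushed.append(victim)
        self.total_dirty -= self.section_dirty[victim]
        self.section_dirty[victim] = 0
```

Flushing the dirtiest section first retires the most updates per write to the non-volatile device.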
  • Patent number: 11941252
    Abstract: Provided are methods, apparatuses, systems, and computer-readable storage media for reducing an open time of a solid-state drive (SSD). In an embodiment, a method includes dividing a logical-to-physical (L2P) address mapping table of the SSD into a plurality of segments. The method further includes assigning one journal buffer of a plurality of journal buffers to each segment of the plurality of segments. The method further includes recreating, during a power on sequence of the SSD, a portion of the plurality of segments by replaying a first subset of the plurality of journal buffers. The method further includes sending, to a host device, a device-ready signal upon successful recreation of the portion of the plurality of segments. The method further includes recreating, in a background mode, a remaining portion of the plurality of segments by replaying a second subset of the plurality of journal buffers.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: March 26, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Tushar Tukaram Patil, Anantha Sharma, Sharath Kumar Kodase, Suman Prakash Balakrishnan
  • Patent number: 11934320
    Abstract: A type of translation lookaside buffer (TLB) invalidation instruction is described which specifically targets a first type of TLB which stores combined stage-1-and-2 entries which depend on both stage 1 translation data and the stage 2 translation data, and which is configured to ignore a TLB invalidation command which invalidates based on a first set of one or more invalidation conditions including an address-based invalidation condition depending on matching of intermediate address. A second type of TLB other than the first type ignores the invalidation command triggered by the first type of TLB invalidation instruction. This approach helps to limit the performance impact of stage 2 invalidations in systems supporting a combined stage-1-and-2 TLB which cannot invalidate by intermediate address.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: March 19, 2024
    Assignee: Arm Limited
    Inventor: Andrew Brookfield Swaine
  • Patent number: 11929927
    Abstract: A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: March 12, 2024
    Assignee: Intel Corporation
    Inventors: Pratik M. Marolia, Rajesh M. Sankaran, Ashok Raj, Nrupal Jani, Parthasarathy Sarangam, Robert O. Sharp
  • Patent number: 11914865
    Abstract: A method and system are provided for limiting unnecessary data traffic on the data busses connecting the various levels of system memory. Some embodiments may include processing an invalidation command associated with a system or network operation requiring temporary storage of data in a local memory area. The invalidation command may comprise a memory location indicator capable of identifying the physical addresses of the associated data in the local memory area. Some embodiments may preclude the data associated with the system or network operation from being written to a main memory by invalidating the memory locations holding the temporary data once the system or network operation has finished utilizing the local memory area.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: February 27, 2024
    Assignee: Mellanox Technologies, Ltd.
    Inventors: Yamin Friedman, Idan Burstein, Hillel Chapman, Gal Yefet
  • Patent number: 11907301
    Abstract: A control table (22) defines information for controlling a processing component (20) to perform an operation. The table (22) comprises entries each corresponding to a variable size region defined by a first limit address and one of a second limit address and size. A binary search procedure is provided for looking up the table, comprising a number of search window narrowing steps, each narrowing a current search window of candidate entries to a narrower search window comprising fewer entries, based on a comparison of a query address against the first limit address of a selected candidate entry of the current search window. The comparison is independent of the second limit address or size of the selected candidate entry. After the search window is narrowed to a single entry, the query address is compared with the second limit address or size of that single entry.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: February 20, 2024
    Assignee: Arm Limited
    Inventors: Thomas Christopher Grocutt, François Christopher Jacques Botman
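The search procedure above has one distinctive property: every narrowing step compares the query address only against an entry's first limit (base) address, and the second limit or size is consulted once, after the window has shrunk to a single candidate. A sketch under assumed entry layout (base, size), sorted by base:

```python
def lookup(table, addr):
    """table: list of (base, size) tuples sorted by base. Returns index or None."""
    lo, hi = 0, len(table) - 1
    while lo < hi:
        # Narrow the window using the first limit address only; the size of
        # the selected candidate entry plays no part in this comparison.
        mid = (lo + hi + 1) // 2
        if addr >= table[mid][0]:
            lo = mid
        else:
            hi = mid - 1
    # Single remaining entry: only now compare against its second limit.
    base, size = table[lo]
    return lo if base <= addr < base + size else None
```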
  • Patent number: 11886450
    Abstract: A statistical data processing device includes: a first statistical image generation unit for generating statistical images including a first statistical image representing a first statistical value as a corresponding pixel value, and a second statistical image representing a second statistical value as a corresponding pixel value; a mask generation unit for generating a mask image, the mask image extracting, if one of a pixel of a first statistical image and a corresponding pixel of a second statistical image does not have a pixel value indicating a statistical value, a pixel not having a pixel value indicating the statistical value or other pixel; and a second statistical image generation unit for generating a third statistical image in which a pixel value of a pixel not having a pixel value indicating the statistical value is complemented with a pixel value of the other statistical image.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: January 30, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Xiaojun Wu, Masaki Kitahara, Atsushi Shimizu
  • Patent number: 11853225
    Abstract: A method includes receiving, by a memory management unit (MMU) comprising a translation lookaside buffer (TLB) and a configuration register, a request from a processor core to directly modify an entry in the TLB. The method also includes, responsive to the configuration register having a first value, operating the MMU in a software-managed mode by modifying the entry in the TLB according to the request. The method further includes, responsive to the configuration register having a second value, operating the MMU in a hardware-managed mode by denying the request.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: December 26, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy D. Anderson, Joseph Raymond Michael Zbiciak, Kai Chirca, Daniel Brad Wu
  • Patent number: 11847064
    Abstract: A method and system of translating addresses is disclosed that includes receiving an effective address for translation, providing a processor and a translation buffer where the translation buffer has a plurality of entries, wherein each entry contains a mapping of an effective address directly to a corresponding real address, and information on a corresponding intermediate virtual address. The method and system further include determining whether the translation buffer has an entry matching the effective address, and in response to the translation buffer having an entry with a matching effective address, providing the real address translation from the entry having the matching effective address.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: December 19, 2023
    Assignee: International Business Machines Corporation
    Inventor: David Campbell
  • Patent number: 11838403
    Abstract: The present techniques may provide improved processing and functionality of performance of the 128-bit AES Algorithm, which may provide improved power consumption. For example, in an embodiment, an encryption and decryption apparatus may comprise memory storing a current state matrix of an encryption or decryption process and a plurality of multiplexers configured to receive from the memory current elements of the state matrix stored in the memory, perform a cyclic shift on the received elements of the state matrix, and transmit the shifted elements to the memory for storage as a new state matrix.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: December 5, 2023
    Assignee: BOARD OF REGENTS, THE UNIVERSITY OF TEXAS SYSTEM
    Inventors: Alekhya Muthineni, Eugene John
  • Patent number: 11775422
    Abstract: Methods, systems, and devices for logic remapping techniques are described. A memory system may receive a write command to store information at a first logical address of the memory system. The memory system may generate a first entry of a logical-to-physical mapping that maps the first logical address with a first physical address that stores the information. The memory system may perform a defragmentation operation or other remapping operation. In such a defragmentation operation, the memory system may remap the first logical address to a second logical address, such that the second logical address is mapped to the first physical address. The memory system may generate a second entry of a logical-to-logical mapping that maps the first logical address with the second logical address.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: October 3, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Jonathan S. Parry, David Aaron Palmer, Giuseppe Cariello
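The two mapping layers above can be sketched as an L2P table plus an L2L redirection table populated during defragmentation: the data stays at the same physical address, and the old logical address is redirected to the new one. Names are illustrative, not from the patent.

```python
class RemappingFtl:
    def __init__(self):
        self.l2p = {}   # logical -> physical
        self.l2l = {}   # old logical -> new logical (defragmentation entries)

    def write(self, logical, physical):
        # First entry: logical-to-physical mapping for the written data.
        self.l2p[logical] = physical

    def remap(self, old_logical, new_logical):
        """Defragmentation: move the mapping, leave a logical-to-logical entry."""
        self.l2p[new_logical] = self.l2p.pop(old_logical)
        self.l2l[old_logical] = new_logical

    def resolve(self, logical):
        logical = self.l2l.get(logical, logical)  # follow one L2L hop, if any
        return self.l2p[logical]
```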
  • Patent number: 11757677
    Abstract: A binding and configuration method for a bus adapter and a channel, a mapping manager, and a connection system are provided. The binding and configuration method for the bus adapter and the channel includes: configuring a mapping table of a mapping manager; associating a logical channel with a corresponding hardware channel based on the mapping table; and connecting the logical channel to the corresponding hardware channel for data communication. A common architecture for the application program to access bus adapter resources is realized. The application programs using this architecture can arbitrarily configure the bus adapter model and the hardware channel that need to be connected, and the mapping relationship takes effect immediately after each configuration change without modifying the user's software, thus improving the efficiency of the application program development and reducing the possibility of errors.
    Type: Grant
    Filed: August 7, 2022
    Date of Patent: September 12, 2023
    Assignee: Shanghai TOSUN Technology Ltd.
    Inventors: Chu Liu, Yueyin Xie, Mang Mo
  • Patent number: 11733904
    Abstract: Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to a hardware-based processing node of an object memory fabric.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: August 22, 2023
    Assignee: Ultrata, LLC
    Inventors: Steven J. Frank, Larry Reback
  • Patent number: 11709782
    Abstract: Circuitry comprises a translation lookaside buffer to store memory address translations, each memory address translation being between an input memory address range defining a contiguous range of one or more input memory addresses in an input memory address space and a translated output memory address range defining a contiguous range of one or more output memory addresses in an output memory address space; in which the translation lookaside buffer is configured selectively to store the memory address translations as a cluster of memory address translations, a cluster defining memory address translations in respect of a contiguous set of input memory address ranges by encoding one or more memory address offsets relative to a respective base memory address; memory management circuitry to retrieve data representing memory address translations from a memory, for storage by the translation lookaside buffer, when a required memory address translation is not stored by the translation lookaside buffer; detector circ
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: July 25, 2023
    Assignee: Arm Limited
    Inventors: Paolo Monti, Abdel Hadi Moustafa, Albin Pierrick Tonnerre, Vincenzo Consales, Abhishek Raja
  • Patent number: 11693790
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate write miss caching in cache system are disclosed. An example apparatus includes a first cache storage; a second cache storage, wherein the second cache storage includes a first portion operable to store a first set of data evicted from the first cache storage and a second portion; a cache controller coupled to the first cache storage and the second cache storage and operable to: receive a write operation; determine that the write operation produces a miss in the first cache storage; and in response to the miss in the first cache storage, provide write miss information associated with the write operation to the second cache storage for storing in the second portion.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: July 4, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Patent number: 11656999
    Abstract: An electronic device may include a processor, a first volatile memory, and a storage including a nonvolatile memory and a second volatile memory. The processor may be configured to: identify information of a specific file and a kind of a request for data included in the specific file in response to a creation of the request for the data, set a flag in the request based on the identified information of the specific file, identify whether mapping information of a specific region including a logical address of the data among mapping information in which logical addresses and physical addresses for the nonvolatile memory are mapped onto each other is stored in the first volatile memory, determine whether to manage the mapping information of the specific region using the first volatile memory, and determine whether to update the mapping information of the specific region in the first volatile memory.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: May 23, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Manjong Lee, Hyeongjun Kim, Changheun Lee, Jintae Jang
  • Patent number: 11625323
    Abstract: Methods, systems, and devices for session-based memory operation are described. A memory system may determine that a logical address targeted by a read command is associated with a session table. The memory system may write the session table to a cache based on the logical address being associated with the session table. After writing the session table to the cache, the memory system may use the session table to determine one or more logical-to-physical (L2P) tables and write the one or more L2P tables to the cache. The memory system may use the L2P tables to perform address translation for logical addresses.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: April 11, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Sharath Chandra Ambula, Sushil Kumar, David Aaron Palmer, Venkata Kiran Kumar Matturi, Sri Ramya Pinisetty
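    The session-table flow described above can be sketched in a few lines — a session table groups the L2P tables needed for a related set of reads, so caching it lets the L2P tables be fetched before they are individually demanded. All table contents and names below are illustrative:

    ```python
    # Toy model: a session table associates a logical-address range with
    # the L2P tables needed to translate reads within that range.
    SESSION_TABLES = {
        "boot": {"lba_range": range(0, 100), "l2p_ids": [0, 1]},
    }
    L2P_TABLES = {0: {lba: 0x1000 + lba for lba in range(50)},
                  1: {lba: 0x2000 + lba for lba in range(50, 100)}}

    cache = {"sessions": {}, "l2p": {}}

    def read(lba):
        # 1. If the LBA belongs to a session, cache the session table.
        for name, sess in SESSION_TABLES.items():
            if lba in sess["lba_range"] and name not in cache["sessions"]:
                cache["sessions"][name] = sess
                # 2. Use it to pull the associated L2P tables into cache.
                for tid in sess["l2p_ids"]:
                    cache["l2p"][tid] = L2P_TABLES[tid]
        # 3. Translate via whichever cached L2P table holds the LBA.
        for table in cache["l2p"].values():
            if lba in table:
                return table[lba]
        raise KeyError(lba)

    phys = read(60)   # pulls in both L2P tables via the session table
    ```
    
    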
  • Patent number: 11620220
    Abstract: A cache memory system including a primary cache and an overflow cache that are searched together using a search address. The overflow cache operates as an eviction array for the primary cache. The primary cache is addressed using bits of the search address, and the overflow cache is addressed by a hash index generated by a hash function applied to bits of the search address. The hash function operates to distribute victims evicted from the primary cache to different sets of the overflow cache to improve overall cache utilization. A hash generator may be included to perform the hash function. A hash table may be included to store hash indexes of valid entries in the primary cache. The cache memory system may be used to implement a translation lookaside buffer for a microprocessor.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: April 4, 2023
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventors: Colin Eddy, Rodney E. Hooker
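    The primary/overflow arrangement above — a directly indexed primary array whose victims land in an overflow array at a hashed index — can be modelled compactly. The hash below is an arbitrary XOR-fold chosen only to show victims from one primary set scattering across overflow sets; sizes and the hash are not from the patent:

    ```python
    # Toy model: direct-indexed primary cache plus a hash-indexed
    # overflow cache that serves as its eviction array.
    PRIMARY_SETS = 4
    OVERFLOW_SETS = 8

    primary = [None] * PRIMARY_SETS
    overflow = [None] * OVERFLOW_SETS

    def hash_index(addr):
        # Illustrative hash: XOR-fold address bits so victims from one
        # primary set are distributed over several overflow sets.
        return ((addr >> 2) ^ (addr >> 5)) % OVERFLOW_SETS

    def insert(addr, entry):
        idx = addr % PRIMARY_SETS
        victim = primary[idx]
        primary[idx] = (addr, entry)
        if victim is not None:
            overflow[hash_index(victim[0])] = victim   # evict to overflow

    def lookup(addr):
        # Both structures are searched with the same search address.
        slot = primary[addr % PRIMARY_SETS]
        if slot and slot[0] == addr:
            return slot[1]
        slot = overflow[hash_index(addr)]
        if slot and slot[0] == addr:
            return slot[1]
        return None

    insert(0x10, "A")
    insert(0x14, "B")   # maps to the same primary set, evicts 0x10
    ```
    
    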
  • Patent number: 11615032
    Abstract: A data processing system (2) including one or more translation buffers (16, 18, 20) storing address translation data executes translation buffer invalidation instructions TLBI within respective address translation contexts VMID, ASID, X. Translation buffer invalidation signals generated as a consequence of execution of the translation buffer invalidation instructions are broadcast to respective translation buffers and include signals which specify the address translation context of the translation buffer invalidation instruction that was executed. The address translation context specified within the translation buffer invalidation signals is used to gate whether a translation buffer that receives those signals, and is a potential target for the invalidation, is or is not flushed.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: March 28, 2023
    Assignee: Arm Limited
    Inventors: Matthew James Horsnell, Grigorios Magklis, Richard Roy Grisenthwaite
  • Patent number: 11599270
    Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to setup and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce a frequency of interrupts or invocation of hypervisor by allowing transactions to be setup by guests without hypervisor involvement within their assigned device IO regions. Devices may communicate with IOMMU to setup the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: March 7, 2023
    Inventors: Sanjay Patel, Ranjit J Rozario
  • Patent number: 11573911
    Abstract: Apparatus comprises a multi-threaded processing element to execute processing threads as one or more process groups each of one or more processing threads, each process group having a process group identifier unique amongst the one or more process groups and being associated by capability data with a respective memory address range in a virtual memory address space; and memory address translation circuitry to translate a virtual memory address to a physical memory address by a processing thread of one of the process groups; the memory address translation circuitry being configured to associate, with a translation of a given virtual memory address to a corresponding physical memory address, permission data defining one or more process group identifiers representing respective process groups permitted to access the given virtual memory address, and to inhibit access to the given virtual memory address in dependence on the capability data associated with the process group of the processing thread requesting the
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: February 7, 2023
    Assignee: Arm Limited
    Inventor: Tamás Petz
  • Patent number: 11567661
    Abstract: A virtual memory management method applied to an intelligent processor including an operation accelerator includes: determining m storage units from a physical memory, the m storage units forming a virtual memory; dividing the m storage units into n storage groups; determining an address mapping relationship for each storage group to obtain n address mapping relationships, the n address mapping relationships being correspondences between n virtual addresses of the virtual memory and physical addresses of the m storage units, where m and n are dynamically updated according to requirements of the operation accelerator. In the method, the number of the storage units in each storage group can be configured according to requirements of the operation accelerator, and a data storage bit width and a data storage depth of the virtual memory are dynamically updated to thereby improve data access efficiency.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: January 31, 2023
    Assignee: SIGMASTAR TECHNOLOGY LTD.
    Inventors: Wei Zhu, Chao Li, Bo Lin
  • Patent number: 11567935
    Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and—depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: January 31, 2023
    Assignee: Google LLC
    Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
  • Patent number: 11561906
    Abstract: A processing system rinses, from a cache, those cache lines that share the same memory page as a cache line identified for eviction. A cache controller of the processing system identifies a cache line as scheduled for eviction. In response, the cache controller, identifies additional “dirty victim” cache lines (cache lines that have been modified at the cache and not yet written back to memory) that are associated with the same memory page, and writes each of the identified cache lines to the same memory page. By writing each of the dirty victim cache lines associated with the memory page to memory, the processing system reduces memory overhead and improves processing efficiency.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: January 24, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: William L. Walker, William E. Jones
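    The "rinsing" behavior described above — writing back, along with an evicted line, every other dirty line that shares its memory page — can be sketched as follows. Page and line sizes are typical values chosen for illustration:

    ```python
    # Toy model: when a victim line is evicted, also write back every
    # dirty line mapping to the same memory page, so the page is opened
    # once for a burst of writebacks.
    PAGE_SIZE = 4096

    cache_lines = {}    # addr -> (data, dirty)
    memory_writes = []  # record of (addr, data) writebacks

    def page_of(addr):
        return addr // PAGE_SIZE

    def evict(victim_addr):
        victim_page = page_of(victim_addr)
        # Gather the victim and all dirty lines sharing its page.
        for addr in sorted(a for a, (_, dirty) in cache_lines.items()
                           if dirty and page_of(a) == victim_page):
            data, _ = cache_lines.pop(addr)
            memory_writes.append((addr, data))

    cache_lines[0x1000] = ("a", True)
    cache_lines[0x1040] = ("b", True)   # same page as the victim
    cache_lines[0x2000] = ("c", True)   # different page, untouched
    evict(0x1000)
    ```
    
    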
  • Patent number: 11550731
    Abstract: The present invention discloses an instruction processing apparatus, including: a first register adapted to store address information; a second register adapted to store address space identification information; a decoder adapted to receive and decode a translation lookaside buffer flush instruction, where the translation lookaside buffer flush instruction indicates that the first register serves as a first operand, and the second register serves as a second operand; and an execution unit coupled to the first register, the second register, and the decoder and executing the decoded translation lookaside buffer flush instruction, so as to acquire address information from the first register, to acquire address space identification information from the second register, and to broadcast the acquired address information and address space identification information on a bus coupled to the instruction processing apparatus, so that another processing unit coupled to the bus performs purging on a translation lookaside
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: January 10, 2023
    Assignee: Alibaba Group Holding Limited
    Inventor: Ren Guo
  • Patent number: 11513972
    Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one from the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: November 29, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Qi Guo, Yunji Chen
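    The stream-ID selection described above can be illustrated with a small address-layout sketch: a stream ID field in the virtual address picks one TLB unit, the frame number found there is combined with the page offset, and a physical address results. The bit-field widths and table contents are assumptions for the example:

    ```python
    # Toy model: virtual address layout [stream ID | VPN | offset].
    PAGE_BITS = 12
    VPN_BITS = 8

    tlb_units = {0: {0x1: 0x5}, 1: {0x1: 0x9}}  # stream -> {vpn: frame}

    def translate(vaddr):
        offset = vaddr & ((1 << PAGE_BITS) - 1)
        vpn = (vaddr >> PAGE_BITS) & ((1 << VPN_BITS) - 1)
        stream = vaddr >> (PAGE_BITS + VPN_BITS)
        frame = tlb_units[stream][vpn]   # TLB unit picked by stream ID
        return (frame << PAGE_BITS) | offset

    # Same VPN and offset, different stream IDs -> different TLB units,
    # hence different frames and physical addresses.
    pa0 = translate((0 << 20) | (0x1 << 12) | 0x34)
    pa1 = translate((1 << 20) | (0x1 << 12) | 0x34)
    ```
    
    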
  • Patent number: 11507518
    Abstract: Methods, systems, and devices for compressed logical-to-physical mapping for sequentially stored data are described. A memory device may use a hierarchical set of logical-to-physical mapping tables for mapping logical block address generated by a host device to physical addresses of the memory device. The memory device may determine whether all of the entries of a terminal logical-to-physical mapping table are consecutive physical addresses. In response to determining that all of the entries contain consecutive physical addresses, the memory device may store a starting physical address of the consecutive physical addresses as an entry in a higher-level table along with a flag indicating that the entry points directly to data in the memory device rather than pointing to a terminal logical-to-physical mapping table. The memory device may, for subsequent reads of data stored in one or more of the consecutive physical addresses, bypass the terminal table to read the data.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: November 22, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Giuseppe Cariello, Jonathan S. Parry
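    The compression step described above — replacing a pointer to a terminal L2P table with a starting physical address and a "direct" flag when the table's entries are fully consecutive — can be sketched as follows. Table sizes and contents are invented for the example:

    ```python
    # Toy model: a higher-level table whose entries normally point at
    # terminal L2P tables, compressed when a terminal table is sequential.
    ENTRIES_PER_TABLE = 4

    terminal_tables = {
        0: [100, 101, 102, 103],   # fully sequential -> compressible
        1: [200, 205, 201, 210],   # scattered -> keep the table
    }

    def compress(higher_level):
        for slot, tid in list(higher_level.items()):
            table = terminal_tables[tid]
            if all(table[i] + 1 == table[i + 1] for i in range(len(table) - 1)):
                # Store the starting physical address plus a flag that the
                # entry points directly at data.
                higher_level[slot] = ("direct", table[0])

    def lookup(higher_level, lba):
        slot, idx = divmod(lba, ENTRIES_PER_TABLE)
        entry = higher_level[slot]
        if isinstance(entry, tuple) and entry[0] == "direct":
            return entry[1] + idx            # bypass the terminal table
        return terminal_tables[entry][idx]   # normal two-level lookup

    hl = {0: 0, 1: 1}
    compress(hl)
    ```
    
    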
  • Patent number: 11467959
    Abstract: Techniques are disclosed relating to caching for address translation. In some embodiments, address translation circuitry is configured to process requests to translate addresses in a first address space to addresses in a second address space. The translation circuitry may include cache circuitry configured to store translation information, arbitration circuitry configured to arbitrate among ready requests for access to entries of the cache, and hazard circuitry. The hazard circuitry may assign a first request a ready status for the arbitration circuitry based on detection of an absence of hazards for a first address of the first request, and add a second request to a queue of requests for the arbitration circuitry based on detection of a hazard for a second address of the second request. Independent arbitration for requests without hazards may improve performance in various respects, relative to traditional techniques.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: October 11, 2022
    Assignee: Apple Inc.
    Inventors: Winnie W. Yeung, Cheng Li
  • Patent number: 11449258
    Abstract: Apparatuses and methods for controlling word lines and sense amplifiers in a semiconductor device are described.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: September 20, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Kazuhiko Kajigaya
  • Patent number: 11436161
    Abstract: This disclosure is directed to a system for address mapping and translation protection. In one embodiment, processing circuitry may include a virtual machine manager (VMM) to control specific guest linear address (GLA) translations. Control may be implemented in a performance sensitive and secure manner, and may be capable of improving performance for critical linear address page walks over legacy operation by removing some or all of the cost of page walking extended page tables (EPTs) for critical mappings. Alone or in combination with the above, certain portions of a page table structure may be selectively made immutable by a VMM or early boot process using a sub-page policy (SPP). For example, SPP may enable non-volatile kernel and/or user space code and data virtual-to-physical memory mappings to be made immutable (e.g., non-writable) while allowing for modifications to non-protected portions of the OS paging structures and particularly the user space.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: September 6, 2022
    Assignee: Intel Corporation
    Inventors: Ravi L. Sahita, Gilbert Neiger, Vedvyas Shanbhogue, David M. Durham, Andrew V. Anderson, David A. Koufaty, Asit K. Mallick, Arumugam Thiyagarajah, Barry E. Huntley, Deepak K. Gupta, Michael Lemay, Joseph F. Cihula, Baiju V. Patel
  • Patent number: 11422946
    Abstract: Systems, apparatuses, and methods for implementing translation lookaside buffer (TLB) striping to enable efficient invalidation operations are described. TLB sizes are growing in width (more features in a given page table entry) and depth (to cover larger memory footprints). A striping scheme is proposed to enable an efficient and high performance method for performing TLB maintenance operations in the face of this growth. Accordingly, a TLB stores first attribute data in a striped manner across a plurality of arrays. The striped manner allows different entries to be searched simultaneously in response to receiving an invalidation request which identifies a particular attribute of a group to be invalidated. Upon receiving an invalidation request, the TLB generates a plurality of indices with an offset between each index and walks through the plurality of arrays by incrementing each index and simultaneously checking the first attribute data in corresponding entries.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: August 23, 2022
    Assignee: Apple Inc.
    Inventors: John D. Pape, Brian R. Mestan, Peter G. Soderquist
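    The striping scheme described above can be modelled in miniature: an attribute (here an ASID) is distributed across several small arrays at rotated indices, and an invalidation walks all arrays with staggered, incrementing indices so each step checks one entry per array. Array counts and the attribute choice are illustrative:

    ```python
    # Toy model of TLB striping: entries placed across NUM_ARRAYS arrays
    # at rotated indices; invalidate-by-ASID walks staggered indices.
    NUM_ARRAYS = 4
    SETS = 8

    arrays = [[None] * SETS for _ in range(NUM_ARRAYS)]

    def install(set_idx, way, asid, vpn):
        # The entry's attribute data lands in array `way` at an index
        # rotated by the way number (the "stripe").
        arrays[way][(set_idx + way) % SETS] = (asid, vpn)

    def invalidate_asid(asid):
        removed = 0
        # Generate staggered indices with a fixed offset between arrays
        # and walk by incrementing; each step probes all arrays at once
        # (modelled here as one inner loop per array).
        for step in range(SETS):
            for way in range(NUM_ARRAYS):
                idx = (step + way) % SETS
                entry = arrays[way][idx]
                if entry and entry[0] == asid:
                    arrays[way][idx] = None
                    removed += 1
        return removed

    install(0, 0, asid=7, vpn=0x10)
    install(1, 1, asid=7, vpn=0x11)
    install(2, 2, asid=3, vpn=0x12)
    count = invalidate_asid(7)
    ```
    
    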
  • Patent number: 11422945
    Abstract: A method for managing memory addresses in a memory subsystem is described. The method includes determining that a chunk of logical addresses is sequentially written such that a set of physical addresses mapped to corresponding logical addresses in the chunk are sequential. Thereafter, the memory subsystem updates an entry in a sequential write table for the chunk to indicate that the chunk was sequentially written and a compressed logical-to-physical (L2P) table based on (1) the sequential write table and (2) a full L2P table. The full L2P table includes a set of full L2P entries and each entry corresponds to a logical address in the chunk and references a physical address in the set of physical addresses. The compressed L2P table includes an entry that references a first physical address of the first set of physical addresses that is also referenced by an entry in the L2P table.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: August 23, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventor: David A. Palmer
  • Patent number: 11409663
    Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform lookup in either one of the first TLB array and the second TLB array and the second portion is used for performing lookup in other one of the first TLB array and the second TLB array.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks
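    The concurrent dual-page-size lookup above can be sketched by splitting the same virtual address two ways at once — once assuming a small page, once a large page — and probing a separate TLB array with each split. The 4 KiB / 2 MiB sizes and table contents are assumptions for the example:

    ```python
    # Toy model: two TLB arrays, one per page size, probed concurrently
    # with differently sized portions of the same virtual address.
    SMALL_BITS, LARGE_BITS = 12, 21   # 4 KiB and 2 MiB pages

    tlb_small = {}   # VPN at 4 KiB granularity -> frame
    tlb_large = {}   # VPN at 2 MiB granularity -> frame

    def translate(vaddr):
        hits = []
        vpn_s = vaddr >> SMALL_BITS          # small-page portion
        if vpn_s in tlb_small:
            hits.append((tlb_small[vpn_s] << SMALL_BITS)
                        | (vaddr & ((1 << SMALL_BITS) - 1)))
        vpn_l = vaddr >> LARGE_BITS          # large-page portion
        if vpn_l in tlb_large:
            hits.append((tlb_large[vpn_l] << LARGE_BITS)
                        | (vaddr & ((1 << LARGE_BITS) - 1)))
        return hits[0] if hits else None     # both probes happen per lookup

    tlb_large[0x1] = 0x3
    pa = translate((0x1 << 21) | 0x1234)     # hits only the large array
    ```
    
    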