Directory Tables (e.g., Dlat, Tlb) Patents (Class 711/207)
-
Patent number: 11941252
Abstract: Provided are methods, apparatuses, systems, and computer-readable storage media for reducing an open time of a solid-state drive (SSD). In an embodiment, a method includes dividing a logical-to-physical (L2P) address mapping table of the SSD into a plurality of segments. The method further includes assigning one journal buffer of a plurality of journal buffers to each segment of the plurality of segments. The method further includes recreating, during a power on sequence of the SSD, a portion of the plurality of segments by replaying a first subset of the plurality of journal buffers. The method further includes sending, to a host device, a device-ready signal upon successful recreation of the portion of the plurality of segments. The method further includes recreating, in a background mode, a remaining portion of the plurality of segments by replaying a second subset of the plurality of journal buffers.
Type: Grant
Filed: August 9, 2022
Date of Patent: March 26, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Tushar Tukaram Patil, Anantha Sharma, Sharath Kumar Kodase, Suman Prakash Balakrishnan
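The two-phase recovery the abstract describes can be illustrated with a small sketch. This is not the patented implementation; the names (`replay`, `power_on_recovery`, `critical_ids`) and the choice of which segments count as critical are hypothetical.

```python
def replay(journal):
    """Rebuild one L2P segment by applying journaled (logical, physical) updates."""
    segment = {}
    for logical, physical in journal:
        segment[logical] = physical      # later journal entries override earlier ones
    return segment

def power_on_recovery(journals, critical_ids, on_ready):
    """Replay a critical subset of journals first, signal device-ready early,
    then rebuild the remaining segments (done in background mode per the abstract)."""
    segments = {}
    for sid in critical_ids:             # foreground replay: first subset of journals
        segments[sid] = replay(journals[sid])
    on_ready()                           # device-ready sent before full recovery
    for sid in journals:                 # background replay: second subset
        if sid not in segments:
            segments[sid] = replay(journals[sid])
    return segments
```

The point of the split is that the host sees the drive as ready as soon as the first subset is replayed, rather than after the whole L2P table is rebuilt.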
-
Patent number: 11940912
Abstract: A logical-to-physical (L2P) table is maintained, wherein a plurality of sections of the L2P table is cached in a volatile memory device. A total dirty count for the L2P table is maintained, wherein the total dirty count reflects a total number of updates to the L2P table. Respective section dirty counts for the plurality of sections are maintained, wherein each respective section dirty count reflects a total number of updates to a corresponding section. It is determined that the total dirty count for the L2P table satisfies a threshold criterion. In response to determining that the total dirty count for the L2P table satisfies the threshold criterion, a first section of the plurality of sections is identified based on the respective section dirty counts. The first section of the L2P table is written to a non-volatile memory device.
Type: Grant
Filed: March 1, 2022
Date of Patent: March 26, 2024
Assignee: Micron Technology, Inc.
Inventors: Byron Harris, Daniel Boals, Abedon Madril
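A toy model of the dirty-count bookkeeping above: a total counter triggers a flush once a threshold is met, and per-section counters pick which section to write back. The abstract only says the section is "identified based on the respective section dirty counts"; flushing the dirtiest section is an assumption of this sketch, and the class name and threshold are illustrative.

```python
class L2PCache:
    """Toy per-section dirty tracking for a cached L2P table."""
    def __init__(self, num_sections, threshold):
        self.threshold = threshold
        self.total_dirty = 0
        self.section_dirty = [0] * num_sections
        self.flushed = []                       # sections written to non-volatile memory

    def update(self, section):
        """Record one L2P update, flushing if the total dirty count hits the threshold."""
        self.total_dirty += 1
        self.section_dirty[section] += 1
        if self.total_dirty >= self.threshold:
            self.flush_dirtiest()

    def flush_dirtiest(self):
        # Assumed policy: pick the section with the highest dirty count.
        victim = max(range(len(self.section_dirty)), key=self.section_dirty.__getitem__)
        self.flushed.append(victim)
        self.total_dirty -= self.section_dirty[victim]
        self.section_dirty[victim] = 0
```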
-
Patent number: 11934320
Abstract: A type of translation lookaside buffer (TLB) invalidation instruction is described which specifically targets a first type of TLB which stores combined stage-1-and-2 entries which depend on both stage 1 and stage 2 translation data, and which is configured to ignore a TLB invalidation command which invalidates based on a first set of one or more invalidation conditions including an address-based invalidation condition depending on matching of intermediate address. A second type of TLB other than the first type ignores the invalidation command triggered by the first type of TLB invalidation instruction. This approach helps to limit the performance impact of stage 2 invalidations in systems supporting a combined stage-1-and-2 TLB which cannot invalidate by intermediate address.
Type: Grant
Filed: August 26, 2020
Date of Patent: March 19, 2024
Assignee: Arm Limited
Inventor: Andrew Brookfield Swaine
-
Patent number: 11929927
Abstract: A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.
Type: Grant
Filed: December 21, 2020
Date of Patent: March 12, 2024
Assignee: Intel Corporation
Inventors: Pratik M. Marolia, Rajesh M. Sankaran, Ashok Raj, Nrupal Jani, Parthasarathy Sarangam, Robert O. Sharp
-
Patent number: 11914865
Abstract: A method and system are provided for limiting unnecessary data traffic on the data busses connecting the various levels of system memory. Some embodiments may include processing an invalidation command associated with a system or network operation requiring temporary storage of data in a local memory area. The invalidation command may comprise a memory location indicator capable of identifying the physical addresses of the associated data in the local memory area. Some embodiments may preclude the data associated with the system or network operation from being written to a main memory by invalidating the memory locations holding the temporary data once the system or network operation has finished utilizing the local memory area.
Type: Grant
Filed: April 11, 2022
Date of Patent: February 27, 2024
Assignee: Mellanox Technologies, Ltd.
Inventors: Yamin Friedman, Idan Burstein, Hillel Chapman, Gal Yefet
-
Patent number: 11907301
Abstract: A control table (22) defines information for controlling a processing component (20) to perform an operation. The table (22) comprises entries each corresponding to a variable size region defined by a first limit address and one of a second limit address and size. A binary search procedure is provided for looking up the table, comprising a number of search window narrowing steps, each narrowing a current search window of candidate entries to a narrower search window comprising fewer entries, based on a comparison of a query address against the first limit address of a selected candidate entry of the current search window. The comparison is independent of the second limit address or size of the selected candidate entry. After the search window is narrowed to a single entry, the query address is compared with the second limit address or size of that single entry.
Type: Grant
Filed: June 6, 2019
Date of Patent: February 20, 2024
Assignee: Arm Limited
Inventors: Thomas Christopher Grocutt, François Christopher Jacques Botman
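The search procedure in this abstract has a clean algorithmic shape: every narrowing step compares the query address only against an entry's first (base) limit address, and the size or second limit is consulted exactly once, on the final candidate. A minimal sketch, assuming the table is sorted by base address and each entry is a `(base, size)` pair (the names are illustrative):

```python
def lookup(table, addr):
    """Binary search over variable-size regions.
    Each narrowing step uses only the first limit (base) address;
    the size check happens once, on the single remaining entry."""
    lo, hi = 0, len(table) - 1
    while lo < hi:                       # narrow the search window
        mid = (lo + hi + 1) // 2
        if addr >= table[mid][0]:        # compare against first limit only
            lo = mid
        else:
            hi = mid - 1
    base, size = table[lo]               # final check uses the size
    return lo if base <= addr < base + size else None
```

Deferring the size comparison means each step needs only one address read and one compare, which is what makes the comparison "independent of the second limit address or size" during narrowing.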
-
Patent number: 11886450
Abstract: A statistical data processing device includes: a first statistical image generation unit for generating statistical images including a first statistical image representing a first statistical value as a corresponding pixel value, and a second statistical image representing a second statistical value as a corresponding pixel value; a mask generation unit for generating a mask image, the mask image extracting, if one of a pixel of a first statistical image and a corresponding pixel of a second statistical image does not have a pixel value indicating a statistical value, a pixel not having a pixel value indicating the statistical value or other pixel; and a second statistical image generation unit for generating a third statistical image in which a pixel value of a pixel not having a pixel value indicating the statistical value is complemented with a pixel value of the other statistical image.
Type: Grant
Filed: May 9, 2019
Date of Patent: January 30, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Xiaojun Wu, Masaki Kitahara, Atsushi Shimizu
-
Patent number: 11853225
Abstract: A method includes receiving, by a memory management unit (MMU) comprising a translation lookaside buffer (TLB) and a configuration register, a request from a processor core to directly modify an entry in the TLB. The method also includes, responsive to the configuration register having a first value, operating the MMU in a software-managed mode by modifying the entry in the TLB according to the request. The method further includes, responsive to the configuration register having a second value, operating the MMU in a hardware-managed mode by denying the request.
Type: Grant
Filed: October 12, 2020
Date of Patent: December 26, 2023
Assignee: Texas Instruments Incorporated
Inventors: Timothy D. Anderson, Joseph Raymond Michael Zbiciak, Kai Chirca, Daniel Brad Wu
-
Patent number: 11847064
Abstract: A method and system of translating addresses is disclosed that includes receiving an effective address for translation, providing a processor and a translation buffer where the translation buffer has a plurality of entries, wherein each entry contains a mapping of an effective address directly to a corresponding real address, and information on a corresponding intermediate virtual address. The method and system further include determining whether the translation buffer has an entry matching the effective address, and in response to the translation buffer having an entry with a matching effective address, providing the real address translation from the entry having the matching effective address.
Type: Grant
Filed: December 7, 2018
Date of Patent: December 19, 2023
Assignee: International Business Machines Corporation
Inventor: David Campbell
-
Patent number: 11838403
Abstract: The present techniques may provide improved processing and functionality of performance of the 128-bit AES Algorithm, which may reduce power consumption. For example, in an embodiment, an encryption and decryption apparatus may comprise memory storing a current state matrix of an encryption or decryption process and a plurality of multiplexers configured to receive from the memory current elements of the state matrix stored in the memory, perform a cyclic shift on the received elements of the state matrix, and transmit the shifted elements to the memory for storage as a new state matrix.
Type: Grant
Filed: April 10, 2020
Date of Patent: December 5, 2023
Assignee: BOARD OF REGENTS, THE UNIVERSITY OF TEXAS SYSTEM
Inventors: Alekhya Muthineni, Eugene John
-
Patent number: 11775422
Abstract: Methods, systems, and devices for logic remapping techniques are described. A memory system may receive a write command to store information at a first logical address of the memory system. The memory system may generate a first entry of a logical-to-physical mapping that maps the first logical address with a first physical address that stores the information. The memory system may perform a defragmentation operation or other remapping operation. In such a defragmentation operation, the memory system may remap the first logical address to a second logical address, such that the second logical address is mapped to the first physical address. The memory system may generate a second entry of a logical-to-logical mapping that maps the first logical address with the second logical address.
Type: Grant
Filed: August 11, 2021
Date of Patent: October 3, 2023
Assignee: Micron Technology, Inc.
Inventors: Jonathan S. Parry, David Aaron Palmer, Giuseppe Cariello
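The layering described here — a logical-to-physical (L2P) table plus a logical-to-logical (L2L) table created at remap time — can be sketched as follows. The class and method names are hypothetical; the sketch only shows the indirection order implied by the abstract (consult L2L first, then L2P).

```python
class Mapper:
    """Toy L2P mapping plus an L2L table populated during defragmentation."""
    def __init__(self):
        self.l2p = {}          # logical address -> physical address
        self.l2l = {}          # old logical address -> new logical address

    def write(self, logical, physical):
        self.l2p[logical] = physical

    def remap(self, old_logical, new_logical):
        """Defragmentation: the new logical address takes over the physical
        mapping, and an L2L entry records old -> new."""
        self.l2p[new_logical] = self.l2p.pop(old_logical)
        self.l2l[old_logical] = new_logical

    def resolve(self, logical):
        logical = self.l2l.get(logical, logical)   # follow L2L first if present
        return self.l2p[logical]
```

The benefit is that a host still using the old logical address keeps resolving to the same physical data without rewriting it.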
-
Patent number: 11757677
Abstract: A binding and configuration method for a bus adapter and a channel, a mapping manager, and a connection system are provided. The binding and configuration method for the bus adapter and the channel includes: configuring a mapping table of a mapping manager; associating a logical channel with a corresponding hardware channel based on the mapping table; and connecting the logical channel to the corresponding hardware channel for data communication. A common architecture for the application program to access bus adapter resources is realized. The application programs using this architecture can arbitrarily configure the bus adapter model and the hardware channel that need to be connected, and the mapping relationship takes effect immediately after each configuration change without modifying the user's software, thus improving the efficiency of the application program development and reducing the possibility of errors.
Type: Grant
Filed: August 7, 2022
Date of Patent: September 12, 2023
Assignee: Shanghai TOSUN Technology Ltd.
Inventors: Chu Liu, Yueyin Xie, Mang Mo
-
Patent number: 11733904
Abstract: Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to a hardware-based processing node of an object memory fabric.
Type: Grant
Filed: January 24, 2022
Date of Patent: August 22, 2023
Assignee: Ultrata, LLC
Inventors: Steven J. Frank, Larry Reback
-
Patent number: 11709782
Abstract: Circuitry comprises a translation lookaside buffer to store memory address translations, each memory address translation being between an input memory address range defining a contiguous range of one or more input memory addresses in an input memory address space and a translated output memory address range defining a contiguous range of one or more output memory addresses in an output memory address space; in which the translation lookaside buffer is configured selectively to store the memory address translations as a cluster of memory address translations, a cluster defining memory address translations in respect of a contiguous set of input memory address ranges by encoding one or more memory address offsets relative to a respective base memory address; memory management circuitry to retrieve data representing memory address translations from a memory, for storage by the translation lookaside buffer, when a required memory address translation is not stored by the translation lookaside buffer; detector circ…
Type: Grant
Filed: October 28, 2021
Date of Patent: July 25, 2023
Assignee: Arm Limited
Inventors: Paolo Monti, Abdel Hadi Moustafa, Albin Pierrick Tonnerre, Vincenzo Consales, Abhishek Raja
-
Patent number: 11693790
Abstract: Methods, apparatus, systems and articles of manufacture to facilitate write miss caching in cache system are disclosed. An example apparatus includes a first cache storage; a second cache storage, wherein the second cache storage includes a first portion operable to store a first set of data evicted from the first cache storage and a second portion; a cache controller coupled to the first cache storage and the second cache storage and operable to: receive a write operation; determine that the write operation produces a miss in the first cache storage; and in response to the miss in the first cache storage, provide write miss information associated with the write operation to the second cache storage for storing in the second portion.
Type: Grant
Filed: May 22, 2020
Date of Patent: July 4, 2023
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
-
Patent number: 11656999
Abstract: An electronic device may include a processor, a first volatile memory, and a storage including a nonvolatile memory and a second volatile memory. The processor may be configured to: identify information of a specific file and a kind of a request for data included in the specific file in response to a creation of the request for the data, set a flag in the request based on the identified information of the specific file, identify whether mapping information of a specific region including a logical address of the data among mapping information in which logical addresses and physical addresses for the nonvolatile memory are mapped onto each other is stored in the first volatile memory, determine whether to manage the mapping information of the specific region using the first volatile memory, and determine whether to update the mapping information of the specific region in the first volatile memory.
Type: Grant
Filed: June 9, 2020
Date of Patent: May 23, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Manjong Lee, Hyeongjun Kim, Changheun Lee, Jintae Jang
-
Patent number: 11625323
Abstract: Methods, systems, and devices for session-based memory operation are described. A memory system may determine that a logical address targeted by a read command is associated with a session table. The memory system may write the session table to a cache based on the logical address being associated with the session table. After writing the session table to the cache, the memory system may use the session table to determine one or more logical-to-physical (L2P) tables and write the one or more L2P tables to the cache. The memory system may use the L2P tables to perform address translation for logical addresses.
Type: Grant
Filed: December 7, 2020
Date of Patent: April 11, 2023
Assignee: Micron Technology, Inc.
Inventors: Sharath Chandra Ambula, Sushil Kumar, David Aaron Palmer, Venkata Kiran Kumar Matturi, Sri Ramya Pinisetty
-
Patent number: 11620220
Abstract: A cache memory system including a primary cache and an overflow cache that are searched together using a search address. The overflow cache operates as an eviction array for the primary cache. The primary cache is addressed using bits of the search address, and the overflow cache is addressed by a hash index generated by a hash function applied to bits of the search address. The hash function operates to distribute victims evicted from the primary cache to different sets of the overflow cache to improve overall cache utilization. A hash generator may be included to perform the hash function. A hash table may be included to store hash indexes of valid entries in the primary cache. The cache memory system may be used to implement a translation lookaside buffer for a microprocessor.
Type: Grant
Filed: December 12, 2014
Date of Patent: April 4, 2023
Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
Inventors: Colin Eddy, Rodney E. Hooker
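The dual-indexing scheme above — direct index bits for the primary array, a hash of the address for the overflow array — can be shown in a few lines. The XOR-fold hash and the bit widths here are placeholders; the patent does not specify the hash function, only that it spreads evicted victims across overflow sets.

```python
def search(primary, overflow, addr, index_bits=4, hash_bits=4):
    """Search primary and overflow arrays with one address.
    primary/overflow are dicts: index -> (tag, translation)."""
    pidx = addr & ((1 << index_bits) - 1)                    # low bits index primary
    # Placeholder hash: XOR-fold higher address bits into the overflow index.
    hidx = (addr ^ (addr >> hash_bits) ^ (addr >> 2 * hash_bits)) & ((1 << hash_bits) - 1)
    for array, idx in ((primary, pidx), (overflow, hidx)):
        entry = array.get(idx)
        if entry is not None and entry[0] == addr:           # tag match
            return entry[1]
    return None
```

Because the overflow index mixes in more address bits than the primary index, addresses that collide in one primary set tend to land in different overflow sets, which is the utilization benefit the abstract claims.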
-
Patent number: 11615032
Abstract: A data processing system (2) including one or more translation buffers (16, 18, 20) storing address translation data executes translation buffer invalidation instructions TLBI within respective address translation contexts VMID, ASID, X. Translation buffer invalidation signals generated as a consequence of execution of the translation buffer invalidation instructions are broadcast to respective translation buffers and include signals which specify the address translation context of the translation buffer invalidation instruction that was executed. This address translation context specified within the translation buffer invalidation signals is used to gate whether or not those translation buffer invalidation signals, when received by translation buffers which are potential targets for the invalidation, are or are not flushed.
Type: Grant
Filed: June 1, 2018
Date of Patent: March 28, 2023
Assignee: Arm Limited
Inventors: Matthew James Horsnell, Grigorios Magklis, Richard Roy Grisenthwaite
-
Patent number: 11599270
Abstract: Aspects relate to Input/Output (IO) Memory Management Units (MMUs) that include hardware structures for implementing virtualization. Some implementations allow guests to setup and maintain device IO tables within memory regions to which those guests have been given permissions by a hypervisor. Some implementations provide hardware page table walking capability within the IOMMU, while other implementations provide static tables. Such static tables may be maintained by a hypervisor on behalf of guests. Some implementations reduce a frequency of interrupts or invocation of hypervisor by allowing transactions to be setup by guests without hypervisor involvement within their assigned device IO regions. Devices may communicate with IOMMU to setup the requested memory transaction, and completion thereof may be signaled to the guest without hypervisor involvement. Various other aspects will be evident from the disclosure.
Type: Grant
Filed: May 4, 2020
Date of Patent: March 7, 2023
Inventors: Sanjay Patel, Ranjit J Rozario
-
Patent number: 11573911
Abstract: Apparatus comprises a multi-threaded processing element to execute processing threads as one or more process groups each of one or more processing threads, each process group having a process group identifier unique amongst the one or more process groups and being associated by capability data with a respective memory address range in a virtual memory address space; and memory address translation circuitry to translate a virtual memory address to a physical memory address by a processing thread of one of the process groups; the memory address translation circuitry being configured to associate, with a translation of a given virtual memory address to a corresponding physical memory address, permission data defining one or more process group identifiers representing respective process groups permitted to access the given virtual memory address, and to inhibit access to the given virtual memory address in dependence on the capability data associated with the process group of the processing thread requesting the…
Type: Grant
Filed: August 23, 2019
Date of Patent: February 7, 2023
Assignee: Arm Limited
Inventor: Tamás Petz
-
Patent number: 11567935
Abstract: Implementations set forth herein relate to conditionally caching responses to automated assistant queries according to certain contextual data that may be associated with each automated assistant query. Each query can be identified based on historical interactions between a user and an automated assistant, and, depending on the query, fulfillment data can be cached according to certain contextual data that influences the query response. Depending on how the contextual data changes, a cached response stored at a client device can be discarded and/or replaced with an updated cached response. For example, a query that users commonly ask prior to leaving for work can have a corresponding assistant response that depends on features of an environment of the users. This unique assistant response can be cached, before the users provide the query, to minimize latency that can occur when network or processing bandwidth is unpredictable.
Type: Grant
Filed: March 30, 2021
Date of Patent: January 31, 2023
Assignee: Google LLC
Inventors: Benedict Liang, Bryan Christopher Horling, Lan Huo, Anarghya Mitra
-
Patent number: 11567661
Abstract: A virtual memory management method applied to an intelligent processor including an operation accelerator includes: determining m storage units from a physical memory, the m storage units forming a virtual memory; dividing the m storage units into n storage groups; determining an address mapping relationship for each storage group to obtain n address mapping relationships, the n address mapping relationships being correspondences between n virtual addresses of the virtual memory and physical addresses of the m storage units, where m and n are dynamically updated according to requirements of the operation accelerator. In the method, the number of the storage units in each storage group can be configured according to requirements of the operation accelerator, and a data storage bit width and a data storage depth of the virtual memory are dynamically updated to thereby improve data access efficiency.
Type: Grant
Filed: April 15, 2021
Date of Patent: January 31, 2023
Assignee: SIGMASTAR TECHNOLOGY LTD.
Inventors: Wei Zhu, Chao Li, Bo Lin
-
Patent number: 11561906
Abstract: A processing system rinses, from a cache, those cache lines that share the same memory page as a cache line identified for eviction. A cache controller of the processing system identifies a cache line as scheduled for eviction. In response, the cache controller identifies additional "dirty victim" cache lines (cache lines that have been modified at the cache and not yet written back to memory) that are associated with the same memory page, and writes each of the identified cache lines to the same memory page. By writing each of the dirty victim cache lines associated with the memory page to memory, the processing system reduces memory overhead and improves processing efficiency.
Type: Grant
Filed: December 12, 2017
Date of Patent: January 24, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: William L. Walker, William E. Jones
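The "rinse" idea above — batch-writing every dirty line that shares the victim's memory page — can be modeled with a dictionary standing in for the cache. The page size and function name are assumptions; the sketch only captures the grouping-by-page behavior.

```python
PAGE_SHIFT = 12   # assume 4 KiB memory pages

def evict_with_rinse(cache, victim_addr):
    """Evict `victim_addr`, writing back all dirty lines in its page together.
    `cache` maps line address -> dirty flag; returns the addresses written back."""
    page = victim_addr >> PAGE_SHIFT
    written = [a for a, dirty in sorted(cache.items())
               if dirty and (a >> PAGE_SHIFT) == page]
    for a in written:
        cache[a] = False            # now clean: written back in one page pass
    del cache[victim_addr]          # the victim line leaves the cache
    return written
```

Grouping the writebacks means DRAM opens the row for that page once instead of once per later eviction, which is the overhead reduction the abstract points to.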
-
Patent number: 11550731
Abstract: The present invention discloses an instruction processing apparatus, including: a first register adapted to store address information; a second register adapted to store address space identification information; a decoder adapted to receive and decode a translation lookaside buffer flush instruction, where the translation lookaside buffer flush instruction indicates that the first register serves as a first operand, and the second register serves as a second operand; and an execution unit coupled to the first register, the second register, and the decoder and executing the decoded translation lookaside buffer flush instruction, so as to acquire address information from the first register, to acquire address space identification information from the second register, and to broadcast the acquired address information and address space identification information on a bus coupled to the instruction processing apparatus, so that another processing unit coupled to the bus performs purging on a translation lookaside…
Type: Grant
Filed: September 8, 2020
Date of Patent: January 10, 2023
Assignee: Alibaba Group Holding Limited
Inventor: Ren Guo
-
Patent number: 11513972
Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one from the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
Type: Grant
Filed: August 12, 2019
Date of Patent: November 29, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Tianshi Chen, Qi Guo, Yunji Chen
-
Patent number: 11507518
Abstract: Methods, systems, and devices for compressed logical-to-physical mapping for sequentially stored data are described. A memory device may use a hierarchical set of logical-to-physical mapping tables for mapping logical block addresses generated by a host device to physical addresses of the memory device. The memory device may determine whether all of the entries of a terminal logical-to-physical mapping table are consecutive physical addresses. In response to determining that all of the entries contain consecutive physical addresses, the memory device may store a starting physical address of the consecutive physical addresses as an entry in a higher-level table along with a flag indicating that the entry points directly to data in the memory device rather than pointing to a terminal logical-to-physical mapping table. The memory device may, for subsequent reads of data stored in one or more of the consecutive physical addresses, bypass the terminal table to read the data.
Type: Grant
Filed: May 8, 2020
Date of Patent: November 22, 2022
Assignee: Micron Technology, Inc.
Inventors: Giuseppe Cariello, Jonathan S. Parry
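The compression rule above is easy to state in code: if every entry of a terminal table is one more than the previous, the higher-level entry can hold just the starting physical address plus a direct flag, and reads add the offset instead of dereferencing the terminal table. A minimal sketch (function names and the tuple encoding are illustrative, not from the patent):

```python
def compress(terminal):
    """Collapse a terminal L2P table into (start_physical, True) when all of
    its physical addresses are consecutive; otherwise keep the full table."""
    if all(terminal[i] + 1 == terminal[i + 1] for i in range(len(terminal) - 1)):
        return (terminal[0], True)      # direct entry: terminal table bypassed
    return (terminal, False)            # indirect entry: keep the terminal table

def read(entry, offset):
    """Resolve a logical offset through a higher-level entry."""
    target, direct = entry
    return target + offset if direct else target[offset]
```

For sequentially written data this removes one table lookup per read, which is the bypass the abstract describes.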
-
Patent number: 11467959
Abstract: Techniques are disclosed relating to caching for address translation. In some embodiments, address translation circuitry is configured to process requests to translate addresses in a first address space to addresses in a second address space. The translation circuitry may include cache circuitry configured to store translation information, arbitration circuitry configured to arbitrate among ready requests for access to entries of the cache, and hazard circuitry. The hazard circuitry may assign a first request a ready status for the arbitration circuitry based on detection of an absence of hazards for a first address of the first request, and add a second request to a queue of requests for the arbitration circuitry based on detection of a hazard for a second address of the second request. Independent arbitration for requests without hazards may improve performance in various aspects, relative to traditional techniques.
Type: Grant
Filed: May 19, 2021
Date of Patent: October 11, 2022
Assignee: Apple Inc.
Inventors: Winnie W. Yeung, Cheng Li
-
Patent number: 11449258
Abstract: Apparatuses and methods for controlling word lines and sense amplifiers in a semiconductor device are described.
Type: Grant
Filed: August 6, 2020
Date of Patent: September 20, 2022
Assignee: Micron Technology, Inc.
Inventor: Kazuhiko Kajigaya
-
Patent number: 11436161
Abstract: This disclosure is directed to a system for address mapping and translation protection. In one embodiment, processing circuitry may include a virtual machine manager (VMM) to control specific guest linear address (GLA) translations. Control may be implemented in a performance sensitive and secure manner, and may be capable of improving performance for critical linear address page walks over legacy operation by removing some or all of the cost of page walking extended page tables (EPTs) for critical mappings. Alone or in combination with the above, certain portions of a page table structure may be selectively made immutable by a VMM or early boot process using a sub-page policy (SPP). For example, SPP may enable non-volatile kernel and/or user space code and data virtual-to-physical memory mappings to be made immutable (e.g., non-writable) while allowing for modifications to non-protected portions of the OS paging structures and particularly the user space.
Type: Grant
Filed: November 18, 2019
Date of Patent: September 6, 2022
Assignee: Intel Corporation
Inventors: Ravi L. Sahita, Gilbert Neiger, Vedvyas Shanbhogue, David M. Durham, Andrew V. Anderson, David A. Koufaty, Asit K. Mallick, Arumugam Thiyagarajah, Barry E. Huntley, Deepak K. Gupta, Michael Lemay, Joseph F. Cihula, Baiju V. Patel
-
Patent number: 11422946
Abstract: Systems, apparatuses, and methods for implementing translation lookaside buffer (TLB) striping to enable efficient invalidation operations are described. TLB sizes are growing in width (more features in a given page table entry) and depth (to cover larger memory footprints). A striping scheme is proposed to enable an efficient and high performance method for performing TLB maintenance operations in the face of this growth. Accordingly, a TLB stores first attribute data in a striped manner across a plurality of arrays. The striped manner allows different entries to be searched simultaneously in response to receiving an invalidation request which identifies a particular attribute of a group to be invalidated. Upon receiving an invalidation request, the TLB generates a plurality of indices with an offset between each index and walks through the plurality of arrays by incrementing each index and simultaneously checking the first attribute data in corresponding entries.
Type: Grant
Filed: August 31, 2020
Date of Patent: August 23, 2022
Assignee: Apple Inc.
Inventors: John D. Pape, Brian R. Mestan, Peter G. Soderquist
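The walk the abstract describes — staggered indices, incremented in lockstep so every step probes a different row of each array — can be simulated sequentially. In hardware the inner loop over arrays happens in parallel in one cycle; this sketch, with its evenly spaced starting offsets and attribute-equality match, is an assumption about details the abstract leaves open.

```python
def invalidate_by_attr(arrays, attr):
    """Walk N stripe arrays with staggered indices; entries whose attribute
    data matches `attr` are invalidated (set to None)."""
    n = len(arrays)
    depth = len(arrays[0])
    indices = [(k * depth) // n for k in range(n)]   # offset between indices
    for _ in range(depth):                           # one pass covers every row
        for k, arr in enumerate(arrays):             # hardware checks these together
            i = indices[k]
            if arr[i] is not None and arr[i] == attr:
                arr[i] = None                        # invalidate matching entry
            indices[k] = (i + 1) % depth
    return arrays
```

Because the indices never collide within a step, all N arrays can be probed simultaneously, so a full-TLB attribute invalidation finishes in `depth` steps rather than `depth * N`.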
-
Patent number: 11422945
Abstract: A method for managing memory addresses in a memory subsystem is described. The method includes determining that a chunk of logical addresses is sequentially written such that a set of physical addresses mapped to corresponding logical addresses in the chunk are sequential. Thereafter, the memory subsystem updates an entry in a sequential write table for the chunk to indicate that the chunk was sequentially written, and updates a compressed logical-to-physical (L2P) table based on (1) the sequential write table and (2) a full L2P table. The full L2P table includes a set of full L2P entries, and each entry corresponds to a logical address in the chunk and references a physical address in the set of physical addresses. The compressed L2P table includes an entry that references the first physical address of the set of physical addresses, which is also referenced by an entry in the full L2P table.
Type: Grant
Filed: March 26, 2020
Date of Patent: August 23, 2022
Assignee: MICRON TECHNOLOGY, INC.
Inventor: David A. Palmer
-
Patent number: 11409663
Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine if the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size (different from the first page size). The first portion is used to perform a lookup in one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other one of the first TLB array and the second TLB array.
Type: Grant
Filed: November 4, 2020
Date of Patent: August 9, 2022
Assignee: International Business Machines Corporation
Inventors: David Campbell, Dwain A. Hicks
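The two concurrent probes can be modeled as two set-indexed lookups keyed by different page sizes. A Python sketch under illustrative assumptions (page sizes, set count, and the dictionary-based arrays are all stand-ins):

```python
SMALL_PAGE = 4096              # 4 KiB (illustrative)
LARGE_PAGE = 2 * 1024 * 1024   # 2 MiB (illustrative)
SETS = 16

tlb_a = {}  # (set_index, vpn) -> (page_size, physical_base)
tlb_b = {}

def install(array, va, page_size, physical_base):
    vpn = va // page_size
    array[(vpn % SETS, vpn)] = (page_size, physical_base)

def _probe(array, va, page_size):
    """Index the array with bits derived from one assumed page size."""
    vpn = va // page_size
    hit = array.get((vpn % SETS, vpn))
    if hit and hit[0] == page_size:
        return hit[1] + va % page_size
    return None

def translate(va):
    # In hardware both probes happen in the same cycle; here we simply
    # compute both and take whichever hits.
    small_hit = _probe(tlb_a, va, SMALL_PAGE)  # small-page index into array A
    large_hit = _probe(tlb_b, va, LARGE_PAGE)  # large-page index into array B
    return small_hit if small_hit is not None else large_hit

# demo: one small-page mapping in array A, one large-page mapping in array B
install(tlb_a, 8192, SMALL_PAGE, 0x100000)
install(tlb_b, 0, LARGE_PAGE, 0x200000)
```

The benefit is that a single cycle resolves either page size without a second, serialized probe.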
-
Patent number: 11403211
Abstract: A method of operation of a storage system includes: establishing a virtual storage device 1 including allocating portions of a storage media 1, a storage media 2, a storage media N, or a combination thereof including writing data blocks to the virtual storage device 1; determining a pinning status for the data blocks; pinning the data blocks to a logical block address (LBA) range until the pinning status indicates an unpinning of the data blocks; and relocating the data blocks to the storage media 1, the storage media 2, the storage media N, or the combination thereof within the virtual storage device 1 when the pinning status indicates unpinned.
Type: Grant
Filed: January 31, 2020
Date of Patent: August 2, 2022
Assignee: Enmotus, Inc.
Inventors: Andrew Mills, Marshall Lee
-
Patent number: 11397689
Abstract: A memory manager includes an internal memory and a hash function circuit. The internal memory includes a V2H (virtual address to hash function) table and an exception mapping table. The V2H table stores at least one virtual address group and a type information on a hash function mapped to the virtual address group. The exception mapping table stores at least one exception virtual address not translated into a physical address by the hash function in the virtual address group and a physical address mapped to the exception virtual address. The hash function circuit checks, when a virtual address is provided from a host, type information on a hash function mapped to a virtual address group including the virtual address, by referring to the V2H table included in the internal memory. The hash function circuit translates the virtual address into a physical address by using the hash function corresponding to the type information.
Type: Grant
Filed: October 8, 2019
Date of Patent: July 26, 2022
Assignee: SK hynix Inc.
Inventor: Kyu Hyun Choi
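The V2H scheme replaces a per-address mapping table with a per-group hash function plus a small exception table for addresses the hash cannot place. A Python sketch; the hash functions, group size, and table shapes below are illustrative assumptions:

```python
# Stand-ins for the hash function circuit's selectable functions.
HASH_FUNCS = {
    0: lambda v: v * 2 + 1,
    1: lambda v: v ^ 0xFF00,
}
GROUP_SIZE = 256  # virtual addresses per group (illustrative)

v2h_table = {}        # virtual address group -> hash function type
exception_table = {}  # virtual address -> physical address (hash bypassed)

def translate(virtual):
    # exception addresses bypass the hash path entirely
    if virtual in exception_table:
        return exception_table[virtual]
    group = virtual // GROUP_SIZE
    func_type = v2h_table[group]        # look up the group's hash type
    return HASH_FUNCS[func_type](virtual)

# demo: group 0 uses hash type 0; address 42 is an exception
v2h_table[0] = 0
exception_table[42] = 7777
```

Only the exceptions consume per-address table space; every other address in the group is translated by recomputing the hash.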
-
Patent number: 11386016
Abstract: A memory management unit (MMU) including a unified translation lookaside buffer (TLB) supporting a plurality of page sizes is disclosed. In one aspect, the MMU is further configured to store and dynamically update page size residency metadata associated with each of the plurality of page sizes. The page size residency metadata may include most recently used (MRU) page size data and/or a counter for each page size indicating how many pages of that page size are resident in the unified TLB. The unified TLB is configured to determine an order in which to perform a TLB lookup for at least a subset of page sizes of the plurality of page sizes based on the page size residency metadata.
Type: Grant
Filed: December 20, 2019
Date of Patent: July 12, 2022
Assignee: Ampere Computing LLC
Inventors: George Van Horn Leming, III, John Gregory Favor, Stephan Jean Jourdan, Jonathan Christopher Perry, Bret Leslie Toll
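A small model of the residency metadata: probe the most recently used page size first, order the rest by resident-entry counters, and skip sizes with nothing resident. Page sizes and the ordering policy below are illustrative assumptions:

```python
# residency counters: page size -> number of entries resident in the TLB
residency = {4096: 0, 2097152: 0, 1073741824: 0}  # 4 KiB / 2 MiB / 1 GiB
mru_page_size = None  # page size of the most recent TLB hit

def note_fill(page_size):
    residency[page_size] += 1

def note_hit(page_size):
    global mru_page_size
    mru_page_size = page_size

def probe_order():
    """Return the page sizes to try, best guess first. Sizes with zero
    resident entries are skipped entirely."""
    sizes = [s for s in residency
             if residency[s] > 0 and s != mru_page_size]
    sizes.sort(key=lambda s: residency[s], reverse=True)
    if mru_page_size is not None and residency[mru_page_size] > 0:
        sizes.insert(0, mru_page_size)  # MRU size goes to the front
    return sizes

# demo: two 4 KiB fills, one 2 MiB fill, and the last hit was 2 MiB
note_fill(4096)
note_fill(4096)
note_fill(2097152)
note_hit(2097152)
```

Ordering probes this way shortens the average lookup when one page size dominates the working set.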
-
Patent number: 11379152
Abstract: An apparatus comprises transaction handling circuitry to issue memory access transactions, each memory access transaction specifying an epoch identifier indicative of a current epoch in which the memory access transaction is issued; transaction tracking circuitry to track, for each of at least two epochs, a number of outstanding memory access transactions issued in that epoch; barrier termination circuitry to signal completion of a barrier termination command when the transaction tracking circuitry indicates that there are no outstanding memory access transactions remaining which were issued in one or more epochs preceding a barrier point; and epoch changing circuitry to change the current epoch to a next epoch, in response to a barrier point signal representing said barrier point. This helps to reduce the circuit area overhead for tracking completion of memory access transactions preceding a barrier point.
Type: Grant
Filed: June 11, 2020
Date of Patent: July 5, 2022
Assignee: Arm Limited
Inventors: Andrew Brookfield Swaine, Peter Andrew Riocreux
-
Patent number: 11366611
Abstract: A data processing system may include: a host suitable for including a first physical address corresponding to a first logical address in a first command, wherein the first physical address and the first logical address are associated with data, and sending the first command with the first physical address; and a memory system suitable for performing an operation corresponding to the first command by using the first physical address received from the host, and sending a result of the performed command operation to the host as a response, the host may check a time difference between a first time point that the first command is sent and a second time point that the response corresponding to the first command is received, and may determine whether to use the first physical address in a next command, based on a result of the time difference check.
Type: Grant
Filed: January 3, 2020
Date of Patent: June 21, 2022
Assignee: SK hynix Inc.
Inventor: Eu-Joon Byun
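The host-side decision reduces to a round-trip-time check: a fast response suggests the device used the supplied physical address directly, while a slow one suggests it fell back to its own mapping lookup, so the hint may be stale. A sketch with an illustrative threshold (the value and names are assumptions):

```python
FAST_THRESHOLD = 0.001  # seconds; illustrative cutoff for a "fast" response

use_physical_hint = True  # whether the next command carries a physical address

def on_response(sent_at, received_at):
    """Update the hinting decision from one command's round-trip time.
    Returns whether the next command should include the physical address."""
    global use_physical_hint
    # slow round trip -> device likely ignored the hint and did its own
    # L2P lookup, so stop sending the (possibly stale) physical address
    use_physical_hint = (received_at - sent_at) <= FAST_THRESHOLD
    return use_physical_hint
```

This lets the host exploit cached physical addresses opportunistically without needing an explicit validity protocol.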
-
Patent number: 11354251
Abstract: A method of offloading a computing kernel from a host central processing unit (CPU) to a co-processor includes obtaining, by an application running on the host CPU, a virtual address of a packet in a user level queue of a general packet processing unit (GPPU) and initializing, by the application, the packet referenced by the virtual address using an application programming interface of a user level device driver (ULDD). The packet includes a plurality of handles corresponding to the computing kernel. The method further includes finalizing, by the ULDD, the packet by including a list of bootstrap translation addresses comprising a physical address and a virtual address for each of the plurality of handles and output by a kernel level device driver (KLDD) of an operating system running on the host CPU, and accessing, by the application using the virtual address, results obtained from the co-processor processing the computing kernel.
Type: Grant
Filed: April 1, 2021
Date of Patent: June 7, 2022
Assignees: STMICROELECTRONICS (GRENOBLE 2) SAS, TECHNOLOGICAL EDUCATIONAL INSTITUTE OF CRETE
Inventors: Antonio-Marcello Coppola, Georgios Kornaros, Miltos Grammatikakis
-
Patent number: 11354128
Abstract: In one embodiment, software executing on a data processing system that is capable of performing dynamic operational mode transitions can realize performance improvements by predicting transitions between modes and/or predicting aspects of a new operational mode. Such prediction can allow the processor to begin an early transition into the target mode. The mode transition prediction principles can be applied for various processor mode transitions including 64-bit to 32-bit mode transitions, interrupts, exceptions, traps, virtualization mode transfers, system management mode transfers, and/or secure execution mode transfers.
Type: Grant
Filed: March 4, 2015
Date of Patent: June 7, 2022
Assignee: Intel Corporation
Inventors: Jason W. Brandt, Vedvyas Shanbhogue, Kameswar Subramaniam
-
Patent number: 11321241
Abstract: Techniques are disclosed for processing address translations. The techniques include detecting a first miss for a first address translation request for a first address translation in a first translation lookaside buffer, in response to the first miss, fetching the first address translation into the first translation lookaside buffer and evicting a second address translation from the translation lookaside buffer into an instruction cache or local data share memory, detecting a second miss for a second address translation request referencing the second address translation, in the first translation lookaside buffer, and in response to the second miss, fetching the second address translation from the instruction cache or the local data share memory.
Type: Grant
Filed: August 31, 2020
Date of Patent: May 3, 2022
Assignee: Advanced Micro Devices, Inc.
Inventors: Jagadish B. Kotra, Michael W. LeBeane
-
Patent number: 11314445
Abstract: Aspects of a storage device are provided which allow for identification of control page patterns from previous read commands and prediction of control pages to load in advance for subsequent read commands. The storage device includes a memory configured to store data and a plurality of control pages. Each of the control pages includes a plurality of logical addresses associated with the data. A controller is configured to receive from a host device a plurality of read commands associated with a sequence of the control pages. The controller is further configured to identify and store a control page pattern based on the sequence of control pages and to predict one or more of the control pages from one or more of the other control pages in the sequence in a subsequent plurality of read commands.
Type: Grant
Filed: November 19, 2019
Date of Patent: April 26, 2022
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Dinesh Kumar Agarwal, Hitesh Golechchha, Sourabh Sankule
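Pattern-based control-page prefetch can be sketched as a tiny sequence predictor: remember which control page followed each observed pair of pages, and when the same pair recurs, predict (and preload) the remembered successor. The pair-based pattern store below is an illustrative assumption, not the patent's mechanism:

```python
patterns = {}  # (previous_page, current_page) -> next page seen last time
history = []   # control pages touched by recent read commands

def on_read(control_page):
    """Record the control page a read command touched. Returns a predicted
    next control page to load in advance, or None if no pattern matches."""
    history.append(control_page)
    if len(history) >= 3:
        # learn: after pages (a, b) we observed page c
        a, b, c = history[-3], history[-2], history[-1]
        patterns[(a, b)] = c
    if len(history) >= 2:
        # predict: have we seen the current pair before?
        return patterns.get((history[-2], history[-1]))
    return None
```

On a repeating workload such as pages 1, 2, 3, 1, 2, ... the second occurrence of the pair (1, 2) predicts page 3, so its logical-to-physical entries can be loaded before the host asks.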
-
Patent number: 11301392
Abstract: A method and an information handling system having a plurality of processors connected by a cross-processor network, where each of the plurality of processors preferably has a filter construct having an outgoing filter list that identifies logical partition identifications (LPIDs) that are exclusively assigned to that processor and/or an incoming filter list that identifies LPIDs on that processor and at least one additional processor in the system. In operation, if the LPID of the outgoing translation invalidation instruction is on the outgoing filter list, the address translation invalidation instruction is acknowledged on behalf of the system. If the LPID of the incoming invalidation instruction does not match any LPID on the incoming filter list, then the translation invalidation instruction is acknowledged, and if the LPID of the incoming invalidation instruction matches any LPID on the incoming filter list, then the invalidation instruction is sent into the respective processor.
Type: Grant
Filed: October 6, 2020
Date of Patent: April 12, 2022
Assignee: International Business Machines Corporation
Inventors: Debapriya Chatterjee, Bryant Cockcroft, Larry Leitner, John A. Schumann, Karen Yokum
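The two filter lists are simple set-membership checks. A Python sketch of the decision logic (class and parameter names are illustrative):

```python
class ProcessorFilter:
    """Per-processor LPID filters for translation-invalidation traffic."""

    def __init__(self, exclusive_lpids, shared_lpids):
        # outgoing list: LPIDs assigned exclusively to this processor
        self.outgoing = set(exclusive_lpids)
        # incoming list: LPIDs on this processor AND at least one other
        self.incoming = set(shared_lpids)

    def ack_outgoing_locally(self, lpid):
        """True -> acknowledge on behalf of the system; no other
        processor can hold translations for an exclusive LPID."""
        return lpid in self.outgoing

    def forward_incoming(self, lpid):
        """True -> send the invalidation into this processor;
        False -> acknowledge it immediately, nothing here to invalidate."""
        return lpid in self.incoming

# demo: LPIDs 1 and 2 are exclusive to cpu0, LPID 3 is shared
cpu0 = ProcessorFilter(exclusive_lpids={1, 2}, shared_lpids={3})
```

The effect is that invalidations for exclusive partitions never cross the network, and incoming broadcasts for absent partitions are acknowledged without disturbing the core.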
-
Patent number: 11288207
Abstract: Apparatus comprises address translation circuitry configured to access translation data defining a set of memory address translations; transaction handling circuitry to receive translation transactions and to receive invalidation transactions, each translation transaction defining one or more input memory addresses in an input memory address space to be translated to respective output memory addresses in an output memory address space, in which the transaction handling circuitry is configured to control the address translation circuitry to provide the output memory address as a translation response; in which each invalidation transaction defines at least a partial invalidation of the translation data; transaction tracking circuitry to associate an invalidation epoch, of a set of at least two invalidation epochs, with each translation transaction and with each invalidation transaction; and invalidation circuitry to store data defining a given invalidation transaction and, for translation transactions having th
Type: Grant
Filed: March 30, 2020
Date of Patent: March 29, 2022
Assignee: Arm Limited
Inventor: Peter Andrew Riocreux
-
Patent number: 11281795
Abstract: A system includes a random number generator generating a random number in response to an event. Control logic generates hierarchical part alignment selectors from the random number. For each secure data block to be stored in volatile storage, a physical address of a first logical address for that secure data block is set based upon the hierarchical part alignment selectors. For each data word within that secure data block, a physical address of a first logical address for that data word is set based upon the hierarchical part alignment selectors. For each data byte within that data word, a physical address of a first logical address for that data byte is set based upon the hierarchical part alignment selectors. A physical address of a logical address for a first data bit within that data byte is set based upon the hierarchical part alignment selectors.
Type: Grant
Filed: December 24, 2019
Date of Patent: March 22, 2022
Assignee: STMicroelectronics International N.V.
Inventor: Dhulipalla Phaneendra Kumar
-
Patent number: 11281591
Abstract: The present disclosure includes a method for implementing a virtual address space. The method includes providing an embedded system having a physical memory defining a physical address space. The method includes providing a program, executable by the embedded system, having a program range having an access characteristic and a memory layout for the virtual address space. The program range is assigned to a virtual address in the virtual address space and has a program size. The method further includes creating the virtual address space on the embedded system, which comprises segments having an identical segment size and a separate virtual start address. The method also includes creating a conversion table on the embedded system for converting a virtual address of the program range into a physical address in the physical memory. Finally, the method converts a memory access to the virtual address into the physical address using the conversion table.
Type: Grant
Filed: December 17, 2019
Date of Patent: March 22, 2022
Assignee: Endress+Hauser Conducta GmbH+Co. KG
Inventor: Stefan Kempf
-
Patent number: 11269780
Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in a main memory of the computer system. In one approach, data is stored in memory at one or more logical addresses allocated to an application by an operating system. The data is physically stored in a first memory device of a first memory type (e.g., NVRAM). The operating system determines an access pattern for the stored data. In response to determining the access pattern, the data is moved from the first memory device to a second memory device of a different memory type (e.g., DRAM).
Type: Grant
Filed: September 17, 2019
Date of Patent: March 8, 2022
Assignee: Micron Technology, Inc.
Inventors: Kenneth Marion Curewitz, Sean S. Eilert, Hongyu Wang, Samuel E. Bradshaw, Shivasankar Gunasekaran, Justin M. Eno, Shivam Swami
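One simple realization of "determining an access pattern" is promotion by access count: data starts in the slower tier and moves to the faster tier once it proves hot. The threshold, tier names, and counting policy below are illustrative assumptions:

```python
HOT_THRESHOLD = 3  # accesses before promotion (illustrative)

placement = {}      # logical address -> "nvram" or "dram"
access_counts = {}  # logical address -> accesses observed so far

def allocate(addr):
    """New allocations land in the slower, larger tier first."""
    placement[addr] = "nvram"
    access_counts[addr] = 0

def access(addr):
    """Count an access; promote the data once it is clearly hot.
    Returns the tier the data lives in after this access."""
    access_counts[addr] += 1
    if placement[addr] == "nvram" and access_counts[addr] >= HOT_THRESHOLD:
        placement[addr] = "dram"  # move hot data to the faster tier
    return placement[addr]

# demo: one allocation, not yet accessed
allocate(0x10)
```

A real system would also demote cold data and track patterns richer than a raw count (recency, stride, read/write mix), but the promotion decision has this shape.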
-
Patent number: 11263149
Abstract: The present disclosure describes aspects of cache management of logical-physical translation metadata. In some aspects, a cache (260) for logical-physical translation entries of a storage media system (114) is divided into a plurality of segments (264). An indexer (364) is configured to efficiently balance a distribution of the logical-physical translation entries (252) between the segments (264). A search engine (362) associated with the cache is configured to search respective cache segments (264) and a cache manager (160) may leverage masked search functionality of the search engine (362) to reduce the overhead of cache flush operations.
Type: Grant
Filed: February 26, 2020
Date of Patent: March 1, 2022
Assignee: Marvell Asia PTE, Ltd.
Inventors: Yu Zeng, Shenghao Gao
-
Patent number: 11262946
Abstract: Various embodiments described herein provide for selectively sending a cache-based read command, such as a speculative read (SREAD) command in accordance with a Non-Volatile Dual In-Line Memory Module-P (NVDIMM-P) memory protocol, to a memory sub-system.
Type: Grant
Filed: November 25, 2019
Date of Patent: March 1, 2022
Assignee: Micron Technology, Inc.
Inventors: Dhawal Bavishi, Patrick A. La Fratta
-
Patent number: 11256615
Abstract: A memory system may include a memory device and a controller including a memory, suitable for generating map data for mapping between a physical address corresponding to data within the memory device in response to a command and a logical address received from a host, wherein the controller selects a memory map segment among a plurality of memory map segments, when a read count corresponding to the selected memory map segment is greater than or equal to a first threshold, calculates a map miss ratio of the memory using a total read count and a map miss count, and transmits the selected memory map segment to the host when the map miss ratio is greater than or equal to a second threshold.
Type: Grant
Filed: November 12, 2019
Date of Patent: February 22, 2022
Assignee: SK hynix Inc.
Inventor: Eu-Joon Byun
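The two-threshold decision in this abstract is easy to state directly: a map segment is pushed to the host only when it is read often enough and the controller's map cache is missing often enough to justify it. The threshold values and function name below are illustrative assumptions:

```python
READ_COUNT_THRESHOLD = 5    # first threshold: segment must be hot (illustrative)
MISS_RATIO_THRESHOLD = 0.2  # second threshold: cache must be struggling (illustrative)

def should_send_segment(segment_read_count, total_read_count, map_miss_count):
    """Decide whether to transmit a map segment to the host.

    segment_read_count: reads that touched the selected segment
    total_read_count:   all reads handled by the controller
    map_miss_count:     reads whose map entry was not cached in controller memory
    """
    if segment_read_count < READ_COUNT_THRESHOLD:
        return False  # segment is not hot enough to be worth sending
    miss_ratio = map_miss_count / total_read_count
    # only offload mapping to the host when misses are actually hurting
    return miss_ratio >= MISS_RATIO_THRESHOLD
```

Gating on the miss ratio keeps the controller from spending bus bandwidth shipping map segments the host would rarely help with.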