Patents Examined by Hua J Song
-
Patent number: 11520527
Abstract: An apparatus comprises a processing device configured to persistently store metadata pages on a plurality of storage devices. The metadata pages are organized into buckets. The processing device is configured to access a given metadata page, corresponding to a given logical volume, based at least in part on a bucket identifier. The bucket identifier comprises a first portion indicating a bucket range that corresponds to the given logical volume, a second portion indicating an offset into that bucket range to a grouping of buckets that corresponds to the given logical volume, and a third portion indicating an offset into the grouping of buckets to the bucket comprising the given metadata page.
Type: Grant
Filed: June 11, 2021
Date of Patent: December 6, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Amitai Alkalay, Vladimir Shveidel, Lior Kamran
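The three-portion bucket identifier can be pictured as a packed bit field. A minimal Python sketch follows; the field widths (16/8/8 bits) and function names are illustrative assumptions, not taken from the patent.

```python
# Assumed widths: range portion, group-offset portion, bucket-offset portion.
RANGE_BITS, GROUP_BITS, BUCKET_BITS = 16, 8, 8

def pack_bucket_id(bucket_range, group_offset, bucket_offset):
    """Pack the three portions into a single bucket identifier."""
    return (bucket_range << (GROUP_BITS + BUCKET_BITS)) | \
           (group_offset << BUCKET_BITS) | bucket_offset

def unpack_bucket_id(bucket_id):
    """Recover (bucket range, offset to bucket grouping, offset to bucket)."""
    bucket_offset = bucket_id & ((1 << BUCKET_BITS) - 1)
    group_offset = (bucket_id >> BUCKET_BITS) & ((1 << GROUP_BITS) - 1)
    bucket_range = bucket_id >> (GROUP_BITS + BUCKET_BITS)
    return bucket_range, group_offset, bucket_offset
```

With this layout, locating a metadata page is a pure bit-shift computation, so no per-volume lookup table is needed to find the bucket.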
-
Patent number: 11513967
Abstract: The provided computer-implemented methods for prioritizing cache objects for deletion may include (1) tracking, at a computing device, the time an externally-accessed object spends in an external cache, (2) queuing, when the externally-accessed object is purged from the external cache, the externally-accessed object in a first queue, (3) queuing, when an internally-accessed object is released, the internally-accessed object in a second queue, (4) prioritizing objects within the first queue based on a cache-defined internal age factor and on the respective times the objects spend in the external cache and in an internal cache, (5) prioritizing objects within the second queue based on the respective times the objects spend in the internal cache, (6) selecting the oldest object, having the longest time in either the first queue or the second queue, and (7) deleting the oldest object. Various other methods, systems, and computer-readable media are disclosed.
Type: Grant
Filed: December 30, 2020
Date of Patent: November 29, 2022
Assignee: VERITAS TECHNOLOGIES LLC
Inventors: Jitendra Patidar, Anindya Banerjee
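The two-queue selection in steps (4)-(6) can be sketched as follows. This is an illustrative Python model: the way the age factor combines external and internal residency times is an assumption, since the patent leaves the cache-defined internal age factor to the implementation.

```python
def prioritize(first_queue, second_queue, age_factor=2.0):
    """Pick the deletion candidate across both queues.

    first_queue:  [(name, ext_time, int_time)] - externally-accessed objects
                  already purged from the external cache.
    second_queue: [(name, int_time)] - released internally-accessed objects.
    The age_factor weighting of external residency is an assumed rule.
    """
    best, best_age = None, -1.0
    for name, ext_t, int_t in first_queue:
        age = age_factor * ext_t + int_t     # assumed combination rule
        if age > best_age:
            best, best_age = name, age
    for name, int_t in second_queue:
        if int_t > best_age:
            best, best_age = name, int_t
    return best                               # the "oldest" object to delete
```

A usage example: an object purged after 10 time units externally and 5 internally scores 25, so a released internal object with 30 units of residency would be deleted first.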
-
Patent number: 11507317
Abstract: A program operation is executed on a memory sub-system. In response to receiving a request to execute a read operation, a first program suspend operation is executed to suspend the program operation. In response to completion of the read operation, a program resume operation is executed to resume execution of the program operation. A delay period is established following execution of the program resume operation, during which execution of the program operation proceeds. A second program suspend operation is executed only after the delay period.
Type: Grant
Filed: November 20, 2020
Date of Patent: November 22, 2022
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Jiangang Wu, Sampath K. Ratnam, Yang Zhang, Guang Chang Ye, Kishore Kumar Muchherla, Hong Lu, Karl D. Schuh, Vamsi Pavan Rayaprolu
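The suspend/resume flow with a post-resume delay window can be modeled as a small state machine. This is a toy Python sketch under stated assumptions: time is counted in abstract "ticks" rather than real controller timings, and the first suspend is taken to be undelayed because no resume has yet occurred.

```python
class ProgramOp:
    """Toy model of program suspend/resume with a post-resume delay period."""

    def __init__(self, delay_period=3):
        self.state = "programming"
        self.delay_period = delay_period
        # No resume has happened yet, so the first suspend is not delayed.
        self.ticks_since_resume = delay_period

    def read_request(self):
        """A suspend is granted only once the post-resume delay has elapsed,
        guaranteeing the program operation forward progress between reads."""
        if self.state == "programming" and \
                self.ticks_since_resume >= self.delay_period:
            self.state = "suspended"
            return True     # read proceeds now
        return False        # read must wait out the delay window

    def read_done(self):
        if self.state == "suspended":
            self.state = "programming"    # program resume operation
            self.ticks_since_resume = 0   # delay period starts

    def tick(self):
        if self.state == "programming":
            self.ticks_since_resume += 1
```

The delay period prevents a stream of reads from starving the program operation: each resume buys the program a guaranteed execution window before the next suspend can land.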
-
Patent number: 11500552
Abstract: A method for managing processing power in a storage system is provided. The method includes providing a plurality of blades, each of a first subset having a storage node and storage memory, and each of a second, differing subset having a compute-only node. The method includes distributing authorities across the plurality of blades, to a plurality of nodes including at least one compute-only node, wherein each authority has ownership of a range of user data.
Type: Grant
Filed: November 12, 2020
Date of Patent: November 15, 2022
Assignee: Pure Storage, Inc.
Inventors: John Martin Hayes, Robert Lee, John Colgrove, John D. Davis
-
Patent number: 11500775
Abstract: A memory system stores user data including file content in clusters of memory space, folder entries, metadata, and a file allocation table (FAT) including FAT entries. The system comprises a cache memory, an addressable memory including memory space, and control logic coupled to the addressable memory and the cache memory. The control logic is configured to store, in the addressable memory, user data in a current cluster at a current cluster offset including file content, and corresponding metadata including the current cluster offset and the linked cluster offset of a linked cluster linking to the current cluster, and to cache a FAT entry pointing to the current cluster in the cache memory.
Type: Grant
Filed: February 24, 2021
Date of Patent: November 15, 2022
Assignee: MACRONIX INTERNATIONAL CO., LTD.
Inventor: Chun-Lien Su
-
Patent number: 11494307
Abstract: A computing system includes a host and a storage device. The host includes a host memory, and the storage device includes a processor, a semiconductor memory device, and a device memory which caches mapping information of the semiconductor memory device. In operation, the processor transmits to the host read data and mapping table entry information of a logical address region corresponding to the read data in response to a read request. The mapping table entry information is transmitted to the host based on features of the logical address region. Additionally, the host may transmit a read buffer request corresponding to the mapping table entry information to the storage device, and the storage device may transmit mapping information corresponding to the read buffer request to the host, which then stores the mapping information in the host memory.
Type: Grant
Filed: July 27, 2020
Date of Patent: November 8, 2022
Assignee: SK hynix Inc.
Inventor: Ji Hoon Seok
-
Patent number: 11494300
Abstract: Methods and apparatus provide virtual-to-physical address translation using a hardware page table walker with a region-based page table prefetch operation that produces virtual memory region tracking information including at least: data representing a virtual base address of a virtual memory region and a physical address of a first page table entry (PTE) corresponding to a virtual page within the virtual memory region. The hardware page table walker, in response to a translation lookaside buffer (TLB) miss indication, prefetches the physical address of a second page table entry, which provides the final physical address for the missed TLB entry, using the virtual memory region tracking information. In some implementations, the prefetch of the physical PTE address is done in parallel with the earlier levels of the page walk operation.
Type: Grant
Filed: September 26, 2020
Date of Patent: November 8, 2022
Assignee: ADVANCED MICRO DEVICES, INC.
Inventor: Gabriel H. Loh
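The prefetch address computation can be sketched in a few lines. This Python model assumes 4 KiB pages, 8-byte PTEs, and last-level PTEs laid out contiguously within the tracked region; these are common values used for illustration, not details taken from the patent.

```python
PAGE_SHIFT = 12   # assumed 4 KiB pages
PTE_SIZE = 8      # assumed 8-byte page table entries

def prefetch_pte_addr(region_base_va, first_pte_pa, miss_va):
    """Predict the physical address of the PTE for a missed virtual address.

    region_base_va: virtual base address of the tracked region.
    first_pte_pa:   physical address of the region's first PTE.
    Both values come from the region tracking information; the PTE for the
    missing page is assumed to sit at a fixed stride from the first PTE.
    """
    page_index = (miss_va - region_base_va) >> PAGE_SHIFT
    return first_pte_pa + page_index * PTE_SIZE
```

Because this address is a direct computation, the walker can issue the final-level PTE fetch immediately on a TLB miss, overlapping it with the upper-level walk steps instead of serializing behind them.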
-
Patent number: 11474949
Abstract: A memory management system includes a physical memory associated with a computing device and a memory manager. The memory manager is configured to manage a shared memory cache as part of a compression of the physical memory using a cache compression algorithm, wherein the compression block size is a single cache line. The physical memory includes a sector translation table (STT) region and a sector memory region. The memory manager uses a memory descriptor, defined by an STT entry having a cache line map and a plurality of sector pointers, to load cache lines from the physical memory into a level 3 cache. The cache line map contains cache line metadata including the size of each cache line, the location of the cache line in one of the sectors pointed to by the STT entry, and a plurality of flags.
Type: Grant
Filed: June 24, 2020
Date of Patent: October 18, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventor: Badriddine Khessib
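An STT entry of this shape can be sketched as a small descriptor object. This Python sketch is illustrative only: the class and field names, pointer values, and metadata layout are assumptions standing in for the patent's hardware structures.

```python
class STTEntry:
    """Sketch of one sector translation table entry: sector pointers plus a
    per-cache-line map of (size, sector, offset, flags) metadata."""

    def __init__(self, sector_ptrs):
        self.sector_ptrs = sector_ptrs   # pointers into the sector region
        self.line_map = {}               # line index -> metadata dict

    def add_line(self, index, size, sector, offset, flags=0):
        """Record where a compressed cache line lives and how big it is."""
        self.line_map[index] = {"size": size, "sector": sector,
                                "offset": offset, "flags": flags}

    def locate(self, index):
        """Return (sector base pointer, offset, compressed size) for a line,
        i.e. everything needed to load it from physical memory."""
        m = self.line_map[index]
        return self.sector_ptrs[m["sector"]], m["offset"], m["size"]
```

Because compression is per cache line, each line can land at an arbitrary offset inside a sector; the map is what makes individual lines addressable without decompressing a whole page.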
-
Patent number: 11474951
Abstract: The present invention discloses a memory management unit, an address translation method, and a processor. The memory management unit includes: a translation lookaside buffer adapted to store a plurality of translation entries, where each translation entry includes a size flag bit, a virtual address tag, and a physical address tag; the virtual address tag represents a virtual page, the physical address tag represents the physical page corresponding to the virtual page, and the size flag bit represents the page size of the virtual page; and a translation processing unit adapted to look up, among the plurality of translation entries, a translation entry whose virtual address tag matches a to-be-translated virtual address, based on the page size represented by the size flag bit of each entry, and to translate the virtual address into a physical address based on the matching translation entry.
Type: Grant
Filed: September 16, 2020
Date of Patent: October 18, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Ziyi Hao, Xiaoyan Xiang, Feng Zhu
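A size-flagged lookup of this kind can be sketched as follows: the size flag of each entry determines how many low bits of the virtual address are treated as the page offset before the tags are compared. The two page sizes (4 KiB and 2 MiB) are common examples chosen for illustration, not taken from the patent.

```python
# Assumed mapping from size flag to page-offset width in bits.
PAGE_SHIFTS = {"4K": 12, "2M": 21}

def tlb_lookup(entries, vaddr):
    """entries: list of (size_flag, virtual_address_tag, physical_address_tag).
    Returns the translated physical address, or None on a TLB miss."""
    for size_flag, vpn_tag, pfn_tag in entries:
        shift = PAGE_SHIFTS[size_flag]
        # Compare tags under this entry's own page size.
        if vaddr >> shift == vpn_tag:
            return (pfn_tag << shift) | (vaddr & ((1 << shift) - 1))
    return None
```

Storing the size in the entry itself lets pages of different sizes share one TLB: each comparison simply masks the address according to that entry's flag rather than requiring a separate structure per page size.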
-
Patent number: 11474717
Abstract: Memory systems include a first semiconductor memory module and a processor. The processor is configured to access the first semiconductor memory module in units of a page, and further configured to respond to the occurrence of a page fault in a specific page associated with a virtual address corresponding to an access target by adjusting a number of pages and allocating, in the first semiconductor memory module, pages corresponding to the adjusted number of pages, which are associated with the virtual address.
Type: Grant
Filed: October 28, 2020
Date of Patent: October 18, 2022
Inventors: Yongjun Yu, Insu Choi, Dae-Jeong Kim, Sung-Joon Kim, Wonjae Shin
-
Patent number: 11461013
Abstract: A memory system includes: a plurality of memory devices; a plurality of cores suitable for controlling the plurality of memory devices, respectively; and a controller including: a host interface layer for providing any one of the cores with a request of a host based on mapping between logical addresses and the cores; a remap manager for changing the mapping between the logical addresses and the cores in response to a trigger; a data swapper for swapping data between the plurality of memory devices based on the changed mapping; and a state manager for determining a state of the memory system depending on whether the data swapper is swapping the data or has completed swapping the data, and for providing the remap manager with the trigger based on the state of the memory system and a difference in the degree of wear between the plurality of memory devices.
Type: Grant
Filed: September 29, 2020
Date of Patent: October 4, 2022
Assignee: SK hynix Inc.
Inventors: Hee Chan Shin, Young Ho Ahn, Yong Seok Oh, Do Hyeong Lee, Jae Gwang Lee
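The state manager's trigger decision can be sketched in a few lines. This Python sketch is a simplification under stated assumptions: the wear threshold value is invented for illustration, and "wear" is reduced to per-device average erase counts.

```python
def should_trigger_remap(erase_counts, swapping, threshold=100):
    """Decide whether the state manager should fire the remap trigger.

    erase_counts: per-device wear metric (assumed: average erase counts).
    swapping:     True while the data swapper is still moving data.
    threshold:    assumed wear-gap threshold; not taken from the patent.
    """
    if swapping:
        # The system is mid-swap; remapping now would race the data swapper.
        return False
    return max(erase_counts) - min(erase_counts) >= threshold
```

Gating the trigger on the swap state keeps remap operations serialized: a new logical-address-to-core mapping is only established once the previous data movement has fully completed.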
-
Patent number: 11449418
Abstract: A method for operating a controller that controls a memory device including memory blocks includes: determining candidate blocks based on the erase counts of the memory blocks; determining a victim block among the candidate blocks based on the data update counts of the logical addresses associated with a plurality of pages in each of the candidate blocks; and moving the data of the victim block into a destination block.
Type: Grant
Filed: August 25, 2020
Date of Patent: September 20, 2022
Assignee: SK hynix Inc.
Inventor: Seung Won Yang
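The two-stage selection can be sketched as: filter candidates by erase count, then pick the block whose pages' logical addresses are updated least often (coldest data). This Python sketch is illustrative; the specific filtering rule (erase count at or below a threshold) and the sum-of-update-counts tiebreak are assumptions.

```python
def select_victim(blocks, erase_threshold):
    """blocks: {block_id: (erase_count, [update_count for each page's LBA])}.
    Stage 1: candidates are blocks with low erase counts (assumed rule).
    Stage 2: the victim is the candidate whose data is coldest, i.e. has
    the lowest total logical-address update count.
    Returns the victim block id, or None if no block qualifies."""
    candidates = [b for b, (ec, _) in blocks.items() if ec <= erase_threshold]
    if not candidates:
        return None
    return min(candidates, key=lambda b: sum(blocks[b][1]))
```

Choosing cold data from lightly-erased blocks serves wear leveling twice over: the least-worn block absorbs the move, and the data moved is unlikely to be rewritten soon.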
-
Patent number: 11442854
Abstract: Described apparatuses and methods balance memory-portion accessing. Some memory architectures are designed to accelerate memory accesses using schemes that may be at least partially dependent on memory access requests being distributed roughly equally across the multiple memory portions of a memory. Examples of such memory portions include the cache sets of cache memories and the memory banks of multibank memories. Some code, however, may execute in a manner that concentrates memory accesses in a subset of the total memory portions, which can reduce memory responsiveness in these memory types. To account for such behaviors, the described techniques can shuffle memory addresses based on a shuffle map to produce shuffled memory addresses. The shuffle map can be determined based on a count of the occurrences of a reference bit value at the bit positions of the memory addresses. Using the shuffled memory addresses for memory requests can substantially balance the accesses across the memory portions.
Type: Grant
Filed: October 14, 2020
Date of Patent: September 13, 2022
Assignee: Micron Technology, Inc.
Inventor: David Andrew Roberts
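One simple realization of a count-derived shuffle map is to XOR the index bits with a pattern marking the most-skewed bit positions. The Python sketch below illustrates the idea; the XOR-based mapping and the majority threshold are assumptions, and the patent's actual shuffle function may differ.

```python
def build_shuffle_map(addresses, index_bits):
    """Count occurrences of the reference bit value (here: 1) at each index
    bit position, and mark the positions skewed toward 1 for flipping."""
    ones = [0] * index_bits
    for a in addresses:
        for b in range(index_bits):
            ones[b] += (a >> b) & 1
    half = len(addresses) / 2
    return sum(1 << b for b in range(index_bits) if ones[b] > half)

def shuffle_address(addr, shuffle_map, index_bits):
    """Apply the shuffle map to the index bits only; upper bits unchanged."""
    mask = (1 << index_bits) - 1
    return (addr & ~mask) | ((addr ^ shuffle_map) & mask)
```

XOR shuffling is attractive here because it is a bijection: every original index maps to exactly one shuffled index, so no two addresses ever collide onto the same memory portion as a result of the shuffle.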
-
Patent number: 11442649
Abstract: A method according to one embodiment includes identifying a request to migrate data associated with a volume from a source storage pool to a destination storage pool; identifying the volume segment table (VST) entries corresponding to the rank extents within the source storage pool containing the data; allocating and synchronizing small VSTs for the identified VST entries within the volume; allocating one or more rank extents within the destination storage pool; transferring the data associated with the volume from the rank extents within the source storage pool to the one or more rank extents in the one or more ranks of the destination storage pool; updating the small VSTs to correspond to the transferred data in the one or more rank extents of the destination storage pool; and freeing the data from the one or more rank extents within the source storage pool.
Type: Grant
Filed: December 21, 2020
Date of Patent: September 13, 2022
Assignee: International Business Machines Corporation
Inventors: Hui Zhang, Clint A. Hardy, Karl A. Nielsen, Matthew J. Kalos, Qiang Xie
-
Patent number: 11435929
Abstract: A system update appliance includes a processor and a memory device with a Content Addressable Storage (CAS) space and a location-addressable storage space. The location-addressable storage space is partitioned into an object storage space and a device storage space. The processor stores a device entry in the device storage space. The device entry is associated with a device external to the system update appliance and includes a component entry for a component of the device. The component operates based on a first update. The component entry includes a description of the component and a pointer to a record stored in the CAS space. The processor stores the record in the CAS space. The record is associated with the combination of the component and the first update, and includes the description, a second pointer to an update repository, and a third pointer to the object storage space.
Type: Grant
Filed: August 27, 2020
Date of Patent: September 6, 2022
Assignee: Dell Products L.P.
Inventors: Vaideeswaran Ganesan, Hemant Gaikwad, Pravin Janakiram
-
Patent number: 11429612
Abstract: An address search circuit of a semiconductor memory apparatus may include: a first search interface configured to receive a search command for a target logical address, generate a first signal when the reference count of the target logical address is less than a threshold value, and generate a second signal when the reference count is equal to or greater than the threshold value; a second search interface configured to receive map data whose respective reference counts are less than the threshold value in response to the first signal; a search memory configured to store map data whose respective reference counts are equal to or greater than the threshold value; a first search buffer configured to store the map data received through the second search interface and to receive map data in response to the second signal; and a search engine configured to select map data by searching the map data.
Type: Grant
Filed: April 6, 2020
Date of Patent: August 30, 2022
Assignee: SK hynix Inc.
Inventors: Joung Young Lee, Dong Sop Lee
-
Patent number: 11429533
Abstract: A method of reducing FTL address mapping space, including: S1, obtaining an mpa and an offset according to a logical page address; S2, determining whether the mpa is hit in a cache; S3, determining whether a NAND is written into the mpa; S4, performing a nomap load operation and returning an invalid mapping; S5, performing a map load operation; S6, directly searching an mpci representing the position of the mpa in the cache and searching the physical page address gppa with reference to the offset; S7, determining whether a mapping from a logical address to a physical address needs to be modified; S8, modifying the mapping table corresponding to the mpci in the cache and marking the mp corresponding to the mpci as a dirty mp; S9, determining whether the condition for writing the mp into the NAND is triggered; and S10, writing the dirty mp into the NAND.
Type: Grant
Filed: September 29, 2020
Date of Patent: August 30, 2022
Assignee: SHENZHEN UNIONMEMORY INFORMATION SYSTEM LIMITED
Inventors: Jian Zuo, Yuanyuan Feng, Zhiyuan Leng, Jintao Gan, Weiliang Wang, Zongming Jia
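The read-side steps S1-S6 can be condensed into a short lookup routine. This Python sketch is illustrative only: the number of mapping entries per map page (1024) is an assumption, the NAND map-load of S5 is stubbed out, and the data structures stand in for the patent's cache and mpci bookkeeping.

```python
ENTRIES_PER_MP = 1024   # assumed mappings per map page

def lookup(lpa, cache, nand_written):
    """Resolve a logical page address to a physical page address (gppa).

    cache:        {mpa: [gppa, ...]} - cached map pages (stands in for the
                  mpci-indexed cache of the patent).
    nand_written: set of mpas whose map pages exist on NAND.
    """
    mpa, offset = divmod(lpa, ENTRIES_PER_MP)   # S1: split lpa into mpa+offset
    if mpa in cache:                             # S2: cache hit
        return cache[mpa][offset]                # S6: index by offset -> gppa
    if mpa not in nand_written:                  # S3: map page never written
        return None                              # S4: nomap load, invalid map
    # S5: a map load would fetch the map page from NAND into the cache here
    # (omitted in this sketch), after which S6 would resolve the gppa.
    return None
```

The space saving comes from S1: only one mapping entry per map page of logical addresses needs an independently tracked location, and the per-page offset is computed rather than stored.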
-
Patent number: 11422946
Abstract: Systems, apparatuses, and methods for implementing translation lookaside buffer (TLB) striping to enable efficient invalidation operations are described. TLB sizes are growing in both width (more features in a given page table entry) and depth (to cover larger memory footprints). A striping scheme is proposed to enable an efficient, high-performance method for performing TLB maintenance operations in the face of this growth. Accordingly, a TLB stores first attribute data in a striped manner across a plurality of arrays. The striped manner allows different entries to be searched simultaneously in response to receiving an invalidation request which identifies a particular attribute of a group to be invalidated. Upon receiving an invalidation request, the TLB generates a plurality of indices with an offset between each index, and walks through the plurality of arrays by incrementing each index and simultaneously checking the first attribute data in the corresponding entries.
Type: Grant
Filed: August 31, 2020
Date of Patent: August 23, 2022
Assignee: Apple Inc.
Inventors: John D. Pape, Brian R. Mestan, Peter G. Soderquist
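The striped walk can be illustrated with a toy model in which each array checks one entry per step, starting from offset indices. This Python sketch is an assumption-laden simplification: the array geometry, the offset rule (depth divided by the number of arrays), and the lockstep loop are invented for illustration; real hardware performs these checks in parallel rather than in a Python loop.

```python
def striped_invalidate(arrays, attr):
    """arrays: list of equal-length lists holding the first attribute data.
    Entries whose attribute matches `attr` are invalidated (set to None).
    Each loop iteration models one cycle in which every array checks a
    different entry simultaneously."""
    n_arrays, depth = len(arrays), len(arrays[0])
    offset = depth // n_arrays                  # assumed starting offset
    indices = [i * offset for i in range(n_arrays)]
    for _ in range(depth):
        for a, idx in enumerate(indices):
            if arrays[a][idx] == attr:
                arrays[a][idx] = None           # invalidate matching entry
            indices[a] = (idx + 1) % depth      # increment each index
    return arrays
```

Because the indices start offset from one another, the arrays never contend for the same row, so a full sweep of an N-array TLB takes only depth cycles instead of N times depth.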
-
Patent number: 11416388
Abstract: A system includes a memory device and a processing device coupled to the memory device. The processing device can determine a data rate from a first sensor and a data rate from a second sensor. The processing device can write a first set of data received from the first sensor at a first logical block address (LBA) in the memory device. The processing device can write a second set of data, received from the second sensor and subsequent to the first set of data, at a second LBA in the memory device. The processing device can remap the first LBA and the second LBA to be logically sequential LBAs. The second LBA can be associated with an offset from the first LBA, and the offset can correspond to the data rate of the first sensor.
Type: Grant
Filed: September 22, 2020
Date of Patent: August 16, 2022
Assignee: Micron Technology, Inc.
Inventors: Kishore K. Muchherla, Vamsi Pavan Rayaprolu, Karl D. Schuh, Jiangang Wu, Gil Golov
-
Patent number: 11397680
Abstract: A technique is provided for controlling eviction from a storage structure. An apparatus has a storage structure with a plurality of entries to store data. The apparatus also has eviction control circuitry configured to maintain eviction control information in accordance with an eviction policy, the eviction policy specifying how the eviction control information is to be updated in response to accesses to the entries of the storage structure. The eviction control circuitry is responsive to a victim selection event to employ the eviction policy to select, with reference to the eviction control information, one of the entries to be a victim entry whose data is to be discarded from the storage structure. The eviction control circuitry is further configured to maintain, for each of one or more groups of entries in the storage structure, an indication of a most-recent entry: the entry in that group that was most recently subjected to at least a given type of access.
Type: Grant
Filed: October 6, 2020
Date of Patent: July 26, 2022
Assignee: Arm Limited
Inventor: Joseph Michael Pusdesris
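A minimal model of a per-group most-recent indication is sketched below in Python. The base eviction policy used here (lowest access count among entries other than the most-recent one) is an assumption for illustration; the patent's point is only that the most-recent entry is tracked per group and shields that entry from victim selection.

```python
class GroupEvictor:
    """Eviction control for one group of entries: tracks per-entry access
    counts (the assumed base policy) plus a most-recent-entry indication."""

    def __init__(self, group):
        self.counts = {e: 0 for e in group}
        self.most_recent = None

    def access(self, entry):
        """Update eviction control information on each access."""
        self.counts[entry] += 1
        self.most_recent = entry       # maintain the most-recent indication

    def select_victim(self):
        """On a victim selection event, pick the least-accessed entry,
        excluding the group's most-recent entry."""
        candidates = [e for e in self.counts if e != self.most_recent]
        return min(candidates, key=lambda e: self.counts[e])
```

Excluding the most-recent entry costs only one stored indication per group, yet guarantees the entry touched last is never evicted immediately after use, a common pathology of pure frequency-based policies.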