Patents Examined by Michael Krofcheck
  • Patent number: 11775443
    Abstract: A system includes a central processing unit (CPU) to process data with respect to a virtual address generated by the CPU. A first memory management unit (MMU) translates the virtual address to a physical address of a memory with respect to the data processed by the CPU. A supervisory MMU translates the physical address of the first MMU to a storage address for storage and retrieval of the data in the memory. The supervisory MMU controls access to the memory via the storage address generated by the first MMU.
    Type: Grant
    Filed: October 23, 2014
    Date of Patent: October 3, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Derek Alan Sherlock
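The two-stage translation this abstract describes (first MMU: virtual to physical; supervisory MMU: physical to storage, with access control) can be sketched as a toy Python model. All class and field names here are hypothetical illustrations, not the patent's terminology.

```python
# Toy model of two-stage address translation with a supervisory access check.

class FirstMMU:
    def __init__(self, page_table):
        self.page_table = page_table  # virtual page -> physical page

    def translate(self, vaddr, page_size=4096):
        vpage, offset = divmod(vaddr, page_size)
        return self.page_table[vpage] * page_size + offset

class SupervisoryMMU:
    def __init__(self, storage_map, allowed):
        self.storage_map = storage_map  # physical page -> storage page
        self.allowed = allowed          # physical pages access is permitted to

    def translate(self, paddr, page_size=4096):
        ppage, offset = divmod(paddr, page_size)
        if ppage not in self.allowed:   # supervisory MMU controls access
            raise PermissionError(f"physical page {ppage} not permitted")
        return self.storage_map[ppage] * page_size + offset

first = FirstMMU({0: 7})
supervisor = SupervisoryMMU({7: 42}, allowed={7})
storage_addr = supervisor.translate(first.translate(0x123))
```

The offset survives both stages; only the page number is remapped at each level.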
  • Patent number: 11775473
    Abstract: A system for data migration is disclosed. The system may receive a migration request comprising a source file path and a target file location. The system may capture source file metadata based on the source file path and the migration request. The system may transfer a source file from a first data environment to an intermediate data environment via a first transfer process. The system may transfer the source file from the intermediate data environment to a second data environment via a second transfer process.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: October 3, 2023
    Assignee: AMERICAN EXPRESS TRAVEL RELATED SERVICES COMPANY, INC.
    Inventors: Arindam Chatterjee, Pratyush Kotturu, Pratap Singh Rathore, Brian C. Rosenfield, Nitish Sharma, Swatee Singh, Mohammad Torkzahrani
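The staged migration above (capture metadata, transfer source to intermediate, then intermediate to target) can be sketched minimally in Python, modeling each data environment as a dict. All names and the request shape are hypothetical.

```python
# Minimal sketch of a two-hop migration through an intermediate environment.

def migrate(source_env, intermediate_env, target_env, request):
    src = request["source_file_path"]
    dst = request["target_file_location"]
    # Capture source file metadata based on the path and the request.
    metadata = {"path": src, "size": len(source_env[src])}
    # First transfer process: source -> intermediate environment.
    intermediate_env[src] = source_env[src]
    # Second transfer process: intermediate -> second data environment.
    target_env[dst] = intermediate_env[src]
    return metadata

src_env = {"/data/a.csv": b"rows"}
inter, tgt = {}, {}
meta = migrate(src_env, inter, tgt,
               {"source_file_path": "/data/a.csv",
                "target_file_location": "/lake/a.csv"})
```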
  • Patent number: 11762567
    Abstract: Devices, methods, and media are described for runtime memory allocation to avoid defects. One embodiment includes assigning a plurality of memory blocks of a memory sub-system to a plurality of erase groups, such that each erase group of the plurality of erase groups comprises two or more memory blocks of the plurality of memory blocks. A bad block association is determined for each erase group of the plurality of erase groups. Prior to a memory condition being met, memory resources of the memory sub-system are allocated by erase group based on a first set of criteria which are based at least in part on the bad block association for each erase group in order to prioritize use of erase groups with fewer bad blocks.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: September 19, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Woei Chen Peh, Eng Hong Tan, Andrew M. Kowles, Xiaoxin Zou, Zaihas Amri Fahdzan Bin Hasfar
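The allocation policy above (group blocks into erase groups, track a bad-block association per group, and prefer groups with fewer bad blocks) can be sketched as follows; group size, block IDs, and function names are hypothetical.

```python
# Sketch of erase-group allocation that prioritizes groups with fewer bad blocks.

def build_erase_groups(blocks, group_size):
    # Each erase group comprises two or more memory blocks.
    return [blocks[i:i + group_size] for i in range(0, len(blocks), group_size)]

def bad_block_count(group, bad_blocks):
    # The "bad block association" for a group, reduced to a count here.
    return sum(1 for b in group if b in bad_blocks)

def allocate(groups, bad_blocks):
    # First set of criteria: prefer the erase group with the fewest bad blocks.
    return min(groups, key=lambda g: bad_block_count(g, bad_blocks))

groups = build_erase_groups(list(range(8)), group_size=2)  # [[0,1],[2,3],[4,5],[6,7]]
chosen = allocate(groups, bad_blocks={0, 2, 3})
```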
  • Patent number: 11762768
    Abstract: A device is provided that includes a first memory, a second memory, and an accessing circuit. Actual addresses of the first memory and the second memory alternately correspond to reference addresses of a processing circuit. The accessing circuit is configured to perform the steps outlined below. A read command corresponding to a reference read address is received from the processing circuit to convert the reference read address to an actual read address of the first memory and the second memory. First read data is read from a first one of the first memory and the second memory according to the actual read address while, simultaneously, second read data is prefetched from a second one of the first memory and the second memory according to a next read address.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: September 19, 2023
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventors: Yung-Hui Yu, Chih-Wea Wang
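The interleaved read-plus-prefetch above (reference addresses alternate between two memories, so the idle memory can be prefetched while the other serves the current read) can be sketched with a toy address map; the even/odd mapping and all names are hypothetical.

```python
# Toy model: alternate reference addresses between two memories and prefetch
# the next address from the other memory while serving the current read.

def to_actual(ref_addr):
    # Even reference addresses map to memory 0, odd to memory 1 (assumed).
    return ref_addr % 2, ref_addr // 2

def read_with_prefetch(mems, ref_addr, prefetch_cache):
    bank, idx = to_actual(ref_addr)
    if ref_addr in prefetch_cache:
        data = prefetch_cache.pop(ref_addr)   # served from the earlier prefetch
    else:
        data = mems[bank][idx]
    # "Simultaneous" prefetch: the next address lands in the other memory.
    nbank, nidx = to_actual(ref_addr + 1)
    prefetch_cache[ref_addr + 1] = mems[nbank][nidx]
    return data

mems = [["a0", "a1"], ["b0", "b1"]]  # the two memories
cache = {}
first = read_with_prefetch(mems, 0, cache)   # read "a0", prefetch "b0"
second = read_with_prefetch(mems, 1, cache)  # prefetch hit for "b0"
```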
  • Patent number: 11755488
    Abstract: Systems, apparatuses, and methods for predictive memory access are described. Memory control circuitry instructs a memory array to read a data block from or write the data block to a location targeted by a memory access request, determines memory access information including a data value correlation parameter determined based on data bits used to indicate a raw data value in the data block and/or an inter-demand delay correlation parameter determined based on a demand time of the memory access request, predicts that read access to another location in the memory array will subsequently be demanded by another memory access request based on the data value correlation parameter and/or the inter-demand delay correlation parameter, and instructs the memory array to output another data block stored at the other location to a different memory level that provides faster data access speed before the other memory access request is received.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: September 12, 2023
    Assignee: Micron Technology, Inc.
    Inventor: David Andrew Roberts
  • Patent number: 11726911
    Abstract: The present disclosure generally relates to data storage devices, such as solid state drives (SSDs), and efficient data storage device operations related to power loss incidents. A controller of the data storage device is configured to periodically pre-encode data that is stored in random access memory (RAM), detect a power loss event, and program the data and parity data to non-volatile memory (NVM) in response to detecting the power loss event. Upon reaching a threshold size, the data in RAM may be pre-encoded and the pre-encoded data can be programmed to the RAM or the NVM. The parity data may be stored in one or more locations of the NVM. Upon detecting a power loss event, any data remaining in RAM that is not pre-encoded is encoded. The data and any parity data not yet programmed to the NVM are programmed to the NVM.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: August 15, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventors: Dudy David Avraham, Ran Zamir
  • Patent number: 11726681
    Abstract: A method for converting an electronic flash storage device having a byte addressable storage (ByAS) and a block addressable flash storage (BlAS) to a single byte addressable storage includes receiving, by a host, a request for memory allocation from the ByAS, the receiving being from a first application among a plurality of applications running on a processor; deallocating, by the host, a least relevant page allocated to at least one second application among the plurality of applications; moving, by the host, content related to the least relevant page to the BlAS at a first BlAS location, the moving based on the deallocation; allocating, by the host, the least relevant page to the first application; and updating, by the host, cache metadata and a page lookup table of the first application and the at least one second application based on the deallocation and allocation.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: August 15, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Arun George, Anshul Sharma, Rajesh Krishnan, Vishak G
  • Patent number: 11726658
    Abstract: Techniques involve: determining a first group of storage disks, a use rate of each storage disk of the first group of storage disks exceeding a first threshold, the first group of storage disks comprising a first group of storage blocks corresponding to a first redundant array of independent disks (RAID); allocating a second group of storage blocks corresponding to a second RAID from a second group of storage disks, the second group of storage blocks having the same size as that of the first group of storage blocks, a use rate of each storage disk of the second group of storage disks being under a second threshold; moving data in the first group of storage blocks to the second group of storage blocks; and releasing the first group of storage blocks from the first group of storage disks. Thus, use rates of the storage disks become more balanced.
    Type: Grant
    Filed: January 24, 2020
    Date of Patent: August 15, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Xiaobo Zhang, Xinlei Xu, Shaoqin Gong, Baote Zhuo, Shuai Ni, Jian Gao
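The rebalancing flow above (find over-threshold disks, move their block group to under-threshold disks, release the source) can be sketched crudely in Python. The thresholds, the 0.1 load adjustment, and all names are hypothetical simplifications.

```python
# Crude sketch of moving storage blocks from over-used to under-used disks.

def rebalance(disks, data, high=0.8, low=0.5):
    over = [d for d, use in disks.items() if use > high]    # first threshold
    under = [d for d, use in disks.items() if use < low]    # second threshold
    moved = {}
    for disk in over:
        if disk in data and under:
            target = under[0]
            # Move the block group, then release it from the source disk.
            moved.setdefault(target, []).extend(data.pop(disk))
            disks[target] += 0.1   # crude accounting of the shifted load
            disks[disk] -= 0.1
    data.update(moved)
    return data

disks = {"d1": 0.9, "d2": 0.3}
data = {"d1": ["blk0", "blk1"]}
balanced = rebalance(disks, data)
```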
  • Patent number: 11720276
    Abstract: A memory system includes a storage medium and a controller. The storage medium includes a plurality of physical regions. The controller maps logical regions which are configured by a host device, to the physical regions, and performs in response to a write request for a target logical region, a write operation on a physical region mapped to the target logical region. The controller updates in response to the write request, a write status corresponding to the target logical region within a write status table.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: August 8, 2023
    Assignee: SK hynix Inc.
    Inventor: Eu Joon Byun
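The mapping-plus-bookkeeping above (logical regions mapped to physical regions, with a write status table updated on each write request) can be sketched as a small controller class; all names are hypothetical.

```python
# Sketch of a controller that maps logical to physical regions and updates
# a per-region write status table on every write request.

class Controller:
    def __init__(self, mapping):
        self.mapping = mapping   # logical region -> physical region
        self.physical = {}       # physical region -> data
        self.write_status = {}   # logical region -> write count (the table)

    def write(self, logical_region, data):
        physical_region = self.mapping[logical_region]
        self.physical[physical_region] = data
        # Update the write status for the target logical region.
        self.write_status[logical_region] = (
            self.write_status.get(logical_region, 0) + 1)

ctrl = Controller({0: 10})
ctrl.write(0, b"payload")
ctrl.write(0, b"payload2")
```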
  • Patent number: 11709623
    Abstract: A storage system includes a NAND storage media and a nonvolatile storage media as a write buffer for the NAND storage media. The write buffer is partitioned, where the partitions are to buffer write data based on a classification of a received write request. Write requests are placed in the write buffer partition with other write requests of the same classification. The partitions have a size at least equal to the size of an erase unit of the NAND storage media. The write buffer flushes a partition once it has an amount of write data equal to the size of the erase unit.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: July 25, 2023
    Assignee: SK hynix NAND Product Solutions Corp.
    Inventors: Michal Wysoczanski, Kapil Karkra, Piotr Wysocki, Anand S. Ramalingam
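The buffering policy above (partition the write buffer by write classification, flush a partition once it holds a full erase unit) can be sketched as follows; the toy erase-unit size and all names are hypothetical.

```python
# Sketch of a classified write buffer that flushes whole erase units.

ERASE_UNIT = 8  # toy erase-unit size, in bytes

class WriteBuffer:
    def __init__(self):
        self.partitions = {}   # classification -> buffered bytes
        self.flushed = []      # (classification, data) flushed to NAND

    def write(self, classification, data):
        # Writes land in the partition matching their classification.
        part = self.partitions.setdefault(classification, b"") + data
        if len(part) >= ERASE_UNIT:
            # Flush once the partition holds a full erase unit of data.
            self.flushed.append((classification, part[:ERASE_UNIT]))
            part = part[ERASE_UNIT:]
        self.partitions[classification] = part

buf = WriteBuffer()
buf.write("hot", b"abcd")
buf.write("cold", b"xy")
buf.write("hot", b"efgh")   # "hot" partition reaches the erase unit: flush
```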
  • Patent number: 11704038
    Abstract: Techniques improve performance of garbage collection (GC) processes in a deduplicated file system having a layered processing architecture that maintains a log structured file system storing data and metadata in an append-only log arranged as a monotonically increasing log data structure of a plurality of data blocks, wherein the head of the log increases in chronological order and no allocated data block is overwritten. The storage layer reserves a set of data block IDs within the log specifically for the garbage collection process, and assigns data blocks from the reserved set to GC I/O processes requiring acknowledgment, possibly out of order relative to the order of data blocks in the log. It strictly imposes in-order I/O acknowledgement for other, non-GC processes using the storage layer; these processes may be deduplication backup processes using a segment store layer at the same protocol level as the GC layer.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: July 18, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Ashwani Mujoo, Ramprasad Chinthekindi, Abhinav Duggal
  • Patent number: 11704244
    Abstract: A computer-implemented method for synchronizing local caches is disclosed. The method may include receiving a content update which is an update to a data entry stored in local caches of each of a plurality of remote servers. The method may include transmitting the content update to a first remote server to update a corresponding data entry in a local cache of the first remote server. Further, the method may include generating an invalidation command, indicating the change in the corresponding data entry. The method may include transmitting the invalidation command from the first remote server to the message server. The method may include generating, by the message server, a plurality of partitions based on the received invalidation command. The method may include transmitting, from the message server to each of the remote servers, the plurality of partitions, so that the remote servers update their respective local caches.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: July 18, 2023
    Assignee: Coupang Corp.
    Inventors: Seokhyun Kim, Yixiang Huang
  • Patent number: 11698758
    Abstract: Methods and systems for selectively compressing data lines of a memory device in selective compression circuitry. The selective compression circuitry includes multiple data lines and compression circuitry that selectively compresses inputs. The selective compression circuitry also includes control circuitry to receive data via the data lines. The control circuitry, when in a compressed mode, transmits data from each of the data lines to the compression circuitry. Alternatively, in an uncompressed mode, the control circuitry transmits data from a first subset of the data lines to the compression circuitry while blocking data from a second subset of the data lines from being transmitted to the compression circuitry.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: July 11, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Loon Ming Ho
  • Patent number: 11645208
    Abstract: A computer system includes a processor and a prefetch engine. The processor is configured to generate a demand access stream. The prefetch engine is configured to generate a first prefetch request and a second prefetch request based on the demand access stream, to output the first prefetch request to a first translation lookaside buffer (TLB), and to output the second prefetch request to a second TLB that is different from the first TLB. The processor performs a first TLB lookup in the first TLB based on one of the demand access stream or the first prefetch request, and performs a second TLB lookup in the second TLB based on the second prefetch request.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Bryan Lloyd, George W. Rohrbaugh, III, Vivek Britto, Mohit Karve
  • Patent number: 11645233
    Abstract: Embodiments of the present invention relate to methods, systems, and computer program products for file management in a distributed file cache system. In some embodiments, a method is disclosed. According to the method, responsive to determining that at least one client node is obtaining a file of a first version stored at a storage node, one or more processors generate contact information indicating that the file of the first version is accessible from the storage node and the at least one client node and record the contact information in a distributed hash table. The storage node and the at least one client node are included in a plurality of nodes associated with the distributed hash table. Further, one or more processors generate first version information indicating that the file is of the first version and record the first version information in a blockchain associated with the plurality of nodes.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Cheng Yong Zhao, Jia Min Li, Zhong Shi Lu, Ze Rui Yuan, Yan Liang Qiao
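The bookkeeping above can be sketched minimally: when a client fetches a file version from a storage node, contact information (which nodes can now serve it) goes into a hash table, and the version is appended to a hash-chained record standing in for the blockchain. The dict-based DHT, the chaining scheme, and all names are hypothetical.

```python
# Sketch: record which nodes can serve a file (DHT) and chain version records.

import hashlib

dht = {}     # file name -> set of nodes the file is accessible from
chain = []   # append-only, hash-chained version records

def record_fetch(file_name, version, storage_node, client_node):
    # Contact information: the file is accessible from the storage node
    # and from every client node that has fetched it.
    dht.setdefault(file_name, {storage_node}).add(client_node)
    # Version information: append a record chained to the previous hash.
    prev = chain[-1]["hash"] if chain else "genesis"
    payload = f"{prev}:{file_name}:{version}"
    chain.append({"file": file_name, "version": version,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

record_fetch("model.bin", "v1", storage_node="s1", client_node="c1")
record_fetch("model.bin", "v1", storage_node="s1", client_node="c2")
```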
  • Patent number: 11640361
    Abstract: According to one or more embodiments of the present invention, a computer implemented method includes receiving a secure access request for a secure page of memory at a secure interface control of a computer system. The secure interface control can check a disable virtual address compare state associated with the secure page. The secure interface control can disable a virtual address check in accessing the secure page to support mapping of a plurality of virtual addresses to a same absolute address to the secure page based on the disable virtual address compare state being set and/or to support secure pages that are accessed using an absolute address and do not have an associated virtual address.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: May 2, 2023
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Lisa Cranton Heller, Jonathan D. Bradbury
  • Patent number: 11635906
    Abstract: The present disclosure includes apparatuses and methods for acceleration of data queries in memory. An example apparatus includes an array of memory cells and processing circuitry. The processing circuitry is configured to receive, from a host, a query for particular data in the array of memory cells, wherein the particular data corresponds to a search key generated by the host, search portions of the array of memory cells for the particular data corresponding to the search key, determine data stored in the portions of the array of memory cells that corresponds more closely to the search key than other data stored in the portions of the array of memory cells, and transfer the data that corresponds more closely to the search key than the other data to the host.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: April 25, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Mark A. Helm, Joseph T. Pawlowski
  • Patent number: 11630771
    Abstract: An apparatus includes multiple processors including respective cache memories, the cache memories configured to cache cache-entries for use by the processors. At least one processor among the processors includes cache management logic that is configured to (i) receive, from one or more of the other processors, cache-invalidation commands that request invalidation of specified cache-entries in the cache memory of the processor, (ii) mark the specified cache-entries as intended for invalidation but defer actual invalidation of the specified cache-entries, and (iii) upon detecting a synchronization event associated with the cache-invalidation commands, invalidate the cache-entries that were marked as intended for invalidation.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: April 18, 2023
    Assignee: APPLE INC.
    Inventors: John D Pape, Mahesh K Reddy, Prasanna Utchani Varadharajan, Pruthivi Vuyyuru
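The deferral scheme above (mark entries on receipt of an invalidation command, perform the actual invalidation only at a synchronization event) can be sketched as a small cache model; all names are hypothetical.

```python
# Sketch: invalidation commands only mark entries; a synchronization event
# triggers the actual invalidation of everything marked so far.

class Cache:
    def __init__(self, entries):
        self.entries = dict(entries)
        self.pending = set()   # marked as intended for invalidation

    def invalidate_command(self, key):
        if key in self.entries:
            self.pending.add(key)   # mark, but defer the real work

    def synchronize(self):
        for key in self.pending:    # now perform the deferred invalidations
            del self.entries[key]
        self.pending.clear()

cache = Cache({"a": 1, "b": 2})
cache.invalidate_command("a")
still_visible = "a" in cache.entries   # deferred: entry is still present
cache.synchronize()
gone = "a" not in cache.entries
```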
  • Patent number: 11604590
    Abstract: In one aspect of metadata track entry sorting in accordance with the present description, recovery logic sorts a list of metadata entries as a function of a source data track identification of each metadata entry to provide a second, sorted list of metadata entries, and generates a recovery volume which includes data tracks which are a function of one or more data target tracks identified by the sorted list of metadata entries. Because the metadata entry contents of the sorted list have been sorted as a function of source track identification number, the particular time version of a particular source track may be identified more quickly and more efficiently. As a result, recovery from data loss may be achieved more quickly and more efficiently thereby providing a significant improvement in computer technology. Other features and aspects may be realized, depending upon the particular application.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: March 14, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Theresa M. Brown, David Fei, Gregory E. McBride
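The sort described above (order metadata entries by source track ID so a given time version of a source track can be located quickly) can be sketched in a few lines; the entry fields are hypothetical.

```python
# Sketch: sort metadata entries by source track ID, then timestamp, so all
# versions of one source track are adjacent and chronologically ordered.

def sort_metadata(entries):
    return sorted(entries, key=lambda e: (e["source_track"], e["time"]))

entries = [
    {"source_track": 5, "time": 2, "target_track": 101},
    {"source_track": 1, "time": 1, "target_track": 100},
    {"source_track": 5, "time": 1, "target_track": 99},
]
sorted_entries = sort_metadata(entries)
```

With entries grouped this way, a recovery volume can binary-search for a source track and then scan its adjacent versions, rather than scanning the whole list.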
  • Patent number: 11599479
    Abstract: A self-encrypting drive (SED) comprises an SED controller and a nonvolatile storage medium (NVSM) responsive to the SED controller. The SED controller enables the SED to perform operations comprising: (a) receiving an encrypted media encryption key (eMEK) for a client; (b) decrypting the eMEK into an unencrypted media encryption key (MEK); (c) receiving a write request from the client, wherein the write request includes data to be stored and a key tag value associated with the MEK; (d) using the key tag value to select the MEK for the write request; (e) using the MEK for the write request to encrypt the data from the client; and (f) storing the encrypted data in a region of the NVSM allocated to the client. Other embodiments are described and claimed.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: March 7, 2023
    Assignee: Intel Corporation
    Inventors: Adrian Robert Pearson, David Ray Noeldner, Niels Juel Reimers, Emily Po-Kay Chung, Gamil Assudan Cain, Thomas Rodel Bowen, Teddy Gordon Greer, Jonathan Martin Hughes
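The per-client key flow above (unwrap an eMEK into a MEK, select the MEK by key tag on each write, encrypt, and store in the client's region) can be sketched with XOR standing in for the real cipher; the unwrap function and all names are hypothetical.

```python
# Sketch of key-tag-selected encryption in a self-encrypting drive model.
# XOR is a stand-in for the real cipher, used only for illustration.

def xor_cipher(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class SelfEncryptingDrive:
    def __init__(self):
        self.meks = {}      # key tag -> unencrypted MEK
        self.regions = {}   # client -> encrypted data in its region

    def load_mek(self, key_tag, encrypted_mek, unwrap):
        # (a)-(b): receive the eMEK and decrypt it into a MEK.
        self.meks[key_tag] = unwrap(encrypted_mek)

    def write(self, client, data, key_tag):
        mek = self.meks[key_tag]                      # (d) select MEK by tag
        self.regions[client] = xor_cipher(data, mek)  # (e)-(f) encrypt, store

drive = SelfEncryptingDrive()
drive.load_mek(7, b"\x01\x01",
               unwrap=lambda emek: xor_cipher(emek, b"\x55"))
drive.write("client-a", b"secret", key_tag=7)
recovered = xor_cipher(drive.regions["client-a"], drive.meks[7])
```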