Patents Examined by Denise Tran
  • Patent number: 11188464
    Abstract: Methods and systems for self-invalidating cachelines in a computer system having a plurality of cores are described. A first one of the plurality of cores requests to load a memory block from a cache memory local to that core, and the request results in a cache miss. The miss triggers a check of a read-after-write detection structure to determine whether a race condition exists for the memory block. If a race condition exists for the memory block, the first core enforces program order at least between any older loads and any younger loads with respect to the load that detected the prior store, and causes one or more cache lines in the local cache memory to be self-invalidated.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 30, 2021
    Assignee: ETA SCALE AB
    Inventors: Alberto Ros, Stefanos Kaxiras
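
A minimal Python sketch of the self-invalidation flow described in the abstract above; the names (RawDetector, LocalCache, core_load) and the set-based race bookkeeping are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch, assuming a set-based read-after-write (RAW) detector
# and whole-cache self-invalidation; not the patent's implementation.

class RawDetector:
    """Tracks blocks recently written by other cores (potential RAW races)."""
    def __init__(self):
        self.recently_written = set()

    def record_remote_store(self, block_addr):
        self.recently_written.add(block_addr)

    def race_exists(self, block_addr):
        return block_addr in self.recently_written

class LocalCache:
    def __init__(self):
        self.lines = {}          # block_addr -> data
        self.valid = set()       # self-invalidation clears this set

    def load(self, block_addr):
        return self.lines.get(block_addr) if block_addr in self.valid else None

    def fill(self, block_addr, data):
        self.lines[block_addr] = data
        self.valid.add(block_addr)

    def self_invalidate(self):
        # Invalidate locally cached lines so later loads re-fetch
        # possibly updated values from shared memory.
        self.valid.clear()

def memory_fence(core_id):
    # Placeholder for ordering older loads before younger loads on this core.
    pass

def core_load(core_id, block_addr, cache, detector, shared_memory):
    data = cache.load(block_addr)
    if data is not None:
        return data                     # cache hit
    # Cache miss: consult the read-after-write detection structure.
    if detector.race_exists(block_addr):
        # Enforce program order around the racing load (modeled as a fence)
        # and self-invalidate lines in the local cache.
        memory_fence(core_id)
        cache.self_invalidate()
    data = shared_memory[block_addr]
    cache.fill(block_addr, data)
    return data

if __name__ == "__main__":
    mem = {0x40: "new value"}
    cache, det = LocalCache(), RawDetector()
    det.record_remote_store(0x40)       # another core stored to 0x40
    print(core_load(core_id=0, block_addr=0x40, cache=cache,
                    detector=det, shared_memory=mem))
```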
  • Patent number: 11169930
    Abstract: Systems, methods and apparatuses for fine grain data migration using Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before a virtual memory address in a sub-region is accessed, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to access memory. Otherwise, the sub-region is cached from the borrowed memory region into the local memory before the physical memory address is used.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: November 9, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Dmitri Yudanov, Ameen D. Akel, Samuel E. Bradshaw, Kenneth Marion Curewitz, Sean Stephen Eilert
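
A minimal Python sketch of the memory-status-map check described in the abstract above; the sub-region size, the names (MemoryStatusMap, BorrowedRegion), and the bytes-backed "remote" memory are illustrative assumptions.

```python
# Hypothetical sketch, assuming one status bit per 64-byte sub-region of a
# borrowed memory region; not Micron's implementation.

SUBREGION_SIZE = 64          # e.g. one cache line per sub-region

class MemoryStatusMap:
    """One bit per sub-region: cached in local memory or not."""
    def __init__(self, region_size):
        self.bits = [False] * (region_size // SUBREGION_SIZE)

    def is_cached(self, offset):
        return self.bits[offset // SUBREGION_SIZE]

    def mark_cached(self, offset):
        self.bits[offset // SUBREGION_SIZE] = True

class BorrowedRegion:
    def __init__(self, region_size, remote_backing):
        self.status = MemoryStatusMap(region_size)
        self.local = bytearray(region_size)      # local copy of the region
        self.remote = remote_backing             # stands in for the lender's memory

    def access(self, offset):
        if not self.status.is_cached(offset):
            # Sub-region not available locally: cache it from the borrowed
            # (remote) memory before translating and using the address.
            start = (offset // SUBREGION_SIZE) * SUBREGION_SIZE
            self.local[start:start + SUBREGION_SIZE] = \
                self.remote[start:start + SUBREGION_SIZE]
            self.status.mark_cached(offset)
        return self.local[offset]                # "physical" access in local memory

if __name__ == "__main__":
    remote = bytes(range(256))
    region = BorrowedRegion(256, remote)
    print(region.access(130))   # first access pulls the sub-region in
    print(region.access(131))   # second access hits the local copy
```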
  • Patent number: 11163693
    Abstract: A method comprising: storing, in a memory, a mapping tree that is implemented by using an array of mapping pages, the mapping tree having a depth of D, wherein D is an integer greater than or equal to 0; receiving a write request that is associated with a first type-1 address; storing, in a storage device, data associated with the write request, the data associated with the write request being stored in the storage device based on a first type-2 address; generating a map entry that maps the first type-1 address to the first type-2 address; calculating a first hash digest of the first type-1 address; and storing the map entry in a first mapping page.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: November 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Ronen Gazit, Uri Shabi, Tal Ben-Moshe
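
A minimal Python sketch of the map-entry placement described in the abstract above: hash the type-1 (logical) address and use the digest to pick a mapping page in the array backing the mapping tree. The array size, the use of hashlib, and the function names are assumptions.

```python
# Hypothetical sketch, assuming a SHA-256 digest selects one of 16 mapping
# pages; the patent only requires a hash digest of the type-1 address.

import hashlib

NUM_MAPPING_PAGES = 16

class MappingPageArray:
    def __init__(self):
        # The mapping tree is backed by an array of mapping pages.
        self.pages = [dict() for _ in range(NUM_MAPPING_PAGES)]

    @staticmethod
    def _digest(type1_addr):
        return hashlib.sha256(type1_addr.to_bytes(8, "little")).digest()

    def store_entry(self, type1_addr, type2_addr):
        # Map the logical (type-1) address to the storage (type-2) address.
        digest = self._digest(type1_addr)
        page_index = digest[0] % NUM_MAPPING_PAGES
        self.pages[page_index][type1_addr] = type2_addr
        return page_index

def handle_write(mapping, storage, type1_addr, data):
    type2_addr = len(storage)          # next free location, for illustration
    storage.append(data)               # store the data at the type-2 address
    return mapping.store_entry(type1_addr, type2_addr)

if __name__ == "__main__":
    mapping, storage = MappingPageArray(), []
    page = handle_write(mapping, storage, type1_addr=0x1234, data=b"payload")
    print("entry stored in mapping page", page)
```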
  • Patent number: 11158392
    Abstract: Apparatuses and methods for operating mixed mode blocks. One example method can include tracking single level cell (SLC) mode cycles and extra level cell (XLC) mode cycles performed on the mixed mode blocks, maintaining a mixed mode cycle count corresponding to the mixed mode blocks, and adjusting the mixed mode cycle count differently for mixed mode blocks operated in an SLC mode than for mixed mode blocks operated in an XLC mode.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: October 26, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Kishore K. Muchherla, Ashutosh Malshe, Preston A. Thomson, Michael G. Miller, Gary F. Besinga, Scott A. Stoller, Sampath K. Ratnam, Renato C. Padilla, Peter Feeley
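
A minimal Python sketch of the mixed-mode cycle accounting described in the abstract above. The specific weights (an SLC cycle counted as a fraction of an XLC cycle) are illustrative assumptions; the abstract only says the count is adjusted differently for the two modes.

```python
# Hypothetical sketch, assuming SLC cycles wear a block less than XLC cycles
# and are therefore counted with a smaller weight.

SLC_CYCLE_WEIGHT = 0.5   # assumed weight for an SLC-mode cycle
XLC_CYCLE_WEIGHT = 1.0   # assumed weight for an XLC-mode cycle

class MixedModeBlock:
    def __init__(self, block_id):
        self.block_id = block_id
        self.slc_cycles = 0
        self.xlc_cycles = 0
        self.mixed_mode_count = 0.0   # the adjusted mixed mode cycle count

    def record_cycle(self, mode):
        if mode == "SLC":
            self.slc_cycles += 1
            self.mixed_mode_count += SLC_CYCLE_WEIGHT
        elif mode == "XLC":
            self.xlc_cycles += 1
            self.mixed_mode_count += XLC_CYCLE_WEIGHT
        else:
            raise ValueError(f"unknown mode: {mode}")

if __name__ == "__main__":
    blk = MixedModeBlock(7)
    for _ in range(100):
        blk.record_cycle("SLC")
    for _ in range(10):
        blk.record_cycle("XLC")
    print(blk.slc_cycles, blk.xlc_cycles, blk.mixed_mode_count)
```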
  • Patent number: 11119686
    Abstract: Preservation of data during scaling of a geographically diverse data storage system is disclosed. In regard to scaling-in, a first zone storage component (ZSC) can be placed in read-only (RO) mode to allow continued access to data stored on the first ZSC, completion of previously queued operations, updating of data chunks, etc. Data chunks can comprise metadata stored in directory table partitions organized in a tree data structure scheme. An updated data chunk of the first ZSC can be replicated at other ZSCs before deleting the first ZSC. A first hash function can be used to distribute portions of the updated data chunk among the other ZSCs. A second hash function can be used to distribute key data values corresponding to the distributed portions of the updated data chunk among the other ZSCs.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: September 14, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mikhail Danilov, Yohannes Altaye
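
A minimal Python sketch of the scale-in distribution described in the abstract above: one hash function spreads portions of the updated data chunk across the remaining zone storage components (ZSCs), and a second spreads the corresponding key values. The choice of md5/sha1 and the portion granularity are illustrative assumptions.

```python
# Hypothetical sketch, assuming two different digest functions stand in for
# the patent's first and second hash functions.

import hashlib

def hash1(portion_id, num_zones):
    return int.from_bytes(hashlib.md5(str(portion_id).encode()).digest()[:4],
                          "big") % num_zones

def hash2(key, num_zones):
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:4],
                          "big") % num_zones

def scale_in(updated_chunk_portions, keys, remaining_zones):
    """Distribute the retiring ZSC's updated chunk portions and key data."""
    placement = {z: {"portions": [], "keys": []} for z in remaining_zones}
    for pid, portion in enumerate(updated_chunk_portions):
        zone = remaining_zones[hash1(pid, len(remaining_zones))]
        placement[zone]["portions"].append((pid, portion))
    for key in keys:
        zone = remaining_zones[hash2(key, len(remaining_zones))]
        placement[zone]["keys"].append(key)
    return placement   # only after this replication is the first ZSC deleted

if __name__ == "__main__":
    portions = [b"p0", b"p1", b"p2", b"p3"]
    keys = ["obj-a", "obj-b", "obj-c"]
    print(scale_in(portions, keys, ["zone-B", "zone-C", "zone-D"]))
```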
  • Patent number: 11119918
    Abstract: Embodiments of techniques and systems for execution of code with multiple page tables are described. In embodiments, a heterogeneous system utilizing multiple processors may use multiple page tables to selectively execute appropriate ones of different versions of executable code. The system may be configured to support use of function pointers to virtual memory addresses. In embodiments, a virtual memory address may be mapped, such as during a code fetch. In embodiments, when a processor seeks to perform a code fetch using the function pointer, a page table associated with the processor may be used to translate the virtual memory address to a physical memory address where code executable by the processor may be found. Usage of multiple page tables may allow the system to support function pointers while utilizing only one virtual memory address for each function that is pointed to. Other embodiments may be described and claimed.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: September 14, 2021
    Assignee: Intel Corporation
    Inventor: Mike B. Macpherson
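
A minimal Python sketch of the multiple-page-table idea described in the abstract above: the same virtual address used as a function pointer resolves, through the page table of whichever processor performs the code fetch, to the physical location of the code version built for that processor. All names and the toy address values are assumptions.

```python
# Hypothetical sketch, assuming 4 KiB pages and two processors (a CPU and an
# accelerator), each with its own page table.

class PageTable:
    def __init__(self, mapping):
        self.mapping = mapping                   # virtual page -> physical page

    def translate(self, vaddr, page_size=4096):
        vpage, offset = divmod(vaddr, page_size)
        return self.mapping[vpage] * page_size + offset

# One page table per processor; both map the *same* virtual page of the
# function to a different physical page holding that processor's code version.
cpu_page_table   = PageTable({0x10: 0x200})      # CPU build of the function
accel_page_table = PageTable({0x10: 0x300})      # accelerator build

def code_fetch(processor_page_table, function_pointer):
    phys = processor_page_table.translate(function_pointer)
    return f"fetch instructions at physical 0x{phys:x}"

if __name__ == "__main__":
    fn_ptr = 0x10 * 4096 + 0x40     # one virtual address for the function
    print("CPU:  ", code_fetch(cpu_page_table, fn_ptr))
    print("accel:", code_fetch(accel_page_table, fn_ptr))
```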
  • Patent number: 11106609
    Abstract: A request to read data stored at a memory sub-system can be received. A determination can be made of whether the data is stored at a cache of the memory sub-system. Responsive to determining that the data is not stored at the cache of the memory sub-system, the data can be obtained from a memory component of the memory sub-system. A first priority indicator can be assigned to a fill operation associated with the data that is obtained from the memory component. A second priority indicator can be assigned to the request to read the data. A schedule of executing the fill operation and the request to read the data can be determined based on the first priority indicator and the second priority indicator.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: August 31, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Dhawal Bavishi
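
A minimal Python sketch of the priority-based scheduling described in the abstract above: on a cache miss, the fill operation and the read request each receive a priority indicator, and a priority queue decides execution order. The numeric priorities and the heapq-based scheduler are illustrative assumptions.

```python
# Hypothetical sketch, assuming the host read is served before the cache fill.

import heapq
import itertools

READ_PRIORITY = 0      # assumed: return data to the host first
FILL_PRIORITY = 1      # assumed: back-fill the cache afterwards

class Scheduler:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()     # tie-breaker keeps insertion order

    def submit(self, priority, op):
        heapq.heappush(self._heap, (priority, next(self._seq), op))

    def run(self):
        while self._heap:
            _, _, op = heapq.heappop(self._heap)
            op()

def handle_read(addr, cache, memory_component, scheduler):
    if addr in cache:
        print("cache hit:", cache[addr])
        return
    data = memory_component[addr]                    # obtain data on a miss
    scheduler.submit(READ_PRIORITY, lambda: print("read returns:", data))
    scheduler.submit(FILL_PRIORITY, lambda: cache.__setitem__(addr, data))

if __name__ == "__main__":
    cache, memory, sched = {}, {0xA: "value"}, Scheduler()
    handle_read(0xA, cache, memory, sched)
    sched.run()
    print("cache after fill:", cache)
```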
  • Patent number: 11106384
    Abstract: A method, computer program product, and computing system for receiving locally-generated original data and remotely-generated replication data on the computing device; initially storing the locally-generated original data in a non-volatile memory system; initially storing the remotely-generated replication data in a volatile memory system; subsequently storing the locally-generated original data in a faster-tier storage system; and subsequently storing the remotely-generated replication data in a slower-tier storage system.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: August 31, 2021
    Assignee: EMC IP Holding Company, LLC
    Inventors: Anton Kucherov, Vamsi Vankamamidi
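
A minimal Python sketch of the placement policy described in the abstract above: locally generated original data is staged in non-volatile memory and later moved to a faster storage tier, while remotely generated replication data is staged in volatile memory and later moved to a slower tier. The class name and the list-based "tiers" are illustrative assumptions.

```python
# Hypothetical sketch of the original-versus-replica routing; the lists stand
# in for real memory systems and storage tiers.

class Node:
    def __init__(self):
        self.nvram = []          # non-volatile memory system
        self.dram = []           # volatile memory system
        self.fast_tier = []      # faster-tier storage system
        self.slow_tier = []      # slower-tier storage system

    def ingest(self, data, is_replication):
        # Initial placement depends on whether the data is original or a replica.
        (self.dram if is_replication else self.nvram).append(data)

    def destage(self):
        # Subsequent placement: originals to the faster tier, replicas to the slower.
        self.fast_tier.extend(self.nvram); self.nvram.clear()
        self.slow_tier.extend(self.dram);  self.dram.clear()

if __name__ == "__main__":
    node = Node()
    node.ingest("local write", is_replication=False)
    node.ingest("replica from peer", is_replication=True)
    node.destage()
    print(node.fast_tier, node.slow_tier)
```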
  • Patent number: 11093398
    Abstract: Embodiments may include systems and methods for performing remote memory operations in a shared memory address space. An apparatus includes a first network controller coupled to a first processor core. The first network controller processes a remote memory operation request, which is generated by a first memory coherency agent based on a first memory operation for an application operating on the first processor core. The remote memory operation request is associated with a remote memory address that is local to a second processor core coupled to the first processor core. The first network controller forwards the remote memory operation request to a second network controller coupled to the second processor core. The second processor core and the second network controller are to carry out a second memory operation to extend the first memory operation as a remote memory operation. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: August 17, 2021
    Assignee: Intel Corporation
    Inventors: Kshitij Doshi, Harald Servat, Francesc Guim Bernat
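
A minimal Python sketch of the remote-memory-operation flow described in the abstract above: a coherency agent turns a memory operation on a remotely owned address into a request, the local network controller forwards it, and the owning node's controller carries out the operation on its local memory. The address-range ownership rule and all names are illustrative assumptions.

```python
# Hypothetical sketch, assuming a static split of the shared address space
# between two nodes.

class NetworkController:
    def __init__(self, node_id, local_memory):
        self.node_id = node_id
        self.local_memory = local_memory
        self.peers = {}                       # node_id -> NetworkController

    def connect(self, other):
        self.peers[other.node_id] = other
        other.peers[self.node_id] = self

    def forward(self, owner_id, request):
        # First controller forwards the remote memory operation request.
        return self.peers[owner_id].execute(request)

    def execute(self, request):
        # Second controller extends the operation onto its local memory.
        op, addr, value = request
        if op == "store":
            self.local_memory[addr] = value
            return None
        return self.local_memory.get(addr)

def coherency_agent(addr, op, value, home_of, local_ctrl):
    owner = home_of(addr)
    request = (op, addr, value)
    if owner == local_ctrl.node_id:
        return local_ctrl.execute(request)
    return local_ctrl.forward(owner, request)      # remote memory operation

if __name__ == "__main__":
    n0, n1 = NetworkController(0, {}), NetworkController(1, {})
    n0.connect(n1)
    home_of = lambda addr: 0 if addr < 0x1000 else 1   # shared space split
    coherency_agent(0x2000, "store", 42, home_of, n0)  # lands in node 1's memory
    print(coherency_agent(0x2000, "load", None, home_of, n0))
```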
  • Patent number: 11079964
    Abstract: A memory system may include a semiconductor memory and a memory controller. The memory controller may include an adjustment circuit configured to receive a first signal having a first duty cycle, and intermittently output a second signal to an outside of the memory controller on the basis of a control signal, the second signal having a second duty cycle which is different from the first duty cycle. The memory controller may further include a selector circuit configured to receive the second signal, receive a third signal which is generated on the basis of the second signal, and output a selected one of the second signal and the third signal. The memory controller may further include a control circuit configured to generate the control signal on the basis of the selected one of the second signal and the third signal output from the selector circuit.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: August 3, 2021
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventors: Jianan Wang, Kouichi Tashiro, Kenji Kikuchi
  • Patent number: 11079965
    Abstract: A data processing method for a computer system is provided. The computer system includes a host and an open-channel solid state drive. The open-channel solid state drive is connected with the host. The data processing method includes the following steps. Firstly, plural block characteristic parameters of a specified block in a non-volatile memory of the open-channel solid state drive are collected. Then, the plural block characteristic parameters are inputted into a prediction function, so that a prediction value is acquired. If the prediction value exceeds a threshold value, the data in the specified block of the non-volatile memory is moved to a blank block of the non-volatile memory by the host.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: August 3, 2021
    Assignee: SOLID STATE STORAGE TECHNOLOGY CORPORATION
    Inventors: Shih-Hung Hsieh, Yu-Cheng Kao, Sung-Hung Wu, I-Hsiang Chiu
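
A minimal Python sketch of the host-side prediction step described in the abstract above. The particular block parameters, the linear prediction function, its weights, and the threshold are all illustrative assumptions; the abstract only requires some prediction function and a threshold comparison.

```python
# Hypothetical sketch, assuming a weighted sum of three block characteristic
# parameters stands in for whatever model the host actually uses.

THRESHOLD = 0.8

WEIGHTS = {
    "erase_count":    0.002,
    "read_disturb":   0.0005,
    "retention_days": 0.01,
}

def prediction_function(params):
    # A simple weighted sum of the collected block characteristic parameters.
    return sum(WEIGHTS[name] * value for name, value in params.items())

def maybe_relocate(block_id, params, read_block, write_block, allocate_blank):
    value = prediction_function(params)
    if value > THRESHOLD:
        # Host moves the data out of the at-risk block into a blank block.
        blank = allocate_blank()
        write_block(blank, read_block(block_id))
        return blank
    return block_id

if __name__ == "__main__":
    blocks = {3: b"user data", 9: b""}
    moved_to = maybe_relocate(
        3, {"erase_count": 300, "read_disturb": 500, "retention_days": 20},
        read_block=lambda b: blocks[b],
        write_block=lambda b, d: blocks.__setitem__(b, d),
        allocate_blank=lambda: 9)
    print("data now in block", moved_to, blocks)
```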
  • Patent number: 11068169
    Abstract: A controller of a data storage system may poll a non-volatile memory component to determine an operational status of the non-volatile memory component after a memory operation has been initiated in the non-volatile memory component. The controller may, in response to determining that the operational status of the non-volatile memory component is busy, update a polling interval based on a polling factor. The controller may re-poll the non-volatile memory component to determine the operational status of the non-volatile memory component after expiration of the updated polling interval. If, in response to the re-polling, the operational status is determined to be busy, the controller may repeat the updating of the polling interval and the re-polling of the non-volatile memory component until the operational status is determined to be ready or until a predetermined number of iterations of the updating and re-polling have been performed.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: July 20, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventor: Mark Elliott
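
A minimal Python sketch of the polling loop described in the abstract above. The multiplicative polling factor, initial interval, and iteration limit are illustrative assumptions; the abstract only requires that the interval be updated by some polling factor and that re-polling be bounded.

```python
# Hypothetical sketch: exponential-style backoff polling of a memory
# component's status, capped at a fixed number of re-polls.

import time

def poll_until_ready(poll_status, initial_interval=0.001,
                     polling_factor=2.0, max_iterations=8):
    interval = initial_interval
    status = poll_status()                    # initial poll after issuing the op
    iterations = 0
    while status == "busy" and iterations < max_iterations:
        interval *= polling_factor            # update interval using the factor
        time.sleep(interval)
        status = poll_status()                # re-poll after the updated interval
        iterations += 1
    return status

if __name__ == "__main__":
    # Fake NAND component that reports busy for the first three polls.
    replies = iter(["busy", "busy", "busy", "ready"])
    print(poll_until_ready(lambda: next(replies)))
```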
  • Patent number: 11055217
    Abstract: Techniques are disclosed for identifying multiple sections from one or more tracks of a media file and reading them together in a consumption-driven pipeline process. A render pipeline may comprise a sample generator, a sample buffer, and a destination buffer. Multiple render pipelines may be used for parsing multiple tracks of the media file. An I/O manager may determine that a destination buffer requires new data. The I/O manager may schedule a memory read for a data element from the sample buffer corresponding to the destination buffer and may determine if any of the sample buffers have data elements with memory locations close to the scheduled read. If so, the I/O manager may also schedule those memory locations to be read. After reading, the filled data elements corresponding to the read memory may then be sent to their corresponding destination buffers to be consumed and added to their corresponding tracks.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: July 6, 2021
    Assignee: Apple Inc.
    Inventors: John Samuel Bushell, Moritz Wittenhagen
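
A minimal Python sketch of the consumption-driven read coalescing described in the abstract above: when a destination buffer needs data, the I/O manager schedules a read for the corresponding sample-buffer element and also reads any pending elements from other sample buffers whose file offsets fall close to it. The proximity window and all names are illustrative assumptions.

```python
# Hypothetical sketch, assuming reads within 4 KiB of the anchor offset are
# coalesced into one scheduled pass over the media file.

PROXIMITY = 4096

def schedule_reads(needy_track, sample_buffers):
    """sample_buffers: track -> list of (offset, length) still to be read."""
    anchor_offset, anchor_len = sample_buffers[needy_track][0]
    scheduled = [(needy_track, anchor_offset, anchor_len)]
    for track, pending in sample_buffers.items():
        for offset, length in pending:
            if track == needy_track and offset == anchor_offset:
                continue
            if abs(offset - anchor_offset) <= PROXIMITY:
                scheduled.append((track, offset, length))
    # A single pass over the file can now service every scheduled element,
    # after which each element is handed to its track's destination buffer.
    return sorted(scheduled, key=lambda item: item[1])

if __name__ == "__main__":
    buffers = {
        "video": [(100_000, 1500), (500_000, 1500)],
        "audio": [(101_200, 400), (300_000, 400)],
    }
    print(schedule_reads("video", buffers))
```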
  • Patent number: 11042374
    Abstract: Embodiments are disclosed for managing a non-volatile dual in-line memory module (NVDIMM) storage system. The techniques include loading an executable to a volatile random access memory. The techniques also include in response to a store operation attempted by the executable, determining that a target address of the store operation is not mapped from an address in the random access memory to an address in an NVDIMM. The techniques further include mapping the target address from the address in the volatile random access memory to the address in the NVDIMM. Additionally, the techniques include performing the store operation in the address in the NVDIMM based on the mapping.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: June 22, 2021
    Assignee: International Business Machines Corporation
    Inventors: Carlos Eduardo Seo, Juscelino Candido De Lima Junior, Breno H. Leitao
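
A minimal Python sketch of the map-on-first-store behavior described in the abstract above: a store by the executable targets an address that is not yet mapped to the NVDIMM, the mapping is created, and the store is then performed through it. The dict-based page map and page size are illustrative assumptions.

```python
# Hypothetical sketch, assuming 4 KiB pages and a simple bump allocator for
# NVDIMM pages.

PAGE_SIZE = 4096

class NvdimmManager:
    def __init__(self):
        self.page_map = {}            # RAM page -> NVDIMM page
        self.nvdimm = {}              # NVDIMM contents, keyed by (page, offset)
        self.next_nvdimm_page = 0

    def store(self, ram_addr, value):
        page, offset = divmod(ram_addr, PAGE_SIZE)
        if page not in self.page_map:
            # Target address is not mapped yet: map it to an NVDIMM page.
            self.page_map[page] = self.next_nvdimm_page
            self.next_nvdimm_page += 1
        nv_page = self.page_map[page]
        self.nvdimm[(nv_page, offset)] = value     # store via the mapping

if __name__ == "__main__":
    mgr = NvdimmManager()
    mgr.store(0x7f001234, b"persisted")            # first store triggers mapping
    print(mgr.page_map, mgr.nvdimm)
```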
  • Patent number: 11029848
    Abstract: A file management method, a distributed storage system, and a management node are disclosed. In the distributed storage system, after receiving a file creation request sent by a host for requesting to create a file in a distributed storage system, a management node allocates, to the file, first virtual space from global virtual address space of the distributed storage system, where local virtual address space of each storage node in the distributed storage system is corresponding to a part of the global virtual address space. Then, the management node records metadata of the file, where the metadata of the file includes information about the first virtual space, and the information about the first virtual space is used to point to local virtual address space of a storage node that is used to store the file. Further, the management node sends the information about the first virtual space to the host.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: June 8, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jun Xu, Junfeng Zhao, Yuangang Wang
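
A minimal Python sketch of the file-creation path described in the abstract above: the management node carves first virtual space for the new file out of the global virtual address space, records it as file metadata, and returns it to the host. The bump allocator and the fixed per-node slice size are illustrative assumptions.

```python
# Hypothetical sketch, assuming each storage node backs a fixed 1 GiB slice
# of the global virtual address space.

ZONE_SIZE = 1 << 30

class ManagementNode:
    def __init__(self, storage_nodes):
        self.storage_nodes = storage_nodes       # ordered list of node names
        self.next_free = 0                        # bump pointer in global space
        self.metadata = {}                        # file name -> (start, length)

    def create_file(self, name, length):
        start = self.next_free
        self.next_free += length
        self.metadata[name] = (start, length)    # record metadata of the file
        return start, length                      # info sent back to the host

    def node_for(self, global_addr):
        # Each node's local virtual address space backs one slice of the
        # global virtual address space.
        return self.storage_nodes[global_addr // ZONE_SIZE]

if __name__ == "__main__":
    mgmt = ManagementNode(["node-0", "node-1", "node-2"])
    start, length = mgmt.create_file("/data/log", 4096)
    print("first virtual space:", hex(start), length,
          "stored on", mgmt.node_for(start))
```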
  • Patent number: 11003373
    Abstract: A method for managing physical-to-logical address information in a memory system includes determining whether a memory fragment of a memory block is a last memory fragment of the memory block. The method also includes, in response to a determination that the memory fragment is not the last memory fragment of the memory block: performing a write operation on the memory fragment; storing, in cache associated with the memory system, physical-to-logical address information associated with the memory fragment; and, in response to a determination that the cache is full, writing, to a next memory fragment of the memory block, control metadata associated with physical-to-logical address information stored in the cache.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: May 11, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Niraj Srimal, Ramanathan Muthiah
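
A minimal Python sketch of the per-fragment bookkeeping described in the abstract above: after a memory fragment is written, its physical-to-logical entry goes into a cache; when the cache fills, the accumulated control metadata is written into the next fragment of the block. The cache capacity and fragment layout are illustrative assumptions, and the last-fragment case is left out.

```python
# Hypothetical sketch, assuming a 4-entry P2L cache and a block modeled as a
# flat list of fragments.

CACHE_CAPACITY = 4

class Block:
    def __init__(self, num_fragments):
        self.fragments = [None] * num_fragments

class P2LWriter:
    def __init__(self):
        self.cache = []          # cached physical-to-logical entries

    def write_fragment(self, block, frag_index, logical_addr, data):
        is_last = frag_index == len(block.fragments) - 1
        block.fragments[frag_index] = ("data", data)
        if is_last:
            # Last-fragment handling (e.g. flushing remaining entries) omitted.
            return frag_index + 1
        self.cache.append((frag_index, logical_addr))
        if len(self.cache) == CACHE_CAPACITY:
            # Cache is full: write control metadata to the next memory fragment.
            block.fragments[frag_index + 1] = ("metadata", list(self.cache))
            self.cache.clear()
            return frag_index + 2        # next data fragment to use
        return frag_index + 1

if __name__ == "__main__":
    blk, writer = Block(8), P2LWriter()
    frag = 0
    for lba in range(5):
        frag = writer.write_fragment(blk, frag, logical_addr=lba, data=f"D{lba}")
    print(blk.fragments)
```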
  • Patent number: 10983918
    Abstract: A variety of applications can include systems and methods that utilize a hybrid logical to physical (L2P) caching scheme. An L2P cache and an L2P changelog in a storage device can be controlled for use in write and read operations of a memory system. A page pointer table in the L2P cache can be accessed, for performance of a write operation in the memory system, to obtain a specific physical address mapped to a specified logical block address from a host, where the access is based on the page pointer table loaded into the L2P cache from the L2P changelog. The L2P cache area can be progressively populated with the page pointer tables in the L2P changelog that are most frequently accessed in the latest host accesses.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: April 20, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Carminantonio Manganelli, Yoav Weinberg, Alberto Sassara, Paolo Papa, Luigi Esposito, Giuseppe D'Eliseo, Angelo Della Monica, Massimo Iaculo
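
A minimal Python sketch of the hybrid L2P caching described in the abstract above: access frequencies of page pointer tables (PPTs) are tracked, and the small L2P cache is progressively filled with the most frequently used PPTs loaded from the changelog. The LBA-to-PPT split, the two-slot cache, and the Counter-based frequency tracking are illustrative assumptions.

```python
# Hypothetical sketch, assuming 1024 entries per page pointer table and a
# two-slot L2P cache that evicts the least frequently accessed PPT.

from collections import Counter

ENTRIES_PER_PPT = 1024
CACHE_SLOTS = 2

class HybridL2P:
    def __init__(self):
        self.changelog = {}        # ppt_index -> {lba: physical_addr}
        self.cache = {}            # ppt_index -> {lba: physical_addr}
        self.hits = Counter()      # frequency of PPT accesses

    def _ppt_index(self, lba):
        return lba // ENTRIES_PER_PPT

    def lookup(self, lba):
        ppt = self._ppt_index(lba)
        self.hits[ppt] += 1
        if ppt not in self.cache:
            # Load the PPT from the changelog; keep only the hottest PPTs cached.
            self.cache[ppt] = self.changelog.setdefault(ppt, {})
            if len(self.cache) > CACHE_SLOTS:
                coldest = min(self.cache, key=lambda p: self.hits[p])
                self.changelog[coldest] = self.cache.pop(coldest)
        return self.cache[ppt].get(lba)

    def update(self, lba, physical_addr):
        self.lookup(lba)                       # ensure the PPT is cached
        self.cache[self._ppt_index(lba)][lba] = physical_addr

if __name__ == "__main__":
    l2p = HybridL2P()
    l2p.update(10, 0xA000)
    l2p.update(2050, 0xB000)
    print(l2p.lookup(10), l2p.lookup(2050), sorted(l2p.cache))
```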
  • Patent number: 10970104
    Abstract: A resource access method applied to a computer, and the computer itself, are provided, where the resource access method is performed by a resource controller that is used to implement resource virtualization. The method includes receiving a resource access request of a virtual machine (VM) for a resource, where the resource access request carries a resource virtual address and an identifier of the VM; translating the resource virtual address into a resource physical address using the identifier of the VM and based on a preset resource information mapping relationship; updating the resource virtual address in the resource access request using the resource physical address; and sending an updated resource access request to a to-be-accessed resource corresponding to the resource physical address in order to access the to-be-accessed resource.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: April 6, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Zihao Yu, Jiuyue Ma, Yungang Bao
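
A minimal Python sketch of the address translation described in the abstract above: the resource controller keys its mapping on (VM identifier, resource virtual address), rewrites the request with the resource physical address, and sends it on to the owning resource. The tuple-keyed mapping table and request-dict layout are assumptions.

```python
# Hypothetical sketch; the "resources" dict of callables stands in for real
# hardware resources reachable at physical addresses.

class ResourceController:
    def __init__(self, mapping):
        # Preset mapping: (vm_id, resource_virtual_addr) -> resource_physical_addr
        self.mapping = mapping

    def handle(self, request, resources):
        key = (request["vm_id"], request["resource_virtual_addr"])
        physical = self.mapping[key]
        # Update the request to carry the physical address instead.
        request = dict(request, resource_physical_addr=physical)
        del request["resource_virtual_addr"]
        # Send the updated request to the resource that owns that address.
        return resources[physical](request)

if __name__ == "__main__":
    controller = ResourceController({("vm-1", 0x100): 0x9000})
    resources = {0x9000: lambda req: f"resource at 0x9000 served {req['vm_id']}"}
    print(controller.handle(
        {"vm_id": "vm-1", "resource_virtual_addr": 0x100, "op": "read"},
        resources))
```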
  • Patent number: 10942859
    Abstract: A computing system using a bit counter may include a host device; a cache configured to temporarily store data of the host device, and including a plurality of sets; a cache controller configured to receive a multi-bit cache address from the host device, perform computation on the cache address using a plurality of bit counters, and determine a hash function of the cache; a semiconductor device; and a memory controller configured to receive the cache address from the cache controller, and map the cache address to a semiconductor device address.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: March 9, 2021
    Assignees: SK hynix Inc., Korea University Industry Cooperation Foundation
    Inventors: Seonwook Kim, Wonjun Lee, Yoonah Paik, Jaeyung Jun
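
A minimal Python sketch of the bit-counter idea described in the abstract above (and in the related patent that follows): the cache controller keeps one counter per address bit, uses the counters to see which bits actually vary across recent accesses, and builds the cache's set-index hash from the most balanced bits. The balance metric, 16-bit addresses, and 16-set cache are illustrative assumptions.

```python
# Hypothetical sketch: choose the four address bits whose counters are closest
# to half the number of samples, i.e. the bits that split accesses most evenly.

ADDR_BITS = 16
NUM_SETS = 16                      # 4 index bits

class BitCounterHash:
    def __init__(self):
        self.counters = [0] * ADDR_BITS     # one bit counter per address bit
        self.samples = 0
        self.index_bits = list(range(4))    # initial hash: lowest 4 bits

    def observe(self, addr):
        self.samples += 1
        for bit in range(ADDR_BITS):
            self.counters[bit] += (addr >> bit) & 1

    def rebuild_hash(self):
        # Prefer bits whose counter is closest to half the samples.
        balance = lambda bit: abs(self.counters[bit] - self.samples / 2)
        self.index_bits = sorted(range(ADDR_BITS), key=balance)[:4]

    def set_index(self, addr):
        index = 0
        for out_bit, addr_bit in enumerate(self.index_bits):
            index |= ((addr >> addr_bit) & 1) << out_bit
        return index % NUM_SETS

if __name__ == "__main__":
    h = BitCounterHash()
    for addr in range(0, 64 * 1024, 1024):   # strided accesses alias the low bits
        h.observe(addr)
    h.rebuild_hash()
    print("chosen index bits:", h.index_bits)
    print("set for 0x2400:", h.set_index(0x2400))
```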
  • Patent number: 10942860
    Abstract: A computing system using a bit counter may include a host device; a cache configured to temporarily store data of the host device, and including a plurality of sets; a cache controller configured to receive a multi-bit cache address from the host device, perform computation on the cache address using a plurality of bit counters, and determine a hash function of the cache; a semiconductor device; and a memory controller configured to receive the cache address from the cache controller, and map the cache address to a semiconductor device address.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: March 9, 2021
    Assignees: SK hynix Inc., Korea University Industry Cooperation Foundation
    Inventors: Seonwook Kim, Wonjun Lee, Yoonah Paik, Jaeyung Jun