Patents Examined by Hong C Kim
  • Patent number: 10936503
    Abstract: A method for managing metadata in a scale-out storage system is disclosed. The system includes a plurality of nodes, a storage pool, first metadata that maps logical addresses of logical data blocks to corresponding content identifiers, and second metadata that maps content identifiers to corresponding physical addresses of physical data blocks in the storage pool and maintains a reference count. During an add-a-node operation, the processors are configured to move some physical data blocks, together with their content identifiers and reference counts in the second metadata, from the existing nodes to the new node without accessing or altering the first metadata. A method is also disclosed to move a logical device from one node to another by de-activating the logical device's first metadata on the first node and retrieving and activating it on the second node.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: March 2, 2021
    Assignee: ORCA DATA TECHNOLOGY (XI'AN) CO., LTD
    Inventors: Arthur James Beaverson, Bang Chang
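    Illustrative sketch: a minimal Python model of the two-level metadata scheme this abstract describes; the class names, the rendezvous-hash placement, and the rebalancing loop are assumptions for illustration, not the patented implementation.

      import hashlib

      class Node:
          def __init__(self, name):
              self.name = name
              self.second_metadata = {}   # content id -> [physical address, reference count]
              self.blocks = {}            # physical address -> data block
              self.next_addr = 0

          def store_block(self, data):
              addr = self.next_addr
              self.next_addr += 1
              self.blocks[addr] = data
              return addr

      class ScaleOutStore:
          def __init__(self, nodes):
              self.nodes = nodes
              self.first_metadata = {}    # logical address -> content id

          def _owner(self, cid):
              # Rendezvous (highest-random-weight) placement: adding a node only
              # reassigns content to that new node, never between existing nodes.
              return max(self.nodes,
                         key=lambda n: hashlib.sha256((cid + n.name).encode()).hexdigest())

          def write(self, logical_addr, data):
              cid = hashlib.sha256(data).hexdigest()
              self.first_metadata[logical_addr] = cid      # first metadata: logical -> content id
              node = self._owner(cid)
              if cid in node.second_metadata:
                  node.second_metadata[cid][1] += 1        # deduplicated: bump the reference count
              else:
                  node.second_metadata[cid] = [node.store_block(data), 1]

          def read(self, logical_addr):
              cid = self.first_metadata[logical_addr]
              node = self._owner(cid)
              return node.blocks[node.second_metadata[cid][0]]

          def add_node(self, new_node):
              # Rebalancing moves physical blocks and their second-metadata entries only;
              # the first metadata (logical address -> content id) is never read or changed.
              self.nodes.append(new_node)
              for node in self.nodes[:-1]:
                  for cid in list(node.second_metadata):
                      if self._owner(cid) is new_node:
                          paddr, refs = node.second_metadata.pop(cid)
                          data = node.blocks.pop(paddr)
                          new_node.second_metadata[cid] = [new_node.store_block(data), refs]

      store = ScaleOutStore([Node("n1"), Node("n2")])
      store.write(0, b"hello")
      store.add_node(Node("n3"))
      print(store.read(0))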
  • Patent number: 10929067
    Abstract: According to one embodiment, a memory system determines a write destination block and a write destination location in the write destination block to which write data is to be written, and notifies a host of an identifier of the write data, a block address of the write destination block, and an offset indicative of the write destination location. The memory system retrieves the write data from a write buffer of the host and writes the write data to the write destination location. If a read command designating a physical address of first data is received before the write operation of the first data is finished, the memory system reads the first data from the write buffer of the host.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: February 23, 2021
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Shinichi Kanno
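    Illustrative sketch: a simplified Python model of the flow this abstract describes; the class and method names are assumptions, and a real device would track many blocks, namespaces, and commands in flight.

      class HostWriteBuffer:
          def __init__(self):
              self.pending = {}                      # write identifier -> data held by the host

      class MemorySystem:
          def __init__(self, blocks, block_size):
              self.flash = [[None] * block_size for _ in range(blocks)]
              self.next = (0, 0)                     # next write destination (block, offset)
              self.in_flight = {}                    # (block, offset) -> write identifier

          def submit_write(self, host_buf, tag, data):
              host_buf.pending[tag] = data           # data stays in the host write buffer
              block, offset = self.next              # the device chooses the destination
              self.in_flight[(block, offset)] = tag
              if offset + 1 < len(self.flash[block]):
                  self.next = (block, offset + 1)
              else:
                  self.next = (block + 1, 0)
              return tag, block, offset              # identifier, block address, offset -> host

          def complete_write(self, host_buf, block, offset):
              tag = self.in_flight.pop((block, offset))
              self.flash[block][offset] = host_buf.pending.pop(tag)   # retrieve from host buffer

          def read(self, host_buf, block, offset):
              if (block, offset) in self.in_flight:  # write not finished yet:
                  return host_buf.pending[self.in_flight[(block, offset)]]
              return self.flash[block][offset]

      host = HostWriteBuffer()
      dev = MemorySystem(blocks=4, block_size=2)
      tag, blk, off = dev.submit_write(host, "w1", b"DATA")
      print(dev.read(host, blk, off))      # served from the host write buffer
      dev.complete_write(host, blk, off)
      print(dev.read(host, blk, off))      # now served from the written block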
  • Patent number: 10929020
    Abstract: An information processing device includes a storage, a communication unit, and a control unit functioning as a receiving unit, a processing unit, and a storage controller. When the receiving unit receives a predetermined storage instruction, the storage controller stores data in the storage, transmits the data to a predetermined storage device among a plurality of storage devices, and stores the transmitted data in the predetermined storage device.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: February 23, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventors: Shoichi Sakaguchi, Yoshiyuki Fujiwara, Yoshihisa Tanaka, Yoshiki Yoshioka, Tetsuya Nishino, Seiji Onishi
  • Patent number: 10915259
    Abstract: A memory system may include: a memory device storing data and including a memory interface in communication with a memory controller; and the memory controller controlling the memory device and including a controller interface in communication with the memory device, wherein, when the memory device is inaccessible, the memory controller requests current state information, including a current operation mode of the memory interface, from the memory device, and changes an operation mode of the controller interface to match the current operation mode of the memory interface according to the current state information received from the memory device.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: February 9, 2021
    Assignee: SK hynix Inc.
    Inventor: Jee-Yul Kim
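    Illustrative sketch: a small Python model of the recovery step this abstract describes; the mode names and method names are assumptions.

      class MemoryDevice:
          def __init__(self, mode):
              self.interface_mode = mode                 # current operation mode of the memory interface
          def current_state(self):
              return {"operation_mode": self.interface_mode}

      class MemoryController:
          def __init__(self, mode):
              self.interface_mode = mode                 # operation mode of the controller interface
          def recover(self, device):
              state = device.current_state()             # request current state information
              if state["operation_mode"] != self.interface_mode:
                  self.interface_mode = state["operation_mode"]   # change mode to match the device
              return self.interface_mode

      controller = MemoryController("single-data-rate")
      print(controller.recover(MemoryDevice("double-data-rate")))   # controller now matches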
  • Patent number: 10915459
    Abstract: A computer system includes a translation lookaside buffer (TLB) and a processor. The TLB comprises a first TLB array and a second TLB array, and stores entries comprising virtual address information and corresponding real address information. The processor is configured to receive a first virtual address for translation, and to concurrently determine whether the TLB stores a physical address associated with the first virtual address based on a first portion and a second portion of the first virtual address. The first portion is associated with a first page size and the second portion is associated with a second page size different from the first page size. The first portion is used to perform a lookup in one of the first TLB array and the second TLB array, and the second portion is used to perform a lookup in the other.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks
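    Illustrative sketch: a toy Python model of the dual-array lookup this abstract describes; the page sizes, set count, and the static binding of one page size per array are simplifying assumptions.

      SMALL_PAGE = 4 * 1024            # assumed 4 KiB page size
      LARGE_PAGE = 2 * 1024 * 1024     # assumed 2 MiB page size

      class TlbArray:
          def __init__(self, num_sets, page_size):
              self.num_sets = num_sets
              self.page_size = page_size
              self.entries = {}                       # set index -> (virtual page, real page)

          def install(self, vpn, rpn):
              self.entries[vpn % self.num_sets] = (vpn, rpn)

          def probe(self, vaddr):
              vpn = vaddr // self.page_size           # index portion for this page size
              hit = self.entries.get(vpn % self.num_sets)
              if hit and hit[0] == vpn:
                  return hit[1] * self.page_size + vaddr % self.page_size
              return None

      class Tlb:
          def __init__(self):
              # One array is probed with the small-page portion of the address and the
              # other with the large-page portion; hardware issues both probes in parallel.
              self.arrays = [TlbArray(64, SMALL_PAGE), TlbArray(64, LARGE_PAGE)]

          def translate(self, vaddr):
              for array in self.arrays:               # modeled sequentially for clarity
                  real = array.probe(vaddr)
                  if real is not None:
                      return real
              return None                             # miss: fall back to a page-table walk

      tlb = Tlb()
      tlb.arrays[0].install(vpn=0x1234, rpn=0x42)
      print(hex(tlb.translate(0x1234 * SMALL_PAGE + 0x10)))   # 0x42010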
  • Patent number: 10901894
    Abstract: A heterogeneous memory system is implemented using a low-latency near memory (NM) and a high-latency far memory (FM). Pages in the memory system include NM blocks stored in the NM and FM blocks stored in the FM. A page is assigned to a region in the memory system based on the proportion of NM blocks in the page. When accessing a block, the block address is used to determine a region of the memory system, and a block offset is used to determine whether the block is stored in NM or FM. The memory system may observe memory accesses to determine the access statistics of the page and the block. Based on a page's hotness and access density, the page may be migrated to a different region. Based on a block's hotness, the block may be migrated between NM and FM allocated to the page.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: January 26, 2021
    Assignee: Oracle International Corporation
    Inventors: Lizy John, Jee Ho Ryoo, Hung-Ming Hsu, Karthik Ganesan
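    Illustrative sketch: a Python model of the region/offset scheme this abstract describes; the page geometry, thresholds, and the rule that the first blocks of a page are the near-memory ones are assumptions.

      BLOCKS_PER_PAGE = 8      # assumed page geometry

      class HybridMemory:
          def __init__(self):
              self.page_region = {}    # page -> number of NM blocks (the page's region)
              self.access_count = {}   # (page, block offset) -> observed accesses

          def place_page(self, page, nm_blocks):
              self.page_region[page] = nm_blocks       # assign the page to a region

          def access(self, page, block):
              self.access_count[(page, block)] = self.access_count.get((page, block), 0) + 1
              # The page's region boundary decides whether this block offset is in NM or FM.
              return "NM" if block < self.page_region.get(page, 0) else "FM"

          def maybe_migrate(self, page, hot_threshold=100):
              # A hot, densely accessed page earns a larger NM share (blocks migrate FM -> NM);
              # a real controller would also demote cold pages in the other direction.
              hits = sum(c for (p, _), c in self.access_count.items() if p == page)
              nm_blocks = self.page_region.get(page, 0)
              if hits > hot_threshold and nm_blocks < BLOCKS_PER_PAGE:
                  self.page_region[page] = nm_blocks + 1

      mem = HybridMemory()
      mem.place_page(page=3, nm_blocks=2)
      print(mem.access(page=3, block=1), mem.access(page=3, block=5))   # NM FM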
  • Patent number: 10896141
    Abstract: In one embodiment, a cache memory includes: a plurality of data banks, each of the plurality of data banks having a plurality of entries each to store a portion of a cache line distributed across the plurality of data banks; and a plurality of tag banks decoupled from the plurality of data banks, wherein a tag for a cache line is to be assigned to one of the plurality of tag banks. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: January 19, 2021
    Assignee: Intel Corporation
    Inventors: Jeffrey J. Cook, Jonathan D. Pearce, Srikanth T. Srinivasan, Rishiraj A. Bheda, David B. Sheffield, Abhijit Davare, Anton Alexandrovich Sorokin
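    Illustrative sketch: a minimal Python model of the decoupled-bank organization this abstract describes; the bank counts, line size, and the modulo choice of tag bank are assumptions.

      NUM_DATA_BANKS = 4
      NUM_TAG_BANKS = 2
      LINE_BYTES = 64

      class BankedCache:
          def __init__(self):
              self.tag_banks = [dict() for _ in range(NUM_TAG_BANKS)]    # tag -> present
              self.data_banks = [dict() for _ in range(NUM_DATA_BANKS)]  # tag -> line portion

          def fill(self, addr, line):
              assert len(line) == LINE_BYTES
              tag = addr // LINE_BYTES
              self.tag_banks[tag % NUM_TAG_BANKS][tag] = True    # tag assigned to one tag bank
              chunk = LINE_BYTES // NUM_DATA_BANKS
              for b in range(NUM_DATA_BANKS):                    # line distributed across data banks
                  self.data_banks[b][tag] = line[b * chunk:(b + 1) * chunk]

          def read(self, addr):
              tag = addr // LINE_BYTES
              if tag not in self.tag_banks[tag % NUM_TAG_BANKS]:
                  return None                                    # miss
              return b"".join(self.data_banks[b][tag] for b in range(NUM_DATA_BANKS))

      cache = BankedCache()
      cache.fill(0x1000, bytes(range(64)))
      print(cache.read(0x1000)[:8])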
  • Patent number: 10892003
    Abstract: Memory devices, systems, and methods are disclosed, such as those involving a plurality of stacked memory device dice and a logic die connected to each other through a plurality of conductors. The logic die serves, for example, as a memory interface device to a memory access device, such as a processor. The logic die can include a command register that allows selective operation in either of two modes. In a direct mode, conventional command signals as well as row and column address signals are applied to the logic die, and the logic die can essentially couple these signals directly to the memory device dice. In an indirect mode, a packet containing a command and a composite address is applied to the logic die, and the logic die can decode the command and composite address to apply conventional command signals as well as row and column address signals to the memory device dice.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: January 12, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Joseph M. Jeddeloh
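    Illustrative sketch: a rough Python model of the two command modes this abstract describes; the packet layout and field widths are assumptions.

      ROW_BITS, COL_BITS = 14, 10      # assumed address field widths

      def direct_mode(command, row, col):
          # Conventional signals are essentially coupled straight through to the dice.
          return command, row, col

      def indirect_mode(packet):
          # The logic die decodes a packet holding a command plus a composite address.
          command = packet >> (ROW_BITS + COL_BITS)
          composite = packet & ((1 << (ROW_BITS + COL_BITS)) - 1)
          row = composite >> COL_BITS
          col = composite & ((1 << COL_BITS) - 1)
          return command, row, col

      # A packet carrying command 0b101 and a composite address with row=5, col=9
      # decodes to the same signals direct mode would have applied.
      assert indirect_mode((0b101 << 24) | (5 << 10) | 9) == direct_mode(0b101, 5, 9)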
  • Patent number: 10884959
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a system-level cache to allocate cache resources by a way-partitioning process. One of the methods includes maintaining a mapping between partitions and priority levels and allocating primary ways to respective enabled partitions in an order corresponding to the respective priority levels assigned to the enabled partitions.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: January 5, 2021
    Assignee: Google LLC
    Inventors: Vinod Chamarty, Xiaoyu Ma, Hongil Yoon, Keith Robert Pflederer, Weiping Liao, Benjamin Dodge, Albert Meixner, Allan Douglas Knies, Manu Gulati, Rahul Jagdish Thakur, Jason Rupert Redgrave
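    Illustrative sketch: a simplified Python version of the priority-ordered way allocation this abstract describes; the way count, field names, and request sizes are assumptions.

      TOTAL_WAYS = 16      # assumed number of primary ways in the system-level cache

      def allocate_primary_ways(partitions):
          """partitions: dicts with 'id', 'priority', 'enabled', and 'ways_requested'."""
          free_ways = list(range(TOTAL_WAYS))
          allocation = {}
          enabled = [p for p in partitions if p["enabled"]]
          for part in sorted(enabled, key=lambda p: p["priority"]):   # priority order
              take = min(part["ways_requested"], len(free_ways))
              allocation[part["id"]] = [free_ways.pop(0) for _ in range(take)]
          return allocation

      # Priority 0 is served before priority 1; a disabled partition receives nothing.
      print(allocate_primary_ways([
          {"id": "gpu", "priority": 1, "enabled": True,  "ways_requested": 6},
          {"id": "cpu", "priority": 0, "enabled": True,  "ways_requested": 12},
          {"id": "dsp", "priority": 2, "enabled": False, "ways_requested": 4},
      ]))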
  • Patent number: 10853237
    Abstract: A method, computer program product, and computer system for receiving, at a first computing device, a first data chunk sent from a second computing device. It may be determined that the first data chunk includes a first type of data. The first data chunk may be stored to a cache operatively coupled to the first computing device based upon, at least in part, determining that the first data chunk includes the first type of data, wherein the cache may include a first storage device type. An acknowledgement of a successful write of the first data chunk may be sent to the second computing device based upon, at least in part, a successful storing of the first data chunk to the cache operatively coupled to the first computing device.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: December 1, 2020
    Assignee: EMC IP Holding Company, LLC
    Inventors: Mikhail Danilov, Andrey Fomin, Alexander Rakulenko, Mikhail Malygin, Chen Wang
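    Illustrative sketch: a bare-bones Python version of the acknowledgement path this abstract describes; the type label and device classes are assumptions.

      class Device:
          def __init__(self):
              self.chunks = []
          def write(self, chunk):
              self.chunks.append(chunk)
              return True

      def handle_chunk(chunk, chunk_type, cache, backing_store, send_ack):
          if chunk_type == "type-1":                 # assumed label for the first data type
              ok = cache.write(chunk)                # cache backed by the first storage device type
              if ok:
                  send_ack("write acknowledged")     # ack as soon as the cache write succeeds
              return ok
          return backing_store.write(chunk)          # other chunks bypass the cache

      handle_chunk(b"payload", "type-1", Device(), Device(), print)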
  • Patent number: 10853240
    Abstract: A memory system may include: a memory device including a plurality of dies; and a controller suitable for controlling the memory device, wherein the controller includes: a buffer including a plurality of entries suitable for temporarily storing target data; a monitor suitable for comparing a size of the target data with a predetermined threshold value; a buffer manager suitable for determining, when the size of the target data is equal to or greater than the predetermined threshold value, a skip value based on physical information of the memory device, and storing a start entry and an end entry in which the target data is stored; and a processor suitable for controlling the memory device to perform a program operation on the target data through an interleaving programming method based on the start entry, the end entry, and the skip value.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: December 1, 2020
    Assignee: SK hynix Inc.
    Inventor: Eu-Joon Byun
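    Illustrative sketch: a rough Python version of the interleaved programming plan this abstract describes; the die count, threshold, and skip-value rule are assumptions.

      NUM_DIES = 4       # assumed physical information of the memory device
      THRESHOLD = 8      # assumed threshold, in buffer entries

      def plan_program(start_entry, end_entry):
          """Map buffer entries [start_entry, end_entry] onto dies using a skip value."""
          size = end_entry - start_entry + 1
          if size < THRESHOLD:
              return [(i, 0) for i in range(start_entry, end_entry + 1)]   # no interleaving
          skip = NUM_DIES                            # skip value derived from the die count
          plan = []
          for die in range(skip):
              for entry in range(start_entry + die, end_entry + 1, skip):
                  plan.append((entry, die))          # (buffer entry, destination die)
          return plan

      # Entries 0..11 striped over 4 dies: die 0 programs 0, 4, 8; die 1 programs 1, 5, 9; ...
      print(plan_program(0, 11))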
  • Patent number: 10847204
    Abstract: An apparatus comprises first and second memory regions each to store data using a data storage technology for which retention of data for longer than a predetermined period of time is dependent on a refresh operation for refreshing data in the memory region being performed at a frequency that is greater than or equal to a minimum refresh frequency. The apparatus further comprises at least one controller to control storage of data in the first memory region with the refresh operation performed at a first frequency lower than said minimum refresh frequency when valid data is stored in the first memory region, and to control storage of data in the second memory region with the refresh operation performed at a second frequency that is greater than or equal to said minimum refresh frequency. The at least one controller is configured to communicate with the first memory region via a first memory channel and with the second memory region via a second memory channel.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: November 24, 2020
    Assignee: ARM LIMITED
    Inventor: Wei Wang
  • Patent number: 10846004
    Abstract: A memory management system includes a memory, a processor, a memory access monitoring module and a memory management module. The processor is used to access the memory. The memory access monitoring module includes a first terminal coupled to the processor, and a second terminal coupled to the memory. The memory access monitoring module is used to monitor whether the processor has accessed the memory so as to generate monitor data. The memory management module is used to receive the monitor data and predict when the memory is to be accessed according to at least the monitor data.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: November 24, 2020
    Assignee: MEDIATEK INC.
    Inventors: Chia-Wei Chang, Shih-Hung Yu, Chieh-Lin Chuang
  • Patent number: 10838868
    Abstract: Embodiments for implementing a communicating memory between a plurality of computing components are provided. In one embodiment, an apparatus comprises a plurality of memory components residing on a processing chip, the plurality of memory components interconnected between a plurality of processing elements of at least one processing core of the processing chip and at least one external memory component external to the processing chip. The apparatus further comprises a plurality of load agents and a plurality of store agents on the processing chip, each interfacing with the plurality of memory components. Each of the plurality of load agents and the plurality of store agents execute an independent program specifying a destination of data transacted between the plurality of memory components, the at least one external memory component, and the plurality of processing elements.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: November 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Chia-Yu Chen, Jungwook Choi, Brian Curran, Bruce Fleischer, Kailash Gopalakrishnan, Jinwook Oh, Sunil K Shukla, Vijayalakshmi Srinivasan, Swagath Venkataramani
  • Patent number: 10838829
    Abstract: A data loading method and device, where the method includes obtaining a data loading request of a virtual machine after the virtual machine is started, the request asking to load target data in an image file, determining whether the target data is stored in a volume and a snapshot corresponding to the virtual machine, where the snapshot is obtained based on a blank volume corresponding to the virtual machine when the virtual machine is created, writing the target data from a mirror server into the snapshot when the target data is not stored in the volume or the snapshot, reading the target data and transferring the target data to the virtual machine, obtaining virtual machine data generated by the virtual machine, and writing the virtual machine data into the volume. Hence, a conflict between new data and old data in a data loading process is resolved.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: November 17, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Mingjun Li
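    Illustrative sketch: a condensed Python version of the copy-on-read flow this abstract describes; the object names and the dictionary-backed volume and snapshot are assumptions.

      class MirrorServer:
          def fetch(self, block_id):
              return f"image-data-{block_id}"

      def load_block(block_id, volume, snapshot, mirror_server):
          if block_id in volume:               # data written by the VM ("new" data) wins
              return volume[block_id]
          if block_id not in snapshot:         # image ("old") data not yet cached locally
              snapshot[block_id] = mirror_server.fetch(block_id)
          return snapshot[block_id]

      def store_block(block_id, data, volume):
          volume[block_id] = data              # VM-generated data never touches the snapshot

      volume, snapshot = {}, {}
      print(load_block(7, volume, snapshot, MirrorServer()))   # pulled from the mirror server
      store_block(7, "vm-data", volume)
      print(load_block(7, volume, snapshot, MirrorServer()))   # now served from the volume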
  • Patent number: 10838656
    Abstract: A system is provided to manage on-chip memory access for multiple threads. The system comprises multiple parallel processing units to execute the threads, and an on-chip memory including multiple memory units, each of which includes a first region and a second region. The first region and the second region have different memory addressing schemes for parallel access by the threads. The system further comprises an address decoder coupled to the parallel processing units and the on-chip memory. The address decoder is operative to activate access by the threads to memory locations in the first region or the second region according to decoded address signals from the parallel processing units.
    Type: Grant
    Filed: August 12, 2017
    Date of Patent: November 17, 2020
    Assignee: MediaTek Inc.
    Inventors: Po-Chun Fan, Pei-Kuei Tsung, Sung-Fang Tsai, Chia-Hsien Chou, Shou-Jen Lai
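    Illustrative sketch: a schematic Python decoder for the two addressing schemes this abstract describes; the region boundary, interleave granularity, and per-unit sizes are assumptions.

      NUM_UNITS = 4
      REGION_BOUNDARY = 0x1000     # assumed: addresses below this use the first region
      WORD = 4

      def decode(addr):
          if addr < REGION_BOUNDARY:
              # First region: word-interleaved across memory units so parallel threads
              # touching consecutive words land on different units.
              unit = (addr // WORD) % NUM_UNITS
              offset = (addr // WORD) // NUM_UNITS * WORD + addr % WORD
              return ("region1", unit, offset)
          # Second region: linear addressing, one contiguous range per memory unit.
          rel = addr - REGION_BOUNDARY
          unit_size = 0x400            # assumed second-region size per unit
          return ("region2", rel // unit_size, rel % unit_size)

      print([decode(a) for a in (0x0, 0x4, 0x8, 0xC)])   # four different units
      print(decode(0x1404))                              # region2, unit 1, offset 4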
  • Patent number: 10831388
    Abstract: A method and a system for permanently deleting data from storage. The method includes receiving a wipe command to permanently delete a data segment stored in a storage system. The data segment includes an address to blocks where the data of the data segment is stored. The method also includes sanitizing the data segment, marking the address as sanitized, and locating a last journal entry in a journal, where the last journal entry includes metadata regarding the data segment. The method further includes sanitizing the last journal entry, traversing the journal, and sanitizing each journal entry of the data segment.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Ben Sasson, Miles Mulholland, Lee Jason Sanders, Gordon Douglas Hutchison
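    Illustrative sketch: a minimal Python version of the wipe flow this abstract describes; the storage and journal structures are assumptions.

      def wipe_segment(segment_id, storage, journal, sanitized_addresses):
          for block in storage[segment_id]:          # sanitize the data segment itself
              block[:] = b"\x00" * len(block)
          sanitized_addresses.add(segment_id)        # mark the address as sanitized
          # Locate the last journal entry for the segment, then traverse the journal,
          # sanitizing every entry that refers to the segment.
          for entry in reversed(journal):
              if entry.get("segment") == segment_id:
                  entry["metadata"] = None
                  entry["sanitized"] = True

      storage = {"seg-1": [bytearray(b"secret!!"), bytearray(b"moredata")]}
      journal = [{"segment": "seg-1", "metadata": "created"},
                 {"segment": "seg-2", "metadata": "created"},
                 {"segment": "seg-1", "metadata": "extended"}]
      wipe_segment("seg-1", storage, journal, set())
      print(storage, journal)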
  • Patent number: 10824348
    Abstract: A secure memory is disclosed. The memory may include data storage for data, along with data read logic and data write logic to read and write data from the data storage. A password storage may store a stored password. A receiver may receive a received password from a memory controller. A comparator may compare the received password with the stored password. An erase logic may erase data in the data storage if the received password does not match the stored password. Finally, a block logic may block access to the memory from the memory controller until after the comparator completes its operation.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: November 3, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sompong Paul Olarig, Mu-Tien Chang
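    Illustrative sketch: a toy Python model of the behaviour this abstract describes; the class and method names are assumptions.

      class SecureMemory:
          def __init__(self, stored_password):
              self._stored_password = stored_password   # password storage
              self._data = {}                           # data storage
              self._blocked = True                      # block logic: no access before the compare

          def unlock(self, received_password):
              if received_password == self._stored_password:
                  self._blocked = False                 # comparator matched: allow controller access
              else:
                  self._data.clear()                    # erase logic: wipe data on a mismatch
              return not self._blocked

          def write(self, addr, value):
              if self._blocked:
                  raise PermissionError("memory is blocked until the password check completes")
              self._data[addr] = value

      mem = SecureMemory("s3cret")
      mem.unlock("s3cret")
      mem.write(0x10, 0xAB)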
  • Patent number: 10824350
    Abstract: A data processing apparatus and method serve to manage access permission checking in respect of contingent memory access operations (the access permission failure of which does not alter program flow) in dependence on a contingent-access permission checking disable flag. If the contingent-access permission checking disable flag has a first value, then this disables the memory permission circuitry, e.g. a walk state machine, from performing a check as to whether or not the memory access circuitry is permitted to perform a requested memory access. Non-contingent memory accesses are able to utilise the memory permission circuitry irrespective of the value of the contingent-access permission checking disable flag.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: November 3, 2020
    Assignee: ARM Limited
    Inventors: Nigel John Stephens, Grigorios Magklis
  • Patent number: 10817412
    Abstract: Apparatuses and methods for adaptive control of memory are disclosed. One example apparatus includes a memory configured to store information, where the memory is configured with two or more information depth maps. The example apparatus further includes a memory translation unit (MTU) configured to support an intermediate depth map of the memory during the migration of the information stored at the memory from a first information depth map of the two or more information depth maps to a second information depth map of the two or more information depth maps by maintaining mapping tables. The MTU is further configured to provide a mapped address associated with a requested address of a memory access request to the memory based on the mapping tables.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: October 27, 2020
    Assignee: Micron Technology, Inc.
    Inventors: David A. Roberts, J. Thomas Pawlowski, Robert Walker