Patents Examined by Yaima Rigol
  • Patent number: 11461247
    Abstract: Address translation circuitry translates a target virtual address specified by a memory access request into a target physical address associated with a selected physical address space. Granule protection information (GPI) loading circuitry loads from a memory system at least one granule protection descriptor providing GPI indicating, for at least one granule of physical addresses, which physical address spaces are allowed access to the at least one granule. GPI compressing circuitry compresses the GPI to generate compressed GPI. A GPI cache caches the compressed GPI. Filtering circuitry determines, on a hit in the GPI cache, whether the memory access request should be allowed to access the target physical address, based on whether the compressed GPI cached in the GPI cache for the target physical address indicates that the selected physical address space is allowed access to the target physical address. This allows more efficient caching of granule protection information.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: October 4, 2022
    Assignee: Arm Limited
    Inventors: Guillaume Bolbenes, Abhishek Raja
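    A minimal C sketch of the filtering check from patent 11461247, assuming a compressed GPI cache entry that stores one allowed-address-space bitmap per granule; the entry layout, the four address spaces, and the field names are illustrative assumptions, not Arm's actual encoding.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative physical address spaces (e.g. secure / non-secure / realm / root). */
      enum pas { PAS_SECURE = 0, PAS_NONSECURE = 1, PAS_REALM = 2, PAS_ROOT = 3 };

      /* Hypothetical compressed GPI cache entry: one allowed-PAS bitmap per granule. */
      struct gpi_entry {
          uint64_t granule_base;   /* physical address of the granule */
          uint8_t  allowed_pas;    /* bit i set => PAS i may access this granule */
          bool     valid;
      };

      /* Filtering check on a GPI cache hit: is the selected PAS allowed? */
      static bool gpi_access_allowed(const struct gpi_entry *e, enum pas selected)
      {
          return e->valid && (e->allowed_pas & (1u << selected)) != 0;
      }

      int main(void)
      {
          struct gpi_entry e = { .granule_base = 0x80000000u,
                                 .allowed_pas  = (1u << PAS_REALM) | (1u << PAS_ROOT),
                                 .valid        = true };
          printf("realm access: %s\n", gpi_access_allowed(&e, PAS_REALM) ? "allowed" : "faulted");
          printf("non-secure access: %s\n", gpi_access_allowed(&e, PAS_NONSECURE) ? "allowed" : "faulted");
          return 0;
      }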
  • Patent number: 11461151
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for controller address contention assumption. A non-limiting example computer-implemented method includes a shared controller receiving, via at least one intermediary controller, a fetch request for data from a first requesting agent. The shared controller then performs an address compare using the memory address of the data.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: October 4, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert J. Sonnelitter, III, Michael Fee, Craig R. Walters, Arthur O'Neill, Matthias Klein
  • Patent number: 11461225
    Abstract: A storage device comprises a flash memory and processing circuitry. The processing circuitry is configured to divide a storage area into pages for management and to delete data in units of blocks, each block including a plurality of pages. The processing circuitry receives a write instruction including address information specifying a writing location of the data and, for a plurality of groups each including one or more blocks, stores group identification information identifying each group in association with information specifying the blocks included in that group. The processing circuitry performs a predetermined calculation to obtain group identification information and identifies the group containing the block whose pages are to receive the data according to the write instruction. Finally, the processing circuitry writes the data onto the pages of the block included in the identified group.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: October 4, 2022
    Assignee: BUFFALO INC.
    Inventors: Kazuki Makuni, Shuichiro Azuma, Noriaki Sugahara, Yu Nakase
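    A minimal C sketch of the group-identification step from patent 11461225, assuming the "predetermined calculation" is a simple modulo of the logical block address and assuming a small in-memory group table; the group count, block layout, and names are illustrative, not taken from the patent.

      #include <stdint.h>
      #include <stdio.h>

      #define NUM_GROUPS       8   /* illustrative group count */
      #define BLOCKS_PER_GROUP 4

      /* Hypothetical group table: each group identifies the blocks it contains. */
      struct group_entry {
          uint32_t group_id;
          uint32_t blocks[BLOCKS_PER_GROUP];
      };

      /* One possible "predetermined calculation": derive the group from the
       * logical block address carried in the write instruction. */
      static uint32_t group_for_address(uint64_t logical_block_addr)
      {
          return (uint32_t)(logical_block_addr % NUM_GROUPS);
      }

      int main(void)
      {
          struct group_entry groups[NUM_GROUPS];
          for (uint32_t g = 0; g < NUM_GROUPS; g++) {
              groups[g].group_id = g;
              for (uint32_t b = 0; b < BLOCKS_PER_GROUP; b++)
                  groups[g].blocks[b] = g * BLOCKS_PER_GROUP + b; /* illustrative layout */
          }

          uint64_t lba = 0x12345;
          uint32_t g = group_for_address(lba);
          printf("LBA 0x%llx -> group %u, first candidate block %u\n",
                 (unsigned long long)lba, g, groups[g].blocks[0]);
          return 0;
      }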
  • Patent number: 11455106
    Abstract: A storage reclamation orchestrator is implemented to identify and recover unused storage resources on a storage system. The storage reclamation orchestrator analyses storage usage attributes of storage groups occupying storage resources of the storage system. The storage reclamation orchestrator assigns individual usage point values to each storage usage attribute of a given storage group. The individual usage point values are combined to assign a final usage point value to the storage group. Storage groups with usage point values above a threshold are candidate storage groups for recovery on the storage system. Example storage usage attributes include whether the storage group has been masked to a host device, an amount of time since IO activity has occurred on the storage group, an amount of time since local protection was implemented on the storage group, and an amount of time since remote protection was implemented on the storage group.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: September 27, 2022
    Assignee: Dell Products, L.P.
    Inventors: Finbarr O'Riordan, Tim O'Connor, Warren Fleury
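    A minimal C sketch of the usage-point scoring from patent 11455106, using the four example attributes named in the abstract; the point weights, the 90-day cutoffs, and the recovery threshold are illustrative assumptions, since the patent does not publish concrete values.

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical per-storage-group usage attributes from the abstract. */
      struct storage_group {
          const char *name;
          bool masked_to_host;         /* has the group been masked to a host? */
          int  days_since_io;          /* time since last IO activity */
          int  days_since_local_prot;  /* time since local protection was taken */
          int  days_since_remote_prot; /* time since remote protection was taken */
      };

      /* Illustrative point values; the real weights are not given in the abstract. */
      static int usage_points(const struct storage_group *g)
      {
          int points = 0;
          if (!g->masked_to_host)             points += 40;
          if (g->days_since_io > 90)          points += 30;
          if (g->days_since_local_prot > 90)  points += 15;
          if (g->days_since_remote_prot > 90) points += 15;
          return points;
      }

      int main(void)
      {
          const int threshold = 60; /* assumed recovery threshold */
          struct storage_group g = { "sg_finance_old", false, 200, 180, 365 };
          int p = usage_points(&g);
          printf("%s: %d points -> %s\n", g.name, p,
                 p > threshold ? "candidate for reclamation" : "keep");
          return 0;
      }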
  • Patent number: 11455256
    Abstract: A memory system is connectable to a host. The memory system includes a nonvolatile first memory, a second memory in which a plurality of pieces of first information each correlating a logical address indicating a location in a logical address space of the memory system with a physical address indicating a location in the first memory are stored, a volatile third memory including a first cache and a second cache, a compressor configured to perform compression on the plurality of pieces of first information, and a memory controller. The memory controller stores the first information not compressed by the compressor in the first cache, stores the first information compressed by the compressor in the second cache, and controls a ratio between a first capacity, which is a capacity of the first cache, and a second capacity, which is a capacity of the second cache.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: September 27, 2022
    Assignee: KIOXIA CORPORATION
    Inventors: Tomonori Yokoyama, Mitsunori Tadokoro, Satoshi Kaburaki
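    A minimal C sketch of controlling the ratio between the uncompressed and compressed translation caches described in patent 11455256; the fixed budget, the hit-rate-based rebalancing rule, and the adjustment step are illustrative assumptions, since the abstract does not say how the ratio is chosen.

      #include <stdio.h>

      /* Hypothetical split of a fixed volatile-memory budget between an
       * uncompressed L2P cache (first cache) and a compressed one (second cache). */
      struct cache_budget {
          unsigned total_kib;        /* total volatile memory for both caches */
          unsigned first_cache_kib;  /* holds uncompressed address-translation entries */
          unsigned second_cache_kib; /* holds compressed entries (more entries per KiB) */
      };

      /* Illustrative ratio control: grow whichever cache is hit more often. */
      static void rebalance(struct cache_budget *b,
                            unsigned first_hits, unsigned second_hits)
      {
          unsigned step = b->total_kib / 16; /* assumed adjustment granularity */
          if (first_hits > second_hits && b->second_cache_kib >= step) {
              b->first_cache_kib  += step;
              b->second_cache_kib -= step;
          } else if (second_hits > first_hits && b->first_cache_kib >= step) {
              b->first_cache_kib  -= step;
              b->second_cache_kib += step;
          }
      }

      int main(void)
      {
          struct cache_budget b = { 1024, 512, 512 };
          rebalance(&b, 900, 100); /* uncompressed cache is hot this interval */
          printf("first=%u KiB, second=%u KiB\n", b.first_cache_kib, b.second_cache_kib);
          return 0;
      }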
  • Patent number: 11455003
    Abstract: A computational device receives an input/output (I/O) operation directed to a data set. In response to determining that there is a time lock on the data set, a determination is made as to whether a clock of the computational device is providing a correct time. In response to determining that the clock of the computational device is not providing the correct time, the I/O operation is restricted from accessing the data set. In response to determining that the clock of the computational device is providing the correct time, a determination is made from one or more time entries of the time lock whether to provide the I/O operation with access to the data set.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: September 27, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew G. Borlick, Lokesh M. Gupta
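    A minimal C sketch of the time-lock check from patent 11455003: verify the clock first, and only then consult the lock's time entries. How the clock is verified and what a time entry contains are not specified in the abstract, so the trusted-reference comparison, tolerance, and window fields are illustrative assumptions.

      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>

      /* Hypothetical time-lock entry: a window during which access is permitted. */
      struct time_lock_entry {
          time_t not_before;
          time_t not_after;
      };

      /* Placeholder for verifying the local clock against a trusted reference
       * (e.g. a time server); the actual verification method is not described. */
      static bool clock_is_correct(time_t local, time_t trusted, double tolerance_sec)
      {
          double drift = difftime(local, trusted);
          return drift > -tolerance_sec && drift < tolerance_sec;
      }

      static bool io_allowed(time_t local, time_t trusted,
                             const struct time_lock_entry *entries, int n)
      {
          if (!clock_is_correct(local, trusted, 5.0))
              return false;                   /* untrusted clock: restrict access */
          for (int i = 0; i < n; i++)         /* otherwise consult the time entries */
              if (local >= entries[i].not_before && local <= entries[i].not_after)
                  return true;
          return false;
      }

      int main(void)
      {
          time_t now = time(NULL);
          struct time_lock_entry window = { now - 3600, now + 3600 };
          printf("I/O %s\n", io_allowed(now, now, &window, 1) ? "allowed" : "restricted");
          return 0;
      }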
  • Patent number: 11455254
    Abstract: A flash memory system and a flash memory device thereof are provided. The flash memory device includes a NAND flash memory chip and a control circuit. The NAND flash memory chip includes a cache memory, a page buffer, and a NAND flash memory array. The NAND flash memory array includes a plurality of pages, each page includes a plurality of sub-pages, and each sub-page has a sub-page length. The cache memory is composed of a plurality of sub-caches, each corresponding to different pages of the NAND flash memory array. The page buffer is composed of a plurality of sub-page buffers, each corresponding to different pages of the NAND flash memory array. The control circuit is coupled to a host and the NAND flash memory chip, and performs access operations in units of one sub-page.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: September 27, 2022
    Assignee: MACRONIX INTERNATIONAL CO., LTD.
    Inventors: Chun-Lien Su, Chun-Hsiung Hung, Shuo-Nan Hung
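    A minimal C sketch of addressing in sub-page units, as described for patent 11455254; the page and sub-page sizes are illustrative assumptions, since the abstract only says that each sub-page has a sub-page length.

      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative geometry; the real page and sub-page lengths are not given. */
      #define PAGE_SIZE         4096u
      #define SUBPAGE_SIZE      1024u
      #define SUBPAGES_PER_PAGE (PAGE_SIZE / SUBPAGE_SIZE)

      /* Decompose a byte address into the page and sub-page that the control
       * circuit would access, since accesses are performed per sub-page. */
      static void locate_subpage(uint64_t addr, uint32_t *page, uint32_t *subpage)
      {
          *page    = (uint32_t)(addr / PAGE_SIZE);
          *subpage = (uint32_t)((addr % PAGE_SIZE) / SUBPAGE_SIZE);
      }

      int main(void)
      {
          uint32_t page, subpage;
          locate_subpage(0x1A40, &page, &subpage);
          printf("address 0x1A40 -> page %u, sub-page %u of %u\n",
                 page, subpage, SUBPAGES_PER_PAGE);
          return 0;
      }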
  • Patent number: 11422725
    Abstract: A method of storing a set of data representing a point cloud, comprising: creating an array in a digital memory having cells addressable by reference to at least one index, wherein each of the at least one indices has a predetermined correspondence to a geometric location within the point cloud; and storing a value of the data set in each of the cells.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: August 23, 2022
    Assignee: GENERAL ELECTRIC COMPANY
    Inventor: Justin Mamrak
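    A minimal C sketch of patent 11422725's idea of an array whose indices correspond to geometric locations in the point cloud: values are stored and retrieved by quantized coordinate rather than by searching a point list. The grid dimensions, cell size, and quantization rule are illustrative assumptions.

      #include <stdio.h>

      /* Illustrative grid: each index corresponds to a fixed geometric location. */
      #define NX 4
      #define NY 4
      #define NZ 4
      #define CELL_SIZE 0.5f  /* assumed spacing between adjacent grid locations */

      static float cloud[NX][NY][NZ]; /* one stored value per geometric cell */

      /* Map a coordinate to its cell index; the quantization rule is an assumption. */
      static int to_index(float coord) { return (int)(coord / CELL_SIZE); }

      int main(void)
      {
          /* Store a value (e.g. intensity) for the point at (1.0, 0.5, 1.5). */
          cloud[to_index(1.0f)][to_index(0.5f)][to_index(1.5f)] = 0.87f;

          /* Look the value back up by geometry alone, with no per-point search. */
          printf("value at (1.0, 0.5, 1.5) = %.2f\n",
                 cloud[to_index(1.0f)][to_index(0.5f)][to_index(1.5f)]);
          return 0;
      }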
  • Patent number: 11422932
    Abstract: Managing secondary objects efficiently increases garbage collection concurrency and reduces object storage requirements. Aliveness marking of secondary objects is integrated with aliveness marking of referenced objects. Allocation of reference-sized secondary object identifier fields in objects which are not primary objects is avoided; a dedicated bit specifies primary objects, together with an object relationship table. A primary object is one with at least one secondary object which is deemed alive by garbage collection if the primary object is alive, without being a referenced object of the primary object. Any referenced objects of the alive primary object will also still be deemed alive. Code paths for marking referenced objects can be shared to allow more efficient secondary object marking. Primary-secondary object relationships may be represented in dependent handles, and may be specified in a hash table or other data structure.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 23, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Maoni Zhang Stephens, Patrick Henri Dussud
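    A minimal C sketch of the marking behavior described in patent 11422932: a dedicated "primary" bit plus a side relationship table stand in for per-object secondary fields, and marking a primary also marks its secondaries along the same code path. The fixed-size arrays and struct names are illustrative assumptions, not the .NET GC's actual data structures.

      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_OBJS 8
      #define MAX_RELS 8

      /* Hypothetical object header: one dedicated bit says "I have secondaries",
       * so no per-object secondary-identifier field is needed. */
      struct object {
          bool is_primary; /* the dedicated bit from the abstract */
          bool marked;     /* aliveness mark set by the collector */
      };

      /* Primary/secondary relationships kept off to the side (the abstract's
       * object relationship table / dependent handles). */
      struct relation { int primary; int secondary; };

      static void mark(struct object *objs, int id,
                       const struct relation *rels, int nrels)
      {
          if (objs[id].marked) return;
          objs[id].marked = true;
          if (!objs[id].is_primary) return;
          /* Marking a primary also marks its secondaries, sharing the same path. */
          for (int i = 0; i < nrels; i++)
              if (rels[i].primary == id)
                  mark(objs, rels[i].secondary, rels, nrels);
      }

      int main(void)
      {
          struct object objs[MAX_OBJS] = { [0] = { .is_primary = true } };
          struct relation rels[] = { { 0, 3 } };  /* object 3 is secondary to object 0 */
          mark(objs, 0, rels, 1);
          printf("secondary object 3 alive: %s\n", objs[3].marked ? "yes" : "no");
          return 0;
      }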
  • Patent number: 11422742
    Abstract: Methods of memory allocation map registers referenced by different groups of instances of the same task to individual logical memories. Other example methods describe the mapping of registers referenced by a task to different banks within a single logical memory and in various examples this mapping may take into consideration which bank is likely to be the dominant bank for the particular task and the allocation for one or more other tasks.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: August 23, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Isuru Herath, Richard Broadhurst
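    A minimal C sketch of one way to read the bank-mapping idea in patent 11422742: spread the registers of different instance groups of the same task across banks, and steer away from a bank that is dominant for another task. The bank count and the specific mapping formula are illustrative assumptions.

      #include <stdio.h>

      #define NUM_BANKS 4

      /* Illustrative mapping: registers referenced by different groups of instances
       * of the same task are spread across banks so that concurrent groups do not
       * collide on one bank; the dominant bank of another task can be skipped. */
      static int bank_for(int instance_group, int reg_index, int dominant_bank_to_avoid)
      {
          int bank = (instance_group + reg_index) % NUM_BANKS;
          if (bank == dominant_bank_to_avoid)
              bank = (bank + 1) % NUM_BANKS; /* steer away from the busy bank */
          return bank;
      }

      int main(void)
      {
          for (int group = 0; group < 2; group++)
              for (int reg = 0; reg < 3; reg++)
                  printf("group %d, r%d -> bank %d\n",
                         group, reg, bank_for(group, reg, /*dominant*/ 2));
          return 0;
      }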
  • Patent number: 11422719
    Abstract: A method for distributed file deletion or truncation, performed by a storage system, is provided. The method includes determining, by an authority owning an inode of a file, which authorities own data portions to be deleted, responsive to a request for the file deletion or truncation. The method includes recording, by the authority owning the inode, the file deletion or truncation in a first memory, and deleting, in background by the authorities that own the data portions to be deleted, the data portions in one of a first memory or a second memory. A system and computer readable media are also provided.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: August 23, 2022
    Assignee: Pure Storage, Inc.
    Inventors: Robert Lee, Igor Ostrovsky, Shuyi Shao, Peter Vajgel
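    A minimal C sketch of the deletion flow described in patent 11422719: the authority owning the inode records the deletion, determines which authorities own the data portions, and those authorities delete their portions in the background. The modulo placement rule and authority count are illustrative assumptions; the real placement scheme is not given in the abstract.

      #include <stdio.h>

      #define NUM_AUTHORITIES 4

      /* Hypothetical layout: data portion i of a file is owned by authority
       * (inode + i) % NUM_AUTHORITIES; the real placement rule is not given. */
      static int owner_of_portion(int inode, int portion)
      {
          return (inode + portion) % NUM_AUTHORITIES;
      }

      int main(void)
      {
          int inode = 42, portions = 6;
          int inode_authority = inode % NUM_AUTHORITIES;

          /* Step 1: the inode's authority records the deletion. */
          printf("authority %d records deletion of inode %d\n", inode_authority, inode);

          /* Step 2: it determines which authorities own the portions to delete;
           * those authorities then free the portions in the background. */
          for (int p = 0; p < portions; p++)
              printf("portion %d -> authority %d (background delete)\n",
                     p, owner_of_portion(inode, p));
          return 0;
      }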
  • Patent number: 11416411
    Abstract: Methods and apparatus relating to predictive page fault handling. In an example, an apparatus comprises a processor to receive a virtual address that triggered a page fault for a compute process, check a virtual memory space for a virtual memory allocation for the compute process that triggered the page fault and manage the page fault according to one of a first protocol in response to a determination that the virtual address that triggered the page fault is a last page in the virtual memory allocation for the compute process, or a second protocol in response to a determination that the virtual address that triggered the page fault is not a last page in the virtual memory allocation for the compute process. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: August 16, 2022
    Assignee: INTEL CORPORATION
    Inventors: Murali Ramadoss, Vikranth Vemulapalli, Niran Cooray, William B. Sadler, Jonathan D. Pearce, Marian Alin Petre, Ben Ashbaugh, Elmoustapha Ould-Ahmed-Vall, Nicolas Galoppo Von Borries, Altug Koker, Aravindh Anantaraman, Subramaniam Maiyuran, Varghese George, Sungye Kim, Valentin Andrei
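    A minimal C sketch of the decision described in patent 11416411: check whether the faulting virtual address falls in the last page of the process's virtual memory allocation and pick a protocol accordingly. What the two protocols do is not specified in the abstract, so the handlers below only label the choice; the page size and struct names are illustrative.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SIZE 4096ull

      /* Hypothetical record of a virtual memory allocation made for a compute process. */
      struct vma { uint64_t base; uint64_t length; };

      static bool is_last_page(const struct vma *a, uint64_t fault_addr)
      {
          uint64_t last_page_base = (a->base + a->length - 1) & ~(PAGE_SIZE - 1);
          return (fault_addr & ~(PAGE_SIZE - 1)) == last_page_base;
      }

      /* Choose a handling protocol based on where the fault fell. */
      static void handle_fault(const struct vma *a, uint64_t fault_addr)
      {
          if (is_last_page(a, fault_addr))
              printf("0x%llx: last page of allocation -> first protocol\n",
                     (unsigned long long)fault_addr);
          else
              printf("0x%llx: interior page -> second protocol\n",
                     (unsigned long long)fault_addr);
      }

      int main(void)
      {
          struct vma a = { 0x100000, 16 * PAGE_SIZE };
          handle_fault(&a, 0x100000 + 3 * PAGE_SIZE);
          handle_fault(&a, 0x100000 + 15 * PAGE_SIZE);
          return 0;
      }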
  • Patent number: 11403020
    Abstract: In some examples, a system performs data deduplication using a fingerprint index comprising a plurality of buckets, each bucket of the plurality of buckets comprising entries associating fingerprints for data units to storage location indicators of the data units, wherein a storage location indicator of the storage location indicators provides an indication of a storage location of a data unit in persistent storage. For adding a new fingerprint to the fingerprint index, the system detects that a corresponding bucket of the plurality of buckets is full, in response to the detecting, adds space to the corresponding bucket by taking a respective amount of space from a further bucket of the plurality of buckets, and inserts the new fingerprint into the corresponding bucket after increasing the size of the corresponding bucket.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: August 2, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Sudhanshu Goswami, Sonam Mandal
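    A minimal C sketch of the bucket-growing step from patent 11403020: when the target bucket of the fingerprint index is full, take space from a further bucket, then insert. Only capacities are modeled; which bucket donates space and how much is an illustrative assumption, and the fingerprint entries themselves are omitted.

      #include <stdbool.h>
      #include <stdio.h>

      #define NUM_BUCKETS 4

      /* Hypothetical bucket bookkeeping: only capacities are modeled here, not the
       * fingerprint -> storage-location entries themselves. */
      struct bucket { int used; int capacity; };

      static bool insert_fingerprint(struct bucket *b, int target)
      {
          if (b[target].used == b[target].capacity) {
              /* Bucket is full: borrow space from a further bucket. */
              int donor = (target + 1) % NUM_BUCKETS;
              if (b[donor].capacity - b[donor].used <= 0)
                  return false;               /* nothing to borrow in this sketch */
              b[donor].capacity -= 1;
              b[target].capacity += 1;
          }
          b[target].used += 1;                /* insert after the bucket has grown */
          return true;
      }

      int main(void)
      {
          struct bucket b[NUM_BUCKETS] = { {4, 4}, {1, 4}, {2, 4}, {3, 4} };
          bool ok = insert_fingerprint(b, 0);
          printf("insert into bucket 0: %s (capacity now %d)\n",
                 ok ? "ok" : "failed", b[0].capacity);
          return 0;
      }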
  • Patent number: 11403242
    Abstract: The present invention provides a control method of multiple memory devices, wherein the multiple memory devices comprise a first memory device and a second memory device, and the control method includes the steps of: determining a first operation timing and a second operation timing according to at least a first command signal that a first memory controller needs to send to the first memory device and a second command signal that a second memory controller needs to send to the second memory device; controlling the first memory controller to send the first command signal to the first memory device at the first operation timing; and controlling the second memory controller to send the second command signal to the second memory device at the second operation timing.
    Type: Grant
    Filed: February 7, 2021
    Date of Patent: August 2, 2022
    Assignee: Realtek Semiconductor Corp.
    Inventors: Ching-Sheng Cheng, Wen-Wei Lin, Kuan-Chia Huang
  • Patent number: 11397683
    Abstract: Systems and methods are disclosed including a first memory device, a second memory device coupled to the first memory device, where the second memory device has a lower access latency than the first memory device and acts as a cache for the first memory device. A processing device operatively coupled to the first and second memory devices can track access statistics of segments of data stored at the second memory device, the segments having a first granularity, and determine to update, based on the access statistics, a segment of data stored at the second memory device from the first granularity to a second granularity. The processing device can further retrieve additional data associated with the segment of data from the first memory device and store the additional data at the second memory device to form a new segment having the second granularity.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: July 26, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Horia C. Simionescu, Paul Stonelake, Chung Kuang Chin, Narasimhulu Dharanikumar Kotte, Robert M. Walker, Cagdas Dirik
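    A minimal C sketch of the granularity update described in patent 11397683: access statistics are tracked per segment in the faster (cache) memory device, and a hot fine-grained segment is promoted to a coarser granularity by pulling the surrounding data from the slower device. The segment sizes and the hot threshold are illustrative assumptions.

      #include <stdio.h>

      #define SMALL_SEG_KIB 4    /* first (finer) granularity, illustrative */
      #define LARGE_SEG_KIB 64   /* second (coarser) granularity, illustrative */
      #define HOT_THRESHOLD 8    /* assumed access-count trigger */

      /* Hypothetical per-segment statistics kept by the processing device for data
       * cached in the faster second memory device. */
      struct segment {
          unsigned accesses;
          unsigned size_kib;
      };

      static void on_access(struct segment *s)
      {
          s->accesses++;
          if (s->size_kib == SMALL_SEG_KIB && s->accesses >= HOT_THRESHOLD) {
              /* Hot segment: retrieve the surrounding data from the slower first
               * memory device and store it alongside, forming a coarser segment. */
              s->size_kib = LARGE_SEG_KIB;
              printf("promoted segment to %u KiB granularity\n", s->size_kib);
          }
      }

      int main(void)
      {
          struct segment s = { 0, SMALL_SEG_KIB };
          for (int i = 0; i < 10; i++)
              on_access(&s);
          return 0;
      }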
  • Patent number: 11392494
    Abstract: Technologies for column reads for clustered data include a device having a column-addressable memory and circuitry connected to the memory. The column-addressable memory includes multiple dies. The circuitry may be configured to determine multiple die offsets based on a logical column number of the data cluster, determine a base address based on the logical column number, and program the dies with the die offsets. The circuitry is further to read logical column data from the column-addressable memory. To read the data, each die adds the corresponding die offset to the base address. The column-addressable memory may include multiple command/address buses. The circuitry may determine a starting address for each of multiple logical columns and issue a column read for each starting address via a corresponding command/address bus. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: July 19, 2022
    Assignee: Intel Corporation
    Inventors: Jawad Khan, Chetan Chauhan, Rajesh Sundaram, Sourabh Dongaonkar, Sandeep Guliani, Dipanjan Sengupta, Mariano Tepper
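    A minimal C sketch of the per-die offset addition described in patent 11392494: each die adds its programmed offset to a shared base address so that one column read returns the clustered column data from all dies. The diagonal offset rule and base-address formula are illustrative assumptions; the real layout is not given in the abstract.

      #include <stdint.h>
      #include <stdio.h>

      #define NUM_DIES 4

      /* Illustrative offset rule: where a logical column's data sits in each die is
       * a function of the logical column number. */
      static uint32_t die_offset(uint32_t logical_column, uint32_t die)
      {
          return (logical_column + die) % NUM_DIES;  /* e.g. a diagonal placement */
      }

      int main(void)
      {
          uint32_t logical_column = 5;
          uint32_t base = logical_column * NUM_DIES;  /* assumed base-address rule */

          /* Each die adds its programmed offset to the shared base address. */
          for (uint32_t die = 0; die < NUM_DIES; die++)
              printf("die %u reads address %u\n",
                     die, base + die_offset(logical_column, die));
          return 0;
      }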
  • Patent number: 11386946
    Abstract: Apparatuses and methods for tracking all row accesses in a memory device over time may be used to identify rows which are being hammered so that ‘victim’ rows may be identified and refreshed. A register stack may include a number of count values, each of which may track a number of accesses to a portion of the word lines of the memory device. Anytime a row within a given portion is accessed, the associated count value may be incremented. When a count value exceeds a first threshold, a second stack with a second number of count values may be used to track numbers of accesses to sub-portions of the given portion. When a second count value exceeds a second threshold, victim addresses may be provided to refresh the victim word lines associated with any of the word lines within the sub-portion.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: July 12, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Sujeet Ayyapureddi, Donald M. Morgan
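    A minimal C sketch of the two-level row-access tracking described in patent 11386946: coarse counters cover portions of the word lines, and once a coarse count crosses the first threshold a second set of counters tracks that portion's sub-portions until the second threshold triggers victim refresh. The portion sizes and threshold values are illustrative assumptions.

      #include <stdio.h>

      #define NUM_PORTIONS     8   /* coarse word-line portions, illustrative */
      #define SUBS_PER_PORTION 8   /* finer sub-portions within a flagged portion */
      #define FIRST_THRESHOLD  64  /* assumed thresholds; real values not given */
      #define SECOND_THRESHOLD 16

      static unsigned coarse[NUM_PORTIONS];
      static unsigned fine[SUBS_PER_PORTION];   /* second stack, reused per portion */
      static int      tracked_portion = -1;     /* portion under fine-grained tracking */

      /* Called on every row activation with the row's portion and sub-portion. */
      static void on_row_access(int portion, int sub)
      {
          if (++coarse[portion] == FIRST_THRESHOLD && tracked_portion < 0) {
              tracked_portion = portion;        /* start finer tracking of this portion */
              for (int i = 0; i < SUBS_PER_PORTION; i++) fine[i] = 0;
          }
          if (portion == tracked_portion && ++fine[sub] == SECOND_THRESHOLD)
              printf("refresh victims adjacent to sub-portion %d of portion %d\n",
                     sub, portion);
      }

      int main(void)
      {
          for (int i = 0; i < FIRST_THRESHOLD + SECOND_THRESHOLD; i++)
              on_row_access(3, 5);              /* hammer one row repeatedly */
          return 0;
      }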
  • Patent number: 11379357
    Abstract: The present disclosure relates to a storage device and a method of operating the same. The storage device includes a memory device including a memory cell array that stores normal data and map data, and a memory controller configured to control overall operation, including program operation, read operation, and erase operation, of the memory device in response to requests from a host. The memory device is configured to, during a map data load operation, transmit first map data to the memory controller by reading the first map data among the map data stored in the memory cell array, and transmit second map data to a page buffer group of the memory device by reading the second map data among the map data.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: July 5, 2022
    Assignee: SK hynix Inc.
    Inventor: Byoung Sung You
  • Patent number: 11360898
    Abstract: This technology relates to a method and apparatus for improving I/O throughput through an interleaving operation for multiple memory dies of a memory system. A memory system may include: multiple memory dies suitable for outputting data of different sizes in response to a read request; and a controller in communication with the multiple memory dies through multiple channels, and suitable for: performing a correlation operation on the read request so that the multiple memory dies interleave and output target data corresponding to the read request through the multiple channels, determining a pending credit using a result of the correlation operation, and reading, from the multiple memory dies, the target data corresponding to the read request and additional data stored in a same storage unit as the target data, based on a type of the target data corresponding to the read request and the pending credit.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: June 14, 2022
    Assignee: SK hynix Inc.
    Inventor: Jeen Park
  • Patent number: 11360702
    Abstract: The examples include methods and apparatuses to store events in a queue for an embedded controller (EC). Storing events in a queue for an EC can include receiving a message from the core firmware (FW) of the EC and identifying an event corresponding to the message. Storing events in a queue for an EC can also include accessing a priority associated with the event and adding the event and the priority to a queue to be processed by the EC.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: June 14, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Stanley Hyojun Park
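    A minimal C sketch of the message-to-queue flow described in patent 11360702: a core-firmware message is mapped to an event, a priority is looked up for that event, and both are placed on the queue the EC will process. The message strings, priority values, and queue length are illustrative assumptions; the real message format is not described in the abstract.

      #include <stdio.h>
      #include <string.h>

      #define QUEUE_LEN 8

      /* Hypothetical event record: the event decoded from a core-firmware message
       * plus the priority looked up for that event. */
      struct ec_event { const char *name; int priority; };

      static struct ec_event queue[QUEUE_LEN];
      static int queue_count;

      /* Identify the event for a message and access its priority. */
      static void on_core_fw_message(const char *msg)
      {
          struct ec_event ev;
          if (strcmp(msg, "LID_OPEN") == 0)     ev = (struct ec_event){ "lid-open", 1 };
          else if (strcmp(msg, "THERMAL") == 0) ev = (struct ec_event){ "thermal", 0 };
          else                                  ev = (struct ec_event){ "unknown", 9 };

          if (queue_count < QUEUE_LEN)
              queue[queue_count++] = ev;  /* added to the queue the EC will process */
      }

      int main(void)
      {
          on_core_fw_message("THERMAL");
          on_core_fw_message("LID_OPEN");
          for (int i = 0; i < queue_count; i++)
              printf("event %s, priority %d\n", queue[i].name, queue[i].priority);
          return 0;
      }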