Patents Examined by William E Baughman
  • Patent number: 11442651
    Abstract: Techniques rebuild data in a storage array group. Such techniques involve: in response to determining that a first storage device of a plurality of storage devices comprised in the storage array group is in a non-working state, generating a write record of the first storage device, the write record indicating whether a write operation occurs for each of a plurality of storage areas in the first storage device during the non-working state; in response to determining that the first storage device returns from the non-working state to a working state, determining, based on the write record, whether a target storage area in need of execution of data rebuilding is present in the first storage device; and controlling, based on the determining, the data rebuilding to be executed on the target storage area.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: September 13, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Lei Sun, Jian Gao, Hongpo Gao
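    A minimal Python sketch of the write-record idea in the abstract above (illustrative structure, not the patented implementation): while a drive is in a non-working state, flag each storage area that receives a write; when the drive returns, rebuild only the flagged areas.

    ```python
    class WriteRecordRebuild:
        """Track writes aimed at an offline drive; rebuild only touched areas."""

        def __init__(self, num_areas):
            self.write_record = [False] * num_areas  # one flag per storage area
            self.offline = False

        def mark_offline(self):
            self.offline = True
            self.write_record = [False] * len(self.write_record)

        def on_write(self, area_index):
            # While the drive is in a non-working state, note which areas change.
            if self.offline:
                self.write_record[area_index] = True

        def on_return_to_service(self, rebuild_area):
            # Rebuild only the areas whose data became stale during the outage.
            self.offline = False
            for idx, dirty in enumerate(self.write_record):
                if dirty:
                    rebuild_area(idx)
    ```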
  • Patent number: 11442865
    Abstract: A method of prefetching memory pages from remote memory includes detecting that a cache-line access made by a processor executing an application program is an access to a cache line containing page table data of the application program, identifying data pages that are referenced by the page table data, initiating a fetch of a data page, which is one of the identified data pages, and starting a timer. If the fetch completes prior to expiration of the timer, the data page is stored in a local memory. On the other hand, if the fetch does not complete prior to expiration of the timer, a presence bit of the data page in the page table data is set to indicate that the data page is not present.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: September 13, 2022
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Andreas Nowatzyk, Isam Wadih Akkawi, Venkata Subhash Reddy Peddamallu, Pratap Subrahmanyam
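    A small Python sketch of the timed prefetch described above, assuming a caller-supplied `fetch_remote` function and dict-shaped `local_memory` and `page_table` structures (all illustrative): the page is kept only if the remote fetch beats the timer, otherwise its presence bit is cleared so a later access faults on demand.

    ```python
    import concurrent.futures

    _POOL = concurrent.futures.ThreadPoolExecutor(max_workers=4)
    PREFETCH_TIMEOUT_S = 0.001  # illustrative timer budget for a remote fetch

    def prefetch_page(fetch_remote, page_id, local_memory, page_table):
        """Start a remote fetch and a timer; keep the page only if the fetch wins."""
        future = _POOL.submit(fetch_remote, page_id)
        try:
            data = future.result(timeout=PREFETCH_TIMEOUT_S)
            local_memory[page_id] = data              # fetch completed in time
            page_table[page_id]["present"] = True
        except concurrent.futures.TimeoutError:
            # Timer expired first: mark the page not present so a later access
            # faults and fetches the page on demand instead.
            page_table[page_id]["present"] = False
    ```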
  • Patent number: 11435951
    Abstract: A memory controller is able to issue a first write command for writing data of a predetermined length into a DRAM and a second write command for writing data which is shorter than the predetermined length into the DRAM. The memory controller includes a deciding unit configured to decide an issuance order of one or more requests stored in a storage unit. In the period between the issuance of a preceding DRAM command and the issuance of a second write command targeting the same bank as the preceding DRAM command, if another DRAM command targeting a different bank can be issued, the deciding unit decides the issuance order so that the other DRAM command is issued before the second write command.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: September 6, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Motohisa Ito, Daisuke Shiraishi
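    A simplified Python sketch of the reordering rule above, with illustrative request dicts (`kind`, `length`, `bank`) and a hypothetical `FULL_LENGTH` constant; it merely shows the ordering decision, not real DRAM timing.

    ```python
    FULL_LENGTH = 64  # illustrative "predetermined length" of a full write, in bytes

    def decide_issue_order(pending, preceding_bank):
        """Hold back partial writes aimed at the same bank as the preceding
        command and let requests targeting other banks be issued first, so the
        bank-busy gap is filled with useful work."""
        ready, deferred = [], []
        for req in pending:
            partial_write = req["kind"] == "write" and req["length"] < FULL_LENGTH
            if partial_write and req["bank"] == preceding_bank:
                deferred.append(req)      # the second write command waits its turn
            else:
                ready.append(req)         # other commands may be issued first
        return ready + deferred
    ```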
  • Patent number: 11436145
    Abstract: A computer directs activity within a computer storage subsystem. The computer identifies a computer operating environment including a computer, and a storage subsystem connected to a group of storage devices. The computer receives metadata representing current and historic performance metrics of said computer operating environment. The computer identifies a first device associated with a first behavior profile governed by a power law distribution, and a second device associated with a second behavior profile governed by a normal distribution. The computer trains Machine Learning (ML) models based on the behavior profiles. The computer establishes Device Performance Rules based on the ML models. The computer forecasts time-based storage system requirements based, at least in part, on the Device Performance Rules. The computer prefetches data to a cache component based, at least in part, on said forecasted system requirements, in accordance with a time reference available to said computer.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: September 6, 2022
    Assignee: KYNDRYL, INC.
    Inventors: Anil Kumar Narigapalli, Laxmikantha Sai Nanduru, Clea Zolotow, Gavin Charles O'Reilly, Venkateswarlu Basyam
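    A toy Python sketch of the profile-then-forecast flow described above. The skew heuristic, the per-profile forecast rules, and the `fetch`/`cache` interfaces are all illustrative stand-ins for the ML models and Device Performance Rules in the abstract, and at least two samples are assumed.

    ```python
    import statistics

    def classify_profile(samples):
        """Crude stand-in for the behavior-profile step: treat strongly skewed
        metrics as power-law-like, otherwise as roughly normal."""
        mean, median = statistics.mean(samples), statistics.median(samples)
        return "power_law" if mean > 2 * median else "normal"

    def forecast_demand(samples, profile):
        """Toy per-profile forecast used to decide how much data to prefetch."""
        if profile == "power_law":
            return max(samples)                  # plan for the heavy tail
        return statistics.mean(samples) + 2 * statistics.stdev(samples)

    def prefetch_for_window(samples, cache, fetch):
        profile = classify_profile(samples)
        budget = forecast_demand(samples, profile)
        cache.extend(fetch(budget))              # stage data ahead of the window
    ```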
  • Patent number: 11438432
    Abstract: A machine-implemented method for controlling transfer of at least one data item from a data cache component, in communication with storage using at least one relatively higher-latency path and at least one relatively lower-latency path, comprises: receiving metadata defining at least a first characteristic of data selected for inspection; responsive to the metadata, seeking a match between said at least first characteristic and a second characteristic of at least one of a plurality of data items in the data cache component; selecting said at least one of the plurality of data items where the at least one of the plurality of data items has the second characteristic matching the first characteristic; and passing the selected one of the plurality of data items from the data cache component using the relatively lower-latency path.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: September 6, 2022
    Assignee: METASWITCH NETWORKS LTD
    Inventors: Jim Wilkinson, Jonathan Lawn
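    A minimal Python sketch of the metadata-matching step above, assuming cache items are dicts carrying a `characteristic` field and `low_latency_send` is a caller-supplied transmit function (both illustrative).

    ```python
    def transfer_matching_items(metadata, cache_items, low_latency_send):
        """Pick cache items whose characteristic matches the one named in the
        incoming metadata and push them over the lower-latency path."""
        wanted = metadata["characteristic"]       # first characteristic to match
        for item in cache_items:
            if item.get("characteristic") == wanted:
                low_latency_send(item)            # bypass the higher-latency path
    ```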
  • Patent number: 11429529
    Abstract: An apparatus comprises processing circuitry to issue demand memory access requests to access data stored in a memory system. Stride pattern detection circuitry detects whether a sequence of demand target addresses specified by the demand memory access requests includes two or more constant stride sequences of addresses interleaved within the sequence of demand target addresses. Each constant stride sequence comprises addresses separated by intervals of a constant stride value. Prefetch control circuitry controls issuing of prefetch load requests to prefetch data from the memory system. The prefetch load requests specify prefetch target addresses predicted based on the constant stride sequences detected by the stride pattern detection circuitry.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: August 30, 2022
    Assignee: Arm Limited
    Inventors: Alexander Alfred Hornung, Jose Gonzalez-Gonzalez, Gregory Andrew Chadwick
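    A small Python sketch of interleaved constant-stride detection as described above, under the simplifying assumption that the streams alternate in a fixed round-robin pattern of `lanes` streams; real hardware detectors do not rely on that assumption.

    ```python
    def detect_interleaved_strides(addresses, lanes=2, min_hits=3):
        """Split the demand-address sequence into `lanes` interleaved streams and
        report any stream that keeps a constant stride."""
        patterns = []
        for lane in range(lanes):
            stream = addresses[lane::lanes]
            if len(stream) < min_hits + 1:
                continue
            stride = stream[1] - stream[0]
            if all(b - a == stride for a, b in zip(stream, stream[1:])):
                patterns.append({"lane": lane, "stride": stride, "last": stream[-1]})
        return patterns

    def next_prefetch_targets(patterns, depth=2):
        """Predict prefetch target addresses by extending each detected stride."""
        return [p["last"] + p["stride"] * i
                for p in patterns for i in range(1, depth + 1)]
    ```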
  • Patent number: 11429524
    Abstract: Various embodiments are provided for optimized placement of data structures in a hierarchy of memory in a computing environment. One or more data structures may be placed in a first scratchpad memory, a second scratchpad memory, an external memory, or a combination thereof in the hierarchy of memory according to a total memory capacity and bandwidth, a level of reuse of the one or more data structures, a number of operations that use each of the one or more data structures, a required duration for which each of the one or more data structures is to be placed in a first scratchpad or a second scratchpad, and characteristics of those of the one or more data structures competing for placement in the hierarchy of memory that are able to co-exist at a same time step. The second scratchpad memory is positioned between the external memory and the first scratchpad memory at one or more intermediary layers.
    Type: Grant
    Filed: February 10, 2020
    Date of Patent: August 30, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Arvind Kumar, Swagath Venkataramani, Ching-Tzu Chen
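    A greedy Python sketch of the placement idea above, scoring each structure only by reuse and operation count (the duration and co-existence criteria from the abstract are omitted for brevity); structure dicts and scratchpad names are illustrative.

    ```python
    def place_structures(structures, l1_capacity, l2_capacity):
        """Fill the near scratchpad with the most valuable structures, the
        intermediate scratchpad next, and spill the rest to external memory."""
        ranked = sorted(structures, key=lambda s: s["reuse"] * s["ops"], reverse=True)
        placement, free_l1, free_l2 = {}, l1_capacity, l2_capacity
        for s in ranked:
            if s["size"] <= free_l1:
                placement[s["name"]], free_l1 = "scratchpad_1", free_l1 - s["size"]
            elif s["size"] <= free_l2:
                placement[s["name"]], free_l2 = "scratchpad_2", free_l2 - s["size"]
            else:
                placement[s["name"]] = "external_memory"
        return placement
    ```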
  • Patent number: 11422707
    Abstract: Systems, apparatuses, and methods for performing efficient memory accesses for a computing system are disclosed. A computing system includes one or more clients for processing applications. A memory controller transfers traffic between the memory controller and two channels, each connected to a memory device. A client sends a 64-byte memory request with an indication specifying that there are two 32-byte requests targeting non-contiguous data within a same page. The memory controller generates two addresses, and sends a single command and the two addresses to two channels to simultaneously access non-contiguous data in a same page.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: August 23, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventor: James Raymond Magro
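    A minimal Python sketch of the split access above, assuming hypothetical channel objects with `send`/`receive` methods: one command and two 32-byte addresses within the same page go to two channels at once.

    ```python
    def issue_split_request(base_address, offsets, channels):
        """Issue one command with two 32-byte addresses, one per channel."""
        addr0 = base_address + offsets[0]         # first non-contiguous 32B chunk
        addr1 = base_address + offsets[1]         # second non-contiguous 32B chunk
        command = {"op": "read", "size": 32}
        channels[0].send(command, addr0)          # both channels work in parallel
        channels[1].send(command, addr1)
        return channels[0].receive() + channels[1].receive()
    ```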
  • Patent number: 11422715
    Abstract: Direct read in clustered file systems is described herein. A method as described herein can include determining, for a write operation on a resource stored by a data storage system, as initiated by an initiator node, a reference count for the resource, the reference count comprising a number of target storage regions of the data storage system to be modified by write data during the write operation; facilitating conveying, from the initiator node to a lock coordinator node, the reference count for the resource; facilitating conveying, from the initiator node to respective participant nodes that are respectively assigned to the target storage regions, the write data and a key value for the write operation; and facilitating causing the respective participant nodes to convey respective notifications that comprise the key value in response to the respective participant nodes writing the write data to the target storage regions.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: August 23, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jonathan Walton, Max Laier, Suraj Raju, Cornelis van Rij
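    A rough Python sketch of the message flow described above. The `plan_target_regions` helper and the coordinator/participant method names are hypothetical placeholders for the cluster's real RPC interfaces.

    ```python
    def clustered_write(coordinator, participants, resource, write_data, key):
        """Tell the lock coordinator how many regions the write touches, hand
        each participant its data plus the key, and collect notifications that
        echo the same key back."""
        regions = plan_target_regions(resource, write_data)      # hypothetical helper
        coordinator.set_reference_count(resource, len(regions))  # expected completions
        for region, chunk in regions:
            participants[region].write(chunk, key)
        acks = [participants[region].notify(key) for region, _ in regions]
        return all(ack == key for ack in acks)
    ```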
  • Patent number: 11409648
    Abstract: An electronic apparatus is provided. The electronic apparatus according to an embodiment includes a memory configured to store computer executable instructions, and a processor configured to, by executing the computer executable instructions, swap out page data stored in a first area of the memory to a second area of the memory when a request for executing a program is received and an available capacity of the first area to be allocated to the program is insufficient, wherein the processor is further configured to swap out the page data partially or entirely based on an attribute of the page data.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: August 9, 2022
    Assignees: SAMSUNG ELECTRONICS CO., LTD., RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
    Inventors: Youngho Choi, Young Ik Eom, Jaeook Kwon
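    A toy Python sketch of attribute-based partial or full swap-out as described above; the page dict layout and the "cold pages move whole, others move half" rule are illustrative assumptions, not the patented policy.

    ```python
    def make_room_for(program_size, first_area, second_area, free_capacity):
        """Swap page data from the first area to the second area, entirely or
        partially depending on each page's attribute, until the program fits."""
        for page in first_area:
            if free_capacity >= program_size:
                break
            if page["attribute"] == "cold":
                moved, page["data"] = page["data"], b""        # swap out entirely
            else:
                half = len(page["data"]) // 2
                moved, page["data"] = page["data"][half:], page["data"][:half]
            second_area.append({"id": page["id"], "data": moved})
            free_capacity += len(moved)
        return free_capacity
    ```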
  • Patent number: 11403212
    Abstract: The disclosure provides an approach for implementing a deduplicated (DD) assisted caching policy for a content based read cache (CBRC). Embodiments include receiving a first input/output (I/O) to write first data in storage as associated with a first logical block address (LBA); when the first data is located in a CBRC or in a DD cache located in memory, incrementing a first deduplication counter associated with the first data; when the first data is located in neither the CBRC nor the DD cache, creating the first deduplication counter; when the first deduplication counter meets a threshold after incrementing, and the first data is not located in the DD cache, adding the first data to the DD cache; and writing the first data to the storage as associated with the first LBA.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: August 2, 2022
    Assignee: VMware, Inc.
    Inventors: Zubraj Singha, Kashish Bhatia, Tanay Ganguly, Goresh Musalay
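    A compact Python sketch of the deduplication-counter policy above, using Python's built-in `hash` as a stand-in for a content digest and plain dicts for the CBRC, DD cache, and backing storage (all illustrative).

    ```python
    def handle_write(lba, data, storage, cbrc, dd_cache, dedup_counters, threshold=3):
        """Count how often identical content is written; once content is seen
        often enough, keep it in the dedup cache, then complete the write."""
        digest = hash(data)                      # stand-in for a content hash
        if digest in cbrc or digest in dd_cache:
            dedup_counters[digest] = dedup_counters.get(digest, 1) + 1
        else:
            dedup_counters[digest] = 1           # first sighting: create the counter
        if dedup_counters[digest] >= threshold and digest not in dd_cache:
            dd_cache[digest] = data              # promote popular content
        storage[lba] = data                      # write to storage for this LBA
    ```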
  • Patent number: 11397668
    Abstract: In a data read/write method, a storage server receives write requests from a client and performs storage. Each write request carries a to-be-written slice, an ID of a first storage device, and a virtual storage address of a first virtual storage block. If data is stored successfully and contiguously from a start address within the virtual storage space of a virtual storage block in the storage device, the successfully and contiguously stored address range is recorded. For each storage device, all data within that address range is successfully stored data. When receiving a read request from a client for an address segment within the address range, the storage server can directly return the requested data to the client.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: July 26, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Tangren Yao, Chen Wang, Feng Wang, Wei Feng
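    A minimal Python sketch of the "successful continuous range" bookkeeping above for a single virtual storage block, assuming each slice is written at its start address; the class and field names are illustrative.

    ```python
    class SliceStore:
        """Track how far writes have succeeded contiguously from the start
        address; reads inside that range are answered directly."""

        def __init__(self):
            self.slices = {}               # start address -> slice bytes
            self.continuous_end = 0        # end of the contiguously stored range

        def write_slice(self, address, slice_bytes):
            self.slices[address] = slice_bytes
            # Extend the continuous range only while there are no gaps.
            while self.continuous_end in self.slices:
                self.continuous_end += len(self.slices[self.continuous_end])

        def read(self, address):
            if address < self.continuous_end:
                return self.slices[address]  # inside the recorded range
            raise LookupError("address not yet continuously stored")
    ```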
  • Patent number: 11392510
    Abstract: A management method of cache files in storage space, adapted to a storage space storing a plurality of cache files, the management method comprises: forming a cache file status list which records a plurality of file names and a plurality of file statuses; determining whether a storage condition of the storage space is in a healthy condition; assigning a plurality of corresponding tags to the plurality of file statuses when the storage condition is not in the healthy condition, and forming a sorted cache file list; and deleting the last file name from the sorted cache file list and deleting, from the storage space, the cache file corresponding to that file name, wherein the sorted cache file list records the file names sorted from the file name of the cache file that should most be kept to the file name of the cache file that should most be deleted.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: July 19, 2022
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Ching-Hsiang Wen, Sheng-An Chang
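    A small Python sketch of the health-check-and-evict loop above; the free-space threshold and the caller-supplied `keep_score` tagging function stand in for the patent's storage condition and tags.

    ```python
    import shutil

    def prune_cache(cache_dir, file_status, keep_score, low_space_ratio=0.1):
        """When free space drops below a threshold, sort file names from
        most-keepable to most-deletable and drop the last one."""
        usage = shutil.disk_usage(cache_dir)
        if usage.free / usage.total > low_space_ratio:
            return None                           # healthy: nothing to delete
        sorted_names = sorted(file_status,
                              key=lambda name: keep_score(file_status[name]),
                              reverse=True)
        victim = sorted_names[-1]                 # file that should most be deleted
        del file_status[victim]
        return victim                             # caller removes it from storage
    ```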
  • Patent number: 11379380
    Abstract: A method of managing load units of executable instructions between internal memory in a microcontroller with multiple bus masters, and a non-volatile memory device external to the microcontroller. Copies of the load units are loaded from the external memory device into the internal memory for use by corresponding bus masters. Each load unit is associated with a corresponding load entity queue, and each load entity queue is associated with a corresponding one of the multiple bus masters. Each load entity queue selects an eviction candidate from its associated copies of the load units currently loaded in the internal memory. Information identifying the eviction candidate for each load entity queue is broadcast to all load entity queues. The eviction candidate is added to a set of managed eviction candidates if none of the load entity queues vetoes using the eviction candidate.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: July 5, 2022
    Assignee: NXP USA, Inc.
    Inventors: Michael Rohleder, Cristian Macario, Marcus Mueller
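    A minimal Python sketch of the broadcast-and-veto step above, assuming hypothetical queue objects exposing `pick_candidate()` and `vetoes()` methods.

    ```python
    def agree_on_eviction(load_entity_queues):
        """Each queue proposes an eviction candidate from its loaded load units;
        a candidate joins the managed set only if no queue vetoes it."""
        managed_candidates = set()
        for proposer in load_entity_queues:
            candidate = proposer.pick_candidate()           # hypothetical queue API
            if candidate is None:
                continue
            vetoed = any(q.vetoes(candidate) for q in load_entity_queues)
            if not vetoed:
                managed_candidates.add(candidate)
        return managed_candidates
    ```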
  • Patent number: 11379379
    Abstract: Described is a computing system and method for differential cache block sizing for computing systems. The method for differential cache block sizing includes determining, upon a cache miss at a cache, a number of available cache blocks given a payload length of the main memory and a cache block size for the last level cache, generating a main memory request including at least one indicator for a missed cache block and any available cache blocks, sending the main memory request to the main memory to obtain data associated with the missed cache block and each of the any available cache blocks, storing the data received for the missed cache block in the cache; and storing the data received for each of the any available cache blocks in the cache depending on a cache replacement algorithm.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: July 5, 2022
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shubhendu Mukherjee, David Asher, Thomas F. Hummel
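    A small Python sketch of differential cache block sizing as described above, assuming integer block identifiers and a caller-supplied `fetch` and `should_keep` (replacement policy) function; both are illustrative.

    ```python
    def build_miss_request(missed_block, cached_blocks, payload_len, block_size):
        """On a miss, work out how many blocks one memory payload can carry and
        ask for neighbouring blocks that are not already cached."""
        slots = payload_len // block_size            # blocks one payload can carry
        request = [missed_block]                     # the block that actually missed
        candidate = missed_block + 1
        while len(request) < slots:
            if candidate not in cached_blocks:
                request.append(candidate)            # opportunistic extra block
            candidate += 1
        return request

    def fill_cache(request, fetch, cache, should_keep):
        data = fetch(request)                        # one request for all blocks
        cache[request[0]] = data[0]                  # missed block is always kept
        for block, blob in zip(request[1:], data[1:]):
            if should_keep(block):                   # replacement policy decides
                cache[block] = blob
    ```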
  • Patent number: 11372758
    Abstract: Embodiments of a system for dynamic reconfiguration of cache are disclosed. Accordingly, the system includes a plurality of processors and a plurality of memory modules executed by the plurality of processors. The system also includes a dynamically reconfigurable cache comprising a multi-level cache implementing a combination of an L1 cache, an L2 cache, and an L3 cache. One or more of the L1 cache, the L2 cache, and the L3 cache are dynamically reconfigurable to one or more sizes based at least in part on an application data size associated with an application being executed by the plurality of processors. In an embodiment, the system includes a reconfiguration control and distribution module configured to perform dynamic reconfiguration of the dynamically reconfigurable cache based on the application data size.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: June 28, 2022
    Assignee: Jackson State University
    Inventors: Khalid Abed, Tirumale Ramesh
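    A toy Python sketch of resizing cache levels from an application data size, as in the abstract above; the clamp-to-bounds rule and the level bounds are illustrative assumptions, not the patented reconfiguration logic.

    ```python
    def reconfigure_cache(app_data_size, levels):
        """Pick a size for each cache level: grow toward the maximum when the
        working set is large, shrink toward the minimum when it is small.
        `levels` maps level names to (min_size, max_size) bounds in bytes."""
        return {name: min(max(app_data_size, lo), hi)
                for name, (lo, hi) in levels.items()}

    # Example: an application with a 192 KiB working set.
    print(reconfigure_cache(192 * 1024, {
        "L1": (16 * 1024, 64 * 1024),
        "L2": (128 * 1024, 512 * 1024),
        "L3": (1 * 2**20, 8 * 2**20),
    }))
    ```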
  • Patent number: 11372546
    Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: June 28, 2022
    Assignee: Nordic Semiconductor ASA
    Inventors: Adrian J. Anderson, Gary C. Wass, Gareth J. Davies
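    A rough Python sketch of the thread-per-channel idea above, using one worker thread per memory access channel that blocks on an event source (e.g. a `queue.Queue`) and then moves data between memory and a selected accelerator; the accelerator `consume`/`produce` methods are hypothetical.

    ```python
    import threading

    def channel_thread(events, memory, accelerator):
        """One thread per memory access channel: wait for an event, then let the
        selected accelerator read from or write to memory over this channel."""
        while True:
            event = events.get()
            if event is None:
                break                                   # shutdown signal
            if event["op"] == "read":
                accelerator.consume(memory[event["addr"]])
            else:
                memory[event["addr"]] = accelerator.produce()

    def start_channels(channel_events, memory, accelerators):
        threads = [threading.Thread(target=channel_thread, args=(ev, memory, acc),
                                    daemon=True)
                   for ev, acc in zip(channel_events, accelerators)]
        for t in threads:
            t.start()
        return threads
    ```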
  • Patent number: 11372770
    Abstract: Methods for determining cache activity and for optimizing cache reclamation are performed by systems and devices. An access to a cache entry is detected at an access time, and a data object of the cache entry for the current time window is identified; the data object includes a time stamp of the previous access and a counter index. A conditional counter operation is then performed on the counter associated with the index, incrementing the counter when the time stamp falls outside the current time window and leaving it unchanged when the time stamp falls within the current time window. If the counter index identifies another counter, associated with a previous time window, that was incremented for the previous access of the cache entry, that other counter is decremented. A cache configuration command to reclaim, or additionally allocate space to, the cache is generated based on the values of the counters.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: June 28, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Junfeng Dong, Ajay Kalhan, Manoj A. Syamala, Vivek R. Narasayya, Changsong Li, Shize Xu, Pankaj Arora, John M. Oslake, Arnd Christian König, Jiaqi Liu
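    A small Python sketch of the windowed counters described above, assuming a per-entry dict with `last_access` and `counter_index` fields initialised by the caller and a `counters` dict keyed by window number; the window length and reclaim thresholds are illustrative.

    ```python
    import time

    WINDOW_S = 60  # illustrative length of one time window

    def record_access(entry, counters, now=None):
        """Increment the current window's counter only on the entry's first
        access in the window, and decrement the counter credited for the
        entry's previous window."""
        now = time.time() if now is None else now
        window = int(now // WINDOW_S)
        prev_window = int(entry["last_access"] // WINDOW_S)
        if window != prev_window:
            counters[window] = counters.get(window, 0) + 1
            if entry["counter_index"] in counters:
                counters[entry["counter_index"]] -= 1   # move the credit forward
            entry["counter_index"] = window
        entry["last_access"] = now

    def reclaim_decision(counters, current_window, low=10, high=1000):
        """Reclaim cache space when recent windows were quiet, grow it when busy."""
        recent = counters.get(current_window, 0)
        if recent < low:
            return "reclaim"
        if recent > high:
            return "allocate_more"
        return "keep"
    ```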
  • Patent number: 11360669
    Abstract: The storage device includes a first memory, a processing device that stores data in the first memory and reads the data from the first memory, and an accelerator that includes a second memory different from the first memory. The accelerator stores, in the second memory, compressed data held in one or more storage drives, decompresses the compressed data stored in the second memory to generate plaintext data, extracts the data designated by the processing device from the plaintext data, and transmits the extracted designated data to the first memory.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: June 14, 2022
    Assignee: HITACHI, LTD.
    Inventors: Masahiro Tsuruya, Nagamasa Mizushima, Tomohiro Yoshihara, Kentaro Shimada
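    A minimal Python sketch of the accelerator path above, using `zlib` as a stand-in codec, a dict of compressed blocks as the drives, and line-oriented records; all of these are illustrative simplifications.

    ```python
    import zlib

    def accelerator_filter(storage_drives, block_ids, predicate, first_memory):
        """Copy compressed blocks into the accelerator's own memory, decompress
        them there, keep only the records the processor asked for, and hand
        just those back to the first memory."""
        second_memory = [storage_drives[b] for b in block_ids]   # compressed copies
        for compressed in second_memory:
            plaintext = zlib.decompress(compressed)              # stand-in codec
            for record in plaintext.splitlines():
                if predicate(record):                            # designated data only
                    first_memory.append(record)
        return first_memory
    ```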
  • Patent number: 11354038
    Abstract: Aspects of the present disclosure provide a computer-implemented method that includes providing a layered index to variable length data, the layered index comprising a plurality of layers. Each layer of the plurality of layers has an index array, a block offset array, and a per-block size array. The index array identifies a next level index of a plurality of indices or data. The indices represent a delta value from a first index of a block. The block offset array identifies a starting location of the index array. The per-block size array identifies a shared integer size of a block of indices. The method further includes performing a random access read of the variable length data using the layered index.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: June 7, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jinho Lee, Frank Liu
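    A single-layer Python sketch of the delta-encoded index above: each block keeps its first offset, the remaining offsets are stored as deltas from it, and a per-block size array records how many bytes each delta in the block would need. The block size and record layout are illustrative, and the multi-layer recursion from the abstract is omitted.

    ```python
    BLOCK = 4  # indices per block (illustrative)

    def build_layer(record_offsets):
        """Build one index layer from a list of record start offsets."""
        bases, deltas, per_block_size = [], [], []
        for start in range(0, len(record_offsets), BLOCK):
            block = record_offsets[start:start + BLOCK]
            bases.append(block[0])
            block_deltas = [off - block[0] for off in block]
            deltas.append(block_deltas)
            per_block_size.append(max(1, (max(block_deltas).bit_length() + 7) // 8))
        return {"bases": bases, "deltas": deltas, "sizes": per_block_size}

    def record_offset(layer, i):
        b, j = divmod(i, BLOCK)
        return layer["bases"][b] + layer["deltas"][b][j]

    def random_read(layer, data, i, num_records):
        """Random-access read of record i: its start comes from the index, its
        end from the next record's start (or the end of the data)."""
        start = record_offset(layer, i)
        end = record_offset(layer, i + 1) if i + 1 < num_records else len(data)
        return data[start:end]
    ```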