Patents Examined by William E Baughman
-
Patent number: 11442651
Abstract: Techniques rebuild data in a storage array group. Such techniques involve: in response to determining that a first storage device of a plurality of storage devices comprised in the storage array group is in a non-working state, generating a write record of the first storage device, the write record indicating whether a write operation occurs for each of a plurality of storage areas in the first storage device during the non-working state; in response to determining that the first storage device returns from the non-working state to a working state, determining, based on the write record, whether a target storage area in need of execution of data rebuilding is present in the first storage device; and controlling, based on the determining, the data rebuilding to be executed on the target storage area.
Type: Grant
Filed: October 17, 2019
Date of Patent: September 13, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Lei Sun, Jian Gao, Hongpo Gao
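A minimal sketch of the write-record idea above: track which storage areas received writes while the device was offline, then rebuild only those areas when it returns. The class name `WriteRecord` and the per-area boolean granularity are illustrative assumptions, not the patent's actual implementation.

```python
class WriteRecord:
    """One dirty flag per storage area of a device in a non-working state."""

    def __init__(self, num_areas):
        self.dirty = [False] * num_areas

    def note_write(self, area):
        # Called for each write that lands on the device while it is offline.
        self.dirty[area] = True

    def areas_to_rebuild(self):
        # Only areas written during the non-working state need rebuilding.
        return [i for i, d in enumerate(self.dirty) if d]
```

This avoids a full-device rebuild: areas never written while the device was away are still consistent and can be skipped.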
-
Patent number: 11442865
Abstract: A method of prefetching memory pages from remote memory includes detecting that a cache-line access made by a processor executing an application program is an access to a cache line containing page table data of the application program, identifying data pages that are referenced by the page table data, initiating a fetch of a data page, which is one of the identified data pages, and starting a timer. If the fetch completes prior to expiration of the timer, the data page is stored in a local memory. On the other hand, if the fetch does not complete prior to expiration of the timer, a presence bit of the data page in the page table data is set to indicate that the data page is not present.
Type: Grant
Filed: July 2, 2021
Date of Patent: September 13, 2022
Assignee: VMware, Inc.
Inventors: Irina Calciu, Andreas Nowatzyk, Isam Wadih Akkawi, Venkata Subhash Reddy Peddamallu, Pratap Subrahmanyam
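The fetch-versus-timer race above can be sketched with a thread pool and a timeout; the dictionary-based `page_table` and the `prefetch_page` helper are assumptions for illustration, standing in for real page-table and remote-memory machinery.

```python
import concurrent.futures

def prefetch_page(fetch_fn, page_table, page_id, timeout_s):
    """Start a remote fetch and a timer; if the fetch wins, store the page
    locally, otherwise mark it not-present in the (simulated) page table."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as ex:
        future = ex.submit(fetch_fn, page_id)
        try:
            data = future.result(timeout=timeout_s)      # the timer
            page_table[page_id] = {"present": True, "data": data}
        except concurrent.futures.TimeoutError:
            # Fetch lost the race: mark not-present so a later access faults.
            page_table[page_id] = {"present": False, "data": None}
    return page_table[page_id]["present"]
```

Marking the page not-present on timeout means a later access simply takes the normal fault path instead of waiting on a slow prefetch.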
-
Patent number: 11435951
Abstract: A memory controller is able to issue a first write command for writing data of a predetermined length into a DRAM and a second write command for writing data which is less than the predetermined length in the DRAM. The memory controller includes a deciding unit configured to decide an issuance order of one or more requests stored in a storage unit. In a period from the issuance of a preceding DRAM command until a second write command targeting the same bank as the preceding DRAM command is issued, if another DRAM command targeting a bank different from the bank targeted by the preceding DRAM command can be issued, the deciding unit will decide the issuance order so that the other DRAM command that can be issued will be issued before the second write command.
Type: Grant
Filed: August 27, 2020
Date of Patent: September 6, 2022
Assignee: Canon Kabushiki Kaisha
Inventors: Motohisa Ito, Daisuke Shiraishi
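The reordering rule above can be approximated in a few lines: if a short (partial) write targets the same bank as the preceding command, hoist a pending command for a different bank ahead of it. This single-pass list shuffle is a loose software analogy, not the controller's actual scheduling logic.

```python
def reorder(commands, preceding_bank):
    """commands: list of dicts {'bank': int, 'partial': bool}.
    Hoist one issuable different-bank command ahead of a partial write
    that would target the same bank as the preceding DRAM command."""
    out = list(commands)
    for i, cmd in enumerate(out):
        if cmd["partial"] and cmd["bank"] == preceding_bank:
            for j in range(i + 1, len(out)):
                if out[j]["bank"] != preceding_bank:
                    # Issue the other-bank command first to hide latency.
                    out.insert(i, out.pop(j))
                    break
            break
    return out
```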
-
Patent number: 11436145
Abstract: A computer directs activity within a computer storage subsystem. The computer identifies a computer operating environment including a computer, and a storage subsystem connected to a group of storage devices. The computer receives metadata representing current and historic performance metrics of said computer operating environment. The computer identifies a first device associated with a first behavior profile governed by a power law distribution, and a second device associated with a second behavior profile governed by a normal distribution. The computer trains Machine Learning (ML) models based on the behavior profiles. The computer establishes Device Performance Rules based on the ML models. The computer forecasts time-based storage system requirements based, at least in part, on the Device Performance Rules. The computer prefetches data to a cache component based, at least in part, on said forecasted system requirements, in accordance with a time reference available to said computer.
Type: Grant
Filed: March 30, 2021
Date of Patent: September 6, 2022
Assignee: KYNDRYL, INC.
Inventors: Anil Kumar Narigapalli, Laxmikantha Sai Nanduru, Clea Zolotow, Gavin Charles O'Reilly, Venkateswarlu Basyam
-
Patent number: 11438432
Abstract: A machine-implemented method for controlling transfer of at least one data item from a data cache component, in communication with storage using at least one relatively higher-latency path and at least one relatively lower-latency path, comprises: receiving metadata defining at least a first characteristic of data selected for inspection; responsive to the metadata, seeking a match between said at least first characteristic and a second characteristic of at least one of a plurality of data items in the data cache component; selecting said at least one of the plurality of data items where the at least one of the plurality of data items has the second characteristic matching the first characteristic; and passing the selected one of the plurality of data items from the data cache component using the relatively lower-latency path.
Type: Grant
Filed: June 7, 2021
Date of Patent: September 6, 2022
Assignee: METASWITCH NETWORKS LTD
Inventors: Jim Wilkinson, Jonathan Lawn
-
Patent number: 11429529
Abstract: An apparatus comprises processing circuitry to issue demand memory access requests to access data stored in a memory system. Stride pattern detection circuitry detects whether a sequence of demand target addresses specified by the demand memory access requests includes two or more constant stride sequences of addresses interleaved within the sequence of demand target addresses. Each constant stride sequence comprises addresses separated by intervals of a constant stride value. Prefetch control circuitry controls issuing of prefetch load requests to prefetch data from the memory system. The prefetch load requests specify prefetch target addresses predicted based on the constant stride sequences detected by the stride pattern detection circuitry.
Type: Grant
Filed: November 21, 2019
Date of Patent: August 30, 2022
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Jose Gonzalez-Gonzalez, Gregory Andrew Chadwick
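To see what "interleaved constant stride sequences" means concretely, here is a toy detector that assumes the streams alternate round-robin (real stride-detection hardware uses history tables and does not get to assume a fixed interleaving; `lanes` and both function names are illustrative):

```python
def constant_stride(seq):
    """Return the stride if seq advances by a single constant value, else None."""
    if len(seq) < 2:
        return None
    s = seq[1] - seq[0]
    return s if all(b - a == s for a, b in zip(seq, seq[1:])) else None

def detect_interleaved(addrs, lanes=2):
    """Split the demand stream into `lanes` round-robin sub-streams and
    report each lane's constant stride (None if a lane has no such stride)."""
    return [constant_stride(addrs[i::lanes]) for i in range(lanes)]
```

Once each lane's stride is known, a prefetcher can extrapolate the next address per lane (last address plus stride) and issue prefetch loads ahead of demand.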
-
Optimized hierarchical scratchpads for enhanced artificial intelligence accelerator core utilization
Patent number: 11429524
Abstract: Various embodiments are provided for optimized placement of data structures in a hierarchy of memory in a computing environment. One or more data structures may be placed in a first scratchpad memory, a second scratchpad memory, an external memory, or a combination thereof in the hierarchy of memory according to a total memory capacity and bandwidth, a level of reuse of the one or more data structures, a number of operations that use each of the one or more data structures, a required duration each of the one or more data structures is required to be placed in a first scratchpad or a second scratchpad, and characteristics of those of the one or more data structures competing for placement in the hierarchy of memory that are able to co-exist at a same time step. The second scratchpad memory is positioned between the external memory and the first scratchpad memory at one or more intermediary layers.
Type: Grant
Filed: February 10, 2020
Date of Patent: August 30, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Arvind Kumar, Swagath Venkataramani, Ching-Tzu Chen
-
Patent number: 11422707
Abstract: Systems, apparatuses, and methods for performing efficient memory accesses for a computing system are disclosed. A computing system includes one or more clients for processing applications. A memory controller transfers traffic between the memory controller and two channels, each connected to a memory device. A client sends a 64-byte memory request with an indication specifying that there are two 32-byte requests targeting non-contiguous data within a same page. The memory controller generates two addresses, and sends a single command and the two addresses to two channels to simultaneously access non-contiguous data in a same page.
Type: Grant
Filed: December 21, 2017
Date of Patent: August 23, 2022
Assignee: Advanced Micro Devices, Inc.
Inventor: James Raymond Magro
-
Patent number: 11422715
Abstract: Direct read in clustered file systems is described herein. A method as described herein can include determining, for a write operation on a resource stored by a data storage system, as initiated by an initiator node, a reference count for the resource, the reference count comprising a number of target storage regions of the data storage system to be modified by write data during the write operation; facilitating conveying, from the initiator node to a lock coordinator node, the reference count for the resource; facilitating conveying, from the initiator node to respective participant nodes that are respectively assigned to the target storage regions, the write data and a key value for the write operation; and facilitating causing the respective participant nodes to convey respective notifications that comprise the key value in response to the respective participant nodes writing the write data to the target storage regions.
Type: Grant
Filed: April 21, 2021
Date of Patent: August 23, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Jonathan Walton, Max Laier, Suraj Raju, Cornelis van Rij
-
Patent number: 11409648
Abstract: An electronic apparatus is provided. The electronic apparatus according to an embodiment includes a memory configured to store computer executable instructions, and a processor configured to, by executing the computer executable instructions, based on a request for executing a program being received and an available capacity of a first area of the memory to be allocated to the program being insufficient, swap out page data stored in the first area to a second area of the memory, wherein the processor is further configured to swap out the page data partially or entirely based on an attribute of the page data.
Type: Grant
Filed: December 20, 2018
Date of Patent: August 9, 2022
Assignees: SAMSUNG ELECTRONICS CO., LTD., RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
Inventors: Youngho Choi, Young Ik Eom, Jaeook Kwon
-
Patent number: 11403212
Abstract: The disclosure provides an approach for implementing a deduplicated (DD) assisted caching policy for a content based read cache (CBRC). Embodiments include receiving a first input/output (I/O) to write first data in storage as associated with a first logical block address (LBA); when the first data is located in a CBRC or in a DD cache located in memory, incrementing a first deduplication counter associated with the first data; when the first data is located in neither the CBRC nor the DD cache, creating the first deduplication counter; when the first deduplication counter meets a threshold after incrementing, and the first data is not located in the DD cache, adding the first data to the DD cache; and writing the first data to the storage as associated with the first LBA.
Type: Grant
Filed: May 5, 2021
Date of Patent: August 2, 2022
Assignee: VMware, Inc.
Inventors: Zubraj Singha, Kashish Bhatia, Tanay Ganguly, Goresh Musalay
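The counter-and-threshold admission policy above maps naturally onto a few dictionaries. In this sketch, Python's built-in `hash` stands in for the content hash a real CBRC would compute, and `handle_write`, the `threshold` value, and the dict-based caches are all illustrative assumptions:

```python
def handle_write(data, lba, cbrc, dd_cache, counters, storage, threshold=2):
    """DD-assisted caching policy sketch: count repeat writes of content
    already known to a cache; promote it to the DD cache at the threshold."""
    key = hash(data)                      # stand-in for a content hash
    if key in cbrc or key in dd_cache:
        counters[key] = counters.get(key, 0) + 1
        if counters[key] >= threshold and key not in dd_cache:
            dd_cache[key] = data          # promote hot duplicate content
    else:
        counters[key] = 0                 # first sighting: create the counter
    storage[lba] = data                   # the write always reaches storage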
-
Patent number: 11397668
Abstract: In a data read/write method, a storage server receives a write request of a client and performs storage. Each write request carries a to-be-written slice, an ID of a first storage device, and a virtual storage address of a first virtual storage block. If storage is performed continuously successfully from a start address within virtual storage space of a virtual storage block in the storage device, a successful continuous storage address range is recorded. For each storage device, all data within the successful continuous storage address range is successfully stored data. When receiving a read request of a client for an address segment within the address range, the storage server may directly return data that needs to be read to the client.
Type: Grant
Filed: April 23, 2020
Date of Patent: July 26, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Tangren Yao, Chen Wang, Feng Wang, Wei Feng
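The "successful continuous storage address range" can be modelled as a gap-free prefix that grows as adjacent writes complete. The class below is a simplification under assumed byte-offset semantics; the real method tracks ranges per storage device and virtual storage block.

```python
class VirtualBlock:
    """Tracks the successfully-and-continuously stored prefix of a
    virtual storage block, starting from offset 0."""

    def __init__(self):
        self.continuous_end = 0   # bytes stored without gaps from the start
        self.pending = {}         # out-of-order successful writes: offset -> length

    def write_ok(self, offset, length):
        self.pending[offset] = length
        # Extend the continuous range while the next write is adjacent.
        while self.continuous_end in self.pending:
            self.continuous_end += self.pending.pop(self.continuous_end)

    def can_read_directly(self, offset, length):
        # Reads wholly inside the continuous range need no extra checks.
        return offset + length <= self.continuous_end
```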
-
Patent number: 11392510
Abstract: A management method of cache files in storage space, adapted to a storage space storing a plurality of cache files, the management method comprises: forming a cache file status list which records a plurality of file names and a plurality of file statuses; determining whether a storage condition of the storage space is in a healthy condition; assigning a plurality of corresponding tags to the plurality of file statuses when the storage condition is not in the healthy condition, and forming a sorted cache file list; and deleting the last file name from the sorted cache file list and the corresponding cache file from the storage space, wherein the sorted cache file list records the file names sorted from a file name of a cache file that should be kept most to another file name of another cache file that should be deleted most.
Type: Grant
Filed: August 19, 2020
Date of Patent: July 19, 2022
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Ching-Hsiang Wen, Sheng-An Chang
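A compact sketch of the keep-most-first sorting and last-entry deletion described above. The status values (here simply last-access times, larger meaning fresher), the `healthy_ratio` threshold, and the function name are illustrative assumptions:

```python
def prune_cache(files, free_ratio, healthy_ratio=0.2):
    """files: dict of file name -> status (last-access time; larger = fresher).
    If free space is below the healthy threshold, sort keep-most-first and
    delete the last entry (the file that 'should be deleted most')."""
    if free_ratio >= healthy_ratio:
        return None                                  # healthy: keep everything
    ranked = sorted(files, key=files.get, reverse=True)
    victim = ranked[-1]
    del files[victim]
    return victim
```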
-
Patent number: 11379380
Abstract: A method of managing load units of executable instructions between internal memory in a microcontroller with multiple bus masters, and a non-volatile memory device external to the microcontroller. A copy of the load units is loaded from the external memory device into the internal memory for use by corresponding bus masters. Each load unit is associated with a corresponding load entity queue and each load entity queue is associated with a corresponding one of the multiple bus masters. Each load entity queue selects an eviction candidate from the associated copy of the load units currently loaded in the internal memory. Information identifying the eviction candidate for each load entity queue is broadcasted to all load entity queues. The eviction candidate is added to a set of managed eviction candidates if none of the load entity queues vetoes using the eviction candidate.
Type: Grant
Filed: May 7, 2020
Date of Patent: July 5, 2022
Assignee: NXP USA, Inc.
Inventors: Michael Rohleder, Cristian Macario, Marcus Mueller
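The broadcast-and-veto step can be sketched as a simple intersection test: a nominated candidate joins the managed set only if no queue objects. The dict shape of each queue (`candidate`, `vetoes`) is an assumed simplification of the hardware queues:

```python
def pick_managed_candidates(queues):
    """queues: list of dicts {'candidate': load_unit, 'vetoes': set of units}.
    Each queue's nominee is broadcast; it becomes a managed eviction
    candidate only if no queue vetoes it (e.g. because it still needs it)."""
    managed = set()
    for q in queues:
        cand = q["candidate"]
        if not any(cand in other["vetoes"] for other in queues):
            managed.add(cand)
    return managed
```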
-
Patent number: 11379379
Abstract: Described is a computing system and method for differential cache block sizing for computing systems. The method for differential cache block sizing includes determining, upon a cache miss at a cache, a number of available cache blocks given a payload length of the main memory and a cache block size for the last level cache; generating a main memory request including at least one indicator for a missed cache block and any available cache blocks; sending the main memory request to the main memory to obtain data associated with the missed cache block and each of the any available cache blocks; storing the data received for the missed cache block in the cache; and storing the data received for each of the any available cache blocks in the cache depending on a cache replacement algorithm.
Type: Grant
Filed: April 30, 2020
Date of Patent: July 5, 2022
Assignee: Marvell Asia Pte, Ltd.
Inventors: Shubhendu Mukherjee, David Asher, Thomas F. Hummel
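The block-counting arithmetic above is easy to make concrete: one memory payload holds `payload_len // block_size` cache blocks, so a miss can piggyback neighbouring blocks that are not already cached. Addresses, alignment, and the `build_request` helper are illustrative assumptions:

```python
def build_request(miss_addr, payload_len, block_size, cached):
    """On a miss, list the block addresses to request from main memory:
    the missed block plus any payload-aligned neighbours not already cached."""
    n_blocks = payload_len // block_size          # blocks per memory payload
    base = (miss_addr // payload_len) * payload_len
    miss_block = (miss_addr // block_size) * block_size
    blocks = [base + i * block_size for i in range(n_blocks)]
    return [b for b in blocks if b == miss_block or b not in cached]
```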
-
Dynamic reconfigurable multi-level cache for multi-purpose and heterogeneous computing architectures
Patent number: 11372758
Abstract: Embodiments of a system for dynamic reconfiguration of cache are disclosed. Accordingly, the system includes a plurality of processors and a plurality of memory modules executed by the plurality of processors. The system also includes a dynamic reconfigurable cache comprising a multi-level cache implementing a combination of an L1 cache, an L2 cache, and an L3 cache. One or more of the L1 cache, the L2 cache, and the L3 cache are dynamically reconfigurable to one or more sizes based at least in part on an application data size associated with an application being executed by the plurality of processors. In an embodiment, the system includes a reconfiguration control and distribution module configured to perform dynamic reconfiguration of the dynamic reconfigurable cache based on the application data size.
Type: Grant
Filed: May 12, 2020
Date of Patent: June 28, 2022
Assignee: Jackson State University
Inventors: Khalid Abed, Tirumale Ramesh
-
Patent number: 11372546
Abstract: A technique for transferring data in a digital signal processing system is described. In one example, the digital signal processing system comprises a number of fixed function accelerators, each connected to a memory access controller and each configured to read data from a memory device, perform one or more operations on the data, and write data to the memory device. To avoid hardwiring the fixed function accelerators together, and to provide a configurable digital signal processing system, a multi-threaded processor controls the transfer of data between the fixed function accelerators and the memory. Each processor thread is allocated to a memory access channel, and the threads are configured to detect an occurrence of an event and, responsive to this, control the memory access controller to enable a selected fixed function accelerator to read data from or write data to the memory device via its memory access channel.
Type: Grant
Filed: March 25, 2019
Date of Patent: June 28, 2022
Assignee: Nordic Semiconductor ASA
Inventors: Adrian J. Anderson, Gary C. Wass, Gareth J. Davies
-
Patent number: 11372770
Abstract: Methods for determining cache activity and for optimizing cache reclamation are performed by systems and devices. A cache entry access is determined at an access time, and a data object of the cache entry for a current time window is identified that includes a time stamp for a previous access and a counter index. A conditional counter operation is then performed on the counter associated with the index to increment the counter when the time stamp is outside the time window or to maintain the counter when the time stamp is within the time window. A counter index that identifies another counter for a previous time window where the other counter value was incremented for the previous cache entry access causes the other counter to be decremented. A cache configuration command to reclaim, or additionally allocate space to, the cache is generated based on the values of the counters.
Type: Grant
Filed: September 9, 2020
Date of Patent: June 28, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Junfeng Dong, Ajay Kalhan, Manoj A. Syamala, Vivek R. Narasayya, Changsong Li, Shize Xu, Pankaj Arora, John M. Oslake, Arnd Christian König, Jiaqi Liu
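One way to read the conditional counter operation above: each entry contributes to at most one counter per time window, and a re-access in a new window moves its contribution from the old window's counter to the new one. The ring-of-counters layout and the `record_access` signature are assumptions for illustration:

```python
def record_access(entry, counters, window_s, now):
    """entry: {'ts': last-access time, 'ctr': counter index or None}.
    Count each entry at most once per window; a new-window access
    decrements the previous window's counter and increments the current's."""
    win = int(now // window_s)
    if entry["ctr"] is not None and int(entry["ts"] // window_s) == win:
        entry["ts"] = now                  # same window: counters unchanged
        return
    if entry["ctr"] is not None:
        counters[entry["ctr"]] -= 1        # undo the previous window's count
    idx = win % len(counters)              # ring of per-window counters
    counters[idx] += 1
    entry["ts"], entry["ctr"] = now, idx
```

The counter values then approximate how many distinct entries were touched per window, which is the signal used to decide whether to reclaim cache space or grow it.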
-
Patent number: 11360669
Abstract: The storage device includes a first memory, a processing device that stores data in the first memory and reads the data from the first memory, and an accelerator that includes a second memory different from the first memory. The accelerator stores compressed data stored in one or more storage drives storing data, in the second memory, decompresses the compressed data stored in the second memory to generate plaintext data, extracts data designated by the processing device from the plaintext data, and transmits the extracted designated data to the first memory.
Type: Grant
Filed: February 10, 2021
Date of Patent: June 14, 2022
Assignee: HITACHI, LTD.
Inventors: Masahiro Tsuruya, Nagamasa Mizushima, Tomohiro Yoshihara, Kentaro Shimada
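The decompress-then-filter offload above can be illustrated with `zlib` standing in for the drive's compression and a newline-delimited record format standing in for the designated-data extraction; both, along with `offload_filter`, are assumptions of this sketch:

```python
import zlib

def offload_filter(compressed, predicate):
    """Accelerator-side sketch: decompress drive data in the accelerator's
    own memory, extract only the records the host designated, and return
    just that extract (only it would be sent to the host's first memory)."""
    plaintext = zlib.decompress(compressed)         # staged in second memory
    records = plaintext.split(b"\n")
    return [r for r in records if predicate(r)]
```

The payoff is bandwidth: only the filtered extract crosses back to the host memory, instead of the full decompressed data set.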
-
Patent number: 11354038
Abstract: Aspects of the present disclosure provide a computer-implemented method that includes providing a layered index to variable length data, the layered index comprising a plurality of layers. Each layer of the plurality of layers has an index array, a block offset array, and a per-block size array. The index array identifies a next level index of a plurality of indices or data. The indices represent a delta value from a first index of a block. The block offset array identifies a starting location of the index array. The per-block size array identifies a shared integer size of a block of indices. The method further includes performing a random access read of the variable length data using the layered index.
Type: Grant
Filed: July 19, 2019
Date of Patent: June 7, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jinho Lee, Frank Liu
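The core idea, delta-encoding indices per block so a random read touches only one block, can be shown with a single layer. This sketch omits the per-block shared integer sizing (it stores deltas as plain Python ints) and uses assumed names `build_layer` and `read`:

```python
def build_layer(values, block=4):
    """Delta-encode `values` in fixed-size blocks: store each block's first
    value plus per-element deltas from it."""
    offsets, deltas = [], []
    for i in range(0, len(values), block):
        chunk = values[i:i + block]
        offsets.append(chunk[0])                       # block offset array
        deltas.append([v - chunk[0] for v in chunk])   # index (delta) array
    return offsets, deltas

def read(offsets, deltas, i, block=4):
    """Random access: locate the block, add the delta back. O(1)."""
    b, j = divmod(i, block)
    return offsets[b] + deltas[b][j]
```

In the patented structure the deltas within a block would additionally share one compact integer width (the per-block size array), which is what makes the layered index smaller than storing full-width indices.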