Parallel Caches Patents (Class 711/120)
  • Patent number: 11966590
    Abstract: A persistent memory device is disclosed. The persistent memory device may include a cache coherent interconnect interface, a volatile storage, and a non-volatile storage. The volatile storage may include at least a first area and a second area. A backup power source may be configured to provide backup power selectively to the second area of the volatile storage. A controller may control the volatile storage and the non-volatile storage. The persistent memory device may use the backup power source while transferring data from the second area of the volatile storage to the non-volatile storage based at least in part on a loss of primary power for the persistent memory device.
    Type: Grant
    Filed: July 5, 2022
    Date of Patent: April 23, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yang Seok Ki, Chanik Park, Sungwook Ryu
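    The mechanism described above, selectively preserving only one backup-powered region of volatile storage on power loss, can be sketched as follows. This is a minimal illustrative model, not the patented implementation; all class and method names are assumptions.

```python
class PersistentMemoryDevice:
    """Volatile storage with a backup-powered second area that is flushed
    to non-volatile storage when primary power is lost."""

    def __init__(self, volatile_size, persistent_start, persistent_end):
        self.volatile = bytearray(volatile_size)       # e.g. DRAM
        self.non_volatile = bytearray(volatile_size)   # e.g. flash backing
        self.persistent_range = (persistent_start, persistent_end)

    def write(self, addr, data):
        self.volatile[addr:addr + len(data)] = data

    def on_power_loss(self):
        # Backup power covers only the second area, so only that range
        # is transferred to non-volatile storage.
        start, end = self.persistent_range
        self.non_volatile[start:end] = self.volatile[start:end]

    def recover(self, addr, length):
        return bytes(self.non_volatile[addr:addr + length])
```

Data written to the first area is lost on power failure; data in the second area survives the flush.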
  • Patent number: 11960749
    Abstract: A host of a storage system is coupled to multiple SSDs. Each SSD is configured with a migration cache, and each SSD corresponds to one piece of access information. The host obtains migration data information for to-be-migrated data in a source SSD, determines a target SSD, and sends the source SSD a read instruction carrying information about the to-be-migrated data and the target SSD. The source SSD reads a data block from its flash memory into the migration cache of the target SSD according to the read instruction. After the read instruction is completed by the source SSD, the host sends a write instruction to the target SSD to instruct it to write the data block from its migration cache to its flash memory.
    Type: Grant
    Filed: May 8, 2023
    Date of Patent: April 16, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Ge Du, Yu Hu, Jiancen Hou
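    The host-orchestrated flow above, source SSD pushing a block straight into the target SSD's migration cache, then the target committing it to its own flash, can be sketched as below. Class and method names are illustrative assumptions, not from the patent.

```python
class SSD:
    def __init__(self, flash=None):
        self.flash = dict(flash or {})   # block id -> data
        self.migration_cache = {}        # staging area for migrating blocks

    def read_to_peer(self, block_id, target):
        # Source-side handling of the host's read instruction: push the
        # block into the target SSD's migration cache, bypassing the host.
        target.migration_cache[block_id] = self.flash[block_id]

    def commit(self, block_id):
        # Target-side handling of the host's write instruction.
        self.flash[block_id] = self.migration_cache.pop(block_id)

def host_migrate(source, target, block_id):
    source.read_to_peer(block_id, target)   # host sends the read instruction
    target.commit(block_id)                 # then the write instruction
```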
  • Patent number: 11934863
    Abstract: A system to support a machine learning (ML) operation comprises an array-based inference engine comprising a plurality of processing tiles, each comprising at least one or more of an on-chip memory (OCM) configured to maintain data for local access by components in the processing tile and one or more processing units configured to perform one or more computation tasks on the data in the OCM by executing a set of task instructions. The system also comprises a data streaming engine configured to stream data between a memory and the OCMs, and an instruction streaming engine configured to distribute said set of task instructions to the corresponding processing tiles to control their operations and to synchronize said set of task instructions executed by each processing tile, waiting for the current task at each processing tile to finish before starting a new one.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: March 19, 2024
    Assignee: Marvell Asia Pte Ltd
    Inventors: Avinash Sodani, Senad Durakovic, Gopal Nalamalapu
  • Patent number: 11868627
    Abstract: A method for operating a processing unit. The processing unit addresses virtual memory areas in order to access a RAM memory unit, each of these virtual memory areas being mapped onto a physical memory area of the RAM memory unit. A check of the RAM memory unit for errors is performed. If, in the course of this check, a physical memory area of the RAM memory unit is determined to be faulty, it is designated as faulty. A check is performed to determine whether a free physical memory area exists in the RAM memory unit onto which no virtual memory area is mapped and which is not designated as faulty. If such a free physical memory area exists, the virtual memory area that is currently mapped onto the physical memory area recognized as faulty is henceforth mapped onto this free physical memory area.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: January 9, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Jens Breitbart, Sebastian Hoffmann
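    The remapping policy above can be modeled with a simple page-table dict: when a physical area is found faulty, any virtual area mapped onto it is moved to a free, non-faulty physical area if one exists. A minimal sketch with illustrative names:

```python
class RamRemapper:
    def __init__(self, num_physical, page_table):
        self.page_table = dict(page_table)   # virtual area -> physical area
        self.faulty = set()
        self.num_physical = num_physical

    def free_areas(self):
        # Free = not mapped by any virtual area and not designated faulty.
        mapped = set(self.page_table.values())
        return [p for p in range(self.num_physical)
                if p not in mapped and p not in self.faulty]

    def mark_faulty(self, phys):
        self.faulty.add(phys)
        free = self.free_areas()
        for virt, p in list(self.page_table.items()):
            if p == phys and free:
                # Henceforth map this virtual area onto a free area.
                self.page_table[virt] = free.pop()
```

If no free area exists, the mapping is left in place, mirroring the conditional in the abstract.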
  • Patent number: 11848817
    Abstract: Techniques discussed herein relate to updating an edge device (e.g., a computing device distinct from and operating remotely with respect to a data center). The edge device can execute a first operating system (OS). A manifest specifying files of a second OS to be provisioned to the edge device may be obtained. The manifest may further specify a set of services to be provisioned at the edge device. One or more data files corresponding to a difference between a first set of data files associated with the first OS and a second set of data files associated with the second OS may be identified. A snapshot of the first OS may be generated, modified, and stored in memory of the edge device to configure the edge device with the second OS. The booting order of the edge device may be modified to boot utilizing the second OS.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: December 19, 2023
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Jonathon David Nelson, David Dale Becker
  • Patent number: 11783032
    Abstract: Disclosed herein are systems and methods for identifying and mitigating Flush-based cache attacks. The systems and methods can include adding a zombie bit to a cache line. The zombie bit can be used to track the status of cache hits and misses to the flushed line. A line that is invalidated due to a Flush-Caused Invalidation can be marked as a zombie line by marking the zombie bit as valid. If another hit, or access request, is made to the cache line, data retrieved from memory can be analyzed to determine if the hit is benign or is a potential attack. If the retrieved data is the same as the cache data, then the line can be marked as a valid zombie line. Any subsequent hit to the valid zombie line can be marked as a potential attack. Hardware- and software-based mitigation protocols are also described.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: October 10, 2023
    Assignee: Georgia Tech Research Corporation
    Inventor: Moinuddin Qureshi
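    The zombie-bit tracking described above is essentially a small per-line state machine: a flush marks the line as a zombie, the first re-access compares refill data against the stale contents, and any later hit to a valid zombie line is flagged. A sketch with illustrative state names (the real design is a hardware cache line, not Python):

```python
VALID, ZOMBIE, VALID_ZOMBIE = "valid", "zombie", "valid_zombie"

class CacheLine:
    def __init__(self, data):
        self.data = data
        self.state = VALID

    def flush(self):
        # Flush-caused invalidation: keep contents, set the zombie bit.
        self.state = ZOMBIE

    def access(self, refill_data):
        if self.state == ZOMBIE:
            # First hit after the flush: compare the data fetched from
            # memory against the stale cache data.
            if refill_data == self.data:
                self.state = VALID_ZOMBIE     # looks benign so far
            else:
                self.data, self.state = refill_data, VALID
            return "benign"
        if self.state == VALID_ZOMBIE:
            # Any subsequent hit to a valid zombie line is suspicious.
            return "potential_attack"
        return "hit"
```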
  • Patent number: 11782848
    Abstract: Systems, apparatuses, and methods for implementing a speculative probe mechanism are disclosed. A system includes at least multiple processing nodes, a probe filter, and a coherent slave. The coherent slave includes an early probe cache to cache recent lookups to the probe filter. The early probe cache includes entries for regions of memory, wherein a region includes a plurality of cache lines. The coherent slave performs parallel lookups to the probe filter and the early probe cache responsive to receiving a memory request. An early probe is sent to a first processing node responsive to determining that a lookup to the early probe cache hits on a first entry identifying the first processing node as an owner of a first region targeted by the memory request and responsive to determining that a confidence indicator of the first entry is greater than a threshold.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: October 10, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Amit P. Apte, Ganesh Balakrishnan, Vydhyanathan Kalyanasundharam, Kevin M. Lepak
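    The early-probe decision above, speculate only on a confident region hit while the full probe-filter lookup proceeds in parallel, can be sketched as below. The names and the confidence-update rule are assumptions for illustration.

```python
class EarlyProbeCache:
    """Caches recent probe-filter results at region granularity."""

    def __init__(self, threshold):
        self.entries = {}      # region -> [owner_node, confidence]
        self.threshold = threshold

    def maybe_early_probe(self, region):
        entry = self.entries.get(region)
        if entry and entry[1] > self.threshold:
            return entry[0]    # send a speculative probe to the likely owner
        return None            # otherwise wait for the probe-filter lookup

    def record_filter_result(self, region, owner):
        entry = self.entries.get(region)
        if entry and entry[0] == owner:
            entry[1] += 1                        # reinforce confidence
        else:
            self.entries[region] = [owner, 0]    # new or changed owner
```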
  • Patent number: 11768773
    Abstract: Provided are I/O request type specific cache directories in accordance with the present description. In one embodiment, by limiting track entries of a cache directory to a specific I/O request type, the size of the cache directory may be reduced as compared to general cache directories for I/O requests of all types, for example. As a result, look-up operations directed to such smaller size I/O request type specific cache directories may be completed in each directory more quickly. In addition, look-ups may frequently be successfully completed after a look-up of a single I/O request type specific cache directory, improving the speed of cache look-ups and providing a significant improvement in system performance. Other aspects and advantages are provided, depending upon the particular application.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: September 26, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gail Spear, Lokesh Mohan Gupta, Kevin J. Ash, Kyler A. Anderson
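    The idea above, one small directory per I/O request type, probed first, with the other directories consulted only on a miss, can be sketched as follows. The structure and names are illustrative assumptions.

```python
class TypedCacheDirectories:
    def __init__(self, request_types):
        self.dirs = {t: {} for t in request_types}

    def insert(self, req_type, track, entry):
        self.dirs[req_type][track] = entry

    def lookup(self, req_type, track):
        hit = self.dirs[req_type].get(track)
        if hit is not None:
            return hit           # common case: one small-directory probe
        for t, d in self.dirs.items():
            if t != req_type and track in d:
                return d[track]  # rarer: track cached under another type
        return None
```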
  • Patent number: 11601376
    Abstract: Systems, methods, apparatuses, and computer readable media may be configured for transferring of state data of a network connection established by a first device. In an example, a front end device of a cache cluster may establish a network connection with a client device and generate state data associated with the network connection. The front end device may receive a content request from the client device via the network connection and select one of a plurality of back end devices to provide the content item.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: March 7, 2023
    Assignee: Comcast Cable Communications, LLC
    Inventors: Kevin Johns, Allen Broome, Eric Rosenfeld, Richard Fliam
  • Patent number: 11520524
    Abstract: Devices and techniques for host adaptive memory device optimization are provided. A memory device can maintain a host model of interactions with a host. A set of commands from the host can be evaluated to create a profile of the set of commands. The profile can be compared to the host model to determine an inconsistency between the profile and the host model. An operation of the memory device can then be modified based on the inconsistency.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: December 6, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Nadav Grosz, David Aaron Palmer
  • Patent number: 11494078
    Abstract: Examples of the present disclosure provide apparatuses and methods related to a translation lookaside buffer in memory. An example method comprises receiving a command including a virtual address from a host, and translating the virtual address to a physical address on volatile memory of a memory device using a translation lookaside buffer (TLB).
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: November 8, 2022
    Assignee: Micron Technology, Inc.
    Inventors: John D. Leidel, Richard C. Murphy
  • Patent number: 11436106
    Abstract: An efficient method for a long-term retention backup policy within recovery point objectives (RPO). Specifically, the disclosed method proposes a dynamic promotion scheme through which short-term retention backup copies, in compliance with specified long-term retention RPOs, may be promoted to render long-term retention backup copies. Further, the disclosed method not only looks to past and/or presently dated short-term retention backup copies, but also to prospective (or future) dated short-term retention backup copies, which are expected or predicted to be produced, for promotion. Moreover, in circumstances where there are no appropriate past, present, or future dated short-term retention backup copies to promote, the disclosed method triggers new backup operations to acquire the long-term retention backup copies necessary to maintain the specified long-term retention RPOs.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: September 6, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mengze Liao, Scott Randolph Quesnelle, Jinru Yan, Xiaoliang Zhu, Xiaolei Hu
  • Patent number: 11379236
    Abstract: An apparatus and method for hybrid software-hardware coherency.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: July 5, 2022
    Assignee: Intel Corporation
    Inventors: Pratik Marolia, Rajesh Sankaran
  • Patent number: 11354208
    Abstract: A first non-volatile dual in-line memory module (NVDIMM) of a first server and a second NVDIMM of a second server are armed during initial program load in a dual-server based storage system to configure the first NVDIMM and the second NVDIMM to retain data on power loss. Prior to initiating a safe data commit scan to destage modified data from the first server to a secondary storage, a determination is made as to whether the first NVDIMM is armed. In response to determining that the first NVDIMM is not armed, a failover is initiated to the second server.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: June 7, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew G. Borlick, Sean Patrick Riley, Brian Anthony Rinaldi, Trung N. Nguyen, Lokesh M. Gupta
  • Patent number: 11327767
    Abstract: Embodiments of dynamically increasing the resources for a partition to compensate for an input/output (I/O) recovery event are provided. An aspect includes allocating a first set of resources to a partition that is hosted on a data processing system. Another aspect includes operating the partition on the data processing system using the first set of resources. Another aspect includes, based on detection of an input/output (I/O) recovery event associated with operation of the partition, determining a compensation for the I/O recovery event. Another aspect includes allocating a second set of resources in addition to the first set of resources to the partition, the second set of resources corresponding to the compensation for the I/O recovery event. Another aspect includes operating the partition on the data processing system using the first set of resources and the second set of resources.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: May 10, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Scott B. Compton, Peter Sutton, Harry M Yudenfriend, Dale F Riedy
  • Patent number: 11288134
    Abstract: An apparatus comprises a processing device configured to identify a snapshot lineage comprising (i) a local snapshot lineage stored on a storage system and (ii) a cloud snapshot lineage stored on cloud storage of at least one cloud external to the storage system. The processing device is also configured to select a snapshot to be copied from the local snapshot lineage to the cloud snapshot lineage, and to copy the selected snapshot by copying data stored in the local snapshot lineage to a checkpointing cache and, responsive to determining that the copied data in the checkpointing cache has reached a specified checkpoint size, moving the copied data from the checkpointing cache to the cloud storage. The processing device is further configured to maintain, in the checkpointing cache, checkpointing information utilizable for pausing and resuming copying of the selected snapshot from the local snapshot lineage to the cloud snapshot lineage.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: March 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Mithun Mahendra Varma, Shanmuga Anand Gunasekaran
  • Patent number: 11290565
    Abstract: Requests for data can be distributed among servers based on indicators of intent to access the data. For example, a kernel of a client device can receive a message from a software application. The message can indicate that the software application intends to access data at a future point in time. The kernel can transmit an electronic communication associated with the message to multiple servers. The kernel can receive a response to the electronic communication from a server of the multiple servers. Based on the response and prior to receiving a future request for the data from the software application, the kernel can select the server from among the multiple servers as a destination for the future request for the data.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: March 29, 2022
    Assignee: Red Hat, Inc.
    Inventors: Jay Vyas, Huamin Chen
  • Patent number: 11281382
    Abstract: According to one embodiment, a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric. Each memory object can be created natively within the memory module, accessed using a single memory reference instruction without Input/Output (I/O) instructions, and managed by the memory module at a single memory layer. The object memory fabric can utilize a memory fabric protocol between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes to distribute and track the memory objects across the object memory fabric. The memory fabric protocol can be utilized across a dedicated link or across a shared link between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: March 22, 2022
    Assignee: Ultrata, LLC
    Inventors: Steven J. Frank, Larry Reback
  • Patent number: 11271860
    Abstract: An example cache-coherent packetized network system includes: a home agent; a snooped agent; and a request agent configured to send, to the home agent, a request message for a first address, the request message having a first transaction identifier of the request agent; where the home agent is configured to send, to the snooped agent, a snoop request message for the first address, the snoop request message having a second transaction identifier of the home agent; and where the snooped agent is configured to send a data message to the request agent, the data message including a first compressed tag generated using a function based on the first address.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: March 8, 2022
    Assignee: XILINX, INC.
    Inventors: Millind Mittal, Jaideep Dastidar
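    The compressed tag above is generated by a function of the address, so the request agent can recompute it and pair the data message with its outstanding request. The XOR-fold below is an assumed stand-in for the patented function, chosen only to illustrate the shared-function property:

```python
def compressed_tag(address, bits=12):
    """Fold a full address down to a `bits`-wide tag; both agents apply
    the same function, so tags computed from the same address match."""
    mask = (1 << bits) - 1
    tag = 0
    while address:
        tag ^= address & mask   # XOR successive address chunks together
        address >>= bits
    return tag

def match_response(request_address, response_tag, bits=12):
    # Request agent recomputes the tag to identify which request the
    # incoming data message answers.
    return compressed_tag(request_address, bits) == response_tag
```

As with any compression, distinct addresses can collide on the same tag, so a real protocol must bound or disambiguate outstanding requests per tag.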
  • Patent number: 11182373
    Abstract: Provided are a computer program product, system, and method for updating change information for current copy relationships when establishing a new copy relationship having overlapping data with the current copy relationships. A first copy relationship indicates changed first source data to copy to first target data. An establish request is processed to create a second copy relationship to copy second source data into second target data. A second copy relationship is generated, in response to the establish request, indicating data in the second source data to copy to the second target data. A determination is made of overlapping data units in the first source data also in the second target data. Indication is made in the first copy relationship to copy the overlapping data units. The first source data indicated in the first copy relationship is copied to the first target data, including data for the overlapping data units.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: November 23, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Theresa M. Brown, Nedlaya Y. Francisco, Suguang Li, Mark L. Lipets, Gregory E. McBride, Carol S. Mellgren, Raul E. Saba
  • Patent number: 11182105
    Abstract: A storage device may include a first storage area, a second storage area, and a controller. The controller may be configured to provide device information containing information on the first and second storage areas to an external host device, to allow a first access type of the external host device to the first storage area, and to allow a second access type of the external host device to the second storage area.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: November 23, 2021
    Inventors: SeokHeon Lee, Won-Gi Hong, Youngmin Lee
  • Patent number: 11151033
    Abstract: A processor includes a plurality of cache memories, and a plurality of processor cores, each associated with one of the cache memories. Each of at least some of the cache memories is associated with information indicating whether data stored in the cache memory is shared among multiple processor cores.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: October 19, 2021
    Assignee: Tilera Corporation
    Inventors: David M. Wentzlaff, Matthew Mattina, Anant Agarwal
  • Patent number: 11138178
    Abstract: A device such as a data storage system comprises a non-transitory memory storage comprising instructions, and one or more processors in communication with the memory. The one or more processors execute the instructions to: map a different portion of data in a storage device to each of different caches, wherein each cache is in a computing node with a processor; change a number of the computing nodes; provide a modified mapping in response to the change; and pass queries to the computing nodes. The computing nodes can continue to operate uninterrupted while the number of computing nodes is changed. Data transfer between the nodes can also be avoided.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: October 5, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventor: Kamini Jagtiani
  • Patent number: 11138125
    Abstract: A method for controlling a cache, comprising receiving a request for data and determining whether the requested data is present in a first portion of the cache, in a second portion of the cache, or not in the cache. If the requested data is not located in the first (most recently used, MRU) portion of the cache, the data is moved into the first portion of the cache.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: October 5, 2021
    Assignee: Taiwan Semiconductor Manufacturing Company Limited
    Inventor: Shih-Lien Linus Lu
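    The two-portion policy above can be sketched with an ordered MRU portion backed by a secondary portion: hits outside the MRU portion promote the line into it, and the oldest MRU line is demoted to make room. A minimal illustrative model (names and sizes are assumptions):

```python
from collections import OrderedDict

class TwoPortionCache:
    def __init__(self, mru_size):
        self.mru = OrderedDict()     # first portion (most recently used)
        self.second = {}             # second portion
        self.mru_size = mru_size

    def get(self, key, fetch):
        if key in self.mru:
            self.mru.move_to_end(key)
            return self.mru[key]
        # Not in the MRU portion: take it from the second portion or
        # memory, then move it into the first portion.
        value = self.second.pop(key, None)
        if value is None:
            value = fetch(key)       # true miss: fetch from memory
        self.mru[key] = value
        if len(self.mru) > self.mru_size:
            old_key, old_val = self.mru.popitem(last=False)
            self.second[old_key] = old_val   # demote oldest MRU line
        return value
```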
  • Patent number: 11126564
    Abstract: Some examples described herein provide for a partially coherent memory transfer. An example method includes moving data directly from a coherence domain of an originating symmetric multiprocessor (SMP) node across a memory fabric to a target location for the data within a coherence domain of a receiving SMP node.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: September 21, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Mike Schlansker, Jean Tourrilhes
  • Patent number: 11068399
    Abstract: Technologies for enforcing coherence ordering in consumer polling interactions include a network interface controller (NIC) of a target computing device which is configured to receive a network packet, write the payload of the network packet to a data storage device of the target computing device, and obtain, subsequent to having transmitted a last write request to write the payload to the data storage device, ownership of a flag cache line of a cache of the target computing device. The NIC is additionally configured to receive a snoop request from a processor of the target computing device, identify whether the received snoop request corresponds to a read flag snoop request associated with an active request being processed by the NIC, and hold the received snoop request for delayed return in response to having identified the received snoop request as the read flag snoop request. Other embodiments are described herein.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: July 20, 2021
    Assignee: Intel Corporation
    Inventors: Bin Li, Chunhui Zhang, Ren Wang, Ram Huggahalli
  • Patent number: 11068172
    Abstract: Accessing data using a first storage device and a second storage device that is a synchronous mirror of the first storage device includes determining if the first and second storage devices support alternative mirroring that bypasses having the first storage device write data to the second storage device and choosing to write data to the first storage device only or both the first and second storage device based on criteria that includes metrics relating to timing, an identity of a calling process or application, a size of an I/O operation, an identity of a destination volume, a time of day, a particular host id, a particular application or set of applications, and/or particular datasets, extents, tracks, records/blocks. A single I/O operation may be bifurcated to provide a portion of the I/O operation to only the first storage device or to both the first storage device and the second storage device.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: July 20, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Douglas E. LeCrone, Paul A. Linstead, Brett A. Quinn
  • Patent number: 11003582
    Abstract: An embodiment of a semiconductor apparatus may include technology to determine workload-related information for a persistent storage media and a cache memory, and aggregate a bandwidth of the persistent storage media and the cache memory based on the determined workload information. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: May 11, 2021
    Assignee: Intel Corporation
    Inventors: Chace Clark, Francis Corrado
  • Patent number: 10979279
    Abstract: Method of clock synchronization in cloud computing. A plurality of physical computer assets are provided. The plurality of physical computer assets are linked together to form a virtualized computing cloud, the virtualized computing cloud having a centralized clock for coordinating the operation of the virtualized computing cloud; the virtualized computing cloud is logically partitioned into a plurality of virtualized logical server clouds, each of the virtualized logical server clouds having a local clock synchronized to the same centralized clock; a clock type from a clock palette is selected for at least one of the virtualized logical server clouds; the clock type is implemented in the at least one of the virtualized logical server clouds such that the clock type is synchronized to the at least one of the virtualized logical server clouds; and the centralized clock is disabled. The method may be performed on one or more computing devices.
    Type: Grant
    Filed: July 3, 2014
    Date of Patent: April 13, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Chandrashekhar G. Deshpande, Shankar S. Kalyana, Jigneshkumar K. Karia, Gandhi Sivakumar
  • Patent number: 10972142
    Abstract: Wireless networking transceiver circuitry for an integrated circuit device includes a plurality of wireless networking transceiver subsystems, each subsystem including respective processing circuitry configurable for coupling to radio circuitry to implement a respective set of protocol features selected from at least one overall set of protocol features. Memory circuitry is provided, sufficient to support a respective set of protocol features in each subsystem when at least one respective set of protocol features is smaller than the overall set of protocol features. Memory-sharing circuitry is provided, configurable to couple respective portions of the memory circuitry to the processing circuitry of respective subsystems. The memory circuitry and the memory-sharing circuitry may be outside the subsystems, or distributed within the subsystems. The memory may be 60% of an amount of memory sufficient to support the overall set of protocol features in all subsystems.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: April 6, 2021
    Assignee: NXP USA, Inc.
    Inventors: Timothy J. Donovan, Yui Lin, Lite Lo, Zhengqiang Huang
  • Patent number: 10949235
    Abstract: Disclosed are mechanisms to support integrating network semantics into communications between processor cores operating on the same server hardware. A network communications unit is implemented in a coherent domain with the processor cores. The network communications unit may be implemented on the CPU package, in one or more of the processor cores, and/or coupled via the coherent fabric. The processor cores and/or associated virtual entities communicate by transmitting packet headers via the network communications unit. When communicating locally compressed headers may be employed. The headers may omit specified fields and employ simplified addressing schemes for increased communication speed. When communicating locally, data can be moved between memory locations and/or pointers can be communicated to reduce bandwidth needed to transfer data. The network communications unit may maintain/access a local policy table containing rules governing communications between entities and enforce such rules accordingly.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventor: Uri Elzur
  • Patent number: 10942874
    Abstract: A method and system for managing command fetches by a Non-Volatile Memory Express (NVMe) controller from a plurality of queues in a host maintains a predefined ratio of data throughput, based on the command fetches, between the plurality of queues. Each of the plurality of queues is assigned a particular priority and weight.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: March 9, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Rajesh Kumar Sahoo, Aishwarya Ravichandran, Manoj Thapliyal
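    Maintaining a predefined throughput ratio between weighted queues amounts to splitting each fetch budget in proportion to the weights. The integer-split-plus-remainder scheme below is an assumed illustration, not the patented arbitration logic:

```python
def fetch_plan(queue_weights, total_fetches):
    """Split a budget of command fetches across host queues so the
    fetch (and hence throughput) ratio follows the assigned weights."""
    weight_sum = sum(queue_weights.values())
    plan = {q: total_fetches * w // weight_sum
            for q, w in queue_weights.items()}
    # Hand out the rounding remainder, heaviest queues first.
    leftover = total_fetches - sum(plan.values())
    for q in sorted(queue_weights, key=queue_weights.get, reverse=True):
        if leftover == 0:
            break
        plan[q] += 1
        leftover -= 1
    return plan
```

With weights 4:2:1 and a budget of 14 fetches, the plan is 8:4:2, preserving the ratio exactly.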
  • Patent number: 10929174
    Abstract: A distributed memory system including a plurality of chips, a plurality of nodes that are distributed across the plurality of chips such that each node is comprised within a chip, each node includes a dedicated local memory and a processor core, and each local memory is configured to be accessible over network communication, a network interface for each node, the network interface configured such that a corresponding network interface of each node is integrated in a coherence domain of the chip of the corresponding node, wherein each of the network interfaces are configured to support a one-sided operation, the network interface directly reading or writing in the dedicated local memory of the corresponding node without involving a processor core, and the one-sided operation is configured such that the processor core of a corresponding node uses a protocol to directly inject a remote memory access for read or write request to the network interface of the node, the remote memory access request allowing to read
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: February 23, 2021
    Assignee: ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL)
    Inventors: Alexandros Daglis, Boris Robert Grot, Babak Falsafi
  • Patent number: 10922236
    Abstract: The present application discloses a cascade cache refreshing method, system, and device. The method in an embodiment of the present specification includes: determining a cache refreshing sequence based on a dependency relationship between caches in a cascade cache; and sequentially determining, based on the cache refreshing sequence, whether the caches in the cascade cache need to be refreshed, and refreshing a cache that needs to be refreshed, where when it is determined that a current cache needs to be refreshed, it is determined whether a cache following the current cache in the cache refreshing sequence needs to be refreshed after the current cache is refreshed.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: February 16, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventor: Yangyang Zhao
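    Determining the refresh sequence from the dependency relationship between caches is a topological-sort problem: every cache is refreshed after the caches it is built from. A minimal depth-first sketch (the `deps` shape is an assumption; cycles are not handled here):

```python
def refresh_order(deps):
    """`deps` maps each cache to the list of caches it reads from.
    Returns a sequence in which each cache follows its dependencies."""
    order, seen = [], set()

    def visit(cache):
        if cache in seen:
            return
        seen.add(cache)
        for upstream in deps.get(cache, []):
            visit(upstream)
        order.append(cache)      # after everything it depends on

    for cache in deps:
        visit(cache)
    return order
```

The refresh pass then walks this order, re-checking each downstream cache after its upstream cache has been refreshed, as the abstract describes.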
  • Patent number: 10901868
    Abstract: Embodiments described herein provide a mechanism to use an on-chip buffer memory in conjunction with an off-chip buffer memory for interim NAND write data storage. Specifically, the program data flows through the on-chip buffer memory to the NAND memory, while simultaneously a copy of the NAND program data is buffered in one or more circular buffer structures within the off-chip buffer memory.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: January 26, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: William W. Dennin, III, Chengkuo Huang
  • Patent number: 10860480
    Abstract: Embodiments of the present disclosure relate to a method and a device for cache management. The method includes: in response to receiving a write request for a cache logic unit, determining whether a first cache space of a plurality of cache spaces associated with the cache logic unit is locked; in response to the first cache space being locked, obtaining a second cache space from the plurality of cache spaces, the second cache space being different from the first cache space and being in an unlocked state; and performing, in the second cache space, the write request for the cache logic unit.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: December 8, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Lifeng Yang, Ruiyong Jia, Liam Xiongcheng Li, Hongpo Gao, Xinlei Xu
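The lock-fallback behavior in the abstract above amounts to: try the cache spaces associated with the logic unit in order and serve the write from the first unlocked one. A minimal sketch, with a hypothetical dict-based layout for the cache spaces:

```python
def handle_write(write, cache_spaces):
    """Serve a write from the first unlocked cache space for the cache logic unit.

    `cache_spaces` maps a space id to a dict with a `locked` flag and a `data`
    store (an illustrative layout, not the patented structure).
    """
    for space_id, space in cache_spaces.items():
        if not space["locked"]:
            space["data"][write["key"]] = write["value"]
            return space_id
    raise RuntimeError("all cache spaces for this logic unit are locked")
```

If the first space is locked, the write simply lands in the second, unlocked space instead of blocking.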
  • Patent number: 10860482
    Abstract: Apparatuses and methods for providing data to a configurable storage area are described herein. An example apparatus may include an extended address register including a plurality of configuration bits indicative of an offset and a size, an array having a storage area, a size and offset of the storage area based, at least in part, on the plurality of configuration bits, and a buffer configured to store data, the data including data intended to be stored in the storage area. A memory control unit may be coupled to the buffer and configured to cause the buffer to store the data intended to be stored in the storage area in the storage area of the array responsive, at least in part, to a flush command.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: December 8, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Graziano Mirichigni, Luca Porzio, Erminio Di Martino, Giacomo Bernardi, Domenico Monteleone, Stefano Zanardi, Chee Weng Tan, Sebastien LeMarie, Andre Klindworth
  • Patent number: 10846231
    Abstract: To prevent an excessive increase in the amount of dirty data in a cache memory, a processor acquires storage device information from each of the storage devices. When receiving a write request to a first storage device group from a higher-level apparatus, the processor determines whether a write destination cache area corresponding to the write destination address indicated by the write request is reserved. When determining that the write destination cache area is not reserved, the processor performs, on the basis of the storage device information and cache information, a reservation determination for deciding whether to reserve the write destination cache area. When determining to reserve the write destination cache area, the processor reserves it; when determining not to, the processor stands by for the reservation of the write destination cache area.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: November 24, 2020
    Assignee: HITACHI, LTD.
    Inventors: Natsuki Kusuno, Toshiya Seki, Tomohiro Nishimoto, Takaki Matsushita
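The reservation determination above can be illustrated as a simple admission check against a dirty-data limit. This is a sketch under assumed names (`try_reserve`, a `dirty` counter, and a `dirty_limit` threshold), not the patented logic:

```python
def try_reserve(cache, write_size, dirty_limit):
    """Reserve a write-destination cache area only if doing so keeps the
    dirty-data amount under the limit; otherwise the caller must stand by."""
    if cache["dirty"] + write_size <= dirty_limit:
        cache["dirty"] += write_size
        return True
    return False  # stand by for the reservation
```

A write that would push the dirty amount past the limit is not admitted, which is how the excessive build-up of dirty data is prevented.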
  • Patent number: 10833704
    Abstract: Low-density parity check (LDPC) decoder circuitry is configured to decode an input codeword using a plurality of circulant matrices of a parity check matrix for an LDPC code. Multiple memory banks are configured to store elements of the input codeword. A memory circuit is configured for storage of an instruction sequence. Each instruction describes, for one of the circulant matrices, a corresponding layer and column of the parity check matrix and a rotation. Each instruction includes packing factor bits having a value indicative of a number of instructions of the instruction sequence to be assembled in a bundle of instructions. A bundler circuit is configured to assemble the number of instructions from the memory circuit in a bundle. The bundler circuit specifies one or more no-operation codes (NOPs) in the bundle in response to the value of the packing factor bits and provides the bundle to the decoder circuitry.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: November 10, 2020
    Assignee: Xilinx, Inc.
    Inventors: Richard L. Walke, Andrew Dow, Zahid Khan
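The NOP-padded bundling can be sketched as below. The sketch simplifies the packing-factor bits to a fixed bundle size; `bundle_instructions` and the `NOP` marker are illustrative names, not the patented circuit.

```python
NOP = "nop"

def bundle_instructions(instructions, bundle_size):
    """Group instructions into fixed-size bundles, padding the last bundle
    with NOPs so every bundle handed to the decoder has the same width."""
    bundles = []
    for i in range(0, len(instructions), bundle_size):
        bundle = instructions[i:i + bundle_size]
        bundle += [NOP] * (bundle_size - len(bundle))
        bundles.append(bundle)
    return bundles
```

Padding with NOPs lets the decoder consume bundles of uniform width even when the instruction count is not a multiple of the bundle size.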
  • Patent number: 10809923
    Abstract: According to one embodiment, a hardware-based processing node of a plurality of hardware-based processing nodes in an object memory fabric can comprise a memory module storing and managing a plurality of memory objects in a hierarchy of the object memory fabric. Each memory object can be created natively within the memory module, accessed using a single memory reference instruction without Input/Output (I/O) instructions, and managed by the memory module at a single memory layer. The object memory fabric can utilize a memory fabric protocol between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes to distribute and track the memory objects across the object memory fabric. The memory fabric protocol can be utilized across a dedicated link or across a shared link between the hardware-based processing node and one or more other nodes of the plurality of hardware-based processing nodes.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: October 20, 2020
    Assignee: Ultrata, LLC
    Inventors: Steven J. Frank, Larry Reback
  • Patent number: 10795820
    Abstract: Apparatus and a corresponding method of operating the apparatus, in a coherent interconnect system comprising a requesting master device and a data-storing slave device, are provided. The apparatus maintains records of coherency protocol transactions received from the requesting master device while completion of the coherency protocol transactions is pending, and is responsive to reception of a read transaction from the requesting master device for a data item stored in the data-storing slave device to issue a direct memory transfer request to the data-storing slave device. A read acknowledgement trigger is added to the direct memory transfer request, and in response to reception of a read acknowledgement signal from the data-storing slave device, the record created on reception of the read transaction is updated to reflect completion of the direct memory transfer request.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: October 6, 2020
    Assignee: ARM Limited
    Inventors: Phanindra Kumar Mannava, Bruce James Mathewson, Jamshed Jalal, Tushar P Ringe
  • Patent number: 10776266
    Abstract: Aspects of the present disclosure relate to an apparatus comprising a requester master processing device having an associated private cache storage to store data for access by the requester master processing device. The requester master processing device is arranged to issue a request to modify data that is associated with a given memory address and stored in a private cache storage associated with a recipient master processing device. The private cache storage associated with the recipient master processing device is arranged to store data for access by the recipient master processing device. The apparatus further comprises the recipient master processing device and its associated private cache storage. One of the recipient master processing device and its associated private cache storage is arranged to perform the requested modification of the data while the data remains stored in the cache storage associated with the recipient master processing device.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: September 15, 2020
    Assignee: Arm Limited
    Inventors: Joshua Randall, Alejandro Rico Carro, Jose Alberto Joao, Richard William Earnshaw, Alasdair Grant
  • Patent number: 10771601
    Abstract: Requests for data can be distributed among servers based on indicators of intent to access the data. For example, a kernel of a client device can receive a message from a software application. The message can indicate that the software application intends to access data at a future point in time. The kernel can transmit an electronic communication associated with the message to multiple servers. The kernel can receive a response to the electronic communication from a server of the multiple servers. Based on the response and prior to receiving a future request for the data from the software application, the kernel can select the server from among the multiple servers as a destination for the future request for the data.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: September 8, 2020
    Assignee: Red Hat, Inc.
    Inventors: Jay Vyas, Huamin Chen
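The server-selection step above can be sketched as scoring the responses to the intent message. The response fields (`has_data_cached`, `load`) and the tie-break by load are assumptions for illustration, not from the patent:

```python
def select_server(responses):
    """Pick the destination for the future data request from the servers'
    responses to the intent message: prefer a server that reports the data
    already cached, falling back to the lowest reported load."""
    cached = [r for r in responses if r["has_data_cached"]]
    candidates = cached if cached else responses
    return min(candidates, key=lambda r: r["load"])["server"]
```

Because the kernel resolves the destination before the application issues the real request, the later request can be routed with no extra round trip.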
  • Patent number: 10740245
    Abstract: Embodiments of the present disclosure provide a method, device and computer program product for cache management. The method includes: receiving from a storage device an indication for an invalid storage block in the storage device; in response to receiving the indication, looking up a cache page associated with the invalid storage block in the cache; in response to finding the cache page in the cache, determining validity of data in the cache page; and in response to the data being invalid, reclaiming the cache page.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: August 11, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Ruiyong Jia, Xinlei Xu, Lifeng Yang, Liam Xiongcheng Li, Yousheng Liu
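The reclaim path above, in sketch form: on an invalid-block indication, look up each associated cache page and reclaim it only if its data is no longer valid. The dict-of-pages layout and the `valid` flag are hypothetical:

```python
def reclaim_invalid(cache, invalid_blocks):
    """Look up the cache page for each invalid storage block and reclaim
    the page when its data is invalid; valid pages are left in place."""
    reclaimed = []
    for block in invalid_blocks:
        page = cache.get(block)          # may be absent from the cache
        if page is not None and not page["valid"]:
            del cache[block]             # reclaim the cache page
            reclaimed.append(block)
    return reclaimed
```

Note that an invalid storage block alone is not enough to reclaim: the page is kept if its cached data is still valid.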
  • Patent number: 10698818
    Abstract: Systems and techniques for performing a data transaction are disclosed that provide data redundancy using two or more cache devices. In some embodiments, a data transaction is received by a storage controller of a storage system from a host system. The storage controller caches data and/or metadata associated with the data transaction to at least two cache devices that are discrete from the storage controller. After caching, the storage controller provides a transaction completion response to the host system from which the transaction was received. In some examples, each of the at least two cache devices includes a storage class memory. In some examples, the storage controller caches metadata to the at least two cache devices and to a controller cache of the storage controller, while data is cached to the at least two cache devices without being cached in the controller cache.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 30, 2020
    Assignee: NETAPP, INC.
    Inventors: Brian McKean, Gregory Friebus, Sandeep Kumar R. Ummadi, Pradeep Ganesan
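The caching policy in the abstract above (metadata to both cache devices and the controller cache; data to the cache devices only; completion reported afterward) can be sketched as below, with lists standing in for the cache devices:

```python
def handle_transaction(txn, cache_devices, controller_cache):
    """Cache metadata everywhere but data only to the discrete cache devices,
    then report completion to the host."""
    for device in cache_devices:
        device.append(("meta", txn["meta"]))
        device.append(("data", txn["data"]))
    # Data deliberately bypasses the controller's own cache.
    controller_cache.append(("meta", txn["meta"]))
    return "complete"
```

The completion response goes back to the host only after both discrete cache devices hold a copy, which is what provides the redundancy.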
  • Patent number: 10657700
    Abstract: Ray tracing systems have computation units (“RACs”) adapted to perform ray tracing operations (e.g. intersection testing). There are multiple RACs. A centralized packet unit controls the allocation and testing of rays by the RACs. This allows RACs to be implemented without Content Addressable Memories (CAMs), which are expensive to implement, while the functionality of CAMs can still be achieved by implementing it in the centralized controller.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: May 19, 2020
    Assignee: Imagination Technologies Limited
    Inventors: Joseph M. Richards, Luke T. Peterson, Steven J. Clohset
  • Patent number: 10649907
    Abstract: An apparatus is provided having processing circuitry for executing multiple items of supervised software under the control of a supervising element, and a set associative address translation cache having a plurality of entries, where each entry stores address translation data used when converting a virtual address into a corresponding physical address of a memory system comprising multiple pages. The address translation data is obtained by a multi-stage address translation process comprising a first stage translation process managed by an item of supervised software and a second stage translation process managed by the supervising element.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: May 12, 2020
    Assignee: Arm Limited
    Inventor: Abhishek Raja
  • Patent number: 10642704
    Abstract: A storage controller failover system includes servers, storage controllers coupled to storage subsystems, and a switching system coupling the servers to the storage controllers. Storage controller configurations and storage controller caches for each of the storage controllers are stored in one or more databases. A failure is detected of a first storage controller that has provided first storage communications along a first path between a first server and a first storage subsystem and, in response, a second storage controller that is configured to take over the first storage communications from the first storage controller is determined based on its second storage controller configuration. A first storage controller cache for the first storage controller is provided to the second storage controller, and the second storage controller is caused to provide the first storage communications along a second path between the first server and the first storage subsystem.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: May 5, 2020
    Assignee: Dell Products L.P.
    Inventors: Lucky Pratap Khemani, Kala Sampathkumar
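The failover selection and cache handoff can be sketched as below. The controller record layout (`alive`, `path`, `can_serve`, `cache`) is invented for illustration, not taken from the patent:

```python
def fail_over(failed_id, controllers):
    """Pick a live controller whose stored configuration says it can serve the
    failed controller's path, hand it the failed controller's cache, and
    return the new active controller's id."""
    failed = controllers[failed_id]
    for cid, ctrl in controllers.items():
        if cid != failed_id and ctrl["alive"] and failed["path"] in ctrl["can_serve"]:
            ctrl["cache"].update(failed["cache"])  # replay the saved cache
            return cid
    raise RuntimeError("no standby controller can take over this path")
```

Because the failed controller's cache was persisted in the database, the takeover controller can continue serving in-flight data rather than starting cold.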
  • Patent number: 10642520
    Abstract: In a distributed data processing system with a set of multiple nodes, a first data shuffle memory pool is maintained at a data shuffle writer node, and a second data shuffle memory pool is maintained at a data shuffle reader node. The data shuffle writer node and the data shuffle reader node are part of the set of multiple nodes of the distributed data processing system. In-memory compression is performed on at least a portion of a data set from the first data shuffle memory pool. At least a portion of the compressed data is transmitted from the first data shuffle memory pool to the second data shuffle memory pool in a peer-to-peer manner. Each of the first data shuffle memory pool and the second data shuffle memory pool may include a hybrid memory configuration.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: May 5, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Kenneth J. Taylor, Randall Shain, Kun Wang
  • Patent number: 10528275
    Abstract: A storage system includes a first storage control device including a first memory being a volatile memory and a first processor, and a second storage control device including a second memory being a non-volatile memory and a second processor, wherein the second processor is configured to receive a first write request to write first data into a first storage device, store the first data into the second memory, and transmit the first data to the first storage control device, the first processor is configured to store the first data into the first memory, and transmit a first notification to the second storage control device, and the second processor is configured to receive the first notification, transmit a first completion notification in response to the first write request, and execute processing to write the first data, stored in the second memory, into the first storage device.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: January 7, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Toshihiko Suzuki
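The write path in the abstract above (store in the second controller's non-volatile memory, mirror to the first controller's volatile memory, acknowledge the host, then destage to the storage device) can be sketched as follows, with lists standing in for the memories and the device:

```python
def mirrored_write(data, nv_memory, v_memory, storage):
    """Second controller keeps a non-volatile copy, the first controller a
    volatile copy; completion is reported once both copies exist, and the
    non-volatile copy is then destaged to the storage device."""
    nv_memory.append(data)           # second controller: non-volatile copy
    v_memory.append(data)            # first controller: volatile copy + notification
    completion = "complete"          # ack the host only after both copies exist
    storage.append(nv_memory.pop())  # destage the non-volatile copy to the device
    return completion
```

Acknowledging only after both copies exist means a power loss before destaging still leaves a recoverable copy in the non-volatile memory.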