Caching Patents (Class 711/118)
  • Patent number: 11194755
    Abstract: A method for providing multi-tenancy support for RDMA in a system that includes a plurality of physical hosts. Each physical host hosts a set of data compute nodes (DCNs). The method, at an RDMA protocol stack of a first host, receives a packet that includes a request from a first DCN hosted on the first host for RDMA data transfer from a second DCN hosted on a second host. The method sends a set of parameters of an overlay network that are associated with the first DCN to an RDMA physical network interface controller (NIC) of the first host. The RDMA physical NIC uses the set of parameters to encapsulate the packet with an RDMA data transfer header and an overlay network header, and transfers the encapsulated packet to the second host over the overlay network.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: December 7, 2021
    Assignee: NICIRA, INC.
    Inventors: Shoby Cherian, Tanuja Ingale, Raghavendra Subbarao Narahari Venkata
  • Patent number: 11194512
    Abstract: A data storage device may include: a nonvolatile memory device; and a controller configured to control a read operation of the nonvolatile memory device, wherein the controller includes: a memory configured to store workload pattern information; and a processor configured to check a workload pattern in a first period based on the workload pattern information, and decide on a read mode to be performed in a second period following the first period, according to the workload pattern of the first period.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: December 7, 2021
    Assignee: SK hynix Inc.
    Inventors: Min Gu Kang, Jin Soo Kim
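To make the period-by-period decision in the 11194512 abstract concrete, here is a minimal Python sketch: classify the workload pattern observed in one period and use it to pick the read mode for the following period. The `ReadMode` names, the sliding window, and the sequential-ratio threshold are illustrative assumptions, not terms from the patent.
```python
from collections import deque
from enum import Enum, auto

class ReadMode(Enum):
    SEQUENTIAL = auto()   # e.g. enable read-ahead in the next period
    RANDOM = auto()       # e.g. favor small independent reads in the next period

class WorkloadMonitor:
    """Records read addresses during a period and classifies the period's workload pattern."""
    def __init__(self, window=128, seq_threshold=0.7):
        self.addresses = deque(maxlen=window)   # workload pattern information
        self.seq_threshold = seq_threshold

    def record_read(self, lba):
        self.addresses.append(lba)

    def classify_period(self):
        """The fraction of contiguous reads decides the read mode for the following period."""
        if len(self.addresses) < 2:
            return ReadMode.RANDOM
        seq = sum(1 for prev, cur in zip(self.addresses, list(self.addresses)[1:]) if cur == prev + 1)
        ratio = seq / (len(self.addresses) - 1)
        return ReadMode.SEQUENTIAL if ratio >= self.seq_threshold else ReadMode.RANDOM

# The pattern observed in the first period selects the mode used during the second period.
monitor = WorkloadMonitor()
for lba in range(1000, 1100):           # a mostly sequential first period
    monitor.record_read(lba)
print(monitor.classify_period())        # ReadMode.SEQUENTIAL
```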
  • Patent number: 11188234
    Abstract: The present disclosure includes apparatuses and methods related to a memory system with cache line data. An example apparatus can store data in a number of cache lines in the cache, wherein each of the cache lines includes a number of chunks of data that are individually accessible.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: November 30, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Cagdas Dirik, Robert M. Walker
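A small sketch of a cache line whose chunks are individually accessible, as the 11188234 abstract describes; the 64-byte line, 16-byte chunk size, and per-chunk valid bits are assumptions made for illustration.
```python
class CacheLine:
    """A cache line split into fixed-size chunks that can be accessed individually."""
    def __init__(self, line_size=64, chunk_size=16):
        assert line_size % chunk_size == 0
        self.chunk_size = chunk_size
        n_chunks = line_size // chunk_size
        self.chunks = [bytearray(chunk_size) for _ in range(n_chunks)]
        self.valid = [False] * n_chunks          # per-chunk presence bits

    def write_chunk(self, index, data):
        assert len(data) == self.chunk_size
        self.chunks[index][:] = data
        self.valid[index] = True

    def read_chunk(self, index):
        if not self.valid[index]:
            raise KeyError(f"chunk {index} not present in this line")
        return bytes(self.chunks[index])

line = CacheLine()
line.write_chunk(2, b"A" * 16)   # touch one 16-byte chunk without filling the whole 64-byte line
print(line.read_chunk(2))        # b'AAAAAAAAAAAAAAAA'
```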
  • Patent number: 11182081
    Abstract: Provided are techniques for performing a recovery copy command to restore a safeguarded copy backup to a production volume. In response to receiving a recovery copy command, a production target data structure is created. A read operation is received for data for a storage location. In response to determining that the data for the storage location is in a cache of a host and a generation number is greater than a recovery generation number, the data is read from the cache. In response to determining at least one of that the data for the storage location is not in the cache and that the generation number is not greater than the recovery generation number, the data is read from one of the production volume and a backup volume based on a value of an indicator for the storage location in the production target data structure.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: November 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Theresa M. Brown, Nedlaya Y. Francisco, Nicolas M. Clayton, Mark L. Lipets, Carol S. Mellgren, Gregory E. McBride, David Fei, Kevin Lin
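The read-path decision in the 11182081 abstract reduces to a short rule: trust the host cache only for data newer than the recovery point, otherwise consult the production target data structure. A hedged Python sketch, with plain dictionaries standing in for the cache, the production target data structure, and the two volumes:
```python
def read_after_recovery(loc, cache, generation, recovery_generation,
                        production_target, production_volume, backup_volume):
    """Decide where to read 'loc' from after a recovery copy command, per the abstract above.

    cache: dict mapping location -> data held in the host cache
    production_target: dict mapping location -> True if the backup volume is authoritative,
                       False if the production copy is current
    """
    # Cached data is only trusted if it was written after recovery began.
    if loc in cache and generation > recovery_generation:
        return cache[loc]
    # Otherwise the production target data structure says which volume to read.
    if production_target.get(loc, False):
        return backup_volume[loc]
    return production_volume[loc]

# Example: location 7 is stale in the cache (old generation) and flagged for the backup volume.
data = read_after_recovery(
    loc=7, cache={7: b"stale"}, generation=3, recovery_generation=5,
    production_target={7: True}, production_volume={7: b"prod"}, backup_volume={7: b"safeguarded"})
print(data)   # b'safeguarded'
```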
  • Patent number: 11182306
    Abstract: A processor applies a software hint policy to a portion of a cache based on access metrics for different test regions of the cache, wherein each test region applies a different software hint policy for data associated with cache entries in each region of the cache. One test region applies a software hint policy under which software hints are followed. The other test region applies a software hint policy under which software hints are ignored. One of the software hint policies is selected for application to a non-test region of the cache.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: November 23, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Paul Moyer
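The 11182306 abstract describes a set-dueling arrangement: one test region follows software hints, the other ignores them, and the better-performing policy is applied to the non-test region. A minimal sketch, assuming hit rate is the access metric being compared:
```python
class HintPolicyDueler:
    """Two test regions run opposite software-hint policies; the winner is applied elsewhere."""
    def __init__(self):
        self.stats = {"follow_hints": {"hits": 0, "accesses": 0},
                      "ignore_hints": {"hits": 0, "accesses": 0}}

    def record_access(self, region, hit):
        s = self.stats[region]
        s["accesses"] += 1
        s["hits"] += int(hit)

    def policy_for_non_test_region(self):
        def hit_rate(region):
            s = self.stats[region]
            return s["hits"] / s["accesses"] if s["accesses"] else 0.0
        # Whichever test region performs better dictates the policy for the rest of the cache.
        return max(self.stats, key=hit_rate)

dueler = HintPolicyDueler()
for hit in [True, True, False, True]:
    dueler.record_access("follow_hints", hit)
for hit in [True, False, False, False]:
    dueler.record_access("ignore_hints", hit)
print(dueler.policy_for_non_test_region())   # 'follow_hints'
```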
  • Patent number: 11169806
    Abstract: In a data processor comprising a processing pass circuit and a cache, a record of processing passes to be performed by the processing pass circuit is stored. For each processing pass in the record, the record stores information indicative of any data that is required for performing the processing pass that is not yet stored in the cache, including an identifier for that data. When new data to perform a processing pass is loaded into the cache, its identifier is compared to the identifiers of data that processing passes in the record are waiting for, and the stored information indicating which required data is not yet in the cache is updated.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: November 9, 2021
    Assignee: Arm Limited
    Inventors: Antonio Garcia Guirado, Marcelo Orenes Vera
  • Patent number: 11170432
    Abstract: A trend setting score that identifies a degree of trend setting exhibited by a user is generated for each of multiple users. This degree of trend setting exhibited by the user is an indication of how well the user identifies trends for items (e.g., consumes items) prior to the items becoming popular. The item consumption of users with high trend setting scores is then used to identify items that are expected to become popular after a lag in time. For a given user, another user with a high trend setting score (also referred to as a trendsetter) and having a high affinity with (e.g., similar item consumption behavior or characteristics) the given user is identified. Recommendations are provided to the given user based on items consumed by the trendsetter prior to the items becoming popular.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: November 9, 2021
    Assignee: Adobe Inc.
    Inventor: Michele Saad
  • Patent number: 11163492
    Abstract: A method for use in a storage system, the method comprising: receiving an I/O command; identifying a latency of a first storage device that is associated with the I/O command; and executing the I/O command at least in part based on the latency, wherein executing the I/O command based on the latency includes: performing a first action when the latency is less than a first threshold, and performing a second action when the latency is greater than the first threshold, wherein identifying the latency includes retrieving the latency from a latency database, and wherein the first storage device is part of a storage array, the storage array including one or more second storage devices in addition to the first storage device.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: November 2, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Lior Kamran, Alex Soukhman
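A compact sketch of the latency-gated dispatch in the 11163492 abstract: the latency is retrieved from a latency database rather than measured inline, and the threshold selects between two actions. The threshold value and the two placeholder actions are assumptions for illustration.
```python
LATENCY_DB = {"dev0": 0.0004, "dev1": 0.0150}   # seconds, illustrative latency database

def execute_io(command, device, threshold=0.001,
               fast_path=lambda c: f"served {c} synchronously",
               slow_path=lambda c: f"queued {c} for degraded device"):
    """Pick an action for an I/O command based on the device's recorded latency."""
    latency = LATENCY_DB[device]               # latency is retrieved, not measured inline
    if latency < threshold:
        return fast_path(command)              # first action: device is healthy
    return slow_path(command)                  # second action: e.g. redirect within the array

print(execute_io("read lba 42", "dev0"))
print(execute_io("read lba 42", "dev1"))
```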
  • Patent number: 11163688
    Abstract: Systems, apparatuses, and methods for employing system probe filter aware last level cache insertion bypassing policies are disclosed. A system includes a plurality of processing nodes, a probe filter, and a shared cache. The probe filter monitors a rate of recall probes that are generated, and if the rate is greater than a first threshold, then the system initiates a cache partitioning and monitoring phase for the shared cache. Accordingly, the cache is partitioned into two portions. If the hit rate of a first portion is greater than a second threshold, then a second portion will have a non-bypass insertion policy since the cache is relatively useful in this scenario. However, if the hit rate of the first portion is less than or equal to the second threshold, then the second portion will have a bypass insertion policy since the cache is less useful in this case.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: November 2, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Paul James Moyer, Jay Fleischman
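Putting the 11163688 policy selection into Python: partitioning and monitoring only kick in when the probe filter reports a high recall-probe rate, and the sampled portion's hit rate then chooses between normal and bypass insertion for the rest of the shared cache. The threshold values and policy names are illustrative assumptions.
```python
def choose_insertion_policy(recall_probe_rate, sampled_hit_rate,
                            probe_rate_threshold, hit_rate_threshold):
    """Return the insertion policy for the second cache portion, per the abstract above.

    The partitioning/monitoring phase only starts when the probe filter sees too many
    recall probes; the sampled first portion's hit rate then decides whether the second
    portion bypasses last level cache insertion.
    """
    if recall_probe_rate <= probe_rate_threshold:
        return "normal-insert"                 # no probe pressure: keep the default policy
    if sampled_hit_rate > hit_rate_threshold:
        return "non-bypass-insert"             # cache is useful, keep inserting
    return "bypass-insert"                     # cache not earning its keep, bypass fills

print(choose_insertion_policy(0.30, 0.05, probe_rate_threshold=0.10, hit_rate_threshold=0.20))
# 'bypass-insert'
```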
  • Patent number: 11157214
    Abstract: In accordance with an embodiment of the present disclosure, an operating method of a controller for controlling a nonvolatile memory device may include: generating pre-read information based on a first read request, reading out 1st sub-chunks respectively included in a plurality of data chunks from the nonvolatile memory device, and providing a host with the read 1st sub-chunks, wherein the first read request includes respective addresses of the 1st sub-chunks; starting, after the 1st sub-chunks are provided to the host, a pre-read operation of reading out 2nd sub-chunks respectively included in the plurality of data chunks from the nonvolatile memory device based on the pre-read information and storing the read 2nd sub-chunks into a memory in the controller; and providing, after the pre-read operation is started, the host with the 2nd sub-chunks stored in the memory in response to a second read request received from the host.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: October 26, 2021
    Assignee: SK hynix Inc.
    Inventor: Seok Jun Lee
  • Patent number: 11157561
    Abstract: Methods, apparatus and computer software products implement embodiments of the present invention that include receiving requests from clients to access a corpus of data that is replicated on a group of servers, and distributing the requests among the servers for execution in accordance with an allocation function, which indicates a respective fraction of the requests that is to be assigned to each of the servers for execution. Respective cache miss rates incurred by the servers in responding to the requests that are distributed to each of the servers are measured, and the allocation function is adjusted responsively to the cache miss rates.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: October 26, 2021
    Assignee: SCYLLA DB LTD.
    Inventors: Nadav Har'El, Gleb Natapov, Avi Kivity
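The 11157561 abstract adjusts the allocation function in response to per-server cache miss rates. One simple (assumed) realization is to shift a small share of traffic from the server with the worst miss rate toward the one with the best, so request routing converges on warm caches:
```python
def adjust_allocation(allocation, miss_rates, step=5):
    """Shift a small share of traffic from the worst (highest miss rate) server to the best.

    allocation: dict server -> percentage of requests assigned (sums to 100)
    miss_rates: dict server -> cache miss rate measured for its current share
    """
    worst = max(miss_rates, key=miss_rates.get)
    best = min(miss_rates, key=miss_rates.get)
    moved = min(step, allocation[worst])
    allocation[worst] -= moved
    allocation[best] += moved
    return allocation

alloc = {"s1": 34, "s2": 33, "s3": 33}
print(adjust_allocation(alloc, {"s1": 0.02, "s2": 0.40, "s3": 0.10}))
# {'s1': 39, 's2': 28, 's3': 33} -- traffic drains from the cold cache toward the warm one
```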
  • Patent number: 11157441
    Abstract: A microprocessor system comprises a computational array and a hardware data formatter. The computational array includes a plurality of computation units that each operates on a corresponding value addressed from memory. The values operated on by the computation units are synchronously provided together to the computational array as a group of values to be processed in parallel. The hardware data formatter is configured to gather the group of values, wherein the group of values includes a first subset of values located consecutively in memory and a second subset of values located consecutively in memory. The first subset of values is not required to be located consecutively in memory with the second subset of values.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: October 26, 2021
    Assignee: Tesla, Inc.
    Inventors: Emil Talpes, William McGee, Peter Joseph Bannon
  • Patent number: 11157406
    Abstract: A processing system server and methods for performing asynchronous data store operations. The server includes a processor which maintains a cache of objects in communication with the server. The processor executes an asynchronous computation to determine the value of a first object. In response to a request for the first object occurring before the asynchronous computation has determined the value of the first object, a value of the first object is returned from the cache. In response to a request for the first object occurring after the asynchronous computation has determined the value of the first object, a value of the first object determined by the asynchronous computation is returned. The asynchronous computation may comprise at least one future, such as a ListenableFuture, or at least one process or thread. Execution of an asynchronous computation may occur with a frequency correlated with how frequently the object changes or how important it is to have a current value of the object.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: October 26, 2021
    Assignee: International Business Machines Corporation
    Inventor: Arun Iyengar
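A small Python sketch of the asynchronous-refresh behavior in the 11157406 abstract, using the standard library's `concurrent.futures` in place of the Java-style ListenableFuture the abstract mentions; requests arriving before the computation finishes get the cached value, later requests get the computed one.
```python
import concurrent.futures
import time

cache = {"weather": "cloudy (cached)"}
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def compute_value(key):
    time.sleep(0.2)                      # stand-in for an expensive recomputation
    return "sunny (fresh)"

future = executor.submit(compute_value, "weather")   # asynchronous computation starts now

def get(key):
    """Return the freshly computed value if available, otherwise the cached one."""
    if future.done():
        cache[key] = future.result()     # promote the computed value into the cache
        return cache[key]
    return cache[key]                    # request arrived before the computation finished

print(get("weather"))   # 'cloudy (cached)'  -- computation still running
time.sleep(0.3)
print(get("weather"))   # 'sunny (fresh)'    -- computation has completed
executor.shutdown()
```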
  • Patent number: 11150933
    Abstract: Techniques for optimizing CPU usage in a host system based on VM guest OS power and performance management are provided. In one embodiment, a hypervisor of the host system can capture information from a VM guest OS that pertains to a target power or performance state set by the guest OS for a vCPU of the VM. The hypervisor can then perform, based on the captured information, one or more actions that align usage of host CPU resources by the vCPU with the target power or performance state.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: October 19, 2021
    Assignee: VMware, Inc.
    Inventors: Andrei Warkentin, Cyprien Laplace, Regis Duchesne, Ye Li, Alexander Fainkichen
  • Patent number: 11151034
    Abstract: Cache storage comprising cache lines, each configured to store respective data entries. The cache storage is configured to store a tag in the form of: an individual tag portion which is individual to a cache line; a shareable tag portion which is shareable between cache lines; and pointer data which associates an individual tag portion with a shareable tag portion.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: October 19, 2021
    Assignee: Arm Limited
    Inventors: Antonio García Guirado, Andreas Due Engh-Halstvedt
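The tag-sharing scheme in the 11151034 abstract can be sketched as a table of deduplicated shareable tag portions plus, per cache line, an individual tag portion and a pointer into that table. The 12-bit split between the two portions is an assumption for illustration.
```python
class SharedTagCache:
    """Each line keeps a short individual tag plus a pointer to a shareable tag portion."""
    def __init__(self):
        self.shared_tags = []      # deduplicated high-order tag portions
        self.lines = {}            # line index -> (individual_tag, pointer into shared_tags)

    def _shared_index(self, shared_part):
        if shared_part not in self.shared_tags:
            self.shared_tags.append(shared_part)
        return self.shared_tags.index(shared_part)

    def install(self, line, address, split_bits=12):
        shared_part, individual_part = address >> split_bits, address & ((1 << split_bits) - 1)
        self.lines[line] = (individual_part, self._shared_index(shared_part))

    def full_tag(self, line, split_bits=12):
        individual_part, ptr = self.lines[line]
        return (self.shared_tags[ptr] << split_bits) | individual_part

cache = SharedTagCache()
cache.install(0, 0xABCD1234)
cache.install(1, 0xABCD1FF0)          # same high bits: both lines point at one shared tag entry
print(hex(cache.full_tag(1)), len(cache.shared_tags))   # 0xabcd1ff0 1
```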
  • Patent number: 11151046
    Abstract: The present disclosure is directed to systems and methods of implementing a neural network using in-memory mathematical operations performed by pipelined SRAM architecture (PISA) circuitry disposed in on-chip processor memory circuitry. A high-level compiler may be provided to compile data representative of a multi-layer neural network model and one or more neural network data inputs from a first high-level programming language to an intermediate domain-specific language (DSL). A low-level compiler may be provided to compile the representative data from the intermediate DSL to multiple instruction sets in accordance with an instruction set architecture (ISA), such that each of the multiple instruction sets corresponds to a single respective layer of the multi-layer neural network model. Each of the multiple instruction sets may be assigned to a respective SRAM array of the PISA circuitry for in-memory execution.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: October 19, 2021
    Assignee: Intel Corporation
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Victor Lee, Huseyin Sumbul, Gregory Chen, Raghavan Kumar, Phil Knag, Ram Krishnamurthy, Ian Young, Abhishek Sharma
  • Patent number: 11151167
    Abstract: Embodiments may provide a cache for query results that can adapt the cache-space utilization to the popularity of the various topics represented in the query stream. For example, a method for query processing may perform receiving a plurality of queries for data, determining at least one topic associated with each query, and requesting data responsive to each query from a data cache comprising a plurality of partitions, including at least a static cache partition, a dynamic cache partition, and a temporal cache partition. The temporal cache partition may store data based on a topic associated with the data, and may be further partitioned into a plurality of topic portions, each portion storing data relating to an associated topic, wherein the associated topic may be selected from among determined topics of queries received by the computer system, and the data cache may retrieve data for the queries from the computer system.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: October 19, 2021
    Assignee: Georgetown University
    Inventors: Ophir Frieder, Ida Mele, Raffaele Perego, Nicola Tonellotto
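A minimal sketch of the three-way partitioning in the 11151167 abstract: a static partition, a dynamic partition, and a temporal partition subdivided into per-topic portions consulted using the topic determined for each query. The lookup order and dictionary-based storage are assumptions.
```python
class TopicAwareCache:
    """Query-result cache split into static, dynamic, and per-topic temporal partitions."""
    def __init__(self):
        self.static = {}                       # pre-filled, long-lived entries
        self.dynamic = {}                      # ordinary recency-managed partition
        self.temporal = {}                     # topic -> {query: results}

    def put(self, query, topic, results, partition="temporal"):
        if partition == "temporal":
            self.temporal.setdefault(topic, {})[query] = results
        else:
            getattr(self, partition)[query] = results

    def get(self, query, topic):
        # Look in the static partition, then the dynamic one, then the topic's own portion.
        for part in (self.static, self.dynamic, self.temporal.get(topic, {})):
            if query in part:
                return part[query]
        return None                            # miss: fetch from the backend and put()

cache = TopicAwareCache()
cache.put("latest election polls", topic="elections", results=["doc9", "doc4"])
print(cache.get("latest election polls", topic="elections"))
```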
  • Patent number: 11151047
    Abstract: A data processing device includes a cache. The cache stores data. The data processing device also includes a cache manager. The cache manager monitors use of the cache to obtain cache use data. The cache manager identifies a slot allocation of the cache. The cache manager generates a new slot allocation based on the cache use data and the slot allocation. The cache manager reformats the cache based on the new slot allocation to obtain an updated cache.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: October 19, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Kaustubh S. Sahasrabudhe, Steven John Ivester
  • Patent number: 11144480
    Abstract: The invention relates to a method for updating a variable shared between multiple processor cores. The following steps are implemented during execution, in one of the cores, of a local scope atomic read-modify-write instruction (AFA) having a memory address (a1) of the shared variable as a parameter: performing operations of the atomic instruction in a cache line (L(a1)) allocated to the memory address; and locally locking the cache line (LCK) while authorizing access to the shared variable by cores connected to another cache memory of the same level during execution of the local scope atomic instruction.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: October 12, 2021
    Assignee: KALRAY
    Inventors: Benoit Dupont De Dinechin, Marta Rybczynska, Vincent Ray
  • Patent number: 11144499
    Abstract: A system and method log update queries by epoch, including at checkpoints performed at various times.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: October 12, 2021
    Assignee: OmniSci, Inc.
    Inventor: Todd L. Mostak
  • Patent number: 11144498
    Abstract: Techniques are provided for managing objects within an object store. An object is maintained within an object store. The object comprises a plurality of slots. Each slot is used to store a unit of data accessible to applications hosted by remote computing devices. The object comprises an object header used to store metadata for each slot. A determination is made that the object is a fragmented object comprising an in-use slot of in-use data and a freed slot from which data was freed. The object is compacted to retain in-use data and exclude freed data as a rewritten object.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: October 12, 2021
    Assignee: NetApp Inc.
    Inventors: Tijin George, Jagavar Nehra, Roopesh Chuggani, Dnyaneshwar Nagorao Pawar, Atul Ramesh Pandit, Kiyoshi James Komatsu
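Compaction as described in the 11144498 abstract keeps the in-use slots, drops the freed ones, and rewrites the object header so readers can find the surviving data. A hedged sketch with dictionaries standing in for the object's slots and header:
```python
def compact_object(slots, freed):
    """Rewrite an object keeping only in-use slots, per the abstract above.

    slots: dict slot_id -> unit of data; freed: set of slot_ids whose data was freed.
    Returns (rewritten_slots, header) where the header maps old slot ids to new ones.
    """
    rewritten, header = {}, {}
    for new_id, (old_id, data) in enumerate(
            (s, d) for s, d in sorted(slots.items()) if s not in freed):
        rewritten[new_id] = data
        header[old_id] = new_id            # per-slot metadata for readers of the rewritten object
    return rewritten, header

slots = {0: b"a", 1: b"b", 2: b"c", 3: b"d"}
print(compact_object(slots, freed={1, 3}))
# ({0: b'a', 1: b'c'}, {0: 0, 2: 1}) -- freed data is excluded, in-use data retained
```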
  • Patent number: 11144414
    Abstract: The present invention discloses a method and device for managing a storage system. Specifically, in one embodiment of the present invention, there is proposed a method for managing a storage system, the storage system comprising a buffer device and a plurality of storage devices. The method comprises: receiving an access request with respect to the storage system; determining that a storage device among the plurality of storage devices has failed; and in response to the access request being an access request with respect to the failed storage device, serving the access request with data in the buffer device so as to reduce internal data access in the storage system. In one embodiment of the present invention, there is proposed a device for managing a storage system.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: October 12, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Xinlei Xu, Jian Gao, Yousheng Liu, Changyu Feng, Geng Han
  • Patent number: 11138121
    Abstract: A data management method for a processor to which a first cache, a second cache, and a behavior history table are allocated, includes tracking reuse information of learning cache lines stored in at least one of the first cache and the second cache; recording the reuse information in the behavior history table; and determining a placement policy with respect to future operations that are to be performed on a plurality of cache lines stored in the first cache and the second cache, based on the reuse information in the behavior history table.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: October 5, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Erik Ernst Hagersten, Andreas Karl Sembrant, David Black-Schaffer
  • Patent number: 11138106
    Abstract: Provided are integrated circuit devices and methods for operating integrated circuit devices. In various examples, the integrated circuit device can include a target port operable to receive transactions from a master port. The target port can be configured with a multicast address range that is associated with a plurality of indices corresponding to memory banks of the device. When the target port receives a write transaction that has an address that is within the multicast address range, the target port can determine an index from the plurality of indices, and can use the index to determine a second address, which combines the index and the offset value with the address. The target port can then use the second address to write the data to the memory.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: October 5, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ron Diamant, Randy Renfu Huang
  • Patent number: 11138117
    Abstract: In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: October 5, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, Kai Chirca, David Matthew Thompson
  • Patent number: 11137934
    Abstract: A memory block type processing method includes: a state of each memory block in the electronic device is monitored; when it is monitored that a memory block is released, a type of the released memory block is detected; responsive to it being detected that an original type of the released memory block is inconsistent with a present type of the released memory block, a released memory capacity of the released memory block is detected; and the present type of the released memory block is adjusted according to the detected released memory capacity. Therefore, sufficient regional ranges of memory partitions of the reclaimable and/or movable types can be ensured, and during memory compaction, a continuous memory region sufficient for the user can be effectively provided to alleviate the memory fragmentation problem.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: October 5, 2021
    Assignee: ONEPLUS TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventors: Kengyu Lin, Wenyen Chang
  • Patent number: 11137942
    Abstract: Embodiments of the present disclosure relate to a memory system, a memory controller, and an operation method. The embodiments receive a plurality of requests for a memory device, determine the number of hit requests and the number of miss requests with respect to the plurality of received requests, and determine whether or not to perform all or some of map data read operations for the respective miss requests in parallel and whether or not to perform all or some of user data read operations for the respective hit requests in parallel, thereby minimizing the time required for processing the plurality of requests.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: October 5, 2021
    Assignee: SK hynix Inc.
    Inventor: Jeen Park
  • Patent number: 11132145
    Abstract: Disclosed herein are techniques for reducing write amplification when processing write commands directed to a non-volatile memory. According to some embodiments, the method can include the steps of (1) receiving a first plurality of write commands and a second plurality of write commands, where the first plurality of write commands and the second plurality of write commands are separated by a fence command, (2) caching the first plurality of write commands, the second plurality of write commands, and the fence command, and (3) in accordance with the fence command, and in response to identifying that at least one condition is satisfied: (i) issuing the first plurality of write commands to the non-volatile memory, (ii) issuing the second plurality of write commands to the non-volatile memory, and (iii) updating log information to reflect that the first plurality of write commands precede the second plurality of write commands.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: September 28, 2021
    Assignee: Apple Inc.
    Inventors: Yuhua Liu, Andrew W. Vogan, Matthew J. Byom, Alexander Paley
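A sketch of the fence handling in the 11132145 abstract: writes on both sides of the fence are cached, and once a flush condition is met they are issued together while the log records that the pre-fence group precedes the post-fence group. The flush condition and log format here are assumptions.
```python
FENCE = object()    # sentinel representing a cached fence command

def flush_write_cache(cached_commands, condition_satisfied, log):
    """Issue the cached write commands to non-volatile memory, honoring one fence between groups."""
    if not condition_satisfied:          # e.g. cache not full yet and no flush deadline reached
        return []
    fence_at = cached_commands.index(FENCE)
    before, after = cached_commands[:fence_at], cached_commands[fence_at + 1:]
    # Log information records that the pre-fence group precedes the post-fence group.
    log.append(("precedes", list(before), list(after)))
    return before + after                # both groups are issued, pre-fence writes first

log = []
cached = ["write A", "write B", FENCE, "write C"]
print(flush_write_cache(cached, condition_satisfied=True, log=log))
# ['write A', 'write B', 'write C'];  log == [('precedes', ['write A', 'write B'], ['write C'])]
```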
  • Patent number: 11134118
    Abstract: Embodiments of the present application provide a method, system, and a storage medium for a browser application to load a target page's first screen. The method comprises: in response to a request to load a target page, obtaining page information of the target page from a local storage associated with the browser application; rendering the page information of the target page, and requesting first screen information of the target page through a network; comparing the first screen information obtained through the network with the page information of the target page to determine whether the page information of the target page is updated; and in response to the page information of the target page being determined as updated, continuing to render the target page's first screen based on the first screen information obtained through the network.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: September 28, 2021
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventor: Xiang Liu
  • Patent number: 11126451
    Abstract: A technique includes changing a configuration setting of a virtual volume of data stored in a storage system. The technique includes converting data of the virtual volume in place to reflect the changing of the configuration setting.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: September 21, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Siamak Nazari, Srinivasa D. Murthy
  • Patent number: 11126350
    Abstract: Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object meta-data, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be a managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in object memory modules below the object router in the hierarchy. The hierarchy, based on the object cache state, may behave in aggregate as a single object directory communicatively coupled to all object memory modules and to process requests based on the object cache state.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: September 21, 2021
    Assignee: Ultrata, LLC
    Inventors: Steven J. Frank, Larry Reback
  • Patent number: 11126536
    Abstract: Facilitating recording a trace of code execution using a processor cache. A method includes identifying an operation by a processing unit on a line of the cache. Based on identifying the operation, accounting bits for the cache line are set. Setting the accounting bits includes (i) setting the accounting bits to a reserved value when the operation is a write and tracing is disabled, (ii) setting the accounting bits to an index of the processing unit when the operation is a write and the accounting bits for the cache line are set to a value other than the index of the processing unit, or (iii) setting the accounting bits to the index of the processing unit when the operation is a read that is consumed by the processing unit and the accounting bits for the cache line are set to a value other than the index of the processing unit.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: September 21, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jordi Mola
  • Patent number: 11119742
    Abstract: A system for cache efficient reading of column values in a database is provided. In some aspects, the system performs operations including pre-fetching, asynchronously and in response to a request for data in a column store database system, a plurality of first values associated with the requested data. The request may identify a row of the column store database system associated with the requested data. The plurality of first values may be located in the row. The operations may further include storing the plurality of first values in a cache memory. The operations may further include pre-fetching, asynchronously and based on the plurality of first values, a plurality of second values. The operations may further include storing the plurality of second values in the cache memory. The operations may further include reading, in response to the storing the plurality of second values, the requested data from the cache memory.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: September 14, 2021
    Assignee: SAP SE
    Inventor: Thomas Legler
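The two-step pre-fetch in the 11119742 abstract (first values located via the requested row, then second values located via the first values) can be sketched as follows, with a plain dictionary modelling the cache and a dictionary-coded column supplying the value-id indirection; those data structures are assumptions made for illustration.
```python
import concurrent.futures

DICTIONARY = {i: f"value-{i}" for i in range(100)}       # column dictionary (illustrative)
VALUE_IDS = [(7 * row) % 100 for row in range(50)]       # per-row value ids of the column
cache = {}                                               # plain dict modelling the cache memory

def prefetch_row(row):
    """Two-step pre-fetch: first the row's value id, then the dictionary entry it points to."""
    value_id = VALUE_IDS[row]                    # first value, located via the requested row
    cache[("vid", row)] = value_id
    cache[("val", row)] = DICTIONARY[value_id]   # second value, located via the first
    return cache[("val", row)]

with concurrent.futures.ThreadPoolExecutor() as pool:
    done = [pool.submit(prefetch_row, row) for row in (3, 4, 5)]   # asynchronous pre-fetches
    concurrent.futures.wait(done)

# By the time the query actually reads rows 3..5, both levels are already in the cache.
print(cache[("val", 3)], cache[("val", 4)], cache[("val", 5)])     # value-21 value-28 value-35
```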
  • Patent number: 11121958
    Abstract: Technologies for protocol execution include a command device to broadcast a protocol message to a plurality of computing devices and receive an aggregated status message from an aggregation system. The aggregated status message identifies a success or failure of execution of instructions corresponding with the protocol message by the plurality of computing devices such that each computing device of the plurality of computing devices that failed is uniquely identified and the success of remaining computing devices is aggregated into a single success identifier.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: September 14, 2021
    Assignee: INTEL CORPORATION
    Inventor: Matthias Schunter
  • Patent number: 11120887
    Abstract: An embodiment method for writing to a volatile memory comprises at least receiving a request to write to the memory, and, in response to each request to write to the memory: preparation of data to be written to the memory, this comprising computing an error correction code; storing in a buffer register the data to be written to the memory; and, if no new request to write to or to read from the memory is received after the storage, writing to the memory of the data to be written stored in the buffer register.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: September 14, 2021
    Assignee: STMICROELECTRONICS (ROUSSET) SAS
    Inventors: Christophe Eva, Jean-Michel Gril-Maffre
  • Patent number: 11112975
    Abstract: Described is a technology by which a virtual hard disk is migrated from a source storage location to a target storage location without needing any shared physical storage, in which a machine may continue to use the virtual hard disk during migration. This facilitates use of the virtual hard disk in conjunction with live-migrating a virtual machine. Virtual hard disk migration may occur fully before or after the virtual machine is migrated to the target host, or partially before and partially after virtual machine migration. Background copying, sending of write-through data, and/or servicing read requests may be used in the migration. Also described is throttling data writes and/or data communication to manage the migration of the virtual hard disk.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: September 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dustin L. Green, Jacob K. Oshins, Lars Reuther
  • Patent number: 11099993
    Abstract: Techniques for loading data, comprising receiving a memory management command to perform a memory management operation to load data into the cache memory before execution of an instruction that requests the data, formatting the memory management command into one or more instructions for a cache controller associated with the cache memory, and outputting an instruction to the cache controller to load the data into the cache memory based on the memory management command.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: August 24, 2021
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Kai Chirca, Daniel Wu, Matthew David Pierson
  • Patent number: 11101002
    Abstract: A semiconductor memory device includes a memory cell array; a page buffer circuit including a plurality of page buffers which are coupled to the memory cell array through a plurality of bit lines which extend in a second direction intersecting with a first direction; and a cache latch circuit including a plurality of cache latches which are coupled to the plurality of page buffers. The plurality of cache latches have a two-dimensional arrangement in the first direction and the second direction. Among the plurality of cache latches, an even cache latch and an odd cache latch which share a data line and an inverted data line are disposed adjacent to each other in the first direction.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: August 24, 2021
    Assignee: SK hynix Inc.
    Inventors: Sung Lae Oh, Dong Hyuk Kim, Tae Sung Park, Soo Nam Jung
  • Patent number: 11099961
    Abstract: A method may include, in a host information handling system configured to be inserted into a chassis providing a common hardware infrastructure to a plurality of modular information handling systems including the information handling system: (i) determining a runtime health status of a persistent memory subsystem of the host information handling system; and (ii) communicating a health status indicator indicative of the runtime health status to a management module configured to manage the common hardware infrastructure.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: August 24, 2021
    Assignee: Dell Products L.P.
    Inventors: Doug E. Messick, Aaron M. Rhinehart
  • Patent number: 11093282
    Abstract: A non-limiting example of a computer-implemented method for file register writes using pointers includes, responsive to a dispatch instruction, storing, at a location in a history buffer, an instruction tag and first data associated with the instruction tag. The method further includes storing a pointer in an issue queue. The pointer points to the location in the history buffer. The method further includes performing a write back of second data using the pointer stored in the issue queue. The write back writes the second data into the location of the history buffer associated with the pointer.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: August 17, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian D. Barrick, Steven J. Battle, Joshua W. Bowman, Cliff Kucharski, Hung Q. Le, Dung Q. Nguyen, David R. Terry
  • Patent number: 11093323
    Abstract: Techniques are disclosed for reducing the time required to read and write data to memory. Data reads and/or writes can be delayed when error correction code (ECC) bits, which are used to detect and/or correct data corruption, are written to memory. Writing ECC bits can take longer in some instances than writing data bits because an ECC write may involve a read/modify/write operation, as opposed to just simply writing the bits to memory. Some latencies associated with writing ECC bits can be hidden by interleaving ECC writes with data writes. However, if insufficient data writes are available for interleaving, hiding such latencies become difficult. Thus, various techniques are disclosed, for example, where ECC writes are deferred until a sufficient number of data writes become available for interleaving. By interleaving ECC writes, the disclosed techniques decrease the overall time required to read and write data to memory.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: August 17, 2021
    Assignee: NVIDIA Corporation
    Inventors: Ashutosh Pandey, Jay Gupta, Kaushal Agarwal, Justin Bennett, Srinivas Santosh Kumar Madugula
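A sketch of the deferral policy in the 11093323 abstract: ECC writes are held back until enough data writes are available, then slotted between them so the read/modify/write latency of each ECC update overlaps ordinary writes. The batch-size threshold and command tuples are assumptions for illustration.
```python
from collections import deque

class EccDeferringWriteQueue:
    """Defer ECC writes until enough data writes exist to interleave them, per the abstract above."""
    def __init__(self, min_data_writes=2):
        self.pending_ecc = deque()
        self.min_data_writes = min_data_writes

    def issue(self, data_writes):
        """Return the command stream for this batch of data writes."""
        if len(data_writes) < self.min_data_writes:
            # Not enough data writes to hide the ECC read/modify/write latency:
            # emit the data writes alone and keep their ECC updates deferred.
            self.pending_ecc.extend(f"ecc({w})" for w in data_writes)
            return [("DATA", w) for w in data_writes]
        stream = []
        for write in data_writes:
            stream.append(("DATA", write))
            if self.pending_ecc:
                stream.append(("ECC", self.pending_ecc.popleft()))   # interleave a deferred ECC write
            self.pending_ecc.append(f"ecc({write})")                  # this write's ECC is deferred
        return stream

q = EccDeferringWriteQueue()
print(q.issue(["w0"]))            # too few writes: the ECC update stays deferred
print(q.issue(["w1", "w2"]))      # deferred ECC writes slot in between the new data writes
```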
  • Patent number: 11093342
    Abstract: The present disclosure describes a technique for performing an efficient deduplication of compressed source data. The techniques may reduce the storage footprint required for deduplication of compressed data. In order to reduce the storage size required, the system may perform additional decompression/recompression processes by identifying particular compression algorithms used by a source storage system. Once the compression algorithm is identified, the system may initiate decompression and then perform fingerprint analysis of the segments in the file of the uncompressed data. When a recovery process is initiated, the system may recompress the deduplicated data using the same compression algorithm used by the source storage system. Accordingly, the data recovery process may be performed in a manner in which the client device receives restored data as expected and in the original compression format.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: August 17, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jerrold Heyman, Benjamin Whetstone, Robert Fair
  • Patent number: 11089103
    Abstract: Technology for content distribution in wireless mesh network is described. In one embodiment, the wireless mesh network includes a plurality of mesh nodes, wherein each of the mesh nodes includes a content agent configured to receive a content command from a cloud computing content management service communicatively coupled to the wireless mesh network, the content command identifying one or more segments of a media content file corresponding to a media title to be stored on the mesh node. Each mesh node includes a storage system configured to store the one or more segments of the media content file specified in the content command. Each mesh node further includes a content server configured to service requests for playback of the media title from one or more mesh clients. The mesh nodes also include a mesh communication component configured to communicate with other mesh nodes in the wireless mesh network to retrieve segments of the media content file stored on the other nodes.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: August 10, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ishwardutt Parulkar, Kaixiang Hu, Kiran Kumar Edara, Yingshuo Chen, Nikhil Dinkar Joshi, Joshua Aaron Karsh
  • Patent number: 11086547
    Abstract: Content is captured and archived at an archive center (AC) and, depending upon records management (RM) policy, is managed by the AC or under RM control by a content server (CS). Both the AC and CS may be part of an enterprise content management system. The AC provides a user-friendly interface through which retention zones may be defined, and functionality for applying RM policy. The functionality can be triggered via a specific content property or through a retention zone under RM control. The RM control can be turned on or off from within the AC using the user-friendly interface. Archived content is not moved or duplicated. Rather, metadata and a link to the storage location are sent to the CS which, in turn, creates a content server document that is linked to the archived content. Only a portion of archived content is exposed to the CS through the AC.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: August 10, 2021
    Assignee: OPEN TEXT SA ULC
    Inventors: Thomas Bruckner, Matthias Specht, Nicholas Carter
  • Patent number: 11089101
    Abstract: A media content management device includes one or more memory devices storing instructions, and one or more processors configured to execute the instructions to perform steps of a method for providing management of media content. The device may receive media content from a data source and determine a set of media operations that can be performed by the device on a locally stored copy of the media content on the storage means or by a cloud storage system on a remotely stored copy. Based on whether the cloud storage system is reachable, a first media operation may be performed on the remotely stored copy of the media content or on the locally stored copy of the media content. The device may open a communication path with a user device and transmit a portion of the media content to the user device before uploading to the cloud storage system is complete.
    Type: Grant
    Filed: December 23, 2017
    Date of Patent: August 10, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Thomas Kistler, Christopher Hansen Bourdon, Dmitri Trembovestki, James Chia Ho Chou, Laurent Baumann
  • Patent number: 11080299
    Abstract: Methods, apparatus, systems, and articles of manufacture to partition a database are disclosed. An example apparatus includes a variant identifier to identify a variant of unstructured data included in a query. The variant identifier is to identify a size of the identified variant, the query including unstructured data to be written to a database. A partition manager is to select a partition into which data is to be written based on the size of the identified variant. A partition creator is to, in response to the selected partition not existing in the database, create the selected partition. A data writer to write the data to the selected partition.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: August 3, 2021
    Assignee: McAfee, LLC
    Inventors: Brian Howard Stewart, Brian Roland Rhees, Seth D. Grover
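The 11080299 abstract routes unstructured variants to partitions chosen by variant size, creating a partition on first use. A minimal sketch, with JSON serialization standing in for variant identification and illustrative size buckets:
```python
import json

PARTITIONS = {}      # partition name -> list of rows, standing in for database partitions

def size_bucket(size):
    """Map a variant's size to a partition name; the bucket boundaries are illustrative."""
    if size <= 256:
        return "variant_small"
    if size <= 4096:
        return "variant_medium"
    return "variant_large"

def write_unstructured(query_payload):
    """Identify the variant in the query, pick a partition by its size, create it if missing."""
    variant = json.dumps(query_payload, sort_keys=True)     # the unstructured variant to store
    partition = size_bucket(len(variant))
    PARTITIONS.setdefault(partition, [])                     # create the partition if it does not exist
    PARTITIONS[partition].append(variant)
    return partition

print(write_unstructured({"event": "login", "user": "alice"}))        # 'variant_small'
print(write_unstructured({"event": "scan", "report": "x" * 1000}))    # 'variant_medium'
```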
  • Patent number: 11080810
    Abstract: By predicting future memory subsystem request behavior based on live memory subsystem usage history collection, a preferred setting for handling predicted upcoming request behavior may be generated and used to dynamically reconfigure the memory subsystem. This mechanism can be done continuously and in real time during operation to ensure active tracking of system behavior.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: August 3, 2021
    Assignee: Intel Corporation
    Inventors: Wenyin Fu, Abhishek R. Appu, Bhushan M. Borole, Altug Koker, Nikos Kaburlasos, Kamal Sinha
  • Patent number: 11082209
    Abstract: A system includes a campaign management service to detect a campaign initiation request indicating a number of computerized devices to be updated for a campaign and store data corresponding to the computerized devices to be updated. The campaign management service can generate a filter data structure comprising hash values based on the data for each of the computerized devices to be updated and transmit the filter data structure to a network edge. The system can include the network edge that can use the filter data structure from the campaign management service to determine whether a computerized device is to obtain a device update from the campaign management service. The network edge can retrieve the device update and modify the computerized device by transmitting the device update to the computerized device, which then installs it.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: August 3, 2021
    Assignee: INTEGRITY SECURITY SERVICES LLC
    Inventor: Neil Locketz
  • Patent number: 11074017
    Abstract: Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage devices, for data processing. One of the methods includes maintaining, by a storage system, a plurality of storage devices that include at least a first tier storage device and a second tier storage device. The storage system receives a write request of a ledger data, determines whether a type of the ledger data is block data, and, in response to determining that the type of the ledger data is block data, writes the ledger data into the second tier storage device.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: July 27, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventor: Shikun Tian
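The tier routing in the 11074017 abstract is a single branch on the ledger data type: block data lands on the second-tier device, everything else stays on the first tier. A small sketch, with dictionaries standing in for the two storage tiers:
```python
class TieredLedgerStore:
    """Route ledger writes to a storage tier based on the data type, per the abstract above."""
    def __init__(self):
        self.first_tier = {}        # e.g. faster device for frequently accessed ledger data
        self.second_tier = {}       # e.g. cheaper device holding append-only block data

    def write(self, key, ledger_data, data_type):
        if data_type == "block":
            self.second_tier[key] = ledger_data     # block data goes to the second tier
        else:
            self.first_tier[key] = ledger_data      # other ledger data stays on the first tier

store = TieredLedgerStore()
store.write("block-1001", b"...serialized block...", data_type="block")
store.write("state-root", b"...trie node...", data_type="state")
print(sorted(store.second_tier), sorted(store.first_tier))
```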
  • Patent number: 11074190
    Abstract: A prefetch unit generates a prefetch address in response to an address associated with a memory read request received from the first or second cache. The prefetch unit includes a prefetch buffer that is arranged to store the prefetch address in an address buffer of a selected slot of the prefetch buffer, where each slot of the prefetch unit includes a buffer for storing a prefetch address, and two sub-slots. Each sub-slot includes a data buffer for storing data that is prefetched using the prefetch address stored in the slot, and one of the two sub-slots of the slot is selected in response to a portion of the generated prefetch address. Subsequent hits on the prefetcher result in returning prefetched data to the requestor in response to a subsequent memory read request received after the initially received memory read request.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: July 27, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Kai Chirca, Joseph R. M. Zbiciak, Matthew D. Pierson