Caching Patents (Class 711/113)
  • Patent number: 10936451
    Abstract: In a data storage system in which a first storage array and a second storage array maintain first and second replicas of a production volume, the first storage array is responsive to a write command from a host to send a notification to the second storage array indicating that the replicated production volume will be updated. The notification has information that enables the second storage array to implement pre-processing steps to prepare for subsequent receipt of data associated with the write command. Both storage arrays implement the pre-processing steps at least partly concurrently. When the data associated with the write command is subsequently received, the first storage array writes the data to cache and then sends a copy of the data to the second storage array, i.e. in series. The second storage array then writes the data to cache. Elapsed time between receipt of the write command and returning an acknowledgment to the host may be improved by concurrent pre-processing.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: March 2, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Toufic Tannous, Bhaskar Bora, Deepak Vokaliga
  • Patent number: 10929051
    Abstract: A method includes obtaining, by a computing entity of a multi-cloud dispersed storage network (DSN) system, a multi-cloud storage request to write a data object to the multi-cloud DSN system from a requester. The method further includes sending, by the computing entity, the multi-cloud storage request to a data director module. The method further includes determining a multi-cloud storage scheme to execute the multi-cloud storage request, executing the multi-cloud storage scheme to store the data object in a set of two or more cloud storage systems, generating an index regarding the storage of the data object, and notifying the requester of an estimated response time of the set of two or more cloud storage systems. The method further includes monitoring the performance information of the set of two or more cloud storage systems and data object usage information of the data object to determine a multi-cloud storage performance level.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: February 23, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Gregory R. Hintermeister
  • Patent number: 10922228
    Abstract: Systems and methods for accessing data stored in multiple locations. A cache and a storage system are associated with an index. Entries in the index identify locations of data in both the cache and the storage system. When an index lookup occurs and an entry in the index identifies at least two locations for the data, the locations are ordered based on at least one factor and the data stored in the optimal location as determined from the at least one factor is returned. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: February 16, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Grant R. Wallace, Philip N. Shilane, Mahesh Kamat
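    Illustrative sketch (not part of the patent record): a minimal Python model of the lookup described in patent 10922228, where a single index entry can name both a cache location and a storage-system location and the read is served from whichever location ranks best on an ordering factor. The Location and UnifiedIndex classes, and the use of latency as the factor, are assumptions for illustration only.

      # One index maps a key to every location holding it; reads pick the best one.
      class Location:
          def __init__(self, name, latency_ms, blobs):
              self.name = name
              self.latency_ms = latency_ms      # the ordering factor assumed here
              self.blobs = blobs                # key -> bytes held at this location

      class UnifiedIndex:
          def __init__(self):
              self.entries = {}                 # key -> list of Location objects

          def add(self, key, location):
              self.entries.setdefault(key, []).append(location)

          def read(self, key):
              locations = self.entries.get(key, [])
              if not locations:
                  return None
              # Order the candidate locations by the factor and read from the optimal one.
              best = min(locations, key=lambda loc: loc.latency_ms)
              return best.blobs[key]

      cache = Location("cache", latency_ms=0.1, blobs={"k1": b"cached copy"})
      disk = Location("disk", latency_ms=5.0, blobs={"k1": b"disk copy", "k2": b"only on disk"})
      index = UnifiedIndex()
      index.add("k1", cache)
      index.add("k1", disk)
      index.add("k2", disk)
      assert index.read("k1") == b"cached copy"   # two locations: the cache wins on latency
      assert index.read("k2") == b"only on disk"  # single known location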
  • Patent number: 10915262
    Abstract: A hybrid storage device includes a first storage medium configured to store data at a first speed and a second storage medium configured to store data at a second speed. The first storage medium may be a NAND flash storage medium, and the second storage medium may be a disc storage medium. Partitions of the first storage medium are associated with partitions of the second storage medium to form at least two storage tiers. Each of the storage tiers may include different NAND partition capacities. The storage device further includes a peer-to-peer communication channel between the first storage medium and the second storage medium for moving data between a NAND partition and an HDD partition. The storage device is accessible via a dual-port SAS or PCIe interface.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: February 9, 2021
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Rajesh Maruti Bhagwat, Nitin S. Kabra, Nilesh Govande, Manish Sharma, Joe Paul Moolanmoozha, Alexander Carl Worrall
  • Patent number: 10909118
    Abstract: Cache optimization for missing data is provided. A database system receives a first request for a database record. The database system determines whether the database record is stored in a cache. The database system determines whether the database record is stored in a data store in response to a determination that the database record is not stored in the cache. The database system stores a dummy entry for the database record in the cache in response to a determination that the database record is not stored in the data store. The database system receives a second request for the database record. The database system determines whether the database record is stored in the cache. The database system outputs an indication that the database record is unavailable in response to a determination that the dummy entry stored for the database record is in the cache. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 4, 2016
    Date of Patent: February 2, 2021
    Assignee: salesforce.com, inc.
    Inventors: Pallavi Savla, Gurdeep Singh Sandle, George Vitchev, Prabhjot Singh, Steven Marshall Cohen
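    Illustrative sketch (not part of the patent record): a minimal Python model of the negative-caching idea in patent 10909118, where a dummy entry is cached for a record that is absent from the data store so a repeated request is answered without another data-store probe. The RecordCache class, the dict-backed store, and the MISSING sentinel are illustrative assumptions.

      MISSING = object()   # dummy entry marking "known to be absent"

      class RecordCache:
          def __init__(self, data_store):
              self.data_store = data_store     # key -> record (a dict stands in for the store)
              self.cache = {}

          def get(self, key):
              if key in self.cache:
                  hit = self.cache[key]
                  return None if hit is MISSING else hit   # dummy hit means "unavailable"
              record = self.data_store.get(key)
              # Cache either the record or a dummy entry for the miss.
              self.cache[key] = record if record is not None else MISSING
              return record

      store = {"acct-1": {"balance": 10}}
      db = RecordCache(store)
      assert db.get("acct-2") is None     # first request probes the store and caches a dummy
      assert db.get("acct-2") is None     # second request answered from the dummy entry
      assert db.get("acct-1") == {"balance": 10}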
  • Patent number: 10902126
    Abstract: Provided are a computer program product, system, and method for verification of a boot loader program at a control unit to be provided to a host system to load an operating system. A stored value is generated from a cryptographic function applied to portions of a boot loader program stored in the storage. The boot loader program is read from the storage in response to execution of a boot loader request from the host system. The cryptographic function is applied to at least a portion of the read boot loader program to produce a calculated value. The host system is provided access to the boot loader program to use to load the operating system from the storage into the host system in response to the calculated value matching the stored value. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: January 26, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Peter G. Sutton, Harry M. Yudenfriend
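    Illustrative sketch (not part of the patent record): a minimal Python model of the verification flow in patent 10902126, comparing a freshly calculated value against a stored value before the boot loader is handed to the host. SHA-256 stands in for the unspecified cryptographic function, and hashing the whole image rather than selected portions is a simplification.

      import hashlib

      def stored_value(boot_loader: bytes) -> str:
          # Computed once when the boot loader is written to storage.
          return hashlib.sha256(boot_loader).hexdigest()

      def load_boot_loader(image: bytes, reference: str) -> bytes:
          # Re-apply the cryptographic function at load time and compare.
          calculated = hashlib.sha256(image).hexdigest()
          if calculated != reference:
              raise PermissionError("boot loader verification failed")
          return image          # the host may now use it to load the operating system

      image = b"\x7fELF...pretend boot loader..."
      ref = stored_value(image)
      assert load_boot_loader(image, ref) == image
      try:
          load_boot_loader(image + b"tampered", ref)
      except PermissionError:
          pass                  # an altered image is rejected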
  • Patent number: 10896162
    Abstract: Systems and methods to manage database data are provided. A particular method includes automatically identifying a plurality of storage devices. The storage devices include a first device of a first type and a second device of a second type. The first type includes a solid state memory device. The method may further identify a high priority data set of the database. A rebalancing operation is conducted that includes moving the high priority data set to the solid state memory device and substantially evening the distribution of other data of the database among the storage devices.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: January 19, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Harshwardhan S. Mulay, Abhinay R. Nagpal, Sandeep Ramesh Patil, Yan Wang Stein
  • Patent number: 10891233
    Abstract: Systems, apparatuses and methods may provide for technology to automatically identify a plurality of non-volatile memory locations associated with a file in response to a close operation with respect to the file and automatically conduct a prefetch from one or more of the plurality of non-volatile memory locations that have been most recently accessed and do not reference cached file segments. The prefetch may be conducted in response to an open operation with respect to the file and on a per-file segment basis.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: January 12, 2021
    Assignee: Intel Corporation
    Inventors: Scott Burridge, William Chiu, Jawad Khan, Sanjeev Trika
  • Patent number: 10891145
    Abstract: Systems and methods are described for transforming a data set within a data source into a series of task calls to an on-demand code execution environment or other distributed code execution environment. Such environments utilize pre-initialized virtual machine instances to enable execution of user-specified code in a rapid manner, without delays typically caused by initialization of the virtual machine instances, and are often used to process data in near-real time, as it is created. However, limitations in computing resources may inhibit a user from utilizing an on-demand code execution environment to simultaneously process a large, existing data set. The present application provides a task generation system that can iteratively retrieve data items from an existing data set and generate corresponding task calls to the on-demand computing environment, while ensuring that at least one task call for each data item within the existing data set is made.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: January 12, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Timothy Allen Wagner, Marc John Brooker, Ajay Nair
  • Patent number: 10884820
    Abstract: Various systems and methods are provided for receiving replication data at a recovery site from a replication process initiated on a primary site, where the recovery site includes at least a first gateway appliance and a second gateway appliance that can be used to process the replication data. The systems and methods further involve evaluating a replication load of the first gateway appliance, which includes analyzing at least a first evaluation factor and a second evaluation factor related to the replication process, and in response to evaluating the evaluation factors, determining whether the first gateway appliance is overloaded. In response to determining that the first gateway appliance is overloaded, rebalancing a replication workload between the first gateway appliance and the second gateway appliance.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: January 5, 2021
    Assignee: Veritas Technologies LLC
    Inventors: Pramila Dhaka, Parikshit Hooda
  • Patent number: 10873462
    Abstract: A method of performing a computation by an untrusted entity includes: storing a state of the computation at a plurality of points of the computation; generating a plurality of hashes based on the state of the computation at points of the computation; generating a hash tree including a plurality of leaf nodes corresponding to the plurality of hashes of states of the computation and further wherein internal tree nodes are derived as the hash of at least two child nodes; creating at least one pair of paths from a root of the hash tree to the leaf nodes corresponding to the plurality of hashes of states of the computation; selecting the point in the computation corresponding to the leaf node of a created path, along with a succeeding point in the computation; and transmitting a proof of the computation comprising the at least one path of the hash tree and siblings of the path to one or more third party entities for verification. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: December 22, 2020
    Inventors: Volkmar Frinken, Guha Jayachandran
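    Illustrative sketch (not part of the patent record): a minimal Python Merkle-tree construction in the spirit of patent 10873462, hashing checkpointed computation states into leaves and proving one state with the siblings along its path. SHA-256, a power-of-two number of states, and the helper names are assumptions for illustration.

      import hashlib

      def h(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def build_tree(leaves):
          # levels[0] is the leaf level; levels[-1][0] is the root.
          levels = [leaves]
          while len(levels[-1]) > 1:
              prev = levels[-1]
              levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
          return levels

      def prove(levels, index):
          # Collect the sibling of each node on the path from the leaf up to the root.
          siblings = []
          for level in levels[:-1]:
              sib = index ^ 1
              siblings.append((level[sib], sib < index))
              index //= 2
          return siblings

      def verify(leaf, siblings, root):
          node = leaf
          for sib, sib_is_left in siblings:
              node = h(sib + node) if sib_is_left else h(node + sib)
          return node == root

      states = [b"state0", b"state1", b"state2", b"state3"]   # checkpointed computation states
      leaves = [h(s) for s in states]
      levels = build_tree(leaves)
      root = levels[-1][0]
      proof = prove(levels, 2)
      assert verify(leaves[2], proof, root)   # the verifier needs only the root and the siblings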
  • Patent number: 10860255
    Abstract: A system, method and apparatus directed to fast data storage on a block storage device. New data is written to an empty write block. If the new data is compressible, a compressed version of the new data is written into the metadata. A location of the new data is tracked. Metadata associated with the new data is written. A lookup table may be updated based in part on the metadata. The new data may be read based on the lookup table, which is configured to map a logical address to a physical address. Disk operations may use state data associated with the metadata to determine the empty write block. A write speed-limit may also be determined based on a lifetime period, a number of life cycles and a device-erase-sector-count for the device. A write speed for the device may be slowed based on the determined write speed-limit.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: December 8, 2020
    Inventors: Douglas Dumitru, Samuel J. Anderson
  • Patent number: 10860493
    Abstract: A method and an apparatus for a data storage system are provided. The method comprises: receiving an I/O request from an upper layer, the I/O request including an I/O type identifier; determining an I/O type of the I/O request based on the I/O type identifier; and processing the I/O request based on the determined I/O type. The present disclosure also provides a corresponding apparatus. The method and the apparatus according to the present disclosure can determine a storage policy of corresponding data based on different I/O types to improve the overall system performance.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: December 8, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Xinlei Xu, Liam Xiongcheng Li, Jian Gao, Lifeng Yang, Ruiyong Jia
  • Patent number: 10853252
    Abstract: In a hybrid storage array that implements hierarchical storage tiering, the eviction of host application data from cache is coordinated with promotion and demotion of host application data between hierarchical storage tiers. Optimal distribution of read cache size per different storage objects may be determined based on cache miss cost. The cost or benefit of promotion and demotion may be determined based on read cache hits and misses.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: December 1, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventor: Nickolay Dalmatov
  • Patent number: 10852965
    Abstract: A storage system comprising a plurality of storage devices and an associated storage controller. The plurality of storage devices are configured to store data blocks distributed across the plurality of storage devices in a plurality of data stripes. The plurality of data stripes comprise a first set of data stripes and a second set of data stripes. The storage controller is configured to receive data associated with at least one input-output request and to store the received data sequentially in at least one data stripe of the first set of data stripes. The controller is further configured to determine whether or not an amount of data stored in the first set of data stripes is greater than a threshold amount of data and, in response to determining that the amount of data stored in the first set of data stripes is greater than the threshold amount of data, to destage the at least one data stripe of the first set of data stripes to the second set of data stripes. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: December 1, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Boris Glimcher, Zvi Schneider, Amitai Alkalay, Kirill Shoikhet
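    Illustrative sketch (not part of the patent record): a minimal Python model of the two-set stripe scheme in patent 10852965, where writes land sequentially in a first set of stripes and a stripe is destaged to the second set once the staged amount exceeds a threshold. The stripe size, threshold, and oldest-stripe destage policy are illustrative assumptions.

      class StripedStore:
          def __init__(self, stripe_size=4, threshold=8):
              self.stripe_size = stripe_size
              self.threshold = threshold        # blocks allowed to sit in the first set
              self.first_set = [[]]             # staging stripes (lists of blocks)
              self.second_set = []              # destaged stripes

          def write(self, block):
              stripe = self.first_set[-1]
              if len(stripe) == self.stripe_size:
                  self.first_set.append([block])    # start a new stripe, still sequential
              else:
                  stripe.append(block)
              if self._staged_blocks() > self.threshold:
                  self._destage()

          def _staged_blocks(self):
              return sum(len(s) for s in self.first_set)

          def _destage(self):
              # Move the oldest staged stripe from the first set to the second set.
              self.second_set.append(self.first_set.pop(0))

      store = StripedStore()
      for i in range(10):
          store.write(f"blk{i}")
      assert len(store.second_set) == 1      # one stripe destaged after crossing the threshold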
  • Patent number: 10853254
    Abstract: There are provided a memory controller and a memory system having the same. A memory controller includes: a command queue for queuing commands and outputting command information including information of a previous command and a subsequent command; a command detector for outputting a detection signal according to the command information; and a command generator for generating the command and outputting a management command for managing a last command immediately following the previous command in response to the detection signal.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: December 1, 2020
    Assignee: SK hynix Inc.
    Inventor: Jeen Park
  • Patent number: 10853339
    Abstract: A method of negotiating memory record ownership between network nodes, comprising: storing in a memory of a first network node a subset of a plurality of memory records and one of a plurality of file system segments of a file system mapping the memory records; receiving a request from a second network node to access a memory record of the memory records subset; identifying the memory record by using the file system segment; deciding, by a placement algorithm, whether to relocate the memory record, from the memory records subset to a second subset of the plurality of memory records stored in a memory of the second network node; when a relocation is not decided, providing remote access of the memory record via a network to the second network node; and when a relocation is decided, relocating the memory record via the network for management by the second network node.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 1, 2020
    Assignee: NetApp Inc.
    Inventor: Amit Golander
  • Patent number: 10838763
    Abstract: A network interface device has an input configured to receive data from a network. The data is for one of a plurality of different applications. The network interface device also has at least one processor configured to determine into which of a plurality of available different caches in a host system the data is to be injected, by accessing a receive queue comprising at least one descriptor indicating a cache location in one of said plurality of caches to which data is to be injected, wherein said at least one descriptor, which indicates the cache location, has an effect on subsequent descriptors of said receive queue until a next descriptor indicates another cache location. The at least one processor is also configured to cause the data to be injected to the cache location in the host system.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: November 17, 2020
    Assignee: Xilinx, Inc.
    Inventors: Steven Leslie Pope, David James Riddoch
  • Patent number: 10824673
    Abstract: A system includes a non-volatile random access memory storing a column store main fragment of a column of a database table, and a processing unit to read the column store main fragment from the non-volatile random access memory. A volatile random access memory storing a column store delta fragment of the column of the database table may also be included, in which the processing unit is to write to the column store delta fragment. According to some systems, the stored column store main fragment is byte-addressable, and is copied from the volatile random access memory to the non-volatile random access memory without using a filesystem cache.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 3, 2020
    Assignee: SAP SE
    Inventors: Oliver Rebholz, Ivan Schreter, Abdelkader Sellami, Daniel Booss, Gunter Radestock, Peter Bumbulis, Alexander Boehm, Frank Renkes, Werner Thesing, Thomas Willhalm
  • Patent number: 10826992
    Abstract: A content management system for collecting files from one or more submitters in a collection folder. A collector, who generates the collection folder, can invite one or more submitters to submit one or more files to the collection folder via a customizable file request. The one or more submitters have limited rights to the collection folder. The limited rights can include uploading rights and prohibiting a submitter from viewing files that other submitters associated with the collection folder submitted. Thus, the collection folder is able to store files from the one or more submitters, but prevents them from viewing others' submissions.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: November 3, 2020
    Assignee: DROPBOX, INC.
    Inventors: Mindy Zhang, Pranav Piyush
  • Patent number: 10824555
    Abstract: A method for flash-aware heap memory management includes reserving a contiguous virtual space in a memory space of at least one process with a size equivalent to a size of a flash-based byte addressable device. The method also includes partitioning, by a host device, the memory space of the flash-based byte addressable device into multiple chunks. Each chunk includes multiple logical segments. The host device receives a memory allocation request from a thread associated with an application. The host device determines, from the multiple chunks, at least one chunk that includes the fewest free logical segments compared to the other chunks. The host device allocates to the thread the at least one chunk that includes the fewest free logical segments. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: November 3, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Vishak Guddekoppa, Arun George, Mitesh Sanjay Mutha, Rakesh Nadig
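    Illustrative sketch (not part of the patent record): a minimal Python model of the allocation choice in patent 10824555, picking the chunk with the fewest free logical segments that can still satisfy the request so allocations pack into already-used chunks. The Chunk class and segment counts are illustrative assumptions.

      class Chunk:
          def __init__(self, chunk_id, segments):
              self.chunk_id = chunk_id
              self.free = segments              # free logical segments remaining

      def allocate(chunks, needed_segments):
          candidates = [c for c in chunks if c.free >= needed_segments]
          if not candidates:
              raise MemoryError("no chunk can satisfy the request")
          # Least-free-first packs requests into the most-used chunk that still fits.
          chunk = min(candidates, key=lambda c: c.free)
          chunk.free -= needed_segments
          return chunk.chunk_id

      chunks = [Chunk(0, segments=16), Chunk(1, segments=3), Chunk(2, segments=8)]
      assert allocate(chunks, 2) == 1           # chunk 1 has the fewest free segments that fit
      assert allocate(chunks, 2) == 2           # chunk 1 now has only one free segment left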
  • Patent number: 10817429
    Abstract: A method, computer program product, and computing system for freeing up cache space includes identifying a portion of cache space for removal from a cache system, thus defining a cache portion to be removed, and ceasing to promote the cache portion to be removed. Data that needs to be relocated within the cache portion to be removed is identified, thus identifying flushable data. The flushable data is relocated to a backend storage system associated with the cache portion to be removed.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: October 27, 2020
    Assignee: EMC IP Holding Company, LLC
    Inventors: Xinlei Xu, Xiongcheng Li, John V. Harvey, Lifeng Yang, Jian Gao
  • Patent number: 10817224
    Abstract: Systems, methods, and computer programs are disclosed for scheduling decompression of an application from flash storage. One embodiment of a system comprises a flash memory device and a preemptive decompression scheduler component. The preemptive decompression scheduler component comprises logic configured to generate and store metadata defining one or more dependent objects associated with the compressed application in response to an application installer component installing a compressed application to the flash memory device. In response to a launch of the compressed application by an application launcher component, the preemptive decompression scheduler component determines from the stored metadata the one or more dependent objects associated with the compressed application to be launched. The preemptive decompression scheduler component preemptively schedules decompression of the one or more dependent objects based on the stored metadata.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: October 27, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Subrato Kumar De, Dexter Chun, Yanru Li
  • Patent number: 10812111
    Abstract: A semiconductor apparatus includes a storage unit, an ECC decoder, and a selection unit. The storage unit stores data. The ECC decoder can detect and correct an error of up to a predetermined number of bits in data outputted from the storage unit, and can detect an error spanning more bits than the predetermined number. The selection unit selects and outputs either the data outputted from the ECC decoder or a preset fixed value, in accordance with a detection signal indicating whether or not such a larger error is detected by the ECC decoder.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: October 20, 2020
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Keisyun Lin
  • Patent number: 10811176
    Abstract: A dust core includes a metal magnetic material, a resin, an insulation film, and an intermediate layer. The insulation film covers the metal magnetic material. The intermediate layer exists between the insulation film and the metal magnetic material and contacts therebetween. The metal magnetic material includes 85 to 99.5 wt % of Fe, 0.5 to 10 wt % of Si, and 0 to 5 wt % of other elements, with respect to 100 wt % of the entire metal magnetic material. The intermediate layer includes a Fe—Si—O based oxide. The insulation film includes a Si—O based oxide.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: October 20, 2020
    Assignee: TDK CORPORATION
    Inventors: Yousuke Futamata, Ryoma Nakazawa, Takeshi Takahashi, Junichi Shimamura
  • Patent number: 10810127
    Abstract: Solid-state drives (SSDs) and a data access method for SSDs are provided. The method includes the following. Cache acquired data-to-be-written to a preset write cache module. Rank the data-to-be-written in the write cache module according to a least recently used page (LRU) algorithm. When the data storage amount of the write cache module reaches a preset value, determine a preset number of replacement data among the infrequently used data-to-be-written according to a preset cache replacement algorithm. Write the replacement data into a flash memory of the SSD. Implementations of the present disclosure can effectively decrease the number of rewrites to the flash memory of the SSD, thereby effectively reducing write amplification during data access. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: October 20, 2020
    Assignee: SHENZHEN DAPU MICROELECTRONICS CO., LTD.
    Inventors: Haibo He, Qing Yang
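    Illustrative sketch (not part of the patent record): a minimal Python model of the write path in patent 10810127, where writes are absorbed in an LRU-ordered write cache and the least recently used entries are flushed to flash in a batch once the cache reaches a preset size. The capacity, batch size, and dict-backed flash are illustrative assumptions.

      from collections import OrderedDict

      class WriteCache:
          def __init__(self, capacity=4, evict_batch=2):
              self.capacity = capacity
              self.evict_batch = evict_batch
              self.cache = OrderedDict()        # LBA -> data, least recently used first
              self.flash = {}                   # stands in for the SSD flash array

          def write(self, lba, data):
              if lba in self.cache:
                  self.cache.move_to_end(lba)   # re-rank as most recently used
              self.cache[lba] = data
              if len(self.cache) >= self.capacity:
                  self._replace()

          def _replace(self):
              # Flush a batch of the least recently used entries to flash.
              for _ in range(self.evict_batch):
                  lba, data = self.cache.popitem(last=False)
                  self.flash[lba] = data

      wc = WriteCache()
      for lba in (10, 11, 12):
          wc.write(lba, b"x")
      wc.write(10, b"y")                        # rewrite absorbed in cache, no flash write
      wc.write(13, b"z")                        # reaches capacity: LBAs 11 and 12 go to flash
      assert sorted(wc.flash) == [11, 12]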
  • Patent number: 10810130
    Abstract: A cache memory device includes: data memory that stores cache data corresponding to data in main memory; tag memory that stores tag information to identify the cache data; an address estimation unit that estimates a look-ahead address to be accessed next; a cache hit determination unit that performs cache hit determination on the look-ahead address, based on the stored tag information; and an access controller that accesses the data memory or the main memory based on the retained cache hit determination result in response to a next access.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: October 20, 2020
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Tatsuhiro Tachibana
  • Patent number: 10795602
    Abstract: A computer-implemented method according to one embodiment includes, for each portion of data in a write cache: determining whether a given portion of data was added to the write cache prior to completion of a most recent flash copy operation. In response to determining that the given portion of data was not added to the write cache prior to completion of a most recent flash copy operation, a determination is made of whether the given portion of data has a clock bit value corresponding thereto. In response to determining that the given portion of data does not have a clock bit value corresponding thereto, a clock bit value is calculated for the given portion of data based on a current amount of unused storage capacity in the write cache. Moreover, in response to determining that the given portion of data has a clock bit value corresponding thereto, it is decremented. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: October 6, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin John Ash, Matthew G. Borlick
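    Illustrative sketch (not part of the patent record): a rough Python model of the clock-value maintenance in patent 10795602. Entries older than the most recent flash copy are skipped, entries without a clock value get one derived from the unused cache capacity, and entries that already have one are decremented. The specific scaling rule and data layout are assumptions, not the patented calculation.

      def update_clock_bits(cache, capacity, last_flash_copy_time):
          used = len(cache)
          free_ratio = (capacity - used) / capacity
          # Assumed rule: a fuller cache yields a smaller starting value, so entries age out sooner.
          initial = max(1, int(free_ratio * 4))
          for entry in cache.values():
              if entry["added_at"] < last_flash_copy_time:
                  continue                       # predates the most recent flash copy operation
              if entry["clock"] is None:
                  entry["clock"] = initial       # first visit: set from unused capacity
              else:
                  entry["clock"] = max(0, entry["clock"] - 1)   # later visits: decrement

      cache = {
          "t1": {"added_at": 5, "clock": None},
          "t2": {"added_at": 12, "clock": None},
          "t3": {"added_at": 14, "clock": 3},
      }
      update_clock_bits(cache, capacity=8, last_flash_copy_time=10)
      assert cache["t1"]["clock"] is None       # skipped: older than the flash copy
      assert cache["t2"]["clock"] == 2          # new value derived from 5/8 of capacity unused
      assert cache["t3"]["clock"] == 2          # existing value decremented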
  • Patent number: 10795708
    Abstract: A processing device in a host computer system receives an instruction to write data to a storage device coupled to the host computer system and store a copy of the data in a cache of the host computer system. The processing device initiates a write operation to write the data from the cache to the storage device and detects that the storage device is disconnected from the host computer system during execution of the write operation. In response to detecting that the storage device is disconnected, the processing device may suspend execution of at least one of a virtual machine or a process that issued the first instruction. After determining that the storage device is reconnected to the host computer system, the processing device can resume the write operation to continue writing the data from the cache to the storage device.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: October 6, 2020
    Assignee: PARALLELS INTERNATIONAL GMBH
    Inventors: Alexander Grechishkin, Konstantin Ozerkov, Alexey Koryakin, Nikolay Dobrovolskiy, Serguei Beloussov
  • Patent number: 10795610
    Abstract: A read request from a host system can be received. It can be detected that the read request is associated with a pattern of read requests. A requested transfer size associated with the read request can be identified. A size of data to retrieve can be determined. The size of the data can be based on the requested transfer size and a die-level transfer size associated with a die of a memory system.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: October 6, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Cory M. Steinmetz
  • Patent number: 10789176
    Abstract: Technologies for least recently used (LRU) cache replacement include a computing device with a processor with vector instruction support. The computing device retrieves a bucket of an associative cache from memory that includes multiple entries arranged from front to back. The bucket may be a 256-bit array including eight 32-bit entries. For lookups, a matching entry is located at a position in the bucket. The computing device executes a vector permutation processor instruction that moves the matching entry to the front of the bucket while preserving the order of other entries of the bucket. For insertion, an inserted entry is written at the back of the bucket. The computing device executes a vector permutation processor instruction that moves the inserted entry to the front of the bucket while preserving the order of other entries. The permuted bucket is stored to the memory. Other embodiments are described and claimed. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: September 29, 2020
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Tsung-Yuan Tai, Cristian Florin Dumitrescu, Xiangyang Guo
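    Illustrative sketch (not part of the patent record): a minimal Python model of the bucket permutation in patent 10789176. A plain list stands in for the eight-entry bucket and for the vector permutation instruction; a single reordering moves the touched entry to the front while preserving the relative order of the rest.

      def lookup(bucket, key):
          # Find the matching entry, then permute it to the front (most recently used slot).
          pos = bucket.index(key)               # assumes the key is present in the bucket
          bucket[:] = [bucket[pos]] + bucket[:pos] + bucket[pos + 1:]
          return pos

      def insert(bucket, key):
          # Write at the back, then permute to the front; the old back entry (LRU) is replaced.
          bucket[-1] = key
          bucket[:] = [bucket[-1]] + bucket[:-1]

      bucket = ["a", "b", "c", "d", "e", "f", "g", "h"]   # eight entries, front is most recent
      lookup(bucket, "d")
      assert bucket == ["d", "a", "b", "c", "e", "f", "g", "h"]
      insert(bucket, "x")                                  # evicts "h", the back entry
      assert bucket == ["x", "d", "a", "b", "c", "e", "f", "g"]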
  • Patent number: 10789170
    Abstract: Various techniques are directed to a storage management method, an electronic device and a computer readable medium. Such techniques may involve: receiving a request for a target storage block in a disk; obtaining, from a cache, a cache indicator indicating a state of a group of storage blocks including the target storage block, the number of bits occupied by the cache indicator in the cache being less than the number of storage blocks in the group of storage blocks; and responding to the request based on the cache indicator. Such techniques can reduce the number of accesses to the disk, thereby enhancing input/output performance. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: September 29, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Jianbin Kang, Hongpo Gao, Jian Gao, Lei Sun, Xiongcheng Li, Sheng Wang
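    Illustrative sketch (not part of the patent record): a minimal Python model of the compact cache indicator in patent 10789170, where one cached bit summarizes the state of a whole group of storage blocks so some requests never reach the disk. The chosen state (all blocks zeroed), the group size, and the Disk class are illustrative assumptions.

      GROUP_SIZE = 8            # blocks summarized per indicator bit (assumption)

      class Disk:
          def __init__(self, blocks):
              self.blocks = blocks
              self.reads = 0
          def read(self, i):
              self.reads += 1
              return self.blocks[i]

      def build_indicators(disk):
          # One bit per group: True means every block in the group is zeroed.
          bits = []
          for g in range(0, len(disk.blocks), GROUP_SIZE):
              bits.append(all(b == 0 for b in disk.blocks[g:g + GROUP_SIZE]))
          return bits

      def read_block(disk, indicators, i):
          if indicators[i // GROUP_SIZE]:
              return 0                      # whole group known zeroed: no disk access needed
          return disk.read(i)

      disk = Disk([0] * 8 + [7, 0, 0, 0, 0, 0, 0, 0])
      indicators = build_indicators(disk)   # [True, False]
      assert read_block(disk, indicators, 3) == 0 and disk.reads == 0
      assert read_block(disk, indicators, 8) == 7 and disk.reads == 1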
  • Patent number: 10783121
    Abstract: A method and system for an optimized transfer of data files to a cloud storage service (CSS) are presented. The method comprises dividing a data file into a plurality of data blocks; assigning a block code to each of the plurality of data blocks; generating, based on a contract with the CSS, a first list of block codes from the plurality of data blocks, wherein the contract defines at least data blocks guaranteed to exist in the CSS; querying the CSS with the first list of block codes; responsive to the query, receiving a second list of block codes from the CSS, wherein the second list of block codes includes block codes of data blocks designated in the first list of block codes but missing in the CSS; and transmitting to the CSS data blocks designated by their block codes in the second list. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: September 22, 2020
    Assignee: CTERA NETWORKS, LTD.
    Inventor: Aron Brand
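    Illustrative sketch (not part of the patent record): a minimal Python model of the block-code exchange in patent 10783121. The file is split into blocks, each block gets a code, the cloud side reports which codes it is missing, and only those blocks are uploaded. SHA-256 codes, the tiny block size, and the dict-backed CloudStorage stand-in are assumptions; the contract-based first list is omitted for brevity.

      import hashlib

      BLOCK_SIZE = 4            # deliberately tiny blocks for the example

      def block_codes(data):
          blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
          return {hashlib.sha256(b).hexdigest(): b for b in blocks}

      class CloudStorage:
          def __init__(self):
              self.blocks = {}                   # code -> block payload
          def missing(self, codes):              # the "second list": codes not present yet
              return [c for c in codes if c not in self.blocks]
          def upload(self, code, payload):
              self.blocks[code] = payload

      css = CloudStorage()
      first = block_codes(b"ABCDEFGH")
      for code, payload in first.items():        # initial upload sends every block
          css.upload(code, payload)

      second = block_codes(b"ABCDWXYZ")          # half the new file already exists remotely
      to_send = css.missing(list(second))
      for code in to_send:
          css.upload(code, second[code])
      assert len(to_send) == 1                   # only the changed block travelled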
  • Patent number: 10776267
    Abstract: Mirrored byte addressable storage is disclosed. For example, first and second persistent memories store first and second pluralities of pages, both associated with a plurality of page states in a mirror state log in a third persistent memory. A mirror engine executing on a processor with a processor cache detects a write fault associated with the first page of the first plurality of pages and in response, updates a first page state to a dirty-nosync state. A notice of a flush operation of the processor cache associated with first data is received. The first data becomes persistent in the first page of the first plurality of pages after the flush operation; then the first page state is updated to a clean-nosync state. The first data is then copied to the first page of the second plurality of pages; then the first page state is updated to a clean-sync state.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: September 15, 2020
    Assignee: Red Hat, Inc.
    Inventors: Jeffrey E. Moyer, Vivek Goyal
  • Patent number: 10778597
    Abstract: A multi-cloud orchestration system includes a computer executed set of instructions that communicates with multiple computing clouds and/or computing clusters each having one or more resources for executing an application. The instructions are executed to receive information associated with an application, allocate a resource pool to be used for executing the application in which the resource pool including at least one resource from each of the computing clouds and/or computing clusters. The instructions may be further executed to provision the resources to execute the application.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: September 15, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Michael Tan, Akshaya Mahapatra, Peng Liu, Gilbert Lau
  • Patent number: 10768838
    Abstract: When a logical capacity of a nonvolatile semiconductor memory is increased, after a logical capacity which is allocated to a RAID group but unused is released, the RAID group is reconfigured to include the released logical capacity and the increased logical capacity. When the logical capacity of the nonvolatile semiconductor memory is reduced, after the reduced logical capacity is released from the RAID group, the RAID group is reconfigured with the released logical capacity.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: September 8, 2020
    Assignee: HITACHI, LTD.
    Inventors: Shimpei Nomura, Masahiro Tsuruya, Akifumi Suzuki
  • Patent number: 10769112
    Abstract: The present invention discloses a method for deduplication of a file, a computer program product, and an apparatus thereof. In the method, the file is partitioned into at least one composite block, wherein the composite block includes a fixed-size block and a variable-size block, the variable-size block being determined based on content of the file. Then a deduplication operation is performed on the at least one composite block.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: September 8, 2020
    Assignee: International Business Machines Corporation
    Inventor: Guo Feng Zhu
  • Patent number: 10761990
    Abstract: Embodiments of the present disclosure relate to methods and devices for managing cache. The method comprises, in response to receiving a read request, determining whether data associated with the read request is present in a first cache, the first cache being a read-only cache. The method also comprises, in response to a miss of the data in the first cache, determining whether the data is present in a second cache, the second cache being a readable and writable cache. The method further comprises: in response to hitting the data in the second cache, returning the data as a response to the read request; and reading the data into the first cache. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: September 1, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Xinlei Xu, Jian Gao, Ruiyong Jia, Liam Xiongcheng Li, Jibing Dong
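    Illustrative sketch (not part of the patent record): a minimal Python model of the two-cache read path in patent 10761990, checking a read-only first cache, falling back to a readable and writable second cache, and reading the data into the first cache on a second-cache hit. The dict-backed caches and backend are illustrative assumptions.

      class TwoLevelReadPath:
          def __init__(self, backend):
              self.read_only = {}        # first cache: populated by reads only
              self.read_write = {}       # second cache: also absorbs writes
              self.backend = backend

          def write(self, key, value):
              self.read_write[key] = value

          def read(self, key):
              if key in self.read_only:                 # first-cache hit
                  return self.read_only[key]
              if key in self.read_write:                # second-cache hit
                  value = self.read_write[key]
              else:
                  value = self.backend[key]             # miss in both caches
              self.read_only[key] = value               # read the data into the first cache
              return value

      store = TwoLevelReadPath(backend={"a": 1})
      store.write("b", 2)
      assert store.read("b") == 2            # served from the readable and writable cache
      assert "b" in store.read_only          # now also present in the read-only cache
      assert store.read("a") == 1            # miss in both caches falls through to the backend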
  • Patent number: 10761989
    Abstract: Embodiments of the present disclosure provide a method of storage management, a storage system and a computer program product. The method comprises determining whether a number of I/O requests for a first page in a disk of a storage system exceeds a first threshold. The method further comprises: in response to determining that the number exceeds the first threshold, caching data in the first page to a first cache of the storage system; and storing metadata associated with the first page in a Non-Volatile Dual-In-Line Memory Module (NVDIMM) of the storage system.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: September 1, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Jian Gao, Lifeng Yang, Xinlei Xu, Liam Xiongcheng Li
  • Patent number: 10761735
    Abstract: An embodiment of the invention provides a method comprising: permitting an application to be aware of a distribution of data of the application across a cache and a permanent storage device. The cache comprises a solid state device and the permanent storage device comprises a disk or a memory. In yet another embodiment of the invention, an apparatus comprises: a caching application program interface configured to permit an application to be aware of a distribution of data of the application across a cache and a permanent storage device. The caching application program interface is configured to determine an input/output strategy to consume the data based on the distribution of the data.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: September 1, 2020
    Assignee: PrimaryIO, Inc.
    Inventors: Sumit Kumar, Sumit Kapoor
  • Patent number: 10754692
    Abstract: There are provided a memory controller and an operating method thereof. The memory controller includes a host interface layer for receiving a host program request and a host read request, a flash translation layer for generating and outputting a program command and a plurality of program addresses in response to the host program request, checking a program progress state for a program address corresponding to a target read address when the target read address corresponding to the host read request is included in the program addresses, and controlling a read operation on the target read address according to whether a program operation on the program address corresponding to the target read address has been completed, and a flash interface layer for transmitting a command and addresses, which are output from the flash translation layer, to a memory device.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: August 25, 2020
    Assignee: SK hynix Inc.
    Inventor: Joo Young Lee
  • Patent number: 10754792
    Abstract: Example implementations relate to persistent virtual address spaces. In one example, persistent virtual address spaces can employ a non-transitory processor readable medium including instructions to receive a whole data structure of a virtual address space (VAS) associated with a process, where the whole data structure includes data and metadata of the VAS, and store the data and the metadata of the VAS in a non-volatile memory to form a persistent VAS (PVAS).
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: August 25, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Izzat El Hajj, Alexander Merritt, Gerd Zellweger, Dejan S. Milojicic
  • Patent number: 10754730
    Abstract: Provided are a computer program product, system, and method for copying point-in-time data in a storage to a point-in-time copy data location in advance of destaging data to the storage. A point-in-time copy is created to maintain tracks in a source storage unit as of a point-in-time. A source copy data structure indicates tracks in the source storage unit to copy from the storage to a point-in-time data location. An update to write to a source track is received and a determination is made as to whether the source copy data structure indicates to copy the source track from the storage to the point-in-time data location. The update is written to a cache. A copy operation is initiated to copy the source track from the storage to the point-in-time data location asynchronously, before the source track is destaged from the cache to the storage unit. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: August 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Theresa M. Brown, Kevin Lin, David Fei, Nedlaya Y. Francisco
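    Illustrative sketch (not part of the patent record): a minimal Python model of the copy-before-destage idea in patent 10754730. When an update arrives for a track still marked in the source copy data structure, the original track contents are copied to the point-in-time location before the cached update can be destaged. The copy is shown synchronously for brevity, and the class and field names are illustrative assumptions.

      class PointInTimeCopy:
          def __init__(self, source):
              self.source = source                      # track -> data on storage
              self.pit_location = {}                    # preserved point-in-time data
              self.to_copy = set(source)                # source copy data structure
              self.cache = {}                           # dirty updates not yet destaged

          def write(self, track, data):
              if track in self.to_copy:
                  # Preserve the as-of-point-in-time data before it can be overwritten.
                  self.pit_location[track] = self.source[track]
                  self.to_copy.discard(track)
              self.cache[track] = data                  # the update lands in cache first

          def destage(self, track):
              self.source[track] = self.cache.pop(track)

      pit = PointInTimeCopy({"t0": b"old0", "t1": b"old1"})
      pit.write("t0", b"new0")
      pit.destage("t0")
      assert pit.source["t0"] == b"new0"
      assert pit.pit_location["t0"] == b"old0"          # point-in-time image preserved
      assert "t1" not in pit.pit_location               # an untouched track is never copied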
  • Patent number: 10756979
    Abstract: Embodiments of the present invention provide a method and apparatus for performing cross-layer orchestration of resources in a data center having a multi-layer architecture. The method comprises: performing unified control of all resources in all layers of the data center; performing unified storage of all topologies and machine-generated data of all layers of the data center; and orchestrating the resources of the data center based on the unified control and the unified storage. Embodiments of the present invention provide a higher-level orchestration than prior-art methods, and employ some functions provided by those methods to orchestrate a layered cloud data center in a unified manner when demand changes, in order to immediately provide a suitable capability.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: August 25, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Layne Lin Peng, Jie Bao, Grissom Tianqing Wang, Vivian Yun Zhang, Roby Qiyan Chen, Feng Golfen Guo, Kay Kai Yan, Yicang Wu
  • Patent number: 10754813
    Abstract: Methods, apparatus, and computer-accessible storage media for optimizing block storage I/O operations in a storage gateway. A write log may be implemented in a block store as a one-dimensional queue. A read cache may also be implemented in the block store. When non-ordered writes are received, sequential writes may be performed to the write log and the data may be written to contiguous locations on the storage. A metadata store may store metadata for the write log and the read cache. Reads may be satisfied from the write log if possible, or from the read cache or backend store if not. If blocks are read from the read cache or backend store to satisfy a read, the blocks may be mutated with data from the write log before being sent to the requesting process. The mutated blocks may be stored to the read cache. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: August 25, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: James Christopher Sorenson, III, Yun Lin, Satish Kumar Kotha, Ankur Khetrapal
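    Illustrative sketch (not part of the patent record): a minimal Python model of the gateway described in patent 10754813, appending writes to a one-dimensional write log and satisfying reads from the log when possible, otherwise from the backend. Block granularity, the metadata index, and the omission of a separate read cache are simplifying assumptions.

      class Gateway:
          def __init__(self, backend):
              self.backend = backend             # block -> bytes, the backing store
              self.write_log = []                # one-dimensional append-only queue
              self.log_index = {}                # metadata: block -> latest position in the log

          def write(self, block, data):
              self.log_index[block] = len(self.write_log)
              self.write_log.append((block, data))      # sequential write, never in place

          def read(self, block):
              if block in self.log_index:
                  # Satisfy the read from the write log when possible.
                  return self.write_log[self.log_index[block]][1]
              return self.backend[block]         # otherwise fall back to the backend store

      gw = Gateway(backend={0: b"base0", 1: b"base1"})
      gw.write(1, b"newer1")                     # non-ordered write lands sequentially in the log
      assert gw.read(1) == b"newer1"             # read reflects the newer write-log data
      assert gw.read(0) == b"base0"              # an untouched block comes from the backend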
  • Patent number: 10748874
    Abstract: A three-dimensional stacked integrated circuit (3D SIC) having a non-volatile memory die, a volatile memory die, a logic die, and a thermal management component. The non-volatile memory die, the volatile memory die, the logic die, and the thermal management component are stacked. The thermal management component can be stacked in between the non-volatile memory die and the logic die, stacked in between the volatile memory die and the logic die, or both.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: August 18, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 10740261
    Abstract: A system and method for early data pipeline lookup in large cache design are provided. An embodiment of the disclosure includes searching one or more tag entries of a tag array for a tag portion of the memory access request and, simultaneously with searching the tag array, searching a data work queue of a data array by comparing a set identifier portion of the memory access request with one or more data work queue entries stored in the data work queue, generating a pending work indicator indicating whether at least one data work queue entry exists in the data work queue that corresponds to the set identifier portion, and sending the memory access request to the data array or storing the memory access request in a side buffer associated with the tag array based on the pending work indicator and a search result of the tag array search.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: August 11, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Arkadi Avrukin, Thomas Zou
  • Patent number: 10733105
    Abstract: According to some embodiments, a backup storage system receives a request from a client at a storage system for accessing data segments. For each of one or more first groups of the requested data segments that are stored in a solid state device (SSD) cache, the system requests a first batch job to retrieve that first group of the data segments from the SSD cache via a first set of input/output (IO) threads. For each of one or more second groups of the requested data segments that are not stored in the SSD cache, the system requests a second batch job to retrieve that second group of the data segments from storage units of the storage system via a second set of input/output (IO) threads. The system assembles the received segments and returns them to the client together.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: August 4, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Satish Visvanathan, Rahul B. Ugale
  • Patent number: 10733185
    Abstract: A method for optimizing memory access for database operations is provided. The method may include identifying an access pattern associated with a database operation. The access pattern may include data required to perform the database operation. One or more memory pages may be generated based at least on the access pattern. The one or more memory pages may include at least a portion of the data required to perform the database operation. The one or more memory pages including at least the portion of the data required to perform the database operation may be stored in a main memory. The database operation may be performed by at least loading, from the main memory and into a cache, the one or more memory pages including at least the portion of the data required to perform the database operation. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: August 4, 2020
    Assignee: SAP SE
    Inventors: Georgios Psaropoulos, Thomas Legler, Norman May, Anastasia Ailamaki
  • Patent number: 10721300
    Abstract: A method and a system to optimize the transfer of data chunks between Source Devices and Destination Devices using a transfer administer is described, wherein the Source Devices, the Destination Devices and the transfer administrator are interspersed in a Collaborative Work Environment, and wherein the optimization is accomplished by performing radio frequency (RF) signal based handshakes between the Source Devices and the transfer administrator.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: July 21, 2020
    Assignee: ARC Document Solutions, LLC
    Inventors: Rahul Roy, Srinivasa Rao Mukkamala, Himadri Majumder, Dipali Bhattacharya