Caching Patents (Class 711/113)
  • Patent number: 10838763
    Abstract: A network interface device has an input configured to receive data from a network. The data is for one of a plurality of different applications. The network interface device also has at least one processor configured to determine into which of a plurality of available different caches in a host system the data is to be injected, by accessing a receive queue comprising at least one descriptor indicating a cache location in one of said plurality of caches to which data is to be injected, wherein said at least one descriptor, which indicates the cache location, has an effect on subsequent descriptors of said receive queue until a next descriptor indicates another cache location. The at least one processor is also configured to cause the data to be injected into the cache location in the host system.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: November 17, 2020
    Assignee: Xilinx, Inc.
    Inventors: Steven Leslie Pope, David James Riddoch
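    A minimal Python sketch of the "sticky" descriptor behavior in the abstract above: a descriptor that names a cache location governs the following descriptors until a later descriptor names a new location. The Descriptor fields and cache-location names are hypothetical, for illustration only.
    ```python
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Descriptor:
        buffer_addr: int
        cache_location: Optional[str] = None  # None means "inherit the last location"

    def inject(packets: List[bytes], receive_queue: List[Descriptor],
               default_location: str = "dram") -> List[tuple]:
        """Pair each received packet with the cache location governing its descriptor."""
        placements = []
        current = default_location
        for packet, desc in zip(packets, receive_queue):
            if desc.cache_location is not None:
                current = desc.cache_location      # new location stays in effect
            placements.append((desc.buffer_addr, current, packet))
        return placements

    # The second descriptor inherits core0's cache; the third switches to core1.
    queue = [Descriptor(0x1000, "core0_llc"), Descriptor(0x2000),
             Descriptor(0x3000, "core1_llc"), Descriptor(0x4000)]
    print(inject([b"a", b"b", b"c", b"d"], queue))
    ```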
  • Patent number: 10824555
    Abstract: A method for flash-aware heap memory management includes reserving a contiguous virtual space in a memory space of at least one process with a size equivalent to a size of a flash-based byte addressable device. The method also includes partitioning, by a host device, the memory space of the flash-based byte addressable device into multiple chunks. Each chunk includes multiple logical segments. The host device receives a memory allocation request from a thread associated with an application. The host device determines, from the multiple chunks, at least one chunk that includes the least free logical segments compared to the other chunks. The host device allocates to the thread the at least one chunk that includes the least free logical segments.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: November 3, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Vishak Guddekoppa, Arun George, Mitesh Sanjay Mutha, Rakesh Nadig
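    A hedged sketch of the chunk-selection policy described above, assuming hypothetical Chunk bookkeeping and an illustrative segment count: allocation favors the chunk with the fewest free logical segments, so writes stay packed into as few chunks as possible.
    ```python
    from dataclasses import dataclass, field
    from typing import List

    SEGMENTS_PER_CHUNK = 8  # illustrative value, not from the patent

    @dataclass
    class Chunk:
        chunk_id: int
        free_segments: List[int] = field(
            default_factory=lambda: list(range(SEGMENTS_PER_CHUNK)))

    def allocate_segment(chunks: List[Chunk]) -> tuple:
        """Pick the chunk with the fewest free logical segments that still has room."""
        candidates = [c for c in chunks if c.free_segments]
        if not candidates:
            raise MemoryError("no free logical segments")
        chunk = min(candidates, key=lambda c: len(c.free_segments))
        segment = chunk.free_segments.pop(0)
        return chunk.chunk_id, segment

    chunks = [Chunk(0), Chunk(1), Chunk(2)]
    chunks[1].free_segments = chunks[1].free_segments[:2]  # chunk 1 is nearly full
    print(allocate_segment(chunks))  # chunk 1 wins: it has the least free segments
    ```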
  • Patent number: 10824673
    Abstract: A system includes a non-volatile random access memory storing a column store main fragment of a column of a database table, and a processing unit to read the column store main fragment from the non-volatile random access memory. A volatile random access memory storing a column store delta fragment of the column of the database table may also be included, in which the processing unit is to write to the column store delta fragment. According to some systems, the stored column store main fragment is byte-addressable, and is copied from the volatile random access memory to the non-volatile random access memory without using a filesystem cache.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 3, 2020
    Assignee: SAP SE
    Inventors: Oliver Rebholz, Ivan Schreter, Abdelkader Sellami, Daniel Booss, Gunter Radestock, Peter Bumbulis, Alexander Boehm, Frank Renkes, Werner Thesing, Thomas Willhalm
  • Patent number: 10826992
    Abstract: A content management system for collecting files from one or more submitters in a collection folder. A collector, who generates the collection folder, can invite one or more submitters to submit one or more files to the collection folder via a customizable file request. The one or more submitters have limited rights to the collection folder. The limited rights can include uploading rights and prohibiting a submitter from viewing files that other submitters associated with the collection folder submitted. Thus, the collection folder is able to store files from the one or more submitters, but prevent them from viewing others' submissions.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: November 3, 2020
    Assignee: DROPBOX, INC.
    Inventors: Mindy Zhang, Pranav Piyush
  • Patent number: 10817429
    Abstract: A method, computer program product, and computing system for freeing up cache space includes identifying a portion of cache space for removal from a cache system, thus defining a cache portion to be removed, and ceasing to promote the cache portion to be removed. Data that needs to be relocated within the cache portion to be removed is identified, thus identifying flushable data. The flushable data is relocated to a backend storage system associated with the cache portion to be removed.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: October 27, 2020
    Assignee: EMC IP Holding Company, LLC
    Inventors: Xinlei Xu, Xiongcheng Li, John V. Harvey, Lifeng Yang, Jian Gao
  • Patent number: 10817224
    Abstract: Systems, methods, and computer programs are disclosed for scheduling decompression of an application from flash storage. One embodiment of a system comprises a flash memory device and a preemptive decompression scheduler component. The preemptive decompression scheduler component comprises logic configured to generate and store metadata defining one or more dependent objects associated with the compressed application in response to an application installer component installing a compressed application to the flash memory device. In response to a launch of the compressed application by an application launcher component, the preemptive decompression scheduler component determines from the stored metadata the one or more dependent objects associated with the compressed application to be launched. The preemptive decompression scheduler component preemptively schedules decompression of the one or more dependent objects based on the stored metadata.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: October 27, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Subrato Kumar De, Dexter Chun, Yanru Li
  • Patent number: 10810130
    Abstract: A cache memory device includes: data memory that stores cache data corresponding to data in main memory; tag memory that stores tag information to identify the cache data; an address estimation unit that estimates a look-ahead address to be accessed next; a cache hit determination unit that performs cache hit determination on the look-ahead address, based on the stored tag information; and an access controller that accesses the data memory or the main memory based on the retained cache hit determination result in response to a next access.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: October 20, 2020
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Tatsuhiro Tachibana
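    A toy model of the behavior described above: perform the cache hit determination for an estimated look-ahead address and retain the result for the next access. The sequential next-line estimator is an assumption for illustration; the abstract leaves the estimation method open.
    ```python
    class LookaheadCache:
        """Toy direct-mapped cache that pre-computes the hit/miss verdict for an
        estimated next address and reuses that verdict on the next access."""
        LINE = 64

        def __init__(self, num_lines=16):
            self.tags = [None] * num_lines     # tag memory
            self.pending = None                # retained (look-ahead addr, hit?) result

        def _index_tag(self, addr):
            line = addr // self.LINE
            return line % len(self.tags), line

        def access(self, addr):
            if self.pending and self.pending[0] == addr:
                hit = self.pending[1]          # reuse the retained determination
            else:
                idx, tag = self._index_tag(addr)
                hit = self.tags[idx] == tag
            idx, tag = self._index_tag(addr)
            if not hit:
                self.tags[idx] = tag           # fill the line from main memory
            # Estimate the next (sequential) address and pre-check it in the tags.
            look_ahead = addr + self.LINE
            la_idx, la_tag = self._index_tag(look_ahead)
            self.pending = (look_ahead, self.tags[la_idx] == la_tag)
            return hit

    c = LookaheadCache()
    print([c.access(a) for a in (0, 64, 128, 0)])  # [False, False, False, True]
    ```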
  • Patent number: 10811176
    Abstract: A dust core includes a metal magnetic material, a resin, an insulation film, and an intermediate layer. The insulation film covers the metal magnetic material. The intermediate layer exists between the insulation film and the metal magnetic material and contacts therebetween. The metal magnetic material includes 85 to 99.5 wt % of Fe, 0.5 to 10 wt % of Si, and 0 to 5 wt % of other elements, with respect to 100 wt % of the entire metal magnetic material. The intermediate layer includes a Fe—Si—O based oxide. The insulation film includes a Si—O based oxide.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: October 20, 2020
    Assignee: TDK CORPORATION
    Inventors: Yousuke Futamata, Ryoma Nakazawa, Takeshi Takahashi, Junichi Shimamura
  • Patent number: 10812111
    Abstract: A semiconductor apparatus includes a storage unit, an ECC decoder, and a selection unit. The storage unit stores data. The ECC decoder can detect and correct an error of a predetermined number of bits in data outputted from the storage unit, and can detect an error spanning more than the predetermined number of bits in the data. The selection unit selects and outputs either the data outputted from the ECC decoder or a preset fixed value, in accordance with a detection signal indicating whether or not an error spanning more than the predetermined number of bits is detected by the ECC decoder.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: October 20, 2020
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Keisyun Lin
  • Patent number: 10810127
    Abstract: Solid-state drives (SSD) and a data access method for an SSD are provided. The method includes the following: cache acquired data-to-be-written in a preset write cache module; rank the data-to-be-written in the write cache module according to a least recently used (LRU) page algorithm; when the data storage amount of the write cache module reaches a preset value, determine a preset number of replacement data among the infrequently used data-to-be-written according to a preset cache replacement algorithm; and write the replacement data into a flash memory of the SSD. Implementations of the present disclosure can effectively decrease the number of rewrites to the flash memory of the SSD, thereby effectively reducing the write amplification problem of the SSD during data access.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: October 20, 2020
    Assignee: SHENZHEN DAPU MICROELECTRONICS CO., LTD.
    Inventors: Haibo He, Qing Yang
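    A minimal sketch of the write-cache behavior described above, using Python's OrderedDict to stand in for the LRU ranking and a dict as the flash array; the capacity and batch sizes are illustrative assumptions.
    ```python
    from collections import OrderedDict

    class SSDWriteCache:
        """Write cache that defers flash writes and evicts LRU pages in batches."""

        def __init__(self, capacity=4, batch=2):
            self.capacity = capacity          # preset cache size (pages)
            self.batch = batch                # preset number of replacement pages
            self.pages = OrderedDict()        # page_id -> data, oldest first
            self.flash = {}                   # stand-in for the SSD flash array

        def write(self, page_id, data):
            # Re-inserting moves the page to the most-recently-used position.
            self.pages.pop(page_id, None)
            self.pages[page_id] = data
            if len(self.pages) >= self.capacity:
                self._replace()

        def _replace(self):
            # Choose the least recently used pages and write only them to flash.
            for _ in range(min(self.batch, len(self.pages))):
                victim_id, victim_data = self.pages.popitem(last=False)
                self.flash[victim_id] = victim_data

    cache = SSDWriteCache()
    for i, page in enumerate([10, 11, 10, 12, 13]):
        cache.write(page, f"v{i}")
    print(sorted(cache.flash))   # only the two coldest pages were flushed to flash
    ```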
  • Patent number: 10795610
    Abstract: A read request from a host system can be received. It can be detected that the read request is associated with a pattern of read requests. A requested transfer size associated with the read request can be identified. A size of data to retrieve can be determined. The size of the data can be based on the requested transfer size and a die-level transfer size associated with a die of a memory system.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: October 6, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Cory M. Steinmetz
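    One plausible reading of the size determination above, shown as a hedged sketch: when a read pattern is detected, round the requested transfer size up to a whole number of die-level transfers. The rounding rule is an assumption for illustration; the abstract only states that both sizes are inputs.
    ```python
    def prefetch_size(requested_bytes: int, die_transfer_bytes: int,
                      sequential_pattern: bool) -> int:
        """Round the read up to whole die-level transfers when a pattern is seen."""
        if not sequential_pattern:
            return requested_bytes
        transfers = -(-requested_bytes // die_transfer_bytes)  # ceiling division
        return transfers * die_transfer_bytes

    print(prefetch_size(4096, 16384, sequential_pattern=True))   # 16384
    print(prefetch_size(4096, 16384, sequential_pattern=False))  # 4096
    ```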
  • Patent number: 10795602
    Abstract: A computer-implemented method according to one embodiment includes, for each portion of data in a write cache: determining whether a given portion of data was added to the write cache prior to completion of a most recent flash copy operation. In response to determining that the given portion of data was not added to the write cache prior to completion of a most recent flash copy operation, a determination is made of whether the given portion of data has a clock bit value corresponding thereto. In response to determining that the given portion of data does not have a clock bit value corresponding thereto, a clock bit value is calculated for the given portion of data based on a current amount of unused storage capacity in the write cache. Moreover, in response to determining that the given portion of data has a clock bit value corresponding thereto, it is decremented.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: October 6, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin John Ash, Matthew G. Borlick
  • Patent number: 10795708
    Abstract: A processing device in a host computer system receives an instruction to write data to a storage device coupled to the host computer system and store a copy of the data in a cache of the host computer system. The processing device initiates a write operation to write the data from the cache to the storage device and detects that the storage device is disconnected from the host computer system during execution of the write operation. In response to detecting that the storage device is disconnected, the processing device may suspend execution of at least one of a virtual machine or a process that issued the instruction. After determining that the storage device is reconnected to the host computer system, the processing device can resume the write operation to continue writing the data from the cache to the storage device.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: October 6, 2020
    Assignee: PARALLELS INTERNATIONAL GMBH
    Inventors: Alexander Grechishkin, Konstantin Ozerkov, Alexey Koryakin, Nikolay Dobrovolskiy, Serguei Beloussov
  • Patent number: 10789176
    Abstract: Technologies for least recently used (LRU) cache replacement include a computing device with a processor with vector instruction support. The computing device retrieves a bucket of an associative cache from memory that includes multiple entries arranged from front to back. The bucket may be a 256-bit array including eight 32-bit entries. For lookups, a matching entry is located at a position in the bucket. The computing device executes a vector permutation processor instruction that moves the matching entry to the front of the bucket while preserving the order of other entries of the bucket. For insertion, an inserted entry is written at the back of the bucket. The computing device executes a vector permutation processor instruction that moves the inserted entry to the front of the bucket while preserving the order of other entries. The permuted bucket is stored to the memory. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: September 29, 2020
    Assignee: Intel Corporation
    Inventors: Ren Wang, Yipeng Wang, Tsung-Yuan Tai, Cristian Florin Dumitrescu, Xiangyang Guo
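    A plain-Python model of the bucket permutation described above (the real design uses a single vector permutation instruction): a lookup hit moves the matching entry to the front while preserving the relative order of the others, and an insertion is written at the back and then rotated to the front.
    ```python
    BUCKET_SIZE = 8  # eight 32-bit entries per 256-bit bucket

    def move_to_front(bucket, pos):
        """The permutation a vector shuffle performs in one instruction: the entry
        at `pos` goes to the front and earlier entries each shift back by one."""
        return [bucket[pos]] + bucket[:pos] + bucket[pos + 1:]

    def lookup(bucket, key):
        for pos, entry in enumerate(bucket):
            if entry == key:
                return move_to_front(bucket, pos), True   # hit: key is now MRU
        return bucket, False

    def insert(bucket, key):
        bucket = bucket[:BUCKET_SIZE - 1] + [key]         # write at the back...
        return move_to_front(bucket, BUCKET_SIZE - 1)     # ...then rotate to front

    b = [10, 11, 12, 13, 14, 15, 16, 17]
    b, hit = lookup(b, 14)
    print(hit, b)          # True [14, 10, 11, 12, 13, 15, 16, 17]
    b = insert(b, 99)      # overwrites the LRU entry at the back (17)
    print(b)               # [99, 14, 10, 11, 12, 13, 15, 16]
    ```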
  • Patent number: 10789170
    Abstract: Various techniques are directed to a storage management method, an electronic device and a computer readable medium. Such techniques may involve: receiving a request for a target storage block in a disk; obtaining, from a cache, a cache indicator indicating a state of a group of storage blocks including the target storage block, the number of bits occupied by the cache indicator in the cache being less than the number of storage blocks in the group of storage blocks; and responding to the request based on the cache indicator. Such techniques can reduce the number of accesses to the disk, thereby enhancing input/output performance.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: September 29, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Jianbin Kang, Hongpo Gao, Jian Gao, Lei Sun, Xiongcheng Li, Sheng Wang
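    A hedged sketch of how a compact cache indicator might answer block-state requests without touching the disk. The three-state per-group encoding is an assumption for illustration; the abstract only requires the indicator to occupy fewer bits than the group has blocks.
    ```python
    ALL_FREE, ALL_USED, MIXED = 0, 1, 2   # 2-bit indicator per group of blocks
    GROUP_SIZE = 64                       # one indicator covers 64 blocks (illustrative)

    def handle_request(block_id, indicators, read_disk_bitmap):
        """Answer 'is this block used?' from the compact cache when possible."""
        state = indicators[block_id // GROUP_SIZE]
        if state == ALL_FREE:
            return False                  # no disk access needed
        if state == ALL_USED:
            return True                   # no disk access needed
        return read_disk_bitmap(block_id) # mixed group: fall back to the disk

    indicators = [ALL_FREE, MIXED, ALL_USED]
    disk_hits = []
    def read_disk_bitmap(block_id):
        disk_hits.append(block_id)        # count how often the disk is touched
        return block_id % 2 == 0

    print(handle_request(10, indicators, read_disk_bitmap))    # group 0: False
    print(handle_request(70, indicators, read_disk_bitmap))    # group 1: disk read
    print(handle_request(130, indicators, read_disk_bitmap))   # group 2: True
    print("disk accesses:", disk_hits)                         # only one
    ```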
  • Patent number: 10783121
    Abstract: A method and system for an optimized transfer of data files to a cloud storage service (CSS) are presented. The method comprises dividing a data file into a plurality of data blocks; assigning a block code to each of the plurality of data blocks; generating, based on a contract with the CSS, a first list of block codes from the plurality of data blocks, wherein the contract defines at least data blocks guaranteed to exist in the CSS; querying the CSS with the first list of block codes; responsive to the query, receiving a second list of block codes from the CSS, wherein the second list of block codes includes block codes of data blocks designated in the first list of block codes but missing in the CSS; and transmitting to the CSS data blocks designated by their block codes in the second list.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: September 22, 2020
    Assignee: CTERA NETWORKS, LTD.
    Inventor: Aron Brand
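    A small sketch of the exchange described above, assuming SHA-256 digests serve as block codes and using an in-memory stand-in for the cloud storage service (CSS); the block size and class names are illustrative.
    ```python
    import hashlib

    BLOCK_SIZE = 4  # tiny for illustration; real systems use far larger blocks

    def block_code(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()  # block code = content digest

    def split(data: bytes):
        return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

    class CloudStorage:
        """Stand-in for the CSS: reports which of the offered block codes it lacks."""
        def __init__(self):
            self.blocks = {}
        def missing(self, codes):
            return [c for c in codes if c not in self.blocks]
        def upload(self, code, block):
            self.blocks[code] = block

    def transfer(data: bytes, css: CloudStorage, guaranteed: set) -> int:
        blocks = {block_code(b): b for b in split(data)}
        # First list: codes not already guaranteed to exist by the contract.
        first_list = [c for c in blocks if c not in guaranteed]
        second_list = css.missing(first_list)     # CSS replies with missing codes
        for code in second_list:                  # send only what is truly absent
            css.upload(code, blocks[code])
        return len(second_list)

    css = CloudStorage()
    sent = transfer(b"AAAABBBBAAAACCCC", css, guaranteed={block_code(b"CCCC")})
    print(sent, "blocks uploaded")                # duplicates and guaranteed skipped
    ```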
  • Patent number: 10776267
    Abstract: Mirrored byte addressable storage is disclosed. For example, first and second persistent memories store first and second pluralities of pages, both associated with a plurality of page states in a mirror state log in a third persistent memory. A mirror engine executing on a processor with a processor cache detects a write fault associated with the first page of the first plurality of pages and in response, updates a first page state to a dirty-nosync state. A notice of a flush operation of the processor cache associated with first data is received. The first data becomes persistent in the first page of the first plurality of pages after the flush operation; then the first page state is updated to a clean-nosync state. The first data is then copied to the first page of the second plurality of pages; then the first page state is updated to a clean-sync state.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: September 15, 2020
    Assignee: Red Hat, Inc.
    Inventors: Jeffrey E. Moyer, Vivek Goyal
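    A minimal sketch of the page-state transitions named in the abstract (dirty-nosync, clean-nosync, clean-sync), with dicts standing in for the two persistent memories and the mirror state log.
    ```python
    DIRTY_NOSYNC, CLEAN_NOSYNC, CLEAN_SYNC = "dirty-nosync", "clean-nosync", "clean-sync"

    class MirrorEngine:
        """Tracks one page's mirror state through the transitions in the abstract."""

        def __init__(self, pages):
            self.primary = {p: b"" for p in pages}           # first persistent memory
            self.mirror = {p: b"" for p in pages}            # second persistent memory
            self.state_log = {p: CLEAN_SYNC for p in pages}  # third persistent memory

        def on_write_fault(self, page):
            # A write is about to land; the mirror no longer matches.
            self.state_log[page] = DIRTY_NOSYNC

        def on_cache_flush(self, page, data):
            # The processor cache flush made the data persistent in the primary copy.
            self.primary[page] = data
            self.state_log[page] = CLEAN_NOSYNC

        def sync(self, page):
            # Copy the now-persistent data to the mirror, then mark it in sync.
            self.mirror[page] = self.primary[page]
            self.state_log[page] = CLEAN_SYNC

    m = MirrorEngine(pages=[0])
    m.on_write_fault(0)
    m.on_cache_flush(0, b"hello")
    m.sync(0)
    print(m.state_log[0], m.mirror[0])   # clean-sync b'hello'
    ```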
  • Patent number: 10778597
    Abstract: A multi-cloud orchestration system includes a computer executed set of instructions that communicates with multiple computing clouds and/or computing clusters each having one or more resources for executing an application. The instructions are executed to receive information associated with an application and allocate a resource pool to be used for executing the application, the resource pool including at least one resource from each of the computing clouds and/or computing clusters. The instructions may be further executed to provision the resources to execute the application.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: September 15, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Michael Tan, Akshaya Mahapatra, Peng Liu, Gilbert Lau
  • Patent number: 10769112
    Abstract: The present invention discloses a method for deduplication of a file, a computer program product, and an apparatus thereof. In the method, the file is partitioned into at least one composite block, wherein the composite block includes a fixed-size block and a variable-size block, the variable-size block being determined based on content of the file. Then a deduplication operation is performed on the at least one composite block.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: September 8, 2020
    Assignee: International Business Machines Corporation
    Inventor: Guo Feng Zhu
  • Patent number: 10768838
    Abstract: When a logical capacity of a nonvolatile semiconductor memory is increased, after a logical capacity which is allocated to a RAID group but unused is released, the RAID group is reconfigured to include the released logical capacity and the increased logical capacity. When the logical capacity of the nonvolatile semiconductor memory is reduced, after the reduced logical capacity is released from the RAID group, the RAID group is reconfigured with the released logical capacity.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: September 8, 2020
    Assignee: HITACHI, LTD.
    Inventors: Shimpei Nomura, Masahiro Tsuruya, Akifumi Suzuki
  • Patent number: 10761735
    Abstract: An embodiment of the invention provides a method comprising: permitting an application to be aware of a distribution of data of the application across a cache and a permanent storage device. The cache comprises a solid state device and the permanent storage device comprises a disk or a memory. In yet another embodiment of the invention, an apparatus comprises: a caching application program interface configured to permit an application to be aware of a distribution of data of the application across a cache and a permanent storage device. A caching application program interface is configured to determine an input/output strategy to consume the data based on the distribution of the data.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: September 1, 2020
    Assignee: PrimaryIO, Inc.
    Inventors: Sumit Kumar, Sumit Kapoor
  • Patent number: 10761989
    Abstract: Embodiments of the present disclosure provide a method of storage management, a storage system and a computer program product. The method comprises determining whether a number of I/O requests for a first page in a disk of a storage system exceeds a first threshold. The method further comprises: in response to determining that the number exceeds the first threshold, caching data in the first page to a first cache of the storage system; and storing metadata associated with the first page in a Non-Volatile Dual-In-Line Memory Module (NVDIMM) of the storage system.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: September 1, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Jian Gao, Lifeng Yang, Xinlei Xu, Liam Xiongcheng Li
  • Patent number: 10761990
    Abstract: Embodiments of the present disclosure relate to methods and devices for managing cache. The method comprises in response to receiving a read request, determining whether data associated with the read request is present in a first cache, the first cache being a read-only cache. The method also comprises in response to a miss of the data in the first cache, determining whether the data is present in a second cache, the second cache being a readable and writable cache. The method further comprises: in response to hitting the data in the second cache, returning the data as a response to the read request; and reading the data into the first cache.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: September 1, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Xinlei Xu, Jian Gao, Ruiyong Jia, Liam Xiongcheng Li, Jibing Dong
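    A hedged sketch of the read path described above: check the read-only first cache, then the readable/writable second cache, promoting a second-cache hit into the first cache. The fallback to a backing store on a double miss is an added assumption, not part of the abstract.
    ```python
    def read(key, first_cache: dict, second_cache: dict, backing_store: dict):
        """Read path: read-only cache first, then the readable/writable cache."""
        if key in first_cache:                 # hit in the read-only cache
            return first_cache[key]
        if key in second_cache:                # hit in the readable/writable cache
            data = second_cache[key]
            first_cache[key] = data            # read the data into the first cache
            return data
        data = backing_store[key]              # assumed fallback, not in the abstract
        first_cache[key] = data
        return data

    first, second, store = {}, {"a": b"dirty-but-readable"}, {"b": b"on-disk"}
    print(read("a", first, second, store))     # served from the second cache
    print(read("a", first, second, store))     # now served from the first cache
    print(read("b", first, second, store))     # double miss: backing store
    ```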
  • Patent number: 10754692
    Abstract: There are provided a memory controller and an operating method thereof. The memory controller includes a host interface layer for receiving a host program request and a host read request, a flash translation layer for generating and outputting a program command and a plurality of program addresses in response to the host program request, checking a program progress state for a program address corresponding to a target read address when the target read address corresponding to the host read request is included in the program addresses, and controlling a read operation on the target read address according to whether a program operation on the program address corresponding to the target read address has been completed, and a flash interface layer for transmitting a command and addresses, which are output from the flash translation layer, to a memory device.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: August 25, 2020
    Assignee: SK hynix Inc.
    Inventor: Joo Young Lee
  • Patent number: 10756979
    Abstract: Embodiments of the present invention provide a method and apparatus for performing cross-layer orchestration of resources in a data center having a multi-layer architecture. The method comprises: performing unified control of all resources in all layers of the data center; performing unified storage of all topologies and machine-generated data of all layers of the data center; and orchestrating the resources of the data center based on the unified control and the unified storage. Embodiments of the present invention provide higher-level orchestration than prior-art methods, and employ some functions provided by those methods to orchestrate a layered cloud data center in a unified manner when demand changes, so that a suitable capability can be provided immediately.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: August 25, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Layne Lin Peng, Jie Bao, Grissom Tianqing Wang, Vivian Yun Zhang, Roby Qiyan Chen, Feng Golfen Guo, Kay Kai Yan, Yicang Wu
  • Patent number: 10754813
    Abstract: Methods, apparatus, and computer-accessible storage media for optimizing block storage I/O operations in a storage gateway. A write log may be implemented in a block store as a one-dimensional queue. A read cache may also be implemented in the block store. When non-ordered writes are received, sequential writes may be performed to the write log and the data may be written to contiguous locations on the storage. A metadata store may store metadata for the write log and the read cache. Reads may be satisfied from the write log if possible, or from the read cache or backend store if not. If blocks are read from the read cache or backend store to satisfy a read, the blocks may be mutated with data from the write log before being sent to the requesting process. The mutated blocks may be stored to the read cache.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: August 25, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: James Christopher Sorenson, III, Yun Lin, Satish Kumar Kotha, Ankur Khetrapal
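    A compact sketch of the read path described above: a read is served from the read cache or backend and then mutated with any newer data held in the write log, and the mutated block is kept in the read cache. The block size and data structures are illustrative assumptions.
    ```python
    BLOCK = 16  # tiny block size for illustration

    def read_block(block_id, write_log, read_cache, backend):
        """Serve a read, overlaying any newer data held in the write log."""
        base = read_cache.get(block_id)
        if base is None:
            base = backend.get(block_id, b"\x00" * BLOCK)  # fetch from backend store
        block = bytearray(base)
        for logged_id, offset, data in write_log:          # sequential write log
            if logged_id == block_id:                      # mutate with newer data
                block[offset:offset + len(data)] = data
        mutated = bytes(block)
        read_cache[block_id] = mutated                     # store the mutated block
        return mutated

    write_log = [(7, 0, b"HELLO"), (7, 5, b" GW")]
    read_cache, backend = {}, {7: b"................"}
    print(read_block(7, write_log, read_cache, backend))   # b'HELLO GW........'
    print(7 in read_cache)                                 # True: mutated copy cached
    ```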
  • Patent number: 10754792
    Abstract: Example implementations relate to persistent virtual address spaces. In one example, persistent virtual address spaces can employ a non-transitory processor readable medium including instructions to receive a whole data structure of a virtual address space (VAS) associated with a process, where the whole data structure includes data and metadata of the VAS, and store the data and the metadata of the VAS in a non-volatile memory to form a persistent VAS (PVAS).
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: August 25, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Izzat El Hajj, Alexander Merritt, Gerd Zellweger, Dejan S. Milojicic
  • Patent number: 10754730
    Abstract: Provided are a computer program product, system, and method for copying point-in-time data in a storage to a point-in-time copy data location in advance of destaging data to the storage. A point-in-time copy is created to maintain tracks in a source storage unit as of a point-in-time. A source copy data structure indicates tracks in the source storage unit to copy from the storage to a point-in-time data location. An update to write to a source track is received and a determination is made as to whether the source copy data structure indicates to copy the source track from the storage to the point-in-time data location. The update is written to a cache. A copy operation is initiated to copy the source track from the storage to the point-in-time data location asynchronously before the source track is destaged from the cache to the storage unit.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: August 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Theresa M. Brown, Kevin Lin, David Fei, Nedlaya Y. Francisco
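    A simplified sketch of the copy-before-overwrite behavior described above. For brevity the point-in-time copy happens synchronously inside the write handler here, whereas the patent initiates it asynchronously ahead of destage; the names and structures are illustrative.
    ```python
    def write_update(track, new_data, storage, cache, copy_bitmap, pit_location):
        """Handle a write to a source track under an active point-in-time copy."""
        if copy_bitmap.get(track):                   # track still needs preserving
            pit_location[track] = storage[track]     # copy the as-of point-in-time data
            copy_bitmap[track] = False               # only the first write copies
        cache[track] = new_data                      # the update lands in cache first

    def destage(track, storage, cache):
        storage[track] = cache.pop(track)            # later: cache -> storage unit

    storage = {5: b"old"}
    cache, pit, bitmap = {}, {}, {5: True}
    write_update(5, b"new", storage, cache, bitmap, pit)
    destage(5, storage, cache)
    print(storage[5], pit[5])                        # b'new' b'old'
    ```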
  • Patent number: 10748874
    Abstract: A three-dimensional stacked integrated circuit (3D SIC) having a non-volatile memory die, a volatile memory die, a logic die, and a thermal management component. The non-volatile memory die, the volatile memory die, the logic die, and the thermal management component are stacked. The thermal management component can be stacked in between the non-volatile memory die and the logic die, stacked in between the volatile memory die and the logic die, or both.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: August 18, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 10740261
    Abstract: A system and method for early data pipeline lookup in large cache designs are provided. An embodiment of the disclosure includes searching one or more tag entries of a tag array for a tag portion of a memory access request and, simultaneously with searching the tag array, searching a data work queue of a data array by comparing a set identifier portion of the memory access request with one or more data work queue entries stored in the data work queue, generating a pending work indicator indicating whether at least one data work queue entry exists in the data work queue that corresponds to the set identifier portion, and sending the memory access request to the data array or storing the memory access request in a side buffer associated with the tag array based on the pending work indicator and a search result of the tag array search.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: August 11, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Arkadi Avrukin, Thomas Zou
  • Patent number: 10733185
    Abstract: A method for optimizing memory access for database operations is provided. The method may include identifying an access pattern associated with a database operation. The access pattern may include data required to perform the database operation. One or more memory pages may be generated based at least on the access pattern. The one or more memory pages may include at least a portion of the data required to perform the database operation. The one or more memory pages including at least the portion of the data required to perform the database operation may be stored in a main memory. The database operation may be performed by at least loading, from the main memory and into a cache, the one or more memory pages including at least the portion of the data required to perform the database operation. Related systems and articles of manufacture, including computer program products, are also provided.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: August 4, 2020
    Assignee: SAP SE
    Inventors: Georgios Psaropoulos, Thomas Legler, Norman May, Anastasia Ailamaki
  • Patent number: 10733105
    Abstract: According to some embodiments, a backup storage system receives a request from a client at a storage system for accessing data segments. For each of a first set of groups of the requested data segments that are stored in a solid state device (SSD) cache, the system issues a first batch job to retrieve that group of data segments from the SSD cache via a first set of input/output (IO) threads. For each of a second set of groups of the requested data segments that are not stored in the SSD cache, the system issues a second batch job to retrieve that group of data segments from storage units of the storage system via a second set of IO threads. The system assembles the received segments and returns them to the client together.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: August 4, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Satish Visvanathan, Rahul B. Ugale
  • Patent number: 10721300
    Abstract: A method and a system to optimize the transfer of data chunks between Source Devices and Destination Devices using a transfer administrator is described, wherein the Source Devices, the Destination Devices and the transfer administrator are interspersed in a Collaborative Work Environment, and wherein the optimization is accomplished by performing radio frequency (RF) signal based handshakes between the Source Devices and the transfer administrator.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: July 21, 2020
    Assignee: ARC Document Solutions, LLC
    Inventors: Rahul Roy, Srinivasa Rao Mukkamala, Himadri Majumder, Dipali Bhattacharya
  • Patent number: 10714176
    Abstract: Aspects of the present disclosure configure a media controller of a memory component to skip execution of a read-write cycle for specific data if the media controller has not observed at least one prior data modification request from a memory sub-system controller that causes modification of the specific data. For example, a media controller of a first memory component can be configured to include a data modification tracker to monitor a memory channel for data modification requests to a second memory component and to track data modification requests that have been observed by the media controller on the memory channel, where the memory channel may be one shared by the first and second memory components.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: July 14, 2020
    Assignee: Micron Technology, Inc.
    Inventor: Jeffrey Frederiksen
  • Patent number: 10712971
    Abstract: In an example method, write commands for a solid-state storage medium having storage regions are received. Selected write commands are filtered out according to criteria. The selected write commands are cached. Writing pursuant to the selected write commands is aggregated to within boundaries of one of the storage regions of the storage medium.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: July 14, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Christoph J Graham, Thomas J Flynn, Virginia Q Herrera
  • Patent number: 10706063
    Abstract: A system for contextual data collection and extraction is provided, comprising an extraction engine configured to receive context from a user for desired information to extract, connect to a data source providing a richly formatted dataset, retrieve the richly formatted dataset, process the richly formatted dataset and extract information from a plurality of linguistic modalities within the richly formatted dataset, and transform the extracted data into an extracted dataset; and a knowledge base construction service configured to retrieve the extracted dataset, create a knowledge base for storing the extracted dataset, and store the knowledge base in a data store.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: July 7, 2020
    Assignee: QOMPLX, INC.
    Inventors: Jason Crabtree, Andrew Sellers
  • Patent number: 10705977
    Abstract: Examples may include techniques to improve cache performance in a computing system. An eviction service may be used to manage a dirty list and a clean list, set a cache line to hot, set a cache line to clean, set a cache line to dirty, and evict a cache line from the cache. A cache engine may be used to write data into the cache at a cache line, request the eviction service to set the cache line to dirty, and manage a dirty cache lines counter for each chunk of the primary memory. A cleaning thread may be used to determine a dirtiest chunk of a primary memory, get a cache line of the dirtiest chunk, and when the cache line of the dirtiest chunk is dirty, read the cache line to get data from the cache, write the data to primary memory, request the eviction service to set the cache line to clean, and manage the dirty cache lines counters.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: July 7, 2020
    Assignee: Intel Corporation
    Inventors: Mariusz Barczak, Igor Konopko, Adam Rutkowski
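    A hedged sketch of the dirty-chunk bookkeeping and one cleaning-thread pass: the cache engine maintains per-chunk dirty-line counters, and the cleaner flushes the dirtiest chunk's lines back to primary memory and marks them clean. Chunk size and structures are illustrative assumptions.
    ```python
    from collections import defaultdict

    class Cache:
        def __init__(self, lines_per_chunk=4):
            self.lines_per_chunk = lines_per_chunk
            self.data = {}                             # line address -> cached data
            self.dirty = set()                         # dirty line addresses
            self.dirty_per_chunk = defaultdict(int)    # chunk id -> dirty-line counter

        def write(self, addr, value):
            self.data[addr] = value
            if addr not in self.dirty:                 # cache engine bumps the counter
                self.dirty.add(addr)
                self.dirty_per_chunk[addr // self.lines_per_chunk] += 1

    def clean_one_chunk(cache: Cache, primary_memory: dict):
        """One pass of the cleaning thread: flush the dirtiest chunk's lines."""
        if not cache.dirty_per_chunk:
            return None
        chunk = max(cache.dirty_per_chunk, key=cache.dirty_per_chunk.get)
        for addr in sorted(a for a in cache.dirty
                           if a // cache.lines_per_chunk == chunk):
            primary_memory[addr] = cache.data[addr]    # write back to primary memory
            cache.dirty.discard(addr)                  # eviction service: set clean
        del cache.dirty_per_chunk[chunk]
        return chunk

    cache, memory = Cache(), {}
    for addr in (0, 1, 2, 9):                          # chunk 0 gets three dirty lines
        cache.write(addr, f"v{addr}")
    print(clean_one_chunk(cache, memory), sorted(memory))  # 0 [0, 1, 2]
    ```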
  • Patent number: 10698831
    Abstract: Embodiments of the present disclosure relate to a method and device for data access. The method comprises determining whether target data stored in a non-volatile storage device is cached in a memory. The target data is organized in a first level of a multi-way tree in the storage device. The method further comprises, in response to determining that the target data is missing in the memory, moving the target data from the storage device into the memory. In addition, the method comprises, in response to the target data being accessed from the memory, adding a reference to the target data to a first list, the first list recording a sequence for accessing data in the first level.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: June 30, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Qiaosheng Zhou, Junping Zhao, Xinlei Xu, Wilson Hu, Jun Wu
  • Patent number: 10698826
    Abstract: The disclosure is related to storage devices employing file-aware drivers. In one example, a device may comprise a driver configured to retrieve file system information related to an input/output (I/O) command, determine storage attributes based on the file system information, and store selected data in a preferred region of a data storage medium based on the storage attributes. Another embodiment may be a method comprising inspecting characteristics of an I/O request for a file, setting storage attributes for the file based on whether the file is preferred, and storing the file on a data storage medium based on the storage attributes.
    Type: Grant
    Filed: April 5, 2012
    Date of Patent: June 30, 2020
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Daniel Robert McLeran, Steven Scott Williams
  • Patent number: 10701177
    Abstract: Techniques for recovering from session failures between clients and database servers are described herein. A session may be established between a client and a first database server to handle a database query for the client. A command of the session may be received by the first database server from the client. Data requested by the command may be retrieved. Prior to responding to the command, the data is spooled to a session state stored in a repository of the first database server, and the session state is replicated to one or more additional database servers. The session state stored in the repository of the first database server enables the first database server and client to recover from a failure of the session. The replicated session state enables the additional database server(s) to reestablish the session and respond to the command, instead of the first database server, if the session fails.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: June 30, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew Alban Neerincx, Luiz Fernando Federico Dos Santos, Oleg Ignat, David Bruce Lomet, Quetzalcoatl Bradley, Raghu Ram, Chadwin James Mumford, Peter Gvozdjak, Balendran Mugundan
  • Patent number: 10698629
    Abstract: Systems, methods, and non-transitory computer readable media are configured to determine a request corresponding to a portion of data. A placement configuration associated with the portion of data can be determined. The placement configuration can belong to a set of placement configurations. A datacenter identified by the placement configuration can be selected. Subsequently, the portion of data can be accessed at the selected datacenter.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: June 30, 2020
    Assignee: Facebook, Inc.
    Inventors: Muthukaruppan Annamalai, Harish Srinivas, Kaushik Ravichandran, Igor A. Zinkovsky, Luning Pan
  • Patent number: 10698837
    Abstract: The disclosure relates to a method and apparatus for processing memory, an electronic device, and a computer-readable storage medium. The method includes acquiring reclaimable memory pages occupied by an application to be processed; acquiring an idle duration for each reclaimable memory page of the application to be processed; determining a duration threshold according to the idle durations of the reclaimable memory pages; and selecting from the reclaimable memory pages a memory page whose idle duration exceeds the duration threshold and reclaiming that memory page. The above-mentioned method, apparatus, electronic device, and computer-readable storage medium may minimize the adverse impact on each application, thereby maintaining a balance between memory reclamation and application operation.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: June 30, 2020
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD
    Inventors: Pan Fang, Yan Chen
  • Patent number: 10698613
    Abstract: A host system performs I/O processing functions traditionally performed on storage systems. Metadata about data stored on the storage system may be stored on the host system, including metadata about the data stored in a cache of the storage system. The SSI may be configured to determine whether an I/O operation is a read or write operation. If the I/O operation is a read operation, the SSI may determine from metadata stored thereon whether the data to be read is in cache. If the data is in cache, the SSI may read the data directly from cache over the internal fabric without use of CPU resources of a director, and, in some embodiments, without use of a director at all. If the data is not in cache, the SSI may read the data directly from the physical storage device over the internal fabric without use of a director.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: June 30, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Ian Wigmore, Alesia A. Tringale, Jason J. Duquette
  • Patent number: 10691615
    Abstract: A system includes reception of a first request to synchronize content from the persistent memory system to the volatile memory system, and, in response to the first request, retrieval of the content from the persistent memory system and store the content in the volatile memory system. A create, read, update or delete operation is performed on the content stored in the volatile memory system to generate modified content in the volatile memory system, a second request to synchronize content is received from the volatile memory system to the persistent memory system, and, in response to the second request, the modified content is retrieved from the volatile memory system and the modified content is stored in the persistent memory system.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: June 23, 2020
    Assignee: SAP SE
    Inventor: Johnson Wong
  • Patent number: 10691622
    Abstract: A computing device requests access to an application object from a remote storage system in order to locally execute application functionality without hosting application resources. An accessed object is associated with an intent in the storage system and locked. Locking an object in combination with an intent prevents computing devices that are not performing the intent from accessing the object. An intent defines one or more operations to be performed with the requested object, which are serialized as intent steps and stored in the storage system. Upon executing an intent step, the computing device stores a log entry at the storage system signifying the step's completion. A locked object remains locked until the log entries indicate every intent step as complete. Different computing devices can unlock a locked object by executing any incomplete steps of an intent associated with the locked object.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lidong Zhou, Jacob R. Lorch, Jinglei Ren, Parveen Kumar Patel, Srinath Setty
  • Patent number: 10684996
    Abstract: A distributed storage system maintains multiple logically independent file systems. Each file system includes a data set stored by a storage device of the distributed storage system. During operation, access pattern levels for the multiple logically independent file systems are determined. Thereafter, the data sets included in the multiple logically independent file systems are redistributed across multiple storage devices of the distributed storage. Redistribution of a particular data set is based at least in part on the particular file system including the particular data set and on the determined access pattern levels for the multiple logically independent file systems. In addition, each disk of a plurality of disks in the distributed storage includes a physically separated partition dedicated to storing the data of the file system that is most frequently accessed. The distribution of data is based at least in part on the presence of the physically separated partition.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: June 16, 2020
    Assignee: Quantcast Corporation
    Inventor: Silvius V. Rus
  • Patent number: 10671526
    Abstract: An electronic computing device, a method for adjusting the trigger mechanism of a garbage collection function, and a non-transitory computer readable storage medium thereof are provided. A storage unit of the electronic computing device stores a whitelist. The whitelist records a plurality of data sets, wherein each of the data sets has a name of an application and an offset value of the application. A processor of the electronic computing device executes a system program. The system program loads the whitelist into a memory during an initialization procedure. The system program detects that a specific application is triggered and retrieves the offset value of the specific application from the whitelist in the memory according to the name of the specific application. The system program forks a process to the specific application and updates a threshold of a garbage collection function according to the offset value.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: June 2, 2020
    Assignee: HTC CORPORATION
    Inventors: Wen-Yuan Ho, Yi-Fan Zhang, Xiao-Ting Hong
  • Patent number: 10664189
    Abstract: A method for improving I/O performance in synchronous data replication environments is disclosed. In one embodiment, such a method includes receiving write data into a primary write cache of a primary storage system. The method synchronously mirrors the write data from the primary write cache to a secondary write cache of a secondary storage system. The method further detects when the primary write cache is full. When the primary write cache is full, the method temporarily uses the primary read cache of the primary storage system to store incoming write data. This incoming write data is mirrored from the primary read cache to the secondary write cache of the secondary storage system. A corresponding system and computer program product are also disclosed herein.
    Type: Grant
    Filed: August 27, 2018
    Date of Patent: May 26, 2020
    Assignee: International Business Machines Corporation
    Inventor: Xue Qiang Zhou
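    A minimal sketch of the overflow behavior described above: writes normally land in the primary write cache and are synchronously mirrored to the secondary write cache; when the primary write cache is full, incoming writes are staged temporarily in the primary read cache and still mirrored. Capacities and structures are illustrative assumptions.
    ```python
    class PrimaryStorage:
        def __init__(self, write_cache_capacity=2):
            self.write_cache = {}
            self.read_cache = {}
            self.capacity = write_cache_capacity

        def handle_write(self, key, data, secondary_write_cache: dict):
            if len(self.write_cache) < self.capacity:
                self.write_cache[key] = data      # normal path
            else:
                self.read_cache[key] = data       # write cache full: borrow read cache
            secondary_write_cache[key] = data     # synchronous mirror either way

    primary, secondary = PrimaryStorage(), {}
    for i in range(3):
        primary.handle_write(i, f"data{i}", secondary)
    print(sorted(primary.write_cache), sorted(primary.read_cache), sorted(secondary))
    # [0, 1] [2] [0, 1, 2]
    ```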
  • Patent number: 10656848
    Abstract: A method for avoiding data loss in a storage system is disclosed. In one embodiment, such a method includes monitoring a degradation level associated with a battery. The battery provides backup power to a storage system in the event of a primary power outage. The storage system includes volatile storage media storing modified data to destage to more persistent storage media, such as an array of storage drives. In the event the degradation level crosses a designated threshold, the method automatically takes steps to alter a time period needed to completely copy the modified data off of the volatile storage media. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: June 17, 2018
    Date of Patent: May 19, 2020
    Assignee: International Business Machines Corporation
    Inventors: Matthew G. Borlick, Micah Robison, John C. Elliott, Kevin J. Ash, Lokesh M. Gupta, Brian A. Rinaldi
  • Patent number: 10656849
    Abstract: By omitting a duplication process for compressed data, cache access frequency is reduced and throughput is improved. A storage system includes first and second control units and a storage drive. Upon receiving a data write command, the first control unit stores the data to be subjected to the write command in a first cache area of the first control unit and stores the data in a second cache area of the second control unit to perform duplication. Upon completion of the duplication, the first control unit transmits a response indicating an end of write, performs a predetermined process on the data to be subjected to the write command, stores the data in a buffer area, reads the data stored in the buffer area, and transmits the read data to the storage drive.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: May 19, 2020
    Assignee: Hitachi, Ltd.
    Inventors: Kazuki Matsugami, Yoshihiro Yoshii, Nobumitsu Takaoka, Tomohiro Kawaguchi