Caching Patents (Class 711/113)
  • Patent number: 11335376
    Abstract: A data storage device includes a primary storage media, a drive storage controller electrically coupled to media recording electronics and a controller-override mechanism. The controller-override mechanism is selectively controllable by a user to override control actions of the drive storage controller to prevent the drive storage controller from altering the primary storage media at a time when the storage device is otherwise configured for nominal data storage operations.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: May 17, 2022
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Riyan Alex Mendonsa, Brett R. Herdendorf, Jon D. Trantham, Kevin Lee Van Pelt
  • Patent number: 11334497
    Abstract: Client data is structured as a set of data blocks. A first subset of data blocks is stored on a current segment of a plurality of disks. A second subset of data blocks is stored on a previous segment. A request to clean client data is received. The request includes a request to update the current segment to include the second subset of data blocks. The second subset of data blocks is accessed and transmitted from a lower layer to a higher layer of the system. Parity data is generated at the higher layer. The parity data is transmitted to the lower layer. The lower layer is employed to generate a local copy of the second subset of data blocks. Each local address that references the local copy of the second subset of data blocks is included in the current segment. The parity data is written in the current segment.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: May 17, 2022
    Assignee: VMware, Inc.
    Inventors: Wenguang Wang, Vamsidhar Gunturu
  • Patent number: 11327660
    Abstract: A storage system having high reliability and IO processing performance is realized. The storage system includes: a first arithmetic unit configured to receive an input and output request and perform data input and output processing; a first memory connected to the first arithmetic unit; a plurality of storage drives configured to store data; a second arithmetic unit; and a second memory connected to the second arithmetic unit. The first arithmetic unit instructs the storage drive to read data, the storage drive reads the data and stores the data in the second memory, the second arithmetic unit stores the data stored in the second memory in the first memory, and the first arithmetic unit transmits the data stored in the first memory to a request source of a read request for the data.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: May 10, 2022
    Assignee: HITACHI, LTD.
    Inventors: Takashi Nagao, Yuusaku Kiyota, Hideaki Monji, Tomohiro Yoshihara
  • Patent number: 11327950
    Abstract: A system for ledger data includes a block repository, a metadata database, and a processor. The block repository stores verified secure ledger data in one or more blocks that are cryptographically linked. The metadata database stores metadata information for the one or more blocks in the block repository. The processor is configured to receive an indication to check data in a block and to mark the block as being verified in the metadata information associated with the block.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: May 10, 2022
    Assignee: Workday, Inc.
    Inventors: Parvinder Singh Thapar, Bradley Hoyle, Dirk Nicholas Dougherty
  • Patent number: 11327905
    Abstract: A computing device requests access to an application object from a remote storage system in order to locally execute application functionality without hosting application resources. An accessed object is associated with an intent in the storage system and locked. Locking an object in combination with an intent prevents computing devices that are not performing the intent from accessing the object. An intent defines one or more operations to be performed with the requested object, which are serialized as intent steps and stored in the storage system. Upon executing an intent step, the computing device stores a log entry at the storage system signifying the step's completion. A locked object remains locked until the log entries indicate every intent step as complete. Different computing devices can unlock a locked object by executing any incomplete steps of an intent associated with the locked object.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: May 10, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Lidong Zhou, Jacob R. Lorch, Jinglei Ren, Parveen Kumar Patel, Srinath Setty
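A minimal Python sketch of the intent-and-lock pattern the 11327905 abstract describes: an object is locked together with a serialized list of intent steps, log entries mark step completion, and any client can unlock the object by finishing the incomplete steps. The class names, step strings, and in-memory store are invented for illustration and are not the patented implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Intent:
    steps: list                                        # serialized intent steps
    completed: set = field(default_factory=set)        # indices logged as complete

@dataclass
class StoredObject:
    data: dict
    intent: Optional[Intent] = None                    # a present intent means "locked"

class RemoteStore:
    """Toy stand-in for the remote storage system holding application objects."""
    def __init__(self):
        self.objects = {}

    def lock_with_intent(self, key: str, steps: list) -> None:
        obj = self.objects[key]
        if obj.intent is not None:
            raise RuntimeError("object is locked by an in-progress intent")
        obj.intent = Intent(steps)                     # lock the object with its intent

    def log_step_done(self, key: str, step_index: int) -> None:
        intent = self.objects[key].intent
        intent.completed.add(step_index)               # log entry marks the step complete
        if len(intent.completed) == len(intent.steps):
            self.objects[key].intent = None            # every step logged: unlock

    def finish_incomplete_steps(self, key: str, execute) -> None:
        # Any computing device can unlock the object by executing the
        # incomplete steps of the intent associated with it.
        intent = self.objects[key].intent
        if intent is None:
            return
        for i, step in enumerate(intent.steps):
            if i not in intent.completed:
                execute(step)
                self.log_step_done(key, i)

store = RemoteStore()
store.objects["doc"] = StoredObject(data={"text": "hi"})
store.lock_with_intent("doc", ["append-paragraph", "update-index"])
store.finish_incomplete_steps("doc", execute=print)    # executes and logs each remaining step
```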
  • Patent number: 11321133
    Abstract: Provided are a computer program product, system, and method for using a machine learning module to determine an allocation of stage and destage tasks. Storage performance information related to processing of Input/Output (I/O) requests with respect to the storage unit is provided to a machine learning module. The machine learning module receives a computed number of stage tasks and a computed number of destage tasks. A current number of stage tasks allocated to stage tracks from the storage unit to the cache is adjusted based on the computed number of stage tasks. A current number of destage tasks allocated to destage tracks from the cache to the storage unit is adjusted based on the computed number of destage tasks.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: May 3, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Matthew G. Borlick, Kevin J. Ash
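A minimal Python sketch of the feedback loop in the 11321133 abstract: storage performance information is fed to a learning module, which returns computed stage and destage task counts that the current allocations are then adjusted toward. The `PerfStats` fields, the heuristic inside `LearningModule`, and the one-step adjustment rule are invented stand-ins, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class PerfStats:
    """Storage performance information provided to the learning module."""
    read_hit_ratio: float    # fraction of reads served from cache
    dirty_ratio: float       # fraction of cache occupied by modified tracks
    avg_latency_ms: float    # recent average I/O latency

class LearningModule:
    """Stand-in for the machine learning module; any trained regressor could back this."""
    def compute_task_counts(self, stats: PerfStats) -> tuple:
        # Toy heuristic in place of model inference: more misses -> more stage tasks,
        # more dirty data -> more destage tasks.
        stage = max(1, round(16 * (1.0 - stats.read_hit_ratio)))
        destage = max(1, round(16 * stats.dirty_ratio))
        return stage, destage

class TaskAllocator:
    def __init__(self, module: LearningModule, stage_tasks: int = 4, destage_tasks: int = 4):
        self.module = module
        self.stage_tasks = stage_tasks      # tasks staging tracks from storage to cache
        self.destage_tasks = destage_tasks  # tasks destaging tracks from cache to storage

    def adjust(self, stats: PerfStats) -> None:
        computed_stage, computed_destage = self.module.compute_task_counts(stats)
        # Nudge the current allocations toward the computed numbers.
        self.stage_tasks += (computed_stage > self.stage_tasks) - (computed_stage < self.stage_tasks)
        self.destage_tasks += (computed_destage > self.destage_tasks) - (computed_destage < self.destage_tasks)

allocator = TaskAllocator(LearningModule())
allocator.adjust(PerfStats(read_hit_ratio=0.55, dirty_ratio=0.80, avg_latency_ms=3.2))
print(allocator.stage_tasks, allocator.destage_tasks)   # 5 5
```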
  • Patent number: 11321234
    Abstract: Provided are a computer program product, system, and method for using a mirroring cache list to demote modified tracks from cache. A modified track for a primary storage, stored in the cache to mirror to a secondary storage, is indicated in a mirroring cache list. The mirroring cache list is processed to select modified tracks in the cache that have not yet been transferred to the secondary storage. The selected modified tracks in the cache are transferred to the secondary storage. The mirroring cache list is processed to determine modified tracks in the cache to demote from the cache.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: May 3, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh Mohan Gupta, Kevin J. Ash, Kyler A. Anderson, Matthew J. Kalos
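A rough Python sketch of how a mirroring cache list like the one in 11321234 might drive both transfer and demotion: modified tracks are appended to the list, untransferred entries are selected for mirroring, and already-mirrored entries become demotion candidates. The data structure and method names are assumptions made for illustration.

```python
from collections import OrderedDict

class MirroringCacheList:
    """Ordered list of modified cached tracks awaiting mirroring and demotion."""
    def __init__(self):
        self._tracks = OrderedDict()        # track id -> True once mirrored, oldest first

    def indicate_modified(self, track_id: int) -> None:
        self._tracks[track_id] = False      # modified in cache, not yet transferred

    def select_for_transfer(self, count: int) -> list:
        batch = [t for t, mirrored in self._tracks.items() if not mirrored][:count]
        for t in batch:
            self._tracks[t] = True          # assume the transfer to secondary storage succeeds
        return batch

    def select_for_demotion(self, count: int) -> list:
        victims = [t for t, mirrored in self._tracks.items() if mirrored][:count]
        for t in victims:
            del self._tracks[t]             # safe to demote: already mirrored
        return victims

mlist = MirroringCacheList()
for track in (10, 11, 12):
    mlist.indicate_modified(track)
print(mlist.select_for_transfer(2))   # [10, 11]
print(mlist.select_for_demotion(2))   # [10, 11]
```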
  • Patent number: 11314764
    Abstract: A system for contextual data collection and extraction is provided, comprising an extraction engine configured to receive context from a user for desired information to extract, connect to a data source providing a richly formatted dataset, retrieve the richly formatted dataset, process the richly formatted dataset and extract information from a plurality of linguistic modalities within it, and transform the extracted data into an extracted dataset; and a knowledge base construction service configured to retrieve the extracted dataset, create a knowledge base for storing the extracted dataset, and store the knowledge base in a data store.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: April 26, 2022
    Assignee: QOMPLX, INC.
    Inventors: Jason Crabtree, Andrew Sellers
  • Patent number: 11314546
    Abstract: A technique for executing a containerized stateful application that is deployed on a stateless computing platform is disclosed. The technique involves deploying a containerized stateful application on a stateless computing platform and executing the stateful application on the stateless computing platform. During execution of the stateful application, the technique also involves: evaluating, in an application virtualization layer, events that are generated during execution to identify events that may trigger a change in state of the stateful application; updating a set of storage objects in response to the evaluations; and comparing events that are generated by the stateful application to the set of storage objects and redirecting a storage object that corresponds to an event to a persistent data store if the storage object matches a storage object in the set of storage objects.
    Type: Grant
    Filed: November 18, 2016
    Date of Patent: April 26, 2022
    Assignee: DATA ACCELERATOR LTD
    Inventors: Priya Saxena, Matthew Philip Clothier
  • Patent number: 11294822
    Abstract: Disclosed is a method of operating a non-volatile memory device. According to an embodiment of the present disclosure, a method of operating a non-volatile memory device that includes a log storage area, a data storage area, and an ACK generation unit may include receiving a log and data from a cache memory, storing the received log in the log storage area, storing the received data in the data storage area, and transmitting an ACK signal to the cache memory according to a result of storing the log and the data.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: April 5, 2022
    Assignee: Research & Business Foundation Sungkyunkwan University
    Inventors: Tae Hee Han, Jeong Beom Hong, Yong Wook Kim, Min Gu Kang, Jo Eun Lee
  • Patent number: 11294570
    Abstract: Data compression is performed on a storage system for which one or more host systems have direct access to data on the storage system. The storage system may compress the data for one or more logical storage units (LSUs) having data stored thereon, and may update compression metadata associated with the LSUs and/or the data portions thereof to reflect that the data is compressed. In response to a read request for a data portion received from a host application executing on the host system, compression metadata for the data portion may be accessed. If it is determined from the compression metadata that the data portion is compressed, the data compression metadata for the data portion may be further analyzed to determine how to decompress the data portion. The data portion may be retrieved and decompressed, and the decompressed data may be returned to the requesting application.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: April 5, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Ian Wigmore, Gabriel Benhanokh, Arieh Don, Alesia A. Tringale
  • Patent number: 11288017
    Abstract: In certain aspects, a data storage device is provided including a distributed controller configured to communicate with a main controller; and first and second memory devices of respective first and second non-volatile memory technologies. The first and second memory devices are coupled to the distributed controller configured to control access to the first and second memory devices. In certain aspects, a system is provided including a main controller; first and second distributed controllers coupled to the main controller; at least one first memory device coupled to the first distributed controller; and at least one second memory device coupled to the second distributed controller. The main controller is configured to control access to the first and second distributed controllers. The first and second distributed controllers are configured to control access to the respective at least one first and second memory devices that include at least two non-volatile memory technologies.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: March 29, 2022
    Assignee: SMART IOPS, INC.
    Inventors: Manuel Antonio d'Abreu, Ashutosh Kumar Das
  • Patent number: 11281497
    Abstract: Provided are a computer program product, system, and method for using a machine learning module to determine an allocation of stage and destage tasks. Storage performance information related to processing of Input/Output (I/O) requests with respect to the storage unit is provided to a machine learning module. The machine learning module receives a computed number of stage tasks and a computed number of destage tasks. A current number of stage tasks allocated to stage tracks from the storage unit to the cache is adjusted based on the computed number of stage tasks. A current number of destage tasks allocated to destage tracks from the cache to the storage unit is adjusted based on the computed number of destage tasks.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Matthew G. Borlick, Kevin J. Ash
  • Patent number: 11262919
    Abstract: Client data is structured as a set of data blocks. A first subset of data blocks is stored on a current segment of a plurality of disks. A second subset of data blocks is stored on a previous segment. A request to clean client data is received, including a request to update the current segment to include the second subset of data blocks. The second subset of data blocks is accessed and transmitted from a lower layer to a higher system layer. Parity data is generated at the higher layer. The parity data is transmitted to the lower layer. The lower layer updates second mapping data. In the updated mapping of the second mapping data, each local address that references a data block of the second subset of data blocks is included in the current segment of the plurality of disks. The lower layer writes the parity data in the current segment.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: March 1, 2022
    Assignee: VMware, Inc.
    Inventors: Wenguang Wang, Vamsidhar Gunturu
  • Patent number: 11263097
    Abstract: Provided are a computer program product, system, and method for using a track format code in a cache control block for a track in a cache to process read and write requests to the track in the cache. A track format table associates track format codes with track format metadata. A determination is made as to whether the track format table has track format metadata matching track format metadata of a track staged into the cache. A determination is made as to whether a track format code from the track format table for the track format metadata in the track format table matches the track format metadata of the track staged. A cache control block for the track being added to the cache is generated including the determined track format code when the track format table has the matching track format metadata.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 1, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos, Beth A. Peterson
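A minimal Python sketch of the track-format-code idea in 11263097: a small table maps full track format metadata to a compact code, and the cache control block for a newly cached track stores only the code when a match exists. The format byte strings and dictionary-based control block are invented for illustration.

```python
from typing import Optional

class TrackFormatTable:
    """Associates compact track format codes with full track format metadata."""
    def __init__(self, known_formats: list):
        self.code_to_meta = dict(enumerate(known_formats))
        self.meta_to_code = {meta: code for code, meta in self.code_to_meta.items()}

    def lookup_code(self, metadata: bytes) -> Optional[int]:
        return self.meta_to_code.get(metadata)

def build_cache_control_block(track_id: int, metadata: bytes, table: TrackFormatTable) -> dict:
    """Build the control block for a track being added to the cache."""
    code = table.lookup_code(metadata)
    if code is not None:
        # Matching entry found: store only the compact code so later read/write
        # requests can resolve the format without re-reading the full metadata.
        return {"track": track_id, "format_code": code}
    # No matching track format metadata in the table: carry the full metadata instead.
    return {"track": track_id, "format_metadata": metadata}

table = TrackFormatTable([b"CKD-std-v1", b"CKD-large-v2"])
print(build_cache_control_block(7, b"CKD-std-v1", table))      # compact format_code
print(build_cache_control_block(8, b"unknown-format", table))  # falls back to full metadata
```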
  • Patent number: 11263146
    Abstract: Systems for accessing client data are described. A request to access a first data block is received. The request indicates a first logical address referencing the first data block. First mapping data is employed to identify a first physical address corresponding to the first logical address. The first mapping data encodes a first LOM transaction ID and a candidate local address. The first mapping data is employed to identify the candidate local address and the first LOM transaction ID. A usage table is employed to determine the current status of the first LOM transaction ID. The candidate local address is employed to access the first data block. Second mapping data is employed to identify an updated local address of the set of local addresses. The updated local address currently references the first data block. The updated local address is employed to access the first data block.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: March 1, 2022
    Assignee: VMware, Inc.
    Inventors: Wenguang Wang, Eric Knauft, Vamsidhar Gunturu
  • Patent number: 11263139
    Abstract: A processing system includes a cache, a host memory, a CPU and a hardware accelerator. The CPU accesses the cache and the host memory and generates at least one instruction. The hardware accelerator operates in a non-temporal access mode or a temporal access mode according to the access behavior of the instruction. The hardware accelerator accesses the host memory through an accelerator interface when the hardware accelerator operates in the non-temporal access mode, and accesses the cache through the accelerator interface when the hardware accelerator operates in the temporal access mode.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: March 1, 2022
    Assignee: SHANGHAI ZHAOXIN SEMICONDUCTOR CO., LTD.
    Inventors: Di Hu, Zongpu Qi, Wei Zhao, Jin Yu, Lei Meng
  • Patent number: 11249834
    Abstract: An apparatus includes at least one processing device comprising a processor coupled to a memory, with the processing device being configured to maintain at least first and second journals for respective first and second different types of input-output requests, to move one or more entries between the first journal and the second journal under one or more specified conditions, to perform a clean-up operation for at least one of the first and second journals in conjunction with the moving of the one or more entries, and responsive to a failure occurring during the clean-up operation, to execute a contention resolution algorithm to resolve logical address range lock contentions between different entries of the first and second journals. The processing device illustratively comprises a storage controller of a storage system. The storage system may be, for example, a source storage system configured to carry out a synchronous replication process with a target storage system.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: February 15, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Xiangping Chen, Svetlana Kronrod
  • Patent number: 11250016
    Abstract: Systems, methods, and articles of manufacture provide for simplified and partially-automated data operation services, such as data transfer, storage, management, and analysis operations. Non-IT data consumers may, for example, initiate such data operations by providing only a subset of the required parameters for the operation, with the specially-coded system automatically fetching any missing parameters or values from one or more metadata stores and initiating the requested operation.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: February 15, 2022
    Assignee: The Travelers Indemnity Company
    Inventors: Venu Challagolla, Venkatraman Raman
  • Patent number: 11249812
    Abstract: Methods, systems, and computer-readable storage media for determining, by an application instance, that first data is to be requested, transmitting, by a total outage compensator of the application instance, one or more requests for the first data to one or more peer application instances, receiving, by the total outage compensator, a response to a request for the first data, the response including the first data, and executing, by the application instance, at least one function based on the first data.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: February 15, 2022
    Assignee: SAP SE
    Inventors: Peter Eberlein, Volker Driesen
  • Patent number: 11245774
    Abstract: Described herein are systems and techniques to efficiently cache data for streaming applications. A cache can be organized to include multiple cache segments, and each cache segment can include multiple cache blocks. A cache entry can be created for streaming data, and the streaming data can be streamed directly into a first cache block. When the first cache block is full, a next cache block can be identified, in a same cache segment or in a new cache segment. The streaming data can be streamed directly into the next cache block, and into any further cache blocks as needed.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: February 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventor: Andrei Paduroiu
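A compact Python sketch of the segment/block layout described in 11245774: streamed bytes are appended directly into the current cache block, and a full block rolls over to the next block in the same segment or to a new segment. The block size, blocks-per-segment count, and `bytearray` representation are assumptions for illustration.

```python
class StreamingCache:
    """Cache organized as segments, each holding a fixed number of blocks."""
    def __init__(self, block_size: int = 4096, blocks_per_segment: int = 8):
        self.block_size = block_size
        self.blocks_per_segment = blocks_per_segment
        self.segments = [[bytearray()]]               # start with one segment, one open block

    def _current_block(self) -> bytearray:
        return self.segments[-1][-1]

    def _advance_block(self) -> None:
        if len(self.segments[-1]) < self.blocks_per_segment:
            self.segments[-1].append(bytearray())     # next block in the same segment
        else:
            self.segments.append([bytearray()])       # segment full: open a new segment

    def append(self, data: bytes) -> None:
        # Stream data directly into blocks, spilling into further blocks as needed.
        while data:
            room = self.block_size - len(self._current_block())
            if room == 0:
                self._advance_block()
                continue
            self._current_block().extend(data[:room])
            data = data[room:]

cache = StreamingCache(block_size=4, blocks_per_segment=2)
cache.append(b"streaming-bytes")
print([len(seg) for seg in cache.segments])   # [2, 2]: blocks used in each segment
```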
  • Patent number: 11243973
    Abstract: A system for contextual data collection and extraction is provided, comprising an extraction engine configured to receive context from a user for desired information to extract, connect to a data source providing a richly formatted dataset, retrieve the richly formatted dataset, process the richly formatted dataset and extract information from a plurality of linguistic modalities within it, and transform the extracted data into an extracted dataset; and a knowledge base construction service configured to retrieve the extracted dataset, create a knowledge base for storing the extracted dataset, and store the knowledge base in a data store.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: February 8, 2022
    Assignee: QOMPLX, INC.
    Inventors: Jason Crabtree, Andrew Sellers
  • Patent number: 11244717
    Abstract: Methods, systems, and devices for write operation techniques for memory systems are described. In some memory systems, write operations performed on target memory cells of the memory device may disturb logic states stored by one or more adjacent memory cells. Such disturbances may cause reductions in read margins when accessing one or more memory cells, or may cause a loss of data in one or more memory cells. The described techniques may reduce aspects of logic state degradation by supporting operational modes where a host device, a memory device, or both, refrains from writing information to a region of a memory array, or inhibits write commands associated with write operations on a region of a memory array.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: February 8, 2022
    Assignee: Micron Technology, Inc.
    Inventors: Zhongyuan Lu, Christina Papagianni, Hongmei Wang, Robert J. Gleixner
  • Patent number: 11237917
    Abstract: A method includes obtaining data associated with a volatile storage device and a non-volatile storage device of an information handling system during a normal mode of operation of the information handling system, and calculating a first data transfer frequency and a first transfer data size from the volatile storage device to the non-volatile storage device based on the data associated with the volatile storage device and the non-volatile storage device during the normal mode of operation of the information handling system. The method also includes detecting an event indicating a power outage of the information handling system, and in response to detecting the event, determining a data management policy to be applied to the information handling system during a safe mode of operation of the information handling system.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: February 1, 2022
    Assignee: Dell Products L.P.
    Inventors: Prasoon Kumar Sinha, Karthik Sethuramalingam, Suman Lal Banik, Ravishankar Kanakapura N
  • Patent number: 11238010
    Abstract: A method is disclosed comprising: generating a first snapshot of a first storage subsystem; detecting, by a management node, that all in-flight data storage requests recorded in drain tables of storage nodes in the first storage subsystem have been completed, the in-flight data storage requests recorded in the drain tables of the storage nodes being replicated in a second storage subsystem; causing, by the management node, each of the storage nodes to flip the respective designations of the tracking tables in the node's respective pair of tracking tables; and transmitting, from the management node to the second storage subsystem, an instruction which when received by the second storage subsystem causes the second storage subsystem to generate a second snapshot of the second storage subsystem.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: February 1, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Ying Hu, Xiangping Chen
  • Patent number: 11231867
    Abstract: Techniques for processing write operations may include: receiving, at a first data storage system, a first write operation that writes first data to a first device, wherein the first device is configured for replication on a second device of a second data storage system; performing first processing that determines whether the first data written by the first write operation is a duplicate of an existing entry in a first hash table of the first data storage system; responsive to determining the first data written by the first write operation is a duplicate of an existing entry in the first hash table, performing second processing; responsive to determining the first data written by the first write operation is unique and is not a duplicate of an existing entry in the first hash table, performing third processing; and transmitting the final buffer to the second data storage system.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: January 25, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Venkata L R Ippatapu, Kenneth Dorman
  • Patent number: 11232054
    Abstract: In certain embodiments, a memory module includes a printed circuit board (PCB) having an interface that couples it to a host system for provision of power, data, address and control signals. First, second, and third buck converters receive a pre-regulated input voltage and produce first, second and third regulated voltages. A converter circuit reduces the pre-regulated input voltage to provide a fourth regulated voltage. Synchronous dynamic random access memory (SDRAM) devices are coupled to one or more regulated voltages of the first, second, third and fourth regulated voltages, and a voltage monitor circuit monitors an input voltage and produces a signal in response to the input voltage having a voltage amplitude that is greater than a threshold voltage.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: January 25, 2022
    Assignee: NETLIST, INC.
    Inventors: Chi-She Chen, Jeffrey C. Solomon, Scott H. Milton, Jayesh Bhakta
  • Patent number: 11226899
    Abstract: Provided are a computer program product, system, and method for populating a second cache with tracks from a first cache when transferring management of the tracks from a first node to a second node. Management of a first group of tracks in the storage managed by the first node is transferred to the second node managing access to a second group of tracks in the storage. After the transferring the management of the tracks, the second node manages access to the first and second groups of tracks and caches accessed tracks from the first and second groups in the second cache of the second node. The second cache of the second node is populated with the tracks in a first cache of the first node.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: January 18, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Matthew J. Kalos, Brian A. Rinaldi
  • Patent number: 11221886
    Abstract: Embodiments for optimizing dynamic resource allocations in a disaggregated computing environment. A new workload is assigned to a subset of a plurality of processors, the subset of processors assigned a subset of a plurality of cache devices. A determination is made that the new workload is categorized as a cache-friendly workload having a memory need which can be met primarily by the subset of cache devices by identifying that underlying data necessitated by the new workload resides primarily within the subset of cache devices. Pursuant to determining the new workload is the cache-friendly workload, a cache related action is performed to increase performance of the new workload executed by the subset of processors and, commensurately, of additional workloads performed by other ones of the plurality of processors within the disaggregated computing environment.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: January 11, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John A. Bivens, Ruchi Mahindru, Eugen Schenfeld, Min Li, Valentina Salapura
  • Patent number: 11209982
    Abstract: Operating a data storage system comprising a plurality of disk drives and a storage controller connected to the disk drives. A first subset and a second subset of the plurality of disk drives are operated as short stroked disk drives and non-short stroked disk drives, respectively. Priority storage spaces are defined including a high priority storage space, a medium priority storage space, and a low priority storage space. Data is received including associated access rates for each portion of the data. One of the priority storage spaces is identified to store a portion of the data, based on the access rates for each portion of the data. Data accessed most frequently is stored in the high priority storage space, data accessed least frequently is stored in the low priority storage space, and the remaining data is stored in the medium priority storage space.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: John P. Agombar, Ian Boden, Alastair Cooper, Gordon D. Hutchison
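A simple Python sketch of the placement rule in 11209982: data portions are routed into high, medium, or low priority storage spaces by access rate, with the high priority space backed by the short-stroked subset of drives. The thresholds, units, and function name are invented assumptions.

```python
def assign_priority_space(portions: dict,
                          hot_threshold: int = 1000,
                          cold_threshold: int = 10) -> dict:
    """Map each data portion to a priority storage space by its access rate.

    portions: portion id -> accesses per interval (the interval is an assumption).
    """
    placement = {}
    for portion, rate in portions.items():
        if rate >= hot_threshold:
            placement[portion] = "high"     # most frequently accessed: short-stroked drives
        elif rate <= cold_threshold:
            placement[portion] = "low"      # least frequently accessed
        else:
            placement[portion] = "medium"   # everything else
    return placement

print(assign_priority_space({"a": 5000, "b": 3, "c": 120}))
# {'a': 'high', 'b': 'low', 'c': 'medium'}
```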
  • Patent number: 11210227
    Abstract: A method for demoting data from a cache comprising heterogeneous memory types is disclosed. The method maintains, for a data element in the cache, a write access count that is incremented each time the data element is updated in the cache. The cache includes a higher performance portion and a lower performance portion. The method removes the data element from the higher performance portion in accordance with a cache demotion algorithm. If the data element also resides in the lower performance portion and the write access count is below a first threshold, the method leaves the data element in the lower performance portion. If the data element also resides in the lower performance portion and the write access count is at or above the first threshold, the method removes the data element from the lower performance portion. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: December 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kyler A. Anderson, Kevin J. Ash
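A minimal Python sketch of the demotion rule in 11210227: when a data element is removed from the higher performance portion, its write access count decides whether any copy in the lower performance portion stays or is also evicted. The threshold value, dictionary representation, and method names are assumptions; the selection of the demotion victim itself is left to whatever cache demotion algorithm is in use.

```python
class TieredCache:
    """Cache with a higher performance portion and a lower performance portion."""
    def __init__(self, write_threshold: int = 8):
        self.write_threshold = write_threshold
        self.higher = {}         # element id -> data (e.g., DRAM portion)
        self.lower = {}          # element id -> data (e.g., storage-class memory portion)
        self.write_counts = {}   # element id -> writes while cached

    def write(self, key, value) -> None:
        self.higher[key] = value
        self.write_counts[key] = self.write_counts.get(key, 0) + 1   # increment on each update

    def copy_to_lower(self, key) -> None:
        if key in self.higher:
            self.lower[key] = self.higher[key]   # keep a copy in the lower portion

    def demote_from_higher(self, key) -> None:
        """Called when the cache demotion algorithm selects `key` for removal."""
        self.higher.pop(key, None)
        if key not in self.lower:
            return
        if self.write_counts.get(key, 0) >= self.write_threshold:
            # At or above the threshold: remove the element from the lower
            # performance portion as well.
            del self.lower[key]
        # Below the threshold, the element simply stays in the lower portion.
```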
  • Patent number: 11210012
    Abstract: A data storage device includes: a first level memory including a first zone and a second zone, a size ratio of the first zone to the second zone being dynamically adjusted; a second level memory including a third zone and a fourth zone, a size ratio of the third zone to the fourth zone being dynamically adjusted according to the size ratio of the first zone to the second zone; and a controller configured to control data movement among the first to fourth zones, compare a counting value obtained based on the data movement with a reference value, and dynamically adjust the size ratio of the first zone to the second zone and the size ratio of the third zone to the fourth zone according to a result of comparing.
    Type: Grant
    Filed: July 28, 2020
    Date of Patent: December 28, 2021
    Assignee: SK hynix Inc.
    Inventor: Kyung Soo Lee
  • Patent number: 11204876
    Abstract: A method for controlling a memory from which data is transferred to a neural network processor and an apparatus thereof are provided, the method including: generating prefetch information of data by using a blob descriptor and a reference prediction table after history information is input; reading the data in the memory based on the prefetch information and temporarily archiving the read data in a prefetch buffer; and accessing next data in the memory based on the prefetch information and temporarily archiving the next data in the prefetch buffer after the data is transferred to the neural network processor from the prefetch buffer.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: December 21, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Byung Jo Kim, Joo Hyun Lee, Seong Min Kim, Ju-Yeob Kim, Jin Kyu Kim, Mi Young Lee
  • Patent number: 11204996
    Abstract: An endpoint computer system can harvest data relating to a plurality of events occurring within an operating environment of the endpoint computer system and can add the harvested data to a local data store maintained on the endpoint computer system. In some examples, the local data store can be an audit log and/or can include one or more tamper resistant features. Systems, methods, and computer program products are described.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: December 21, 2021
    Assignee: Cylance Inc.
    Inventors: Ryan Permeh, Matthew Wolff, Samuel John Oswald, Xuan Zhao, Mark Culley, Steve Polson
  • Patent number: 11182286
    Abstract: A high-performance data storage device is disclosed. A non-volatile memory stores a logical-to-physical address mapping table that maps logical addresses recognized by a host to a physical space in the non-volatile memory. The logical-to-physical address mapping table is divided into a plurality of sub mapping tables. A memory controller utilizes temporary storage when controlling the non-volatile memory. The memory controller plans a sub mapping table area in the temporary storage to store sub mapping tables corresponding to a plurality of nodes which are linked and managed by multiple linked lists.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: November 23, 2021
    Assignee: SILICON MOTION, INC.
    Inventors: Jian-Yu Chen, Bo-Yan Jhan, Yuh-Jang Lo, Shih-Chang Chang
  • Patent number: 11176057
    Abstract: An indication is received from a host application of a first minimum retention time in a cache comprising a first type of memory and a second type of memory for a first plurality of tracks, wherein the first minimum retention time is not indicated for a second plurality of tracks. Based on the first minimum retention time, a second minimum retention time is set for the first plurality of tracks for the first type of memory and a third minimum retention time is set for the first plurality of tracks for the second type of memory. A track of the first plurality of tracks is demoted from the first type of memory, in response to determining that the track is a least recently used (LRU) track in a LRU list of tracks in the first type of memory and the track has been in the first type of memory for a time that exceeds the second minimum retention time.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: November 16, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh Mohan Gupta, Matthew G. Borlick, Beth Ann Peterson, Kyler A. Anderson
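A small Python sketch of the retention rule in 11176057, reduced to a single memory type: an LRU track is demoted only if it has also been resident longer than the minimum retention time derived from the host's indication. The constructor argument, track identifiers, and timing mechanism are assumptions for illustration.

```python
import time
from collections import OrderedDict
from typing import Optional

class RetentionAwareLRU:
    """LRU list of cached tracks that respects a minimum retention time."""
    def __init__(self, min_retention_s: float):
        self.min_retention_s = min_retention_s   # derived from the host-indicated minimum
        self._tracks = OrderedDict()             # track -> admit time, LRU order (oldest first)

    def access(self, track: int) -> None:
        now = time.monotonic()
        admitted = self._tracks.pop(track, now)  # keep original admit time if present
        self._tracks[track] = admitted           # move track to the MRU end

    def try_demote(self) -> Optional[int]:
        if not self._tracks:
            return None
        lru_track, admitted = next(iter(self._tracks.items()))
        if time.monotonic() - admitted < self.min_retention_s:
            return None                          # LRU, but too young to demote yet
        self._tracks.pop(lru_track)
        return lru_track
```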
  • Patent number: 11176052
    Abstract: A method for improving cache hit ratios for selected volumes within a storage system is disclosed. In one embodiment, such a method includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. The favored storage elements are retained in the cache longer than the non-favored storage elements. The method maintains a “non-favored” LRU list that contains entries associated with non-favored storage elements and designates an order in which the non-favored storage elements are evicted from the cache. The method also maintains one or more “favored” LRU lists that contain entries associated with favored storage elements and designate an order in which the favored storage elements are evicted from the cache. Each “favored” LRU list is associated with favored storage elements that have a different preferred residency time in the cache. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: May 12, 2019
    Date of Patent: November 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Kyler A. Anderson
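An illustrative Python sketch of keeping separate LRU lists as in 11176052, so that favored storage elements are evicted later than non-favored ones. A single favored list with one preferred residency time is shown; the residency value and method names are invented, and the patent allows multiple favored lists with different residency times.

```python
import time
from collections import OrderedDict

class FavoredCache:
    """Cache metadata with a non-favored LRU list and a favored LRU list."""
    def __init__(self, favored_min_residency_s: float = 60.0):
        self.non_favored = OrderedDict()   # element -> last access time, LRU order
        self.favored = OrderedDict()       # element -> last access time, LRU order
        self.favored_min_residency_s = favored_min_residency_s

    def touch(self, element, favored: bool) -> None:
        target = self.favored if favored else self.non_favored
        target.pop(element, None)
        target[element] = time.monotonic()

    def pick_eviction_victim(self):
        # Prefer evicting non-favored elements; favored elements are evicted only
        # once they have exceeded their preferred residency time in the cache.
        if self.non_favored:
            return self.non_favored.popitem(last=False)[0]
        if self.favored:
            element, last_access = next(iter(self.favored.items()))
            if time.monotonic() - last_access >= self.favored_min_residency_s:
                return self.favored.popitem(last=False)[0]
        return None
```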
  • Patent number: 11176138
    Abstract: Caching runtime plan data that is determined not to change for different invocations of a query plan. In some embodiments, a computing system accesses information that specifies a query plan generated for a first database query and generates a first runtime plan for the first database query based on the query plan. In some embodiments, the system caches information generated for the first runtime plan that is determined not to change for different invocations of the query plan. For example, transformation code may include separate functions for mutable and immutable state. In some embodiments, the system retrieves and uses the cached information to generate a second runtime plan for a second database query. Disclosed techniques may improve performance of query plan transformations that hit in the runtime plan cache.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: November 16, 2021
    Assignee: salesforce.com, inc.
    Inventors: Punit B. Shah, Douglas Doole, Rama K Korlapati, Serge P. Rielau
  • Patent number: 11176049
    Abstract: A flash memory controller includes a processor and a cache. When the processor receives a specific write command and specific data from a host, the processor stores the specific data into a region of the cache and generates host-based cache information or flash-memory-based cache information to build or update/optimize a binary tree with a smaller number of nodes. This improves the searching speed of the binary tree, reduces the computation overhead of the multiple cores in the flash memory controller, and minimizes the number of cache accesses to reduce the total latency. The host-based cache information may indicate a dynamic data length, while the flash-memory-based cache information indicates the data length of one writing unit, such as one page in a flash memory chip.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: November 16, 2021
    Assignee: Silicon Motion, Inc.
    Inventor: Kuan-Hui Li
  • Patent number: 11169710
    Abstract: A media management system including an application layer, a system layer, and a solid state drive (SSD) storage layer. The application layer includes a media data analytics application configured to assign a classification code to a data file. The system layer is in communication with the application layer. The system layer includes a file system configured to issue a write command to a SSD controller. The write command includes the classification code of the data file. The SSD storage layer includes the SSD controller and erasable blocks. The SSD controller is configured to write the data file to one of the erasable blocks based on the classification code of the data file in the write command. In an embodiment, the SSD controller is configured to write the data file to one of the erasable blocks storing other data files also having the classification code.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: November 9, 2021
    Assignee: Futurewei Technologies, Inc.
    Inventors: Yiren Huang, Yong Wang, Kui Lin
  • Patent number: 11163771
    Abstract: A sequence object manager provides a sequence object with a dynamic cache block size that indicates a block size of values to be added to the sequence object when the cache values are exhausted. The dynamic block size allows the sequence object manager to optimize performance and storage space depending on the applications using the sequence object. The dynamic block size is set and maintained by the sequence object manager based on observed performance and historical trends of the applications. A seed value may be provided by the user to initially set the dynamic block size.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rafal P. Konik, Robert J. Bestgen, Shawn J. Baranczyk, Roger A. Mittelstadt
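A toy Python sketch of a sequence object, in the spirit of 11163771, that refills its cached value range with a dynamically sized block. The tuning rule (grow the block if the last one was exhausted quickly, shrink it if it lasted a long time) and all thresholds are invented; the patent bases the adjustment on observed performance and historical trends.

```python
import time

class SequenceObject:
    """Sequence generator whose cached block of values is sized dynamically."""
    def __init__(self, seed_block_size: int = 32):
        self.block_size = seed_block_size   # dynamic cache block size (seed from the user)
        self.next_value = 1
        self.cache_end = 0                  # last value covered by the cached block
        self._last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        elapsed = now - self._last_refill
        # Invented heuristic: a quickly exhausted block grows, a long-lived block
        # shrinks so fewer unused values are reserved.
        if self.cache_end and elapsed < 1.0:
            self.block_size = min(self.block_size * 2, 65536)
        elif elapsed > 60.0:
            self.block_size = max(self.block_size // 2, 1)
        self.cache_end = self.next_value + self.block_size - 1
        self._last_refill = now

    def next(self) -> int:
        if self.next_value > self.cache_end:
            self._refill()                  # cached values exhausted: add a new block
        value = self.next_value
        self.next_value += 1
        return value

seq = SequenceObject(seed_block_size=4)
print([seq.next() for _ in range(6)])       # [1, 2, 3, 4, 5, 6]
```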
  • Patent number: 11157179
    Abstract: A power requirement associated with a storage device of a plurality of storage devices is determined. A set of blocks of the storage device is allocated for storage of data, wherein the set of blocks of the storage device is less than the power requirement of the storage device. User data to be stored at the storage system is received. The user data is assigned to the set of blocks for storage at the storage device.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: October 26, 2021
    Assignee: PURE STORAGE, INC.
    Inventors: Andrew Bernat, Wei Tang
  • Patent number: 11151053
    Abstract: A computer-implemented method, according to one embodiment, is for maintaining heat information of data while in a cache. The computer-implemented method includes: transferring data from non-volatile memory to the cache, such that the data is stored in a first page in the cache. Previous read and/or write heat information associated with the data is maintained by preserving one or more bits in a hash table which correspond to the data in the first page. Moreover, the data is destaged from the first page in the cache to the non-volatile memory, and the one or more bits in the hash table which correspond to the data are updated to reflect current read and/or write heat information associated with the data.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Nikolas Ioannou, Nikolaos Papandreou, Roman Alexander Pletka, Sasa Tomic, Radu Ioan Stoica, Timothy Fisher, Aaron Daniel Fry, Charalampos Pozidis, Andrew D. Walls
  • Patent number: 11144474
    Abstract: A computational device receives an indication that specifies a maximum retention time in cache for a first plurality of tracks, wherein no maximum retention time is specified for a second plurality of tracks. A plurality of insertion points is generated in a least recently used (LRU) list, wherein different insertion points in the LRU list correspond to different amounts of time that a track of the first plurality of tracks is expected to be retained in the cache, and wherein the LRU list is configured to demote both tracks of the first plurality of tracks and tracks of the second plurality of tracks from the cache.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: October 12, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Joseph Hayward, Kyler A. Anderson, Matthew G. Borlick
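A rough Python sketch of a single LRU list with interior insertion points, as in 11144474: tracks carrying a maximum retention time are inserted nearer the LRU end so they reach demotion sooner, while unconstrained tracks are inserted at the MRU end. The mapping from retention time to insertion point and the list-based representation are invented assumptions.

```python
from typing import Optional

class LRUWithInsertionPoints:
    """Single LRU list; index 0 is the LRU end, the last index is the MRU end."""
    def __init__(self, num_insertion_points: int = 4):
        self.lru = []
        self.num_insertion_points = num_insertion_points

    def _insertion_index(self, max_retention_s: Optional[float]) -> int:
        if max_retention_s is None:
            return len(self.lru)            # no maximum retention: insert at the MRU end
        # Invented mapping: a shorter maximum retention time maps to an insertion
        # point nearer the LRU end, so the track is demoted sooner.
        point = min(self.num_insertion_points - 1, int(max_retention_s // 60))
        return (len(self.lru) * (point + 1)) // (self.num_insertion_points + 1)

    def add_track(self, track: int, max_retention_s: Optional[float] = None) -> None:
        self.lru.insert(self._insertion_index(max_retention_s), track)

    def demote(self) -> Optional[int]:
        return self.lru.pop(0) if self.lru else None

lru = LRUWithInsertionPoints()
lru.add_track(1)                       # second plurality: no maximum retention time
lru.add_track(2, max_retention_s=30)   # first plurality: lands nearer the LRU end
print(lru.demote())                    # 2
```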
  • Patent number: 11138120
    Abstract: A memory system includes: a first memory module including first volatile memories; a second memory module including second volatile memories, non-volatile memories and a module controller; a memory controller controlling the first and second memory modules through second and third control buses, respectively; and a switch array electrically coupling the second and third control buses, wherein the module controller controls the switch array to electrically couple the second and third control buses in a backup operation for backing up data of the first volatile memories to the non-volatile memories, wherein the first and second memory modules include one or more first memory stacks and one or more second memory stacks, respectively, wherein the first volatile memories are stacked in the first memory stacks, and wherein the second volatile memories, the non-volatile memories and the module controller are stacked in the second memory stacks.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: October 5, 2021
    Assignee: SK hynix Inc.
    Inventors: Yong-Woo Lee, Min-Chang Kim, Chang-Hyun Kim, Do-Yun Lee, Jae-Jin Lee, Hun-Sam Jung, Chan-Jong Woo
  • Patent number: 11138172
    Abstract: The disclosed embodiments include data storage systems and methods to store data. In one embodiment, a computer-implemented method for storing data is disclosed. The method includes receiving a data value of a dataset. The method also includes assigning a plurality of keys of a key space to a plurality of data values of the dataset. The method further includes determining whether to readjust storage space of at least one partition of a plurality of partitions based on data values stored on the plurality of partitions. In response to a determination to readjust the at least one partition, the method further includes dynamically re-mapping the key space to readjust a number of keys of the plurality of keys that are assigned to data values stored on the plurality of partitions based on a number of data values that are stored on the plurality of partitions.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: October 5, 2021
    Assignee: MASERGY COMMUNICATIONS, INC.
    Inventor: Michael Roy Stute
  • Patent number: 11132213
    Abstract: Systems and methods are described for transforming a data set within a data source into a series of task calls to an on-demand code execution environment. The environment can utilize pre-initialized virtual machine instances to enable execution of user-specified code in a rapid manner, without delays typically caused by initialization of the virtual machine instances, and are often used to process data in near-real time, as it is created. However, limitations in computing resources may inhibit a user from utilizing an on-demand code execution environment to simultaneously process a large, existing data set. The present application provides a task generation system that can iteratively retrieve data items from an existing data set and generate corresponding task calls to the on-demand computing environment. The calls can be ordered to address dependencies of the data items, such as when a first data item depends on prior processing of a second data item.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: September 28, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Timothy Allen Wagner, Marc John Brooker, Ajay Nair
  • Patent number: 11126544
    Abstract: A non-volatile memory (NVM) apparatus and a garbage collection method thereof are provided. The NVM apparatus includes a NVM and a controller. The controller is coupled to the NVM. The controller accesses the NVM according to a logical address of a write command of a host. The controller performs the garbage collection method to release space occupied by invalid data. The garbage collection method includes: grouping a plurality of blocks of the NVM into a plurality of tiers according to hotness of data, moving valid data in one closed source block of a hotter tier among the tiers to one open target block of a cooler tier among the tiers, and erasing the closed source block of the hotter tier to release space.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: September 21, 2021
    Assignee: VIA Technologies, Inc.
    Inventors: Ying Yu Tai, Jiangli Zhu
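A simplified Python sketch of the tiered garbage collection outlined in 11126544: a closed source block is chosen from a hotter tier, its valid data is moved to an open target block in a cooler tier, and the source block is erased for reuse. The block and tier structures are invented; real NVM management would track physical pages, mappings, and wear.

```python
class Block:
    def __init__(self):
        self.valid = {}          # logical address -> data still valid in this block
        self.closed = False      # closed blocks are candidates for garbage collection

class TieredNVM:
    """Blocks grouped into tiers by data hotness; tier 0 holds the hottest data."""
    def __init__(self, num_tiers: int = 3):
        self.tiers = [[Block()] for _ in range(num_tiers)]

    def garbage_collect(self, hot_tier: int) -> None:
        cool_tier = min(hot_tier + 1, len(self.tiers) - 1)
        source = next((b for b in self.tiers[hot_tier] if b.closed), None)
        if source is None:
            return                           # nothing to collect in this tier
        blocks = self.tiers[cool_tier]
        if not blocks or blocks[-1].closed:
            blocks.append(Block())           # open a fresh target block in the cooler tier
        target = blocks[-1]
        target.valid.update(source.valid)    # move the remaining valid data
        source.valid.clear()                 # erase the closed source block
        source.closed = False                # the erased block can be reused

nvm = TieredNVM()
hot_block = nvm.tiers[0][0]
hot_block.valid.update({0x10: b"a", 0x11: b"b"})
hot_block.closed = True
nvm.garbage_collect(hot_tier=0)
print(len(nvm.tiers[1][-1].valid))   # 2: valid data now lives in the cooler tier
```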
  • Patent number: 11121928
    Abstract: A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed.
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: September 14, 2021
    Assignee: NUMECENT HOLDINGS, INC.
    Inventors: Jeffrey DeVries, Arthur S. Hitomi
  • Patent number: 11113208
    Abstract: A method is provided that includes searching tags in a tag group comprised in a tagged memory system for an available tag line during a clock cycle, wherein the tagged memory system includes a plurality of tag lines having respective tags and wherein the tags are divided into a plurality of non-overlapping tag groups, and searching tags in a next tag group of the plurality of tag groups for an available tag line during a next clock cycle when the searching in the tag group does not find an available tag line.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: September 7, 2021
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventor: Sureshkumar Govindaraj