Entry Replacement Strategy Patents (Class 711/133)
  • Patent number: 11157319
    Abstract: A processor includes processor memory arrays including one or more volatile memory arrays and one or more Non-Volatile Memory (NVM) arrays. Volatile memory locations in the one or more volatile memory arrays are paired with respective NVM locations in the one or more NVM arrays to form processor memory pairs. Process data is stored for different processes executed by at least one core of the processor in respective processor memory pairs. Processes are executed using the at least one core to directly access the process data stored in the respective processor memory pairs.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: October 26, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Viacheslav Dubeyko, Luis Cargnini
  • Patent number: 11157286
    Abstract: Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute instructions; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In a representative embodiment, the processor core is further adapted to execute a non-cached load instruction to designate a general purpose register rather than a data cache for storage of data received from a memory circuit. The core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, and to generate one or more work descriptor data packets to another circuit for execution of corresponding execution threads.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: October 26, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 11157418
    Abstract: A method for improving cache hit ratios dedicates, within a cache, a portion of the cache to prefetched data elements. The method maintains a high priority LRU list designating an order in which high priority prefetched data elements are demoted, and a low priority LRU list designating an order in which low priority prefetched data elements are demoted. The method calculates, for the high priority LRU list, a first score based on a first priority and a first cache hit metric. The method calculates, for the low priority LRU list, a second score based on a second priority and a second cache hit metric. The method demotes, from the cache, a prefetched data element from the high priority LRU list or the low priority LRU list depending on which of the first score and the second score is lower. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 9, 2020
    Date of Patent: October 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Beth A. Peterson, Kyler A. Anderson
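The two-list demotion step this abstract describes can be sketched in a few lines of Python. The score formula (a priority weight times an observed hit ratio) and all names below are illustrative assumptions, not the patent's claimed implementation:

```python
from collections import OrderedDict

class PrefetchCache:
    """Sketch: demote from whichever prefetch LRU list scores lower
    (score = priority weight * observed hit ratio)."""
    def __init__(self, high_priority, low_priority):
        self.high = OrderedDict()   # high-priority prefetched elements, LRU order
        self.low = OrderedDict()    # low-priority prefetched elements, LRU order
        self.weights = {"high": high_priority, "low": low_priority}
        self.hits = {"high": 0, "low": 0}
        self.lookups = {"high": 0, "low": 0}

    def _score(self, name):
        lookups = max(self.lookups[name], 1)
        return self.weights[name] * (self.hits[name] / lookups)

    def demote(self):
        """Evict the LRU element of the list with the lower score."""
        victim = self.high if self._score("high") < self._score("low") else self.low
        if victim:
            return victim.popitem(last=False)[0]  # oldest entry first
        return None
```

Even with a higher priority weight, the high-priority list loses its oldest prefetch once its hit ratio drops far enough below the low-priority list's.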
  • Patent number: 11151000
    Abstract: Example embodiments relate generally to systems and methods for continuous data protection (CDP) and more specifically to an input and output (I/O) filtering framework and log management system to seek a near-zero recovery point objective (RPO).
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: October 19, 2021
    Assignee: RUBRIK, INC.
    Inventors: Benjamin Travis Meadowcroft, Li Ding, Shaomin Chen, Hardik Vohra, Arijit Banerjee, Abhay Mitra, Kushaagra Goyal, Arnav Gautum Mishra, Samir Rishi Chaudhry, Suman Swaroop, Kunal Sean Munshani
  • Patent number: 11151035
    Abstract: A method for improving cache hit ratios for selected storage elements within a storage system is disclosed. In one embodiment, such a method includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. The favored storage elements are retained in the cache longer than the non-favored storage elements. The method maintains a first LRU list containing entries associated with non-favored storage elements and designating an order in which the non-favored storage elements are evicted from the cache, and a second LRU list containing entries associated with favored storage elements and designating an order in which the favored storage elements are evicted from the cache. The method moves entries between the first LRU list and the second LRU list as favored storage elements are changed to non-favored storage elements and vice versa. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: May 12, 2019
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Beth A. Peterson, Kevin J. Ash, Kyler A. Anderson
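The favored/non-favored split described above can be sketched with two ordered maps; the class and method names are illustrative, and the eviction rule (drain non-favored first) is one simple way to realize "favored elements are retained longer":

```python
from collections import OrderedDict

class FavoredCache:
    """Sketch: two LRU eviction lists; favored elements outlive non-favored."""
    def __init__(self):
        self.non_favored = OrderedDict()
        self.favored = OrderedDict()

    def insert(self, key, favored=False):
        (self.favored if favored else self.non_favored)[key] = None

    def set_favored(self, key, favored):
        """Move an entry between lists when its designation changes."""
        src, dst = ((self.non_favored, self.favored) if favored
                    else (self.favored, self.non_favored))
        if key in src:
            src.pop(key)
            dst[key] = None

    def evict(self):
        """Evict non-favored entries first, so favored ones stay longer."""
        for lst in (self.non_favored, self.favored):
            if lst:
                return lst.popitem(last=False)[0]  # oldest entry in the list
        return None
```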
  • Patent number: 11153384
    Abstract: A method by a computing device of a dispersed storage network (DSN) begins by determining whether alternate form data (AFD) exists for a data object. When the alternate form data does not exist, the method continues by identifying a content derivation function in accordance with an AFD policy of the DSN. The method continues by identifying a portion of the data object based on the content derivation function and identifying one or more sets of encoded data slices of a plurality of sets of encoded data slices corresponding to the portion of the data object. The method continues by generating at least a portion of the AFD based on the one or more sets of encoded data slices. The method continues by storing the at least a portion of the AFD within memory of the DSN in accordance with a storage approach.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: October 19, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wesley B. Leggette, Manish Motwani, Brian F. Ober, Jason K. Resch
  • Patent number: 11144456
    Abstract: An apparatus includes a CPU core and an L1 cache subsystem including an L1 main cache, an L1 victim cache, and an L1 controller. The apparatus includes an L2 cache subsystem including an L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and an L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: October 12, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria, Peter Michael Hippleheuser
  • Patent number: 11144458
    Abstract: An apparatus (2) comprises processing circuitry (4) for performing data processing in response to instructions. The processing circuitry (4) supports a cache maintenance instruction (50) specifying a virtual page address (52) identifying a virtual page of a virtual address space. In response to the cache maintenance instruction, the processing circuitry (4) triggers at least one cache (18, 20, 22) to perform a cache maintenance operation on one or more cache lines for which a physical address of the data stored by the cache line is within a physical page that corresponds to the virtual page identified by the virtual page address provided by the cache maintenance instruction.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: October 12, 2021
    Assignee: ARM LIMITED
    Inventors: Jason Parker, Bruce James Mathewson, Matthew Lucien Evans
  • Patent number: 11106601
    Abstract: A method for efficiently performing adaptive management of a cache with a predetermined size and number of cells at different locations with respect to the top or bottom of the cache, for storing, at different cells, data items to be retrieved upon request from a processor. A stream of requests for items, each of which has a temporal probability of being requested, is received. The jump size is incremented on cache misses and decremented on cache hits, so that a smaller or larger jump size is chosen automatically as the probability of items being requested changes. The jump size represents the number of cells by which a current request is promoted in the cache, on its way from the bottom (in case of a cache hit) or from the outside (in case of a cache miss) toward the top cell of the cache.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: August 31, 2021
    Assignee: B. G. NEGEV TECHNOLOGIES AND APPLICATIONS LTD., AT BEN-GURION UNIVERSITY
    Inventors: Shlomi Dolev, Daniel Berend, Marina Kogan-Sadetsky
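The jump-size rule in this abstract is concrete enough to sketch directly; the cache is modeled as a simple list of cells (index 0 = top), and all names are illustrative:

```python
class JumpCache:
    """Sketch: promote a requested item by `jump` cells toward the top;
    shrink the jump on hits, grow it on misses."""
    def __init__(self, size):
        self.cells = [None] * size   # index 0 = top, last index = bottom
        self.jump = 1

    def request(self, item):
        if item in self.cells:                    # cache hit
            pos = self.cells.index(item)
            self.cells.pop(pos)
            self.cells.insert(max(0, pos - self.jump), item)  # promote upward
            self.jump = max(1, self.jump - 1)     # decrement jump on a hit
            return True
        # cache miss: the item enters from the outside, `jump` cells above
        # the bottom, and the bottom cell is evicted
        size = len(self.cells)
        self.cells.pop()
        self.cells.insert(max(0, size - self.jump), item)
        self.jump += 1                            # increment jump on a miss
        return False
```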
  • Patent number: 11106585
    Abstract: A method, computer program product, and computer system for receiving, by a computing device, an IO request on a first node. It may be determined whether a virtual address for the IO request is in a virtual cache. A read to RAID may be issued using the virtual address when the virtual address for the IO request is not in the virtual cache. A return of a cached page associated with the virtual address may be issued when the virtual address for the IO request is in the virtual cache.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: August 31, 2021
    Assignee: EMC IP Holding Company, LLC
    Inventors: Anton Kucherov, Ronen Gazit, Oran Baruch
  • Patent number: 11086792
    Abstract: This disclosure provides a cache replacing method applied to a heterogeneous multi-core system, the method including: determining whether a first application currently running is an application running on the GPU; when it is determined that the first application currently running is an application running on the GPU, determining a cache priority of first data accessed by the first application according to a performance parameter of the first application, the cache priority of the first data including a priority other than a predefined highest cache priority; and caching the first data into a cache queue of the shared cache according to a predetermined cache replacement algorithm and the cache priority of the first data, and replacing data in the cache queue.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: August 10, 2021
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Qingwen Fan, Jinghua Miao, Hao Zhang, Lili Chen, Lixin Wang, Bin Zhao, Yukun Sun, Xuefeng Wang, Xi Li, Wenyu Li, Jinbao Peng, Jianwen Suo
  • Patent number: 11086787
    Abstract: A system and method of handling data access demands in a processor virtual cache that includes: determining if a virtual cache data access demand missed because of a difference in the context tag of the data access demand and a corresponding entry in the virtual cache with the same virtual address as the data access demand; in response to the virtual cache missing, determining whether the alias tag valid bit is set in the corresponding entry of the virtual cache; in response to the alias tag valid bit not being set, determining whether the virtual cache data access demand is a synonym of the corresponding entry in the virtual cache; and in response to the virtual access demand being a synonym of the corresponding entry in the virtual cache with the same virtual address but a different context tag, updating information in a tagged entry in an alias table.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: August 10, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Bryan Lloyd
  • Patent number: 11080193
    Abstract: A method for improving the execution time of a computer application comprising at least one cycle includes: a step of determining the type of memory access time sequence occurring during execution of the computer application; and a step of preloading data, from a file system to a cache memory system, according to the determined type of memory access time sequence. The determination step is carried out by a learning model previously configured using a database of certain predetermined types of memory access time sequences.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: August 3, 2021
    Assignee: BULL SAS
    Inventors: Trong Ton Pham, Lionel Vincent, Grégoire Pichon
  • Patent number: 11080471
    Abstract: A system includes a request processing module configured to receive a chart description request associated with an asset. The system also includes a rules store configured to store rules, a rules application module configured to determine a set of rules and a set of results by applying each rule in the set of rules to chart data associated with the asset, a rules selector module configured to select a subset of results from the set of results, a text generation module configured to generate a text description based on the subset of results, and an output module configured to transmit the text description. Each of the rules includes a relevancy score. The selection of the subset of results is based on the score of each rule associated with the set of results. The description describes a chart associated with the asset.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: August 3, 2021
    Assignee: TD Ameritrade IP Company, Inc.
    Inventors: Chesley Carl Spencer, Chad Michael Cocco
  • Patent number: 11074015
    Abstract: According to one embodiment, a memory system receives from a host read commands each designating both a block address of a read target block and a read target storage location in the read target block, and executes a data read operation in accordance with each of the received read commands. In response to receiving from the host a first command to transition a first block to which data is already written to a reusable state of being reusable as a new write destination block, the memory system determines whether an incomplete read command designating a block address of the first block exists. If the incomplete read command exists, the memory system executes the first command after execution of the incomplete read command is completed.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: July 27, 2021
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Shinichi Kanno
  • Patent number: 11068415
    Abstract: Provided are a computer program product, system, and method for using insertion points to determine locations in a cache list at which to move processed tracks. There are a plurality of insertion points to a cache list for the cache having a least recently used (LRU) end and a most recently used (MRU) end, wherein each insertion point of the insertion points identifies a track in the cache list. An insertion point of the insertion points is determined at which to move the processed track in response to determining that a processed track is indicated to move to the MRU end. The processed track is indicated at a position in the cache list with respect to the determined insertion point.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: July 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
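The insertion-point idea above (re-insert a processed track partway down the list instead of always at the MRU end) can be sketched as follows; the evenly spaced points and the rank-based selection are illustrative assumptions:

```python
class InsertionPointList:
    """Sketch: a cache list with several insertion points between the
    MRU end (index 0) and the LRU end (last index)."""
    def __init__(self, tracks, num_points):
        self.tracks = list(tracks)
        self.num_points = num_points

    def insertion_points(self):
        """Evenly spaced positions between the MRU and LRU ends."""
        step = max(1, len(self.tracks) // self.num_points)
        return [i * step for i in range(self.num_points)]

    def requeue(self, track, rank):
        """Move a processed track to the insertion point matching its rank
        (rank 0 = re-inserted nearest the MRU end)."""
        self.tracks.remove(track)
        points = self.insertion_points()
        self.tracks.insert(points[min(rank, len(points) - 1)], track)
```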
  • Patent number: 11068206
    Abstract: A data storage device includes a nonvolatile memory device, a memory having an unmap command queue configured to store an unmap command received from a host and a sequential unmap table configured to store a sequential unmap entry corresponding to an unmap command for sequential logical addresses, and a controller including a first core and a second core. The second core is configured to read an unmap-target map segment including the sequential logical addresses from an address mapping table stored in the nonvolatile memory device, store the read unmap-target map segment in the memory, and change, within the stored unmap-target map segment, physical addresses mapped to the sequential logical addresses to trim instruction data at the same time, the trim instruction data being included in the sequential unmap entry.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: July 20, 2021
    Assignee: SK hynix Inc.
    Inventors: Young Ick Cho, Sung Kwan Hong
  • Patent number: 11068402
    Abstract: Aspects store configuration version data for an application into a shared cache in a structured data format; in response to a request at run-time for the configuration version data, determine whether run-time format data of the configuration version data is stored in a different, local cache; and in response to determining that the run-time format configuration version data is not stored in the local cache, during execution of the application, read the structured data format data from the shared cache, translate the read data into the run-time data format, store the translated data into the local cache in the run-time format file and return the configuration version run-time format data stored within the local cache in satisfaction of the request at run-time for the configuration version data of the application.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: July 20, 2021
    Assignee: ADP, LLC
    Inventors: Stephen Dale Garvey, Gregory Fincannon
  • Patent number: 11061600
    Abstract: Exemplary methods and apparatus are disclosed to select data evacuation policies for use by a solid state device (SSD) to relocate data from an upper (high performance) memory tier to a lower memory tier. The upper tier may be, e.g., a single-layer cell (SLC) tier of a multi-tier NAND memory, whereas the lower tier may be, e.g., a triple-layer cell (TLC) or a quad-level cell (QLC) tier of the NAND memory. In one example, the SSD monitors its recent input/output (I/O) command history. If a most recent command was a read command, the SSD performs a “lazy” evacuation procedure to evacuate data from the upper tier storage area to the lower tier storage area. Otherwise, the SSD performs a “greedy” or “eager” evacuation procedure to evacuate the data from the upper tier to the lower tier. Other evacuation selection criteria are described herein based, e.g., upon predicting upcoming I/O commands.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: July 13, 2021
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Noga Deshe, Gadi Vishne
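The policy-selection rule in this abstract reduces to a one-line decision on the most recent I/O command; the function and policy names below are illustrative:

```python
def choose_evacuation(recent_commands):
    """Sketch: pick the upper-tier (e.g. SLC) evacuation policy from the
    most recent I/O command, as the abstract describes."""
    if not recent_commands:
        return "eager"
    # A read as the last command suggests the host is busy reading, so
    # defer evacuation ("lazy"); otherwise evacuate promptly ("eager")
    # to keep the fast upper tier free for incoming writes.
    return "lazy" if recent_commands[-1] == "read" else "eager"
```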
  • Patent number: 11061613
    Abstract: A computer readable storage device includes a first memory section that stores operational instructions that, when executed, cause a computing device to, as data accesses occur for a plurality of data objects of a storage container, update object values to produce updated object values, update object retention costs to produce updated object retention costs, adjust a dynamic retention threshold based on the updated object values and the updated object retention costs and update a data object retention policy for a data object based on the dynamic retention threshold to produce an updated retention policy for the data object. The computer readable storage device includes a second memory section that stores operational instructions that, when executed, cause the computing device to, when a data access is a deletion request, utilizing a current updated data object retention policy to determine and execute a deletion-retention option for the data object.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: July 13, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew D. Baptist, Bart R. Cilfone, Greg R. Dhuse, Harsha Hegde, Wesley B. Leggette, Manish Motwani, Jason K. Resch, Ilya Volvovski, Ethan S. Wozniak
  • Patent number: 11048684
    Abstract: Systems, methods, and computer-readable media for lazy tracking mechanisms for web caching systems are provided. The lazy tracking mechanism may track and perform asynchronous (async) computation of dirty records for client-side caching mechanisms. The async computation of dirty records may include tracking or accounting for invalidated records relevant to a particular client or user system. Invalidation messages may be sent to client/user systems in response to receipt of a request for updated records, or in response to a request for a particular item. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: June 29, 2021
    Assignee: SALESFORCE.COM, INC.
    Inventors: Vishal Motwani, Nick Hansen, Vivek Chauhan, Thomas Archie Cook, Jr., Thomas Keeney, Kamyar Seradjfar
  • Patent number: 11039176
    Abstract: Cache management techniques are described for a content distribution network (CDN), for example, a video on demand (VOD) system supporting user requests and delivery of video content. A preferred cache size may be calculated for one or more cache devices in the CDN, for example, based on a maximum cache memory size, a bandwidth availability associated with the CDN, and a title dispersion calculation determined by the user requests within the CDN. After establishing the cache with a set of assets (e.g., video content), an asset replacement algorithm may be executed at one or more cache devices in the CDN. When a determination is made that a new asset should be added to a full cache, a multi-factor comparative analysis may be performed on the assets currently residing in the cache, comparing the popularity and size of assets and combinations of assets, along with other factors to determine which assets should be replaced in the cache device.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: June 15, 2021
    Assignee: Comcast Cable Communications, LLC
    Inventors: Volnie Whyte, Amit Garg, Tom Brown, Robert Gaydos, Mark Muehl
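One simple instance of the multi-factor comparison this abstract mentions is to rank assets by popularity per byte and evict the lowest-value ones until the new asset fits; the value-density metric and names are illustrative assumptions:

```python
def pick_replacement_victims(cache_assets, new_size):
    """Sketch: evict the assets with the lowest popularity-per-byte until
    `new_size` bytes are free. `cache_assets` maps asset name ->
    (popularity, size)."""
    by_density = sorted(cache_assets,
                        key=lambda a: cache_assets[a][0] / cache_assets[a][1])
    victims, freed = [], 0
    for asset in by_density:
        if freed >= new_size:
            break
        victims.append(asset)
        freed += cache_assets[asset][1]
    # Give up (evict nothing) if even clearing the cache would not fit it.
    return victims if freed >= new_size else []
```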
  • Patent number: 11032204
    Abstract: Capacity enhancement of a direct communication link using a variable redundancy delivery network. An estimated information rate between a source node and a terminal node may be partitioned into a first information rate provided via the direct communication link and a second information rate to be provided via the variable redundancy delivery network. One or more parameters of the variable redundancy delivery network may be calculated to provide the second information rate based on a non-uniform probability density of messages requested by the terminal node. Capacity and reliability of storage media devices in the variable redundancy delivery network may be traded off to provide the second information rate. The variable redundancy delivery network may implement various coding schemes and per-message coding rates that may be determined based on the non-uniform probability distribution of the source message library.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: June 8, 2021
    Assignee: Viasat, Inc.
    Inventor: Nirmalkumar Velayudhan
  • Patent number: 11016892
    Abstract: The present disclosure provides a cache system and an operating method thereof. The system includes an upper-level cache unit and a last level cache (LLC). The LLC includes a directory, a plurality of counters, and a register. The directory includes a status indicator recording a utilization status of the upper-level cache unit to the LLC. The counters are used to increase or decrease a counting value according to a variation of the status indicator, record an access number from the upper-level cache unit, and record a hit number of the upper-level cache unit accessing the LLC. According to the counting value, the access number, and the hit number, the first parameters of the register are controlled, so as to adjust a utilization strategy to the LLC.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: May 25, 2021
    Assignee: Shanghai Zhaoxin Semiconductor Co., Ltd.
    Inventors: Xianpei Zheng, Zhongmin Chen, Weilin Wang, Jiin Lai, Mengchen Yang
  • Patent number: 11003496
    Abstract: In one embodiment, performance-based multi-mode task dispatching for high temperature avoidance in accordance with the present description, includes selecting processor cores as available to receive a dispatched task. Tasks are dispatched to a set of available processor cores for processing in a performance-based dispatching mode. If monitored temperature rises above a threshold temperature value, task dispatching logic switches to a thermal-based dispatching mode. If a monitored temperature falls below another threshold temperature value, dispatching logic switches back to the performance-based dispatching mode. If a monitored temperature of an individual processor core rises above a threshold temperature value, the processor core is redesignated as unavailable to receive a dispatched task. If the temperature of an individual processor core falls below another threshold temperature value, the processor core is redesignated as available to receive a dispatched task.
    Type: Grant
    Filed: August 2, 2017
    Date of Patent: May 11, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
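The two-mode switch described above uses separate rise and fall thresholds, i.e. hysteresis; a minimal sketch (mode and parameter names are illustrative):

```python
def next_dispatch_mode(mode, temp, hot_threshold, cool_threshold):
    """Sketch: switch to thermal-based dispatching above the hot threshold
    and back to performance-based below the (lower) cool threshold.
    The gap between thresholds prevents rapid mode flapping."""
    if mode == "performance" and temp > hot_threshold:
        return "thermal"
    if mode == "thermal" and temp < cool_threshold:
        return "performance"
    return mode
```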
  • Patent number: 10999235
    Abstract: An interface is provided for controlling transfer of electronic transaction messages between a financial institution and switches distributed amongst a plurality of switch sites. The financial institution and the switches are connected via a data communications network. Communication circuitry is operable to transmit a test message to one of the switch sites over the data network if no transaction message is received from that switch site for a predetermined time. And, in response to the test message, the communication circuitry is operable to receive an echo of the test message from the switch site. If the echo is received within a defined time, processing circuitry is operable to then set the operational status of the switch site as operational, and if the echo is not received within the defined time, the operational status of the switch site is set as not operational.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: May 4, 2021
    Assignee: IPCO 2012 LIMITED
    Inventors: Steven George Garlick, Neil Antony Masters
  • Patent number: 10992569
    Abstract: Internet protocol packets are statelessly identified as associated with a particular session-instance by identifying a key, or session-instance identifier, within the data (or payload) portion of a user plane packet. This identifier is specific to the session-instance and remains constant throughout the session-instance. Using this stateless identification, transmitted user plane packets are automatically routed at the transmission speed of the transmission link using a method that automatically balances the analysis processing load between network probes. The load is balanced by routing the user plane packet to a network probe that is either already analyzing the session-instance or by routing the user plane packet to a system that has processing capacity to analyze a new session-instance. The network probe then analyzes the user plane packet and the session-instance to measure the quality of the user experience of the session-instance and performance of the network.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: April 27, 2021
    Assignee: NetScout Systems, Inc.
    Inventor: Bruce A. Kelley, Jr.
  • Patent number: 10977135
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for consensus system downtime recovery. One of the methods includes: multicasting a pre-prepare message to at least some of the backup nodes; obtaining (Q−1) or more prepare messages respectively from (Q−1) or more of the backup nodes, wherein the prepare messages each indicate an acceptance of the pre-prepare message by the corresponding backup node; storing the pre-prepare message and the (Q−1) or more prepare messages; multicasting a commit message to at least some of the backup nodes, the commit message indicating that the primary node agrees to the (Q−1) or more prepare messages; and obtaining, respectively from Q or more nodes among the primary node and the backup nodes, Q or more commit messages each indicating that the corresponding node agrees to (Q−1) or more prepare messages received by the corresponding node.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: April 13, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventor: Dayi Yang
  • Patent number: 10977180
    Abstract: A method, storage system and non-transitory computer readable medium. The method may include receiving or generating, and for each storage entity out of multiple storage entities of the storage system, a storage entity distribution of cache hits over a caching period related to cached data associated with the storage entity and determining an allocation of quotas of the cache space to the multiple storage entities. The determining may include: (a) for each storage entity, determining a hit score indicative of a number of cache hits per a caching sub-period of the caching period related to the storage entity; (b) simulating, in an iterative manner, an allocation of quotas of the cache space to the storage entities that substantially maximizes the number of cache hits; and (c) allocating quotas of the cache space to the storage entities of the multiple storage entities, based on an outcome of the simulation.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: April 13, 2021
    Inventor: Yechiel Yochai
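The iterative simulation this abstract describes can be approximated by a greedy loop that repeatedly grants a quota unit to the storage entity whose hit score per unit of cache is currently highest; the diminishing-returns term and all names are illustrative assumptions, not the patented simulation:

```python
def allocate_quotas(hit_scores, cache_size, chunk=1):
    """Sketch: greedily hand out cache quota in `chunk`-sized units, each
    round to the entity with the best marginal hit score. hit_scores maps
    storage entity -> cache hits per caching sub-period."""
    quotas = {entity: 0 for entity in hit_scores}
    remaining = cache_size
    while remaining >= chunk:
        # Dividing by (quota + 1) models diminishing returns: entities
        # that already hold quota gradually lose priority.
        best = max(hit_scores, key=lambda e: hit_scores[e] / (quotas[e] + 1))
        quotas[best] += chunk
        remaining -= chunk
    return quotas
```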
  • Patent number: 10977179
    Abstract: A method and apparatus for cache management and eviction policies using unsupervised reinforcement learning schemes is disclosed.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: April 13, 2021
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: Opher Lieber, Ariel Navon, Alexander Bazarsky, Shay Benisty
  • Patent number: 10970223
    Abstract: Systems, apparatuses, and methods for efficiently allocating data in a cache are described. In various embodiments, a processor decodes an indication in a software application identifying a temporal data set. The data set is flagged with a data set identifier (DSID) indicating temporal data to drop after consumption. When the data set is allocated in a cache, the data set is stored with a non-replaceable attribute to prevent a cache replacement policy from evicting the data set before it is dropped. A drop command with an indication of the DSID of the data set is later issued after the data set is read (consumed). A copy of the data set is not written back to the lower-level memory although the data set is removed from the cache. An interrupt is generated to notify firmware or other software of the completion of the drop command.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: April 6, 2021
    Assignee: Apple Inc.
    Inventors: Wolfgang H. Klingauf, Kenneth C. Dyke, Karthik Ramani, Winnie W. Yeung, Anthony P. DeLaurier, Luc R. Semeria, David A. Gotwalt, Srinivasa Rangan Sridharan, Muditha Kanchana
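The DSID mechanism above (pin temporal data against replacement, then drop it without write-back) can be sketched with a small dictionary-backed cache; the class shape is an illustrative assumption:

```python
class DropHintCache:
    """Sketch: lines tagged with a data-set ID (DSID) are non-replaceable
    until a drop command removes them, with no write-back to memory."""
    def __init__(self):
        self.lines = {}          # key -> (data, dsid or None)
        self.written_back = []   # what an eviction would flush downstream

    def allocate(self, key, data, dsid=None):
        self.lines[key] = (data, dsid)

    def evictable(self, key):
        # Temporal data sets are protected from the replacement policy.
        return self.lines[key][1] is None

    def drop(self, dsid):
        """Remove every line of the data set; nothing is written back."""
        self.lines = {k: v for k, v in self.lines.items() if v[1] != dsid}
```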
  • Patent number: 10972577
    Abstract: Systems, methods, and storage media for managing traffic on a digital content delivery network are disclosed. Exemplary implementations may: receive an item of digital content on a digital content delivery network; assign a type category to the item of digital content; determine an update time variable of the item of digital content; determine a cache time for the item of digital content based on the type category of the item of digital content and the update time variable of the item of digital content; and cache an instance of the item of digital content in a cache memory associated with the content delivery network for the cache time, removing the instance from the cache memory after the cache time has lapsed.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: April 6, 2021
    Assignee: CBS Interactive Inc.
    Inventors: Robert Accettura, Shimon Schwartz
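One way to read the cache-time computation is as the minimum of a per-category base time and the item's expected update interval; the categories, base times, and the min() rule below are assumptions for illustration, not taken from the patent:

```python
# Hypothetical base cache times (seconds) per content type category.
BASE_CACHE_TIME = {"article": 3600, "video": 86400, "live_score": 30}

def cache_time(type_category, update_interval):
    """Cache an item no longer than the shorter of its category's base
    time and the interval at which the item is expected to update."""
    base = BASE_CACHE_TIME.get(type_category, 300)  # default: 5 minutes
    return min(base, update_interval)
```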
  • Patent number: 10963256
    Abstract: Disclosed embodiments relate to systems and methods for performing instructions to transform matrices into a row-interleaved format. In one example, a processor includes fetch and decode circuitry to fetch and decode an instruction having fields to specify an opcode and locations of source and destination matrices, wherein the opcode indicates that the processor is to transform the specified source matrix into the specified destination matrix having the row-interleaved format; and execution circuitry to respond to the decoded instruction by transforming the specified source matrix into the specified RowInt-formatted destination matrix by interleaving J elements of each J-element sub-column of the specified source matrix in either row-major or column-major order into a K-wide submatrix of the specified destination matrix, the K-wide submatrix having K columns and enough rows to hold the J elements.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Raanan Sade, Robert Valentine, Bret Toll, Christopher J. Hughes, Alexander F. Heinecke, Elmoustapha Ould-Ahmed-Vall, Mark J. Charney
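The row-interleaved (RowInt) transform can be sketched in plain Python for list-of-lists matrices; this models only the J-element sub-column interleaving in row-major order, not the instruction's opcode, register files, or execution circuitry:

```python
def row_interleave(src, j):
    """Transform an M x N matrix into (M//j) x (N*j) row-interleaved form:
    each j-element sub-column of the source becomes j adjacent elements
    of one destination row."""
    m, n = len(src), len(src[0])
    assert m % j == 0, "row count must be a multiple of j"
    dest = []
    for block in range(m // j):
        row = []
        for col in range(n):
            for k in range(j):
                row.append(src[block * j + k][col])
        dest.append(row)
    return dest
```

For example, a 4x2 source with j=2 becomes a 2x4 destination whose rows pack each 2-element sub-column contiguously.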
  • Patent number: 10963392
    Abstract: A system and method for efficiently handling data selected for eviction in a computing system. In various embodiments, a computing system includes one or more processors, a system memory, and a victim cache. The cache controller of a particular cache in a cache memory subsystem includes an allocator for determining whether to allocate data evicted from the particular cache into the victim cache. The data fetched into the particular cache includes data fetched to service miss requests, which include demand requests and prefetch requests. To determine whether to allocate, the allocator determines whether a usefulness of data fetched into the particular cache exceeds a threshold. If so, the evicted data is stored in the victim cache. If not, the evicted data bypasses the victim cache. Data determined to be accessed by a processor is deemed to be of a higher usefulness.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: March 30, 2021
    Assignee: Apple Inc.
    Inventors: Sandeep Gupta, Perumal R. Subramoniam
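A toy version of the usefulness-threshold decision; the demand-access counting scheme and the threshold value are assumptions:

```python
class VictimAllocator:
    """Toy allocator: fetched lines start with usefulness 0; each demand
    access raises it. On eviction, only lines whose usefulness exceeds
    a threshold are allocated into the victim cache."""

    def __init__(self, threshold=0):
        self.threshold = threshold
        self.usefulness = {}    # line -> demand-access count since fetch
        self.victim_cache = set()

    def fetch(self, line):
        self.usefulness[line] = 0

    def access(self, line):
        self.usefulness[line] += 1

    def evict(self, line):
        count = self.usefulness.pop(line)
        if count > self.threshold:
            self.victim_cache.add(line)   # allocate into victim cache
            return True
        return False                      # bypass the victim cache
```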
  • Patent number: 10956070
    Abstract: A data processing system includes a plurality of processor cores each having a respective associated cache memory, a memory controller, and a system memory coupled to the memory controller. A zero request of a processor core among the plurality of processor cores is transmitted on an interconnect fabric of the data processing system. The zero request specifies a target address of a target memory block to be zeroed and has no associated data payload. The memory controller receives the zero request on the interconnect fabric and services the zero request by zeroing in the system memory the target memory block identified by the target address, such that the target memory block is zeroed without caching the zeroed target memory block in the cache memory of the processor core.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: March 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen
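The dataless zero request can be sketched as a controller operation on a flat memory; the 128-byte block size and the class shape are assumptions for illustration:

```python
class ZeroingMemoryController:
    """Toy controller: a zero request carries only a target address, no
    data payload; the controller zeroes the whole block directly in
    system memory, bypassing the requesting core's cache."""

    BLOCK = 128   # assumed memory-block size in bytes

    def __init__(self, memory):
        self.memory = memory   # bytearray standing in for system memory

    def zero_request(self, target_address):
        # Align down to the containing block and zero it in place.
        start = target_address - (target_address % self.BLOCK)
        self.memory[start:start + self.BLOCK] = bytes(self.BLOCK)
```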
  • Patent number: 10956061
    Abstract: A computing system includes: a host configured to provide data and address information on the data; and a memory system configured to store the data, wherein the memory system comprises: a plurality of memory devices configured to be grouped into at least one memory device group; and a controller configured to control each of the plurality of memory devices, wherein the controller comprises: a group setter configured to set the memory device groups with respect to a type of the data by a request of the host; and a processor configured to read the data from, or write the data to, the memory device group corresponding to the type of the data.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: March 23, 2021
    Assignee: SK hynix Inc.
    Inventor: Jun-Seo Lee
  • Patent number: 10942860
    Abstract: A computing system using a bit counter may include a host device; a cache configured to temporarily store data of the host device, and including a plurality of sets; a cache controller configured to receive a multi-bit cache address from the host device, perform computation on the cache address using a plurality of bit counters, and determine a hash function of the cache; a semiconductor device; and a memory controller configured to receive the cache address from the cache controller, and map the cache address to a semiconductor device address.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: March 9, 2021
    Assignees: SK hynix Inc., Korea University Industry Cooperation Foundation
    Inventors: Seonwook Kim, Wonjun Lee, Yoonah Paik, Jaeyung Jun
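One plausible reading of the bit-counter computation — counting how often each address bit is set across observed addresses, then choosing the most balanced bits as the set-index hash — is sketched below; the selection rule is an assumption, not taken from the patent:

```python
def choose_index_bits(addresses, addr_bits, set_bits):
    """Toy use of bit counters: count ones per address bit, then pick
    the set_bits bits whose counts are closest to half (best balanced),
    so the resulting hash spreads addresses evenly across cache sets."""
    counters = [0] * addr_bits
    for a in addresses:
        for b in range(addr_bits):
            counters[b] += (a >> b) & 1
    half = len(addresses) / 2
    ranked = sorted(range(addr_bits), key=lambda b: abs(counters[b] - half))
    return sorted(ranked[:set_bits])
```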
  • Patent number: 10942629
    Abstract: The present disclosure provides methods, computer readable media, and a system (the “platform”) for recall probability-based data storage and retrieval. The platform may comprise a hierarchical data storage architecture having at least one of the following storage tiers: a first tier, and a second tier; at least one computing agent, wherein the at least one computing agent is configured to: compute a recall probability for a data element stored in the data storage, and effect a transfer of the data element based on, at least in part, the recall probability, wherein the transfer of the data element is between at least the following: the first tier, and the second tier; and a graphical user interface (GUI) comprising at least one functional GUI element configured to: enable an end-user to specify a desired balance between at least one of the following elements: speed of data retrieval, and cost of data storage.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: March 9, 2021
    Assignee: Laitek, Inc.
    Inventors: Cameron Brackett, Barry Brown, Razvan Costea-Barlutiu
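A hypothetical decay model for the recall probability and the resulting tier decision; the half-life form and the 0.5 threshold are assumptions, not the patent's model:

```python
def recall_probability(days_since_access, half_life_days=30.0):
    """Hypothetical decay model: probability a data element will be
    recalled, halving every half_life_days since its last access."""
    return 0.5 ** (days_since_access / half_life_days)

def target_tier(prob, fast_threshold=0.5):
    """Place likely-to-be-recalled elements on the fast (first) tier,
    the rest on the cheaper second tier."""
    return "first" if prob >= fast_threshold else "second"
```

Raising `fast_threshold` is the kind of knob the described GUI balance slider could drive: a higher value trades retrieval speed for lower first-tier cost.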
  • Patent number: 10936539
    Abstract: Provided are systems and methods for linking source data fields to target inputs having a different data structure. In one example, the method may include receiving a request to load a data file from a source data structure to a target data structure, identifying a plurality of target inputs of the target data structure, wherein the plurality of target inputs include a format of the target data structure, and at least one of the target inputs has a format that is different from a format of a source data structure, dynamically linking the plurality of source data fields to the plurality of target inputs based on metadata of the plurality of source data fields, and loading the data file from the source data structure to the target data structure.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: March 2, 2021
    Assignee: SAP SE
    Inventor: Bertram Beyer
  • Patent number: 10929032
    Abstract: In a computer network in which a data storage array maintains data for at least one host computer, the host computer provides sequential access hints to the storage array. A monitoring program monitors a host application running on the host computer to detect generation of data that is likely to be sequentially accessed by the host application along with associated data. When the host application writes such data to a thinly provisioned logical production volume the monitoring program prompts a multipath IO driver to generate the sequential access hint. In response to the hint the storage array allocates a plurality of sequential storage spaces on a hard disk drive for the data and the associated data. The allocated storage locations on the hard disk drive are written in a spatial sequence that matches the spatial sequence in which the storage locations on the production volume are written.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: February 23, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Nir Sela, Gabriel Benhanokh, Arieh Don
  • Patent number: 10929385
    Abstract: Facilitating multi-level data deduplication in an elastic cloud storage environment is provided herein. A system can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise performing a first deduplication on a group of data objects at a data block level of a storage device. The operations can also comprise performing a second deduplication of the group of data objects at an object level of the storage device.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: February 23, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Mikhail Danilov, Konstantin Buinov
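The two deduplication levels can be sketched as a block-hash pass followed by an object-fingerprint pass; the block size and the fingerprint construction are illustrative choices, not details from the patent:

```python
import hashlib

def dedupe_blocks(objects, block_size=4):
    """First pass: block-level dedup -- identical fixed-size blocks across
    all objects are stored once. Second pass: object-level dedup --
    objects whose block lists are identical collapse to one entry."""
    block_store = {}     # block hash -> block bytes (stored once)
    object_index = {}    # object fingerprint -> first object with it
    layout = {}          # object name -> canonical object it resolves to
    for name, data in objects.items():
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            block_store.setdefault(h, block)
            hashes.append(h)
        fingerprint = "".join(hashes)
        layout[name] = object_index.setdefault(fingerprint, name)
    return block_store, layout
```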
  • Patent number: 10922229
    Abstract: Database objects are retrieved from a database and parsed into normalized cached data objects. The database objects are stored in the normalized cached data objects in a cache store, and tenant data requests are serviced from the normalized cached data objects. The normalized cached data objects include references to shared objects in a shared object pool that can be shared across different rows of the normalized cached data objects and across different tenant cache systems.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: February 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Subrata Biswas
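The shared object pool behaves like an intern table; this toy version ignores tenancy and the database origin of the rows:

```python
class SharedObjectPool:
    """Toy intern pool: identical values across cached rows resolve to
    one shared object, referenced rather than copied per row."""

    def __init__(self):
        self._pool = {}

    def intern(self, value):
        # Return the first-stored object for this value.
        return self._pool.setdefault(value, value)

def normalize_row(row, pool):
    """Store a cached row as references into the shared pool."""
    return [pool.intern(v) for v in row]
```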
  • Patent number: 10922147
    Abstract: A storage system includes a plurality of storage devices, a data structure, and a storage controller that is configured to obtain a threshold value for a synchronization object associated with the data structure. The storage controller is further configured to activate a plurality of threads. Each thread is configured to determine a count value of the synchronization object corresponding to a number of entries in the data structure and determine whether the count value of the synchronization object exceeds the threshold value plus a predetermined number of entries. In response to determining that the count value of the synchronization object exceeds the threshold value plus the predetermined number of entries, the thread is configured to perform an action.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: February 16, 2021
    Assignee: EMC IP Holding Company LLC
    Inventor: Vladimir Shveidel
  • Patent number: 10922231
    Abstract: Provided is a predictive read ahead system for dynamically prefetching content from different storage devices. The dynamic prefetching may include receiving requests to read a first set of data of first content from a first storage device at a first rate, and requests to read a first set of data of second content from a second storage device at a different second rate. The dynamic prefetching may include determining different performance for the first storage device than the second storage device, prioritizing an allocation of cache based on a first difference between the first rate and the second rate, and a second difference based on the different performance between the storage devices, and prefetching a first amount of the first content data from the first storage device and a different second amount of the second content data from the second storage device based on the first and second differences.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: February 16, 2021
    Assignee: Open Drives LLC
    Inventors: Scot Gray, Sean Lee
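A toy allocation rule that weights each stream's read rate by its device's relative latency, so faster-draining streams on slower devices get more of the prefetch cache; the weighting formula is an assumption, not the patent's:

```python
def prefetch_shares(streams, cache_blocks):
    """Split a prefetch cache across streams in proportion to each
    stream's read rate weighted by its device's relative latency.
    streams maps name -> (read_rate, device_latency)."""
    weights = {name: rate * latency for name, (rate, latency) in streams.items()}
    total = sum(weights.values())
    return {name: round(cache_blocks * w / total) for name, w in weights.items()}
```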
  • Patent number: 10915461
    Abstract: Embodiments of the present invention are directed to a computer-implemented method for cache eviction. The method includes detecting a first data in a shared cache and a first cache in response to a request by a first processor. The first data is determined to have a mid-level cache eviction priority. A request is detected from a second processor for the same first data as requested by the first processor. However, in this instance, the second processor has indicated that the same first data has a low-level cache eviction priority. The first data is duplicated and loaded into a second cache; however, the data has a low-level cache eviction priority at the second cache.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: February 9, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Robert J. Sonnelitter, III, Matthias Klein, Craig Walters, Kevin Lopes, Michael A. Blake, Tim Bronson, Kenneth Klapproth, Vesselina Papazova, Hieu T Huynh
  • Patent number: 10909038
    Abstract: There are provided in the present disclosure a cache management method for a computing device, a cache and a storage medium, the method including: storing, according to a first request sent by a processing unit of the computing device, data corresponding to the first request in a first cache line of a cache set, and setting the age of the first cache line to a first initial age value according to a priority of the first request.
    Type: Grant
    Filed: December 30, 2018
    Date of Patent: February 2, 2021
    Assignee: Chengdu Haiguang Integrated Circuit Design Co. Ltd.
    Inventors: Chunhui Zhang, Jun Cao, Linli Jia, Bharath Iyer, Hao Huang
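Priority-dependent initial ages can be sketched with a small aged set; the age values, the oldest-first victim rule, and the aging step are illustrative assumptions:

```python
class AgedSet:
    """Toy cache set where each line carries an age and the victim is the
    oldest line. High-priority fills start younger, so they survive longer."""

    # Hypothetical mapping of request priority to initial age value.
    INITIAL_AGE = {"high": 0, "normal": 2, "low": 3}

    def __init__(self, ways):
        self.ways = ways
        self.lines = {}   # tag -> age

    def fill(self, tag, priority="normal"):
        if len(self.lines) >= self.ways:
            victim = max(self.lines, key=self.lines.get)  # evict oldest
            del self.lines[victim]
        for t in self.lines:
            self.lines[t] += 1            # everyone else grows older
        self.lines[tag] = self.INITIAL_AGE[priority]
```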
  • Patent number: 10909071
    Abstract: According to one set of embodiments, a computer system can receive a request or command to delete a snapshot from among a plurality of snapshots of a dataset, where the plurality of snapshots are stored in cloud/object storage. In response, the computer system can add the snapshot to a batch of pending snapshots to be deleted and can determine whether the size of the batch has reached a threshold. If the size of the batch has not reached the threshold, the computer system can return a response to an originator of the request or command indicating that the snapshot has been deleted, without actually deleting the snapshot from the cloud/object storage.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: February 2, 2021
    Assignee: VMWARE, INC.
    Inventors: Pooja Sarda, Satish Kumar Kashi Visvanathan
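The batched deletion can be sketched as acknowledge-now, flush-at-threshold; the set-based store below stands in for cloud/object storage, and the class shape is an assumption:

```python
class SnapshotDeleter:
    """Toy batched deleter: delete requests are acknowledged immediately
    but only flushed to object storage once the batch reaches a threshold."""

    def __init__(self, batch_threshold, store):
        self.batch_threshold = batch_threshold
        self.store = store          # simulated cloud/object storage (a set)
        self.pending = []

    def delete(self, snapshot_id):
        self.pending.append(snapshot_id)
        if len(self.pending) >= self.batch_threshold:
            for sid in self.pending:     # one bulk round-trip to storage
                self.store.discard(sid)
            self.pending.clear()
        return "deleted"                 # acknowledged either way
```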
  • Patent number: 10904353
    Abstract: A content serving data processing system is configured for trending topic cache eviction management. The system includes a computing system communicatively coupled to different sources of content objects over a computer communications network. The system also includes a cache storing different cached content objects retrieved from the different content sources. The system yet further includes a cache eviction module. The module includes program code enabled to manage cache eviction of the content objects in the cache by marking selected ones of the content objects as invalid in accordance with a specified cache eviction strategy, detect a trending topic amongst the retrieved content objects, and override the marking of one of the selected ones of the content objects as invalid, keeping that content object in the cache when it relates to the trending topic.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Al Chakra, Patrick S. O'Donnell, Kevin L. Ortega
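The trending-topic override can be sketched as a filter applied to the eviction strategy's invalid list; the cache and topic representations are illustrative:

```python
def evict_candidates(cache, invalid_keys, trending_topics):
    """Return the keys actually evicted: entries the eviction strategy
    marked invalid, unless their topic is currently trending."""
    evicted = []
    for key in invalid_keys:
        if cache[key]["topic"] in trending_topics:
            continue          # override: keep trending content cached
        evicted.append(key)
        del cache[key]
    return evicted
```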
  • Patent number: 10901908
    Abstract: The present disclosure relates to storing data in a computer system. The computer system comprising a main memory coupled to a processor and a cache hierarchy. The main memory comprises a predefined bit pattern replacing existing data of the main memory. Aspects include storing the predefined bit pattern into a reference storage of the computer system. At least one bit in a cache directory entry of a first cache line of the cache hierarchy can be set. Upon receiving a request to read the content of the first cache line, the request can be redirected to the predefined bit pattern in the reference storage based on the value of the set bit of the first cache line.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Wolfgang Gellerich, Peter Altevogt, Martin Bernhard Schmidt, Martin Schwidefsky
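The redirect bit can be modeled as set membership, with reads served from one shared copy of the predefined pattern in reference storage; the 8-byte line size and all-zeros pattern are assumptions:

```python
class PatternCache:
    """Toy sketch: a cache line whose directory entry has the redirect
    bit set answers reads from a single shared reference copy of the
    predefined bit pattern instead of from the line's own data."""

    PATTERN = b"\x00" * 8     # predefined pattern in reference storage

    def __init__(self):
        self.data = {}         # line -> bytes actually stored
        self.redirect = set()  # lines with the directory bit set

    def fill_pattern(self, line):
        self.redirect.add(line)        # no per-line copy is stored

    def write(self, line, value):
        self.redirect.discard(line)    # real data supersedes the pattern
        self.data[line] = value

    def read(self, line):
        if line in self.redirect:
            return self.PATTERN        # served from reference storage
        return self.data[line]
```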
  • Patent number: 10884751
    Abstract: Systems, apparatuses, and methods for virtualizing a micro-operation cache are disclosed. A processor includes at least a micro-operation cache, a conventional cache subsystem, a decode unit, and control logic. The decode unit decodes instructions into micro-operations which are then stored in the micro-operation cache. The micro-operation cache has limited capacity for storing micro-operations. When new micro-operations are decoded from pending instructions, existing micro-operations are evicted from the micro-operation cache to make room for the new micro-operations. Rather than being discarded, micro-operations evicted from the micro-operation cache are stored in the conventional cache subsystem. This prevents the original instruction from having to be decoded again on subsequent executions.
    Type: Grant
    Filed: July 13, 2018
    Date of Patent: January 5, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kalamatianos, Jagadish B. Kotra
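The spill-instead-of-discard path can be sketched with two dicts; the decode stand-in and the FIFO eviction order are assumptions, not the patent's policy:

```python
class UopCache:
    """Toy micro-op cache: when full, evicted micro-ops spill into a
    conventional cache instead of being discarded, so the instruction
    need not be decoded again on a later fetch."""

    def __init__(self, capacity, l2):
        self.capacity = capacity
        self.uops = {}        # instruction -> decoded micro-ops
        self.l2 = l2          # conventional cache subsystem (a dict here)
        self.decodes = 0      # how often the decoder actually ran

    def fetch(self, insn):
        if insn in self.uops:
            return self.uops[insn]
        if insn in self.l2:               # refill from L2, no re-decode
            ops = self.l2.pop(insn)
        else:
            ops = f"uops({insn})"         # stand-in for the decode unit
            self.decodes += 1
        if len(self.uops) >= self.capacity:
            victim, vops = next(iter(self.uops.items()))
            del self.uops[victim]
            self.l2[victim] = vops        # spill instead of discard
        self.uops[insn] = ops
        return ops
```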