Least Recently Used Patents (Class 711/136)
  • Patent number: 11853219
    Abstract: A storage controller includes a prefetch buffer configured to buffer data prefetched from a non-volatile memory during a prefetch operation, a determiner circuit configured to output one of the prefetched data and normal data read from the non-volatile memory, as read data, and a prefetch control circuit configured to enable the prefetch operation during a first time when a sequential read operation is performed on the non-volatile memory, disable the prefetch operation at a second point after the first time, and enable the prefetch operation or maintain the disable of the prefetch operation according to performance of the read data in a prefetch suspend period after the second time in which the prefetch operation is disabled.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: December 26, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Minwoo Kim, Daekyu Park
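The enable/disable/suspend sequence described in the abstract can be sketched as a tiny state machine. This is a minimal Python sketch; the decision rule and the performance threshold are illustrative assumptions, not the patent's actual criteria:

```python
def next_prefetch_state(state, sequential, suspend_over, read_perf, target_perf):
    """Decide whether prefetch is enabled for the next interval.

    state: 'on' (prefetching) or 'off' (suspended).
    """
    if state == 'on':
        # disable at the "second point": the sequential stream ended
        return 'on' if sequential else 'off'
    # during the suspend period, re-enable only if read performance
    # without prefetch has fallen below the target
    if suspend_over and read_perf < target_perf:
        return 'on'
    return 'off'
```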
  • Patent number: 11856031
    Abstract: A method for processing network communications, the method including receiving a network packet at a network device and performing at least one lookup for the packet in one or more first lookup tables in which the one or more first lookup tables are programmed to include at least one of an exact match or longest prefix match (LPM) table entry. The method includes obtaining a security source segment and a security destination segment based upon the result of the at least one lookup for the packet in the one or more first lookup tables. The method further includes performing a lookup in a second lookup table based upon the security source segment and security destination segment in which the second lookup table is programmed in a content addressable memory. Based upon the result of the lookup in the second lookup table, processing a forwarding decision for the packet according to the security source segment and security destination segment.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: December 26, 2023
    Assignee: Arista Networks, Inc.
    Inventor: Adam James Sweeney
  • Patent number: 11797435
    Abstract: A zone is loaded onto a first memory component of a storage system, wherein the zone comprises one or more regions of data blocks comprising a first plurality of logical block addresses (LBAs), and a snapshot of each of the one or more regions is stored on a second memory component of the storage system and is associated with a version identifier. A particular version identifier associated with a respective snapshot of a region is identified, and a set of journals stored on the second memory component are identified, wherein the set of journals comprise a second plurality of LBAs mapped to a second plurality of physical block addresses.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: October 24, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Daniel A. Boals, Byron D. Harris, Karl D. Schuh, Amy L. Wohlschlegel
  • Patent number: 11768779
    Abstract: Systems, apparatuses, and methods for cache management based on access type priority are disclosed. A system includes at least a processor and a cache. During a program execution phase, certain access types are more likely to cause demand hits in the cache than others. Demand hits are load and store hits to the cache. A run-time profiling mechanism is employed to find which access types are more likely to cause demand hits. Based on the profiling results, the cache lines that will likely be accessed in the future are retained based on their most recent access type. The goal is to increase demand hits and thereby improve system performance. An efficient cache replacement policy can potentially reduce redundant data movement, thereby improving system performance and reducing energy consumption.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: September 26, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Jieming Yin, Yasuko Eckert, Subhash Sethumurugan
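The replacement policy above retains lines by the profiled demand-hit rate of their most recent access type. A minimal sketch of victim selection under that idea (the `Line` structure and the rate table are illustrative; in the patent the rates would come from the run-time profiling mechanism):

```python
from collections import namedtuple

# one cache line: its tag and the type of its most recent access
Line = namedtuple('Line', 'tag last_type')

def choose_victim(lines, hit_rate_by_type):
    """Evict the line least likely to see a future demand hit, judged
    by the profiled demand-hit rate of its most recent access type."""
    return min(lines, key=lambda ln: hit_rate_by_type[ln.last_type])
```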
  • Patent number: 11762765
    Abstract: A zone is loaded onto a first memory component of a storage system, wherein the zone comprises one or more regions of data blocks comprising a first plurality of logical block addresses (LBAs), and a snapshot of each of the one or more regions is stored on a second memory component of the storage system and is associated with a version identifier. A particular version identifier associated with a respective snapshot of a region is identified, and a set of journals stored on the second memory component are identified, wherein the set of journals comprise a second plurality of LBAs mapped to a second plurality of physical block addresses.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: September 19, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Daniel A. Boals, Byron D. Harris, Karl D. Schuh, Amy L. Wohlschlegel
  • Patent number: 11714725
    Abstract: A device comprising a memory controller coupled to a non-volatile memory (NVM) device with a shadow tracker memory region. The controller provides low overhead and low recovery time for integrity-protected systems by recovering a secure metadata cache. The controller is configured to persistently track addresses of blocks in the secure metadata cache in the NVM device when a miss occurs, and to track the persistent addresses after the miss. The controller is configured to rebuild affected parts of the secure metadata cache associated with the persistent addresses in the NVM device. A system is provided which includes the memory controller interfaced with an NVM device with the shadow tracker memory region.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: August 1, 2023
    Assignee: UNIVERSITY OF CENTRAL FLORIDA RESEARCH FOUNDATION, INC.
    Inventors: Kazi Abu Zubair, Amro Awad
  • Patent number: 11663144
    Abstract: A method for improving cache hit ratios for selected storage elements within a storage system includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. The favored storage elements are retained in the cache longer than the non-favored storage elements. The method maintains a first LRU list containing entries associated with non-favored storage elements and designating an order in which the non-favored storage elements are evicted from the cache, and a second LRU list containing entries associated with favored storage elements and designating an order in which the favored storage elements are evicted from the cache. The method periodically scans the first LRU list for non-favored storage elements that have changed to favored storage elements, and the second LRU list for favored storage elements that have changed to non-favored storage elements. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: January 20, 2020
    Date of Patent: May 30, 2023
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Beth A. Peterson
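The two-list scheme above — separate LRU lists for favored and non-favored elements, with a periodic scan migrating entries whose status changed — can be sketched as follows. Names and structure are illustrative, not the patent's implementation:

```python
from collections import OrderedDict

class TwoListCache:
    """Cache with separate LRU lists for favored and non-favored
    storage elements; non-favored elements are evicted first, so
    favored elements stay in the cache longer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.favored = OrderedDict()      # oldest (LRU) entry first
        self.non_favored = OrderedDict()

    def access(self, key, value, is_favored):
        lst = self.favored if is_favored else self.non_favored
        lst.pop(key, None)
        lst[key] = value                  # (re)insert at the MRU end
        while len(self.favored) + len(self.non_favored) > self.capacity:
            victim = self.non_favored if self.non_favored else self.favored
            victim.popitem(last=False)    # evict from the LRU end

    def rescan(self, is_favored_now):
        """Periodic scan: migrate entries whose status has changed."""
        for src, dst, want in ((self.non_favored, self.favored, True),
                               (self.favored, self.non_favored, False)):
            for key in [k for k in src if bool(is_favored_now(k)) is want]:
                dst[key] = src.pop(key)
```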
  • Patent number: 11636089
    Abstract: A storage control system is configured to obtain first data associated with a logical data device and to store the first data in a first entry of a log-structured array. The storage control system is further configured to invalidate a second entry of the log-structured array based at least in part on the storage of the first data in the first entry. The second entry comprises second data that was associated with the logical data device prior to obtaining the first data. The storage control system is further configured to determine that a first indication in a first metadata indicates that the invalidated second entry corresponds to a transaction log and to defer reclamation of the second entry based at least in part on the determination that the first indication in the first metadata indicates that the invalidated second entry corresponds to the transaction log.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: April 25, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Dan Aharoni, Itay Keller, Sanjay Narahari, Ron Stern
  • Patent number: 11579888
    Abstract: Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute instructions; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In a representative embodiment, the processor core is further adapted to execute a non-cached load instruction to designate a general purpose register rather than a data cache for storage of data received from a memory circuit. The core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, and to generate one or more work descriptor data packets to another circuit for execution of corresponding execution threads.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: February 14, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 11513960
    Abstract: A data storage device includes a first memory device; a second memory device including a fetch region configured to store data evicted from the first memory device and a prefetch region divided into a plurality of sections; storage; and a controller configured to control the first memory device, the second memory device, and the storage. The controller may include a memory manager configured to select prefetch data having a set section size from the storage, load the selected prefetch data into the prefetch region and update the prefetch data based on a data read hit ratio of each of the plurality of sections.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: November 29, 2022
    Assignee: SK hynix Inc.
    Inventor: Da Eun Song
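Updating prefetched sections by per-section read hit ratio, as the abstract describes, might look like the sketch below; the threshold value and the keep/drop rule are assumptions for illustration:

```python
def refresh_sections(sections, hits, reads, min_hit_ratio=0.2):
    """Keep a prefetch-region section only while its data read hit
    ratio stays at or above a threshold; dropped sections free room
    for new prefetch data. Sections with no reads yet are kept."""
    return [s for s in sections
            if reads[s] == 0 or hits[s] / reads[s] >= min_hit_ratio]
```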
  • Patent number: 11487460
    Abstract: In some embodiments, a storage system comprises at least one processor coupled to memory. The processor is configured to obtain a write operation that comprises first data associated with a logical data device and to store the first data in a first entry of a log-structured array (LSA). The at least one processor is configured to invalidate a second entry based at least in part on the storage of the first data in the first entry. The second entry comprises second data associated with the logical data device that was stored in the second entry prior to obtaining the write operation. The at least one processor is configured to determine that a first indication in LSA metadata associated with the LSA indicates that the invalidated second entry comprises data that is awaiting replication and to defer reclamation of the second entry based at least in part on the determination.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: November 1, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Itay Keller, Dan Aharoni
  • Patent number: 11449230
    Abstract: An information handling system may have a long short term memory (LSTM) that receives Input/Output (I/O) parameters, and produces a prediction output by operation of a recursive neural network (RNN). An I/O optimizer provides the I/O parameters to the LSTM and receives the prediction output from the LSTM. The I/O optimizer may include a manager module configured to provide control signals to control gates for controlling application of the I/O parameters and the prediction output, and a collector module configured to collect the I/O parameters.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: September 20, 2022
    Assignee: Dell Products L.P.
    Inventors: Chandrashekar Nelogal, Arunava Das Gupta, Niladri Bhattacharya
  • Patent number: 11429538
    Abstract: Provided herein may be a memory system and a method of operating the same. The memory system may include a host configured to generate and output a host command and a host address and to receive and store host map data, a controller configured to store map data, generate an internal command in response to the host command, and map the host address to an internal address based on the map data, and a memory device configured to perform an operation in response to the internal command and the internal address, wherein the controller is configured to load, when the map data corresponding to the host address is not stored in the controller, new map data into a map data storage area storing map data that is identical to the host map data.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: August 30, 2022
    Assignee: SK hynix Inc.
    Inventor: Eu Joon Byun
  • Patent number: 11392499
    Abstract: Various implementations described herein relate to systems and methods for dynamically managing buffers of a storage device, including receiving, by a controller of the storage device from a host, information indicative of a frequency by which data stored in the storage device is accessed, and in response to receiving the information, determining, by the controller, the order by which read buffers of the storage device are allocated for a next read command. The NAND read count of virtual Word-Lines (WLs) is also used to cache more frequently accessed WLs, thus proactively reducing read disturb and consequently increasing NAND reliability and NAND life.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: July 19, 2022
    Assignee: KIOXIA CORPORATION
    Inventors: Saswati Das, Manish Kadam, Neil Buxton
  • Patent number: 11392509
    Abstract: Example storage control systems and methods are described. In one implementation, a storage drive controller includes a non-volatile memory subsystem that processes multiple commands. The storage drive controller also includes a controller memory buffer (CMB) memory management unit coupled to the non-volatile memory subsystem. The CMB memory management unit manages CMB-related tasks including caching and storage of data associated with the storage drive controller.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: July 19, 2022
    Assignee: PETAIO INC.
    Inventors: Changyou Xu, Fan Yang, Peirong Ji, Lingqi Zeng
  • Patent number: 11356491
    Abstract: A content delivery server may provide content to a requesting client device using a streamlined HTTP enhancement proxy delivery technique. For example, an HTTP proxy server may receive a request for video content or a fragment of video content from a client device. The request may be associated with a timeout scheduled to occur if no content has been received after a specified amount of time. The server may then transmit a request for the content to a remote server, such as an upstream cache server in the proxy server's CDN. When the proxy server receives a portion of the requested content from the remote server, the proxy server begins transmitting the portion to the client device before the requested content has been completely received and buffered. The client device may then begin receiving data from the proxy server before timeout has occurred.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: June 7, 2022
    Assignee: Comcast Cable Communications, LLC
    Inventor: Joseph Yongxiang Chen
  • Patent number: 11341055
    Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for storage management. According to an example implementation of the present disclosure, a method for storage management includes: determining a state of cached data stored in an initial cache space of a storage system including a plurality of cache disks, the state indicating that a size of the cached data does not match a size of the initial cache space; determining, based on the state, a target cache space of the storage system; and storing at least a part of the cached data into the target cache space to change the size of the initial cache space. Therefore, the management performance can be improved, and the storage costs can be reduced.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: May 24, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Cheng Wang, Bing Liu
  • Patent number: 11243716
    Abstract: A memory system which includes a memory pool having a plurality of memory units and a controller suitable for controlling the plurality of memory units, wherein the controller includes a translation unit suitable for translating a system address into a local address within the memory pool, a threshold decision unit suitable for dynamically changing a threshold based on a number of accesses to each local address for data within the memory pool, a data attribute determination unit suitable for determining an attribute of data associated with the translated local address based on the threshold and the number of accesses to the translated local address, and a data input/output unit suitable for controlling a memory unit associated with a new local address among the plurality of memory units based on the attribute of the data.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: February 8, 2022
    Assignee: SK hynix Inc.
    Inventor: Chang-Min Kwak
  • Patent number: 11232025
    Abstract: Storage management is enabled. An example method comprises: receiving an update request for a target object stored in a first storage block to write the updated target object into a second storage block different from the first storage block; determining a candidate object associated with the target object using a search tree, the search tree indicating a hierarchical relation among a plurality of objects, wherein a first node corresponding to the target object and a second node corresponding to the candidate object share a same index node in the search tree; determining whether the candidate object was updated during a past predetermined time period; and in response to the candidate object not being updated during the past predetermined time period, moving the candidate object from a third storage block into a fourth storage block different from the third storage block.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 25, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Lu Lei, Young Yangchun Wu
  • Patent number: 11221965
    Abstract: In a cache memory used for communication between a host and a memory, the cache memory may include a plurality of cache sets, each comprising: a valid bit; N dirty bits; a tag; and N data sets respectively corresponding to the N dirty bits and each including data of a data chunk size substantially identical to a data chunk size of the host, wherein a data chunk size of the memory is N times as large as the data chunk size of the host, where N is an integer greater than or equal to 2.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: January 11, 2022
    Assignee: SK hynix Inc.
    Inventors: Seung-Gyu Jeong, Dong-Gun Kim, Jung-Hyun Kwon, Young-Suk Moon
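The cache-set layout above (one valid bit and tag per set, but N dirty bits, one per host-sized data slot) can be sketched as a data structure. This is an illustrative model, not hardware:

```python
from dataclasses import dataclass, field

@dataclass
class CacheSet:
    """One cache set where the memory-side data chunk is n times the
    host chunk size: a single valid bit and tag, but one dirty bit per
    host-sized data slot, so only modified slots need write-back."""
    n: int
    valid: bool = False
    tag: int = 0
    dirty: list = field(default_factory=list)
    data: list = field(default_factory=list)

    def __post_init__(self):
        self.dirty = [False] * self.n
        self.data = [None] * self.n

    def write(self, slot, value):
        self.valid = True
        self.data[slot] = value
        self.dirty[slot] = True

    def writeback_slots(self):
        # only the dirty host-chunk slots go back to memory
        return [i for i, d in enumerate(self.dirty) if d]
```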
  • Patent number: 11144639
    Abstract: Provided are a computer program product, system, and method for determining whether to destage write data in cache to storage based on whether the write data has malicious data. Write data for a storage is cached in a cache. A determination is made as to whether the write data in the cache comprises random data according to a randomness criteria. The write data in the cache is destaged to the storage in response to determining that the write data does not comprise random data according to the randomness criteria. The write data is processed as malicious data after determining that the write data comprises random data according to the randomness criteria.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Matthew G. Borlick, Lokesh M. Gupta, Carol S. Mellgren, John G. Thompson
  • Patent number: 11144475
    Abstract: A computer program product, system, and method for managing adding of accessed tracks in cache to a most recently used end of a cache list. A cache list for the cache has a least recently used (LRU) end and a most recently used (MRU) end. Tracks in the cache are indicated in the cache list. A track in the cache indicated on the cache list is accessed. A determination is made as to whether a track cache residency time since the accessed track was last accessed while in the cache list is within a region of lowest track cache residency times. A flag is set for the accessed track to indicate the track at the MRU end in response to determining that the track cache residency time of the accessed track is within the region of lowest track cache residency times. After the flag is set, the accessed track remains at its current position in the cache list.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
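The deferred-MRU mechanism above amounts to a second-chance scheme: a re-accessed track is flagged rather than moved, and the flag is honored when demotion reaches it at the LRU end. A minimal Python sketch; for brevity it flags every re-accessed track, whereas the patent sets the flag only when the track's cache residency time falls in the lowest region:

```python
from collections import deque

class DeferredMRUCache:
    """Instead of moving a re-accessed track to the MRU end right away,
    set a flag; when demotion reaches the track at the LRU end, a
    flagged track is cycled back to the MRU end. This avoids list
    manipulation (and its locking) on every cache hit."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.order = deque()    # left = LRU end, right = MRU end
        self.flagged = set()
        self.resident = set()

    def access(self, track):
        if track in self.resident:
            self.flagged.add(track)       # defer the MRU move
            return
        if len(self.order) >= self.capacity:
            self._demote()
        self.order.append(track)
        self.resident.add(track)

    def _demote(self):
        while True:
            track = self.order.popleft()
            if track in self.flagged:
                self.flagged.discard(track)
                self.order.append(track)  # recycled to the MRU end
            else:
                self.resident.discard(track)
                return track
```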
  • Patent number: 11138118
    Abstract: The sizes of cache partitions, in a partitioned cache, are dynamically adjusted by determining, for each request, how many cache misses will occur in connection with implementing the request against the cache partition. The cache partition associated with the current request is increased in size by the number of cache misses and one or more other cache partitions is decreased in size causing cache evictions to occur from the other cache partitions rather than from the current cache partition. The other cache partitions, that are to be decreased in size, may be determined by ranking the cache partitions according to frequency of use and selecting the least frequently used cache partition to be reduced in size.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: October 5, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Hugo de Oliveira Barbalho, Jonas Furtado Dias, Vinícius Michel Gottin
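The partition-resizing rule above (grow the current partition by its miss count, shrink the least-frequently-used others) can be sketched as a pure function. Names and the single-pass shrink order are illustrative assumptions:

```python
def rebalance(sizes, use_freq, current, misses):
    """Grow the partition that just took `misses` misses and shrink the
    least-frequently-used other partitions by the same total, so
    evictions happen there rather than in the current partition."""
    sizes = dict(sizes)
    sizes[current] += misses
    remaining = misses
    for victim in sorted((p for p in sizes if p != current),
                         key=lambda p: use_freq[p]):
        take = min(remaining, sizes[victim])
        sizes[victim] -= take
        remaining -= take
        if remaining == 0:
            break
    return sizes
```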
  • Patent number: 11119921
    Abstract: State machine generation for a multi-buffer electronic system can include receiving, using a processor, a user input specifying a reader policy and a number of a plurality of buffers used by a reader and a writer of the multi-buffer electronic system. A state machine can be generated as a data structure. The state machine has a plurality of states determined based on the number of the plurality of buffers and the reader policy. The state machine allocates different buffers of the plurality of buffers to the reader in temporally accurate order over time. Each state can specify an allocation from the plurality of buffers to the reader and the writer. A state machine description including one or more program code components can be generated, where the one or more program code components may be used in an implementation of the reader and an implementation of the writer.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: September 14, 2021
    Assignee: Xilinx, Inc.
    Inventor: Uday M. Hegde
  • Patent number: 11119679
    Abstract: Data blocks of a memory sub-system that have been accessed by a host system can be determined. An access pattern associated with the data blocks by the host system can be determined. A spatial characteristic for each respective pair of the data blocks of the memory sub-system can be received. A data graph can be generated with nodes that are based on the access pattern associated with the data blocks of the memory sub-system and edge values between the nodes that are based on the spatial characteristic for each respective pair of the data blocks of the memory sub-system.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: September 14, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Anirban Ray, Samir Mittal, Gurpreet Anand
  • Patent number: 11070650
    Abstract: A method, computer program product, and a computing system are provided for de-duplicating remote procedure calls at a client. In an implementation, the method may include generating a plurality of local pending remote procedure calls. The method may also include identifying a set of duplicate remote procedure calls among the plurality of remote procedure calls. The method may also include associating each remote procedure call within the set of duplicate remote procedure calls with one another. The method may also include executing a remote procedure call of the set of duplicate remote procedure calls. The method may further include providing a response for the remote procedure call of the set of duplicate remote procedure calls with the other remote procedure calls of the set of duplicate remote procedure calls.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: July 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: John T. Kohl, Shailaja S. Golikeri
  • Patent number: 11049570
    Abstract: A method for dynamically altering a writes-per-day classification of multiple storage drives is disclosed. In one embodiment, such a method monitors, within a storage environment, an amount of overprovisioning utilized by multiple storage drives. Each storage drive has a writes-per-day classification associated therewith. Based on the amount of overprovisioning, the method periodically modifies the writes-per-day classification of the storage drives. The method then reorganizes the storage drives within various storage groups (e.g., RAID arrays, storage tiers, workloads, etc.) based on their writes-per-day classification. For example, the method may place, as much as possible, storage drives of the same writes-per-day classification within the same storage groups. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: June 29, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Karl A. Nielsen, Micah Robison
  • Patent number: 11048667
    Abstract: A method for improving asynchronous data replication between a primary storage system and a secondary storage system is disclosed. In one embodiment, such a method includes monitoring, in a cache of the primary storage system, unmirrored data elements needing to be mirrored, but that have not yet been mirrored, from the primary storage system to the secondary storage system. The method maintains an LRU list designating an order in which data elements are demoted from the cache. The method determines whether a data element at an LRU end of the LRU list is an unmirrored data element. In the event the data element at the LRU end of the LRU list is an unmirrored data element, the method moves the data element to an MRU end of the LRU list. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: June 29, 2021
    Assignee: International Business Machines Corporation
    Inventors: Gail Spear, Lokesh M. Gupta, Kevin J. Ash, David B. Schreiber, Kyler A. Anderson
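The demotion rule above, recycling unmirrored data elements to the MRU end so they stay cached until replicated, can be sketched in a few lines (names are illustrative):

```python
from collections import deque

def demote_candidate(lru, unmirrored):
    """Pick a track to demote from the LRU end of the list; an
    unmirrored track is moved to the MRU end instead, so it stays
    cached until it has been replicated to the secondary system."""
    for _ in range(len(lru)):
        track = lru.popleft()
        if track in unmirrored:
            lru.append(track)     # keep until mirrored
        else:
            return track
    return None                   # everything still awaits replication
```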
  • Patent number: 11030089
    Abstract: A portion of a logical block address to physical block address (“L2P”) translation map may be identified. A last snapshot of the portion of the L2P translation map may be identified. One or more write operations may be determined, where the write operations are associated with logical block addresses of the portion of the L2P translation map. The write operations may have been performed after the last snapshot of the portion of the L2P translation map was stored. An address on the portion of the L2P translation map may be updated by a processing device based on the determined one or more write operations and the last snapshot of the portion of the L2P translation map.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 8, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Daniel A. Boals, Byron D. Harris, Karl D. Schuh, Amy L. Wohlschlegel
  • Patent number: 10990541
    Abstract: A controller controls an operation of a semiconductor memory device. The controller includes a cache buffer, a request analyzer, and a cache controller. The cache buffer stores multiple cache data. The request analyzer generates request information including information on a size of read data to be read. The cache controller determines an eviction policy of the multiple cache data, based on the size of the read data in the request information.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: April 27, 2021
    Assignee: SK hynix Inc.
    Inventor: Eu Joon Byun
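Choosing an eviction policy from the size of the read in the request information might look like the sketch below. Both the size threshold and the LRU/MRU split are illustrative assumptions, not the patent's actual policy set:

```python
def pick_eviction_policy(read_size, large_read=64 * 1024):
    """Choose the cache-eviction policy from the size of the incoming
    read: a large read is likely a sequential scan that would flush
    the cache under plain LRU, so evict the data the scan itself just
    brought in (an MRU-style policy); small reads keep plain LRU."""
    return 'MRU' if read_size >= large_read else 'LRU'
```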
  • Patent number: 10949109
    Abstract: An expansion cartridge (200) and a method for deduplicating the data chunks stored at a client device (100) using the expansion cartridge (200), (300) are claimed herein. As per the invention, the expansion cartridge (200) is attachable, externally, to client devices (100) carrying the electronic data files to be transferred, wherein the expansion cartridge (200) is characterized by a file management component (220), a chunk management component (240), a storage component (260), and a mirroring component (280), and wherein, the expansion cartridge (200) on being attached with the client devices (100) interfaces with a client side data historian (125) and a client side processor (150) in the client device (100) using interfacing options, including without limitation, Small Computer System Interfaces (SCSI), Fibre Channel (FC) Interface, Ethernet Interface, Advanced Technology Attachment (ATA) Interface or a combination thereof.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: March 16, 2021
    Assignee: ARC Document Solutions, LLC
    Inventors: Rahul Roy, Srinivasa Rao Mukkamala, Himadri Majumder, Dipali Bhattacharya
  • Patent number: 10901916
    Abstract: Provided are a computer program product, system, and method for managing adding of accessed tracks to a cache list based on accesses to different regions of the cache list. A cache list has a least recently used (LRU) end and a most recently used (MRU) end. A determination is made of a high access region of tracks from the MRU end of the cache list based on a number of accesses to the tracks in the high access region. A flag is set for an accessed track, to indicate the accessed track at the MRU end upon processing the accessed track at the LRU end, in response to determining that the accessed track is in the high access region. After the flag is set, the accessed track remains at its current position in the cache list.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
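The region split above — deferring the MRU move for tracks hit inside the high access region near the MRU end, while moving colder tracks immediately — can be sketched as follows (list representation and names are illustrative):

```python
def on_access(cache_list, flags, track, region_size):
    """cache_list index 0 is the LRU end; the high access region is
    the last `region_size` positions next to the MRU end. A track hit
    inside that region only gets a flag (the MRU move is deferred to
    demotion time); a hit outside it is moved to the MRU end at once."""
    pos = cache_list.index(track)
    if pos >= len(cache_list) - region_size:
        flags.add(track)              # hot region: defer the move
    else:
        cache_list.remove(track)
        cache_list.append(track)      # cold region: move immediately
```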
  • Patent number: 10891167
    Abstract: A method of protecting software in a computer system includes defining a memory fractionation configuration for an application software program in the computer system, fractionating at least one page of the application software program into fractions according to the memory fractionation configuration, and running the application in such a manner that, at any particular point in time when the application is running, at least a first one of the fractions is stored in a manner that is not accessible from a user space or a kernel space of the computer system.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: January 12, 2021
    Assignee: Siege Technologies, LLC
    Inventor: Joseph James Sharkey
  • Patent number: 10872038
    Abstract: A system comprises a memory, a plurality of memory banks, and an organizer. The memory is configured to store elements of a matrix, wherein the elements are distributed into overlapping subgroups and each shares at least one element of the matrix with another overlapping subgroup. The plurality of memory banks is configured to store the overlapping subgroups, wherein the subgroups are distributed among the memory banks using a circular shifted pattern. The organizer is configured to read specific ones of the overlapping subgroups in the plurality of memory banks in a specified pattern associated with transposing the matrix.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: December 22, 2020
    Assignee: Facebook, Inc.
    Inventors: Krishnakumar Narayanan Nair, Ehsan Khish Ardestani Zadeh, Olivia Wu, Yuchen Hao
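A circular-shifted bank placement of the kind the abstract describes is commonly realized by shifting each row's bank assignment by its row index, so that both a row and a column of the matrix touch every bank exactly once. This sketch shows that standard pattern; it is an assumption about the layout, not the patent's exact scheme.

```python
def bank_of(row, col, num_banks):
    # Circular-shifted placement: row r is rotated by r positions,
    # so rows AND columns each map onto distinct banks.
    return (row + col) % num_banks

N = 4
# Every row touches all N banks exactly once...
row_banks = {bank_of(0, c, N) for c in range(N)}
# ...and so does every column, so reading a column while transposing
# causes no bank conflicts.
col_banks = {bank_of(r, 0, N) for r in range(N)}
```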
  • Patent number: 10838870
    Abstract: The described technology is generally directed towards caching and aggregated write operations based on predicted patterns of data transfer operations. According to an embodiment, a system can comprise a memory that can store computer executable components, and a processor that can execute the computer executable components stored in the memory. The components can comprise a pattern identifying component to identify a first pattern of data transfer operations performed on a data store, resulting in an identified first pattern, based on monitored data transfer operations. The components can further comprise a pattern predicting component to predict a second pattern of future data transfer operations performed on the data store, resulting in a predicted second pattern, based on the identified first pattern. The components can further comprise a host adapter to generate a data transfer operation to be performed on the data store based on the predicting the second pattern.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: November 17, 2020
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Malak Alshawabkeh, Steven John Ivester, Ramesh Doddaiah, Kaustubh S. Sahasrabudhe
  • Patent number: 10802895
    Abstract: A method and an apparatus for determining a usage level of a memory device to notify a running application to perform memory reduction operations selected based on the memory usage level are described. An application calls APIs (Application Programming Interfaces) integrated with the application code in the system to perform memory reduction operations. A memory usage level is determined according to a memory usage status received from the kernel of a system. A running application is associated with application priorities ranking multiple running applications statically or dynamically. Selecting memory reduction operations and notifying a running application are based on application priorities. Alternatively, a running application may determine a mode of operation to directly reduce memory usage in response to a notification for reducing memory usage without using API calls to other software.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: October 13, 2020
    Assignee: Apple Inc.
    Inventors: Matthew G. Watson, James Michael Magee
  • Patent number: 10776275
    Abstract: A method comprises a cache manager receiving reference attributes associated with network data and selecting a replacement data location of a cache to store cache-line data associated with the network data. The replacement data location is selected based on the reference attributes and an order of reference states stored in a replacement stack of the cache. The stored reference states are associated with respective cached-data stored in the cache and based on reference attributes associated with respective cached-data. The reference states are stored in the replacement stack based on a set of the reference attributes and the stored reference states. In response to receiving reference attributes, the cache manager can modify a stored reference state, determine a second order of the state locations, and store a reference state in the replacement stack based on the second order. A system can comprise a network computing element having a cache, a cache manager, and a replacement stack.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 15, 2020
    Assignee: International Business Machines Corporation
    Inventors: Brian W. Thompto, Bernard C. Drerup, Mohit S. Karve
  • Patent number: 10761988
    Abstract: Aspects of the present disclosure relate to an apparatus comprising a data array having locality-dependent latency characteristics such that an access to an open unit of the data array has a lower latency than an access to a closed unit of the data array. Set associative cache indexing circuitry determines, in response to a request for data associated with a target address, a cache set index. Mapping circuitry identifies, in response to the index, a set of data array locations corresponding to the index, according to a mapping in which a given unit of the data array comprises locations corresponding to a plurality of consecutive indices, and at least two locations of the set of locations corresponding to the same index are in different units of the data array. Cache access circuitry accesses said data from one of the set of data array locations.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: September 1, 2020
    Assignee: Arm Limited
    Inventors: Radhika Sanjeev Jagtap, Nikos Nikoleris, Andreas Lars Sandberg, Stephan Diestelhorst
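One way to obtain the mapping properties the abstract lists (consecutive indices share a unit, while ways of the same set land in different units) is sketched below. The address arithmetic is a hypothetical illustration of those two properties, not the patented mapping.

```python
def data_array_location(set_index, way, indices_per_unit, num_ways):
    # Consecutive set indices stay in the same unit (so sequential
    # accesses hit an open, lower-latency unit), while different ways
    # of one set are placed in different units.
    unit = (set_index // indices_per_unit) * num_ways + way
    offset = set_index % indices_per_unit
    return unit, offset
```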
  • Patent number: 10691607
    Abstract: Disclosed are a method and a device for increasing the performance of processes through cache splitting in a computing device using a plurality of cores. According to the present invention, a cache splitting method for performing a cache splitting in a computing device comprises the steps of: identifying, among a plurality of processes being executed, a process generating a cache flooding; and controlling the process generating the cache flooding such that the process uses a cache of a limited size.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: June 23, 2020
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Jinkyu Koo, Hyeonsang Eom, Myung Sun Kim, Hanul Sung
  • Patent number: 10684946
    Abstract: A method may include: partitioning data on an on-chip and/or an off-chip storage medium into different data blocks according to a pre-determined data partitioning principle, wherein data with a reuse distance less than a pre-determined distance threshold value is partitioned into the same data block; and a data indexing step for successively loading different data blocks to at least one on-chip processing unit according to a pre-determined ordinal relation of a replacement policy, wherein repeated data in a loaded data block is subjected to on-chip repetitive addressing. Data with a reuse distance less than a pre-determined distance threshold value is partitioned into the same data block, so that data partitioned into the same data block can be loaded onto the chip once and then reused as many times as possible, making access more efficient.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: June 16, 2020
    Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCE
    Inventors: Qi Guo, Tianshi Chen, Yunji Chen
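Reuse distance, the quantity this partitioning keys on, is the number of distinct other addresses touched between two consecutive accesses to the same address. A small sketch of computing it from an access trace and splitting data by a threshold (the two-way hot/cold split is an illustrative simplification of the block partitioning):

```python
def reuse_distances(trace):
    """Per access: number of distinct other addresses touched since the
    previous access to the same address (inf for a first access)."""
    last_seen, dists = {}, []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            dists.append(len(set(trace[last_seen[addr] + 1:i])))
        else:
            dists.append(float("inf"))
        last_seen[addr] = i
    return dists

def partition(trace, threshold):
    # Addresses ever reused within the threshold go into the same
    # (on-chip) block; the rest stay in another block.
    dists = reuse_distances(trace)
    hot = {a for a, d in zip(trace, dists) if d < threshold}
    return hot, set(trace) - hot
```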
  • Patent number: 10664393
    Abstract: A memory includes a plurality of pages used as a cache area. A processor allocates a first storage resource to a first link indicating a first page group of the plurality of pages, and a second storage resource to a second link indicating a second page group of the plurality of pages. The processor uses the first and the second links, and processes access requests for accessing to the first and the second storage resources, in parallel.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: May 26, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Takuro Kumabe, Motohiro Sakai, Keima Abe
  • Patent number: 10656945
    Abstract: Executing a Next Instruction Access Intent instruction by a computer. The processor obtains an access intent instruction indicating an access intent. The access intent is associated with an operand of a next sequential instruction. The access intent indicates usage of the operand by one or more instructions subsequent to the next sequential instruction. The computer executes the access intent instruction. The computer obtains the next sequential instruction. The computer executes the next sequential instruction, whose execution comprises, based on the access intent, adjusting one or more cache behaviors for the operand of the next sequential instruction.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Jacobi, Chung-Lung Kevin Chum, Timothy J. Slegel, Gustav E. Sittmann, III, Cynthia Sittmann
  • Patent number: 10649901
    Abstract: A set-associative cache memory includes a plurality of ways and a plurality of congruence classes. Each of the plurality of congruence classes includes a plurality of members each belonging to a respective one of the plurality of ways. In the cache memory, a data structure records a history of an immediately previous N ways from which cache lines have been evicted. In response to receipt of a memory access request specifying a target address, a selected congruence class among a plurality of congruence classes is selected based on the target address. At least one member of the selected congruence class is removed as a candidate for selection for victimization based on the history recorded in the data structure, and a member from among the remaining members of the selected congruence class is selected. The cache memory then evicts the victim cache line cached in the selected member of the selected congruence class.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bernard Drerup, Guy L. Guthrie, Jeffrey Stuecheli, Phillip Williams
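The eviction-history filter in this abstract is easy to sketch: keep the last N evicted ways in a bounded queue and exclude them from victim candidacy. The random choice among remaining members stands in for whatever selection policy the cache actually uses.

```python
from collections import deque
import random

class HistoryAwareVictimPicker:
    """Sketch: never victimize one of the immediately previous N ways
    evicted from. Assumes history_len < num_ways so candidates remain."""

    def __init__(self, num_ways, history_len, seed=0):
        self.num_ways = num_ways
        self.history = deque(maxlen=history_len)  # last-N evicted ways
        self.rng = random.Random(seed)

    def pick_victim(self):
        candidates = [w for w in range(self.num_ways)
                      if w not in self.history]
        victim = self.rng.choice(candidates)
        self.history.append(victim)               # oldest entry ages out
        return victim
```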
  • Patent number: 10630530
    Abstract: Embodiments of the present application provide a cache method, which includes: receiving, from the cache core server, information about a Transmission Control Protocol (TCP) flow; determining, according to the information, whether the cache edge server stores content corresponding to the information; sending a migrate-out request to the cache core server based on that the cache edge server stores the content corresponding to the information; receiving a migrate-out response from the cache core server upon the sending of the migrate-out request; performing a TCP connection to user equipment according to the migrate-out response; and reading content corresponding to the connection from storage of the cache edge server according to a byte quantity, sent by the cache core server, of the content, and sending the content to the user equipment.
    Type: Grant
    Filed: June 25, 2017
    Date of Patent: April 21, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Zanfeng Yang, Tao Song, Xinyu Ge, Pinhua Zhao, Jianzhong Yu, Bo Zhou, Wentao Wang
  • Patent number: 10621104
    Abstract: Examples herein involve a variable cache. An example variable cache controller obtains cache lines corresponding to accesses of a non-volatile memory of a system, monitors access history of the non-volatile memory, determines a number of distinct objects accessed in the access history during a time period from the object information, and sets a size of a variable cache of the system based on the number of distinct objects accessed in the access history during the time period.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: April 14, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Pengcheng Li, Dhruva Chakrabarti
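The sizing rule in this abstract amounts to measuring the working set over a recent window and scaling the cache to it. A minimal sketch, where the per-object line count and cap are assumed parameters:

```python
def set_cache_size(access_history, window, per_object_lines, max_lines):
    """Size the variable cache to roughly cover the working set:
    count distinct objects touched in the most recent accesses."""
    distinct = len(set(access_history[-window:]))
    return min(distinct * per_object_lines, max_lines)
```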
  • Patent number: 10621083
    Abstract: A storage system selects from a plurality of physical areas constituting a physical address space as copy source physical areas, one or more non-additionally recordable physical areas each including a fragmented free area, and also selects a recordable physical area as a copy destination physical area. The storage system then writes one or more pieces of live data from the selected one or more copy source physical areas to the free area of the selected copy destination physical area on a per-strip or per-stripe basis, sequentially from the beginning of the free area. If the size of the write target data is such that it is not possible to write the write target data to the free area on a per-strip or per-stripe basis, then the storage system pads the write target data, and writes the padded write target data to the free area on a per-strip or per-stripe basis.
    Type: Grant
    Filed: May 12, 2015
    Date of Patent: April 14, 2020
    Assignee: Hitachi, Ltd.
    Inventors: Hiroshi Nakagoe, Akira Yamamoto, Yoshihiro Yoshii
  • Patent number: 10599661
    Abstract: A method includes receiving a first signal and updating a bitmap index responsive to the first signal. The bitmap index includes a plurality of bit strings, where a value stored in a particular location in each of the bit strings indicates whether a corresponding signal associated with a signal source has been received. Updating the bitmap index responsive to the first signal includes updating a first bit of the bitmap index and updating first metadata values stored in the bitmap index, wherein the first metadata values comprise a plurality of sort index values indicating relative ranks of the first bit string relative to other bit strings. The method also includes outputting query results based on a query, wherein the query results identify one or more signals associated with one or more bit strings of the plurality of bit strings and one or more signal sources of a plurality of signal sources, and wherein the query results are sorted according to one of the first metadata values.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: March 24, 2020
    Assignee: Molecula Corp.
    Inventors: Travis Turner, Todd Wesley Gruben, Ben Johnson, Cody Stephen Soyland, Higinio O. Maycotte
  • Patent number: 10547876
    Abstract: A video cache rule generation system for generating cache rules for caching and playing back a video includes an obtaining module, a determining module, an extracting module, a comparing module, and a generating module. The obtaining module obtains URL addresses. The determining module determines whether each URL address belongs to a video URL address according to a media tag library. The extracting module divides a first URL address into chunks and extracts first key chunks from the chunks after division. The comparing module compares each first key chunk with a key chunk subclass, and marks a second key chunk where the second key chunk is found to be different from the key chunk subclass. The generating module generates a cache rule through generating a list of marked key parameter chunks. A video cache rule generation method is also provided.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: January 28, 2020
    Assignee: NANNING FUGUI PRECISION INDUSTRIAL CO., LTD.
    Inventor: Chi-Feng Lee
  • Patent number: 10489148
    Abstract: According to an exemplary embodiment of the present disclosure, disclosed is a method for seamless application version management in a system including a plurality of application servers.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: November 26, 2019
    Assignee: TMAXSOFT CO., LTD.
    Inventors: Junsoo Jeong, Yujeong Ha, Chanpyo Hong
  • Patent number: 10440142
    Abstract: Among other things, this document describes systems, devices, and methods for improving cache efficiency by automatically discovering and updating time to live (TTL) settings for cached content. TTL values define how long content may be served from a cache before the cache should return to origin to verify the freshness of the content. TTL values may be set by an origin server, using an appropriate HTTP header for example, or by manual configuration action, or otherwise. A cache may adjust this TTL value—or generate a TTL value if none is provided, based at least in part on cache performance characteristics and targets, along with an analysis of the history of purge events.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: October 8, 2019
    Assignee: Akamai Technologies, Inc.
    Inventors: Alexandre Menai, Charles Patrick Dublin
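The TTL-adjustment idea here (shrink the TTL when purges typically arrive before it expires, grow it otherwise) can be sketched as a simple heuristic. The median comparison, doubling step, and bounds are illustrative assumptions, not Akamai's actual policy.

```python
def adjust_ttl(current_ttl, purge_intervals, min_ttl=30, max_ttl=86400):
    """Adjust a cache TTL (seconds) from the history of purge events.
    purge_intervals: seconds between publish and purge for past objects."""
    if not purge_intervals:
        # No purges observed: content looks stable, so serve from cache
        # longer before revalidating at origin.
        return min(current_ttl * 2, max_ttl)
    typical = sorted(purge_intervals)[len(purge_intervals) // 2]  # median
    if typical < current_ttl:
        # Content is usually purged before the TTL expires: the cache
        # risks serving stale data, so shrink toward the purge interval.
        return max(typical, min_ttl)
    return min(current_ttl * 2, max_ttl)
```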