Least Recently Used Patents (Class 711/136)
-
Patent number: 12159221
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a memory-based prediction system configured to receive an input observation characterizing a state of an environment interacted with by an agent and to process the input observation and data read from a memory to update data stored in the memory and to generate a latent representation of the state of the environment. The method comprises: for each of a plurality of time steps: processing an observation for the time step and data read from the memory to: (i) update the data stored in the memory, and (ii) generate a latent representation of the current state of the environment as of the time step; and generating a predicted return that will be received by the agent as a result of interactions with the environment after the observation for the time step is received.
Type: Grant
Filed: March 11, 2019
Date of Patent: December 3, 2024
Assignee: DeepMind Technologies Limited
Inventors: Gregory Duncan Wayne, Chia-Chun Hung, David Antony Amos, Mehdi Mirza Mohammadi, Arun Ahuja, Timothy Paul Lillicrap
-
Patent number: 12130739
Abstract: Systems, methods, and apparatuses relating to circuitry to implement dynamic bypassing of last level cache are described. In one embodiment, a hardware processor includes a cache to store a plurality of cache lines of data, a processing element to generate a memory request and mark the memory request with a reuse hint value, and a cache controller circuit to mark a corresponding cache line in the cache as more recently used when the memory request is a read request that is a hit in the cache and the reuse hint value is a first value, and mark the corresponding cache line in the cache as less recently used when the memory request is the read request that is the hit in the cache and the reuse hint value is a second, different value.
Type: Grant
Filed: March 27, 2020
Date of Patent: October 29, 2024
Assignee: Intel Corporation
Inventors: Ayan Mandal, Neetu Jindal, Leon Polishuk, Yossi Grotas, Aravindh Anantaraman
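The reuse-hint mechanism in this abstract can be illustrated with a small sketch. The hint values `REUSE` and `NO_REUSE` and the cache structure below are illustrative assumptions; the abstract only specifies a "first value" and a "second, different value".

```python
from collections import OrderedDict

# Hypothetical hint values; the abstract only distinguishes a "first"
# and a "second, different" value.
REUSE = 0      # on a hit, mark the line more recently used
NO_REUSE = 1   # on a hit, mark the line less recently used

class HintedLRUCache:
    """LRU cache whose hit handling is biased by a per-request reuse hint."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # key -> data, ordered LRU -> MRU

    def read(self, key, hint):
        if key not in self.lines:
            return None  # miss; the fill path is handled separately below
        if hint == REUSE:
            self.lines.move_to_end(key)              # promote to MRU
        else:
            self.lines.move_to_end(key, last=False)  # demote to LRU
        return self.lines[key]

    def fill(self, key, data):
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[key] = data
```

A line read with `NO_REUSE` becomes the next eviction candidate even though it was just touched, which is the bypass-like behavior the abstract describes.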
-
Patent number: 12073090
Abstract: A system comprising row hammer mitigation circuitry and a cache memory that collaborate to mitigate row hammer attacks on a memory media device is described. The cache memory biases cache policy based on row access count information maintained by the row hammer mitigation circuitry. The row hammer mitigation circuitry may be implemented in a memory controller. The memory media device may be DRAM. Corresponding methods are also described.
Type: Grant
Filed: September 9, 2022
Date of Patent: August 27, 2024
Assignee: Micron Technology, Inc.
Inventors: Edmund Gieske, Cagdas Dirik
-
Patent number: 11994990
Abstract: A cache memory having a memory media device row activation-biased caching policy is described. The cache policies that are biased based on row activation counts include at least one of a cache line eviction policy, which determines which cache lines are the most evictable from the cache memory, and a cache line storage policy, which determines which row data is allocated cache lines for storage. A memory controller including a row activation-biased cache memory is also described. The memory media device may be DRAM.
Type: Grant
Filed: September 9, 2022
Date of Patent: May 28, 2024
Assignee: Micron Technology, Inc.
Inventors: Edmund Gieske, Cagdas Dirik
-
Patent number: 11983153
Abstract: Some implementations of the disclosed systems, apparatus, methods and computer program products may provide for determination of resource usage by tenants in a multi-tenant server system. Tenants may provide resource requests to a database of the multi-tenant server system and such resource requests may include context data. Periodic snapshots of the database may be performed to determine the pending resource requests received by the various tenants and, based on the snapshots and the context data, the resource usage of the various tenants, as well as the system as a whole, may be determined and forecasted for the future.
Type: Grant
Filed: December 22, 2021
Date of Patent: May 14, 2024
Assignee: Salesforce, Inc.
Inventors: Pratheesh Ezhapilly Chennen, Prakash Ramaswamy
-
Patent number: 11853219
Abstract: A storage controller includes a prefetch buffer configured to buffer data prefetched from a non-volatile memory during a prefetch operation, a determiner circuit configured to output one of the prefetched data and normal data read from the non-volatile memory, as read data, and a prefetch control circuit configured to enable the prefetch operation during a first time when a sequential read operation is performed on the non-volatile memory, disable the prefetch operation at a second time after the first time, and enable the prefetch operation or maintain the disabling of the prefetch operation according to performance of the read data in a prefetch suspend period after the second time in which the prefetch operation is disabled.
Type: Grant
Filed: October 7, 2021
Date of Patent: December 26, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Minwoo Kim, Daekyu Park
-
Patent number: 11856031
Abstract: A method for processing network communications, the method including receiving a network packet at a network device and performing at least one lookup for the packet in one or more first lookup tables in which the one or more first lookup tables are programmed to include at least one of an exact match or longest prefix match (LPM) table entry. The method includes obtaining a security source segment and a security destination segment based upon the result of the at least one lookup for the packet in the one or more first lookup tables. The method further includes performing a lookup in a second lookup table based upon the security source segment and security destination segment in which the second lookup table is programmed in a content addressable memory. Based upon the result of the lookup in the second lookup table, a forwarding decision for the packet is processed according to the security source segment and security destination segment.
Type: Grant
Filed: November 8, 2022
Date of Patent: December 26, 2023
Assignee: Arista Networks, Inc.
Inventor: Adam James Sweeney
-
Patent number: 11797435
Abstract: A zone is loaded onto a first memory component of a storage system, wherein the zone comprises one or more regions of data blocks comprising a first plurality of logical block addresses (LBAs), and a snapshot of each of the one or more regions is stored on a second memory component of the storage system and is associated with a version identifier. A particular version identifier associated with a respective snapshot of a region is identified, and a set of journals stored on the second memory component are identified, wherein the set of journals comprise a second plurality of LBAs mapped to a second plurality of physical block addresses.
Type: Grant
Filed: June 7, 2021
Date of Patent: October 24, 2023
Assignee: Micron Technology, Inc.
Inventors: Daniel A. Boals, Byron D. Harris, Karl D. Schuh, Amy L. Wohlschlegel
-
Patent number: 11768779
Abstract: Systems, apparatuses, and methods for cache management based on access type priority are disclosed. A system includes at least a processor and a cache. During a program execution phase, certain access types are more likely to cause demand hits in the cache than others. Demand hits are load and store hits to the cache. A run-time profiling mechanism is employed to find which access types are more likely to cause demand hits. Based on the profiling results, the cache lines that will likely be accessed in the future are retained based on their most recent access type. The goal is to increase demand hits and thereby improve system performance. An efficient cache replacement policy can potentially reduce redundant data movement, thereby improving system performance and reducing energy consumption.
Type: Grant
Filed: December 16, 2019
Date of Patent: September 26, 2023
Assignee: Advanced Micro Devices, Inc.
Inventors: Jieming Yin, Yasuko Eckert, Subhash Sethumurugan
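The access-type-biased retention idea in this abstract can be sketched as a replacement policy that evicts the line whose most recent access type is least likely to produce a future demand hit. The hit-rate numbers and access-type names below are hypothetical stand-ins for the run-time profiling results the abstract describes.

```python
# Hypothetical per-access-type demand-hit rates, standing in for the
# run-time profiling results described in the abstract.
DEMAND_HIT_RATE = {"load": 0.6, "store": 0.5, "prefetch": 0.1, "writeback": 0.05}

class TypeBiasedCache:
    """Replacement policy biased by each line's most recent access type."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.last_type = {}  # line address -> access type of most recent access

    def access(self, line, access_type):
        if line not in self.last_type and len(self.last_type) >= self.capacity:
            # Evict the line whose last access type is least likely to
            # lead to a future demand (load/store) hit.
            victim = min(self.last_type,
                         key=lambda l: DEMAND_HIT_RATE[self.last_type[l]])
            del self.last_type[victim]
        self.last_type[line] = access_type
```

Lines last touched by prefetches or writebacks become preferred victims, so demand-hit-prone lines are retained longer.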
-
Patent number: 11762765
Abstract: A zone is loaded onto a first memory component of a storage system, wherein the zone comprises one or more regions of data blocks comprising a first plurality of logical block addresses (LBAs), and a snapshot of each of the one or more regions is stored on a second memory component of the storage system and is associated with a version identifier. A particular version identifier associated with a respective snapshot of a region is identified, and a set of journals stored on the second memory component are identified, wherein the set of journals comprise a second plurality of LBAs mapped to a second plurality of physical block addresses.
Type: Grant
Filed: June 7, 2021
Date of Patent: September 19, 2023
Assignee: Micron Technology, Inc.
Inventors: Daniel A. Boals, Byron D. Harris, Karl D. Schuh, Amy L. Wohlschlegel
-
Patent number: 11714725
Abstract: A device comprising a memory controller coupled to a non-volatile memory (NVM) device with a shadow tracker memory region. The controller provides low overhead and low recovery time for integrity-protected systems by recovering a secure metadata cache. The controller is configured to persistently track addresses of blocks in the secure metadata cache in the NVM device when a miss occurs, and to track the persistent addresses after the miss. The controller is configured to rebuild affected parts of the secure metadata cache associated with the persistent addresses in the NVM device. A system is provided which includes the memory controller interfaced with an NVM device with the shadow tracker memory region.
Type: Grant
Filed: June 3, 2020
Date of Patent: August 1, 2023
Assignee: UNIVERSITY OF CENTRAL FLORIDA RESEARCH FOUNDATION, INC.
Inventors: Kazi Abu Zubair, Amro Awad
-
Patent number: 11663144
Abstract: A method for improving cache hit ratios for selected storage elements within a storage system includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. The favored storage elements are retained in the cache longer than the non-favored storage elements. The method maintains a first LRU list containing entries associated with non-favored storage elements and designating an order in which the non-favored storage elements are evicted from the cache, and a second LRU list containing entries associated with favored storage elements and designating an order in which the favored storage elements are evicted from the cache. The method periodically scans the first LRU list for non-favored storage elements that have changed to favored storage elements, and the second LRU list for favored storage elements that have changed to non-favored storage elements. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: January 20, 2020
Date of Patent: May 30, 2023
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Beth A. Peterson
-
Patent number: 11636089
Abstract: A storage control system is configured to obtain first data associated with a logical data device and to store the first data in a first entry of a log-structured array. The storage control system is further configured to invalidate a second entry of the log-structured array based at least in part on the storage of the first data in the first entry. The second entry comprises second data that was associated with the logical data device prior to obtaining the first data. The storage control system is further configured to determine that a first indication in a first metadata indicates that the invalidated second entry corresponds to a transaction log and to defer reclamation of the second entry based at least in part on the determination that the first indication in the first metadata indicates that the invalidated second entry corresponds to the transaction log.
Type: Grant
Filed: August 3, 2020
Date of Patent: April 25, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Dan Aharoni, Itay Keller, Sanjay Narahari, Ron Stern
-
Patent number: 11579888
Abstract: Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute instructions; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In a representative embodiment, the processor core is further adapted to execute a non-cached load instruction to designate a general purpose register rather than a data cache for storage of data received from a memory circuit. The core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, and to generate one or more work descriptor data packets to another circuit for execution of corresponding execution threads.
Type: Grant
Filed: September 9, 2021
Date of Patent: February 14, 2023
Assignee: Micron Technology, Inc.
Inventor: Tony M. Brewer
-
Patent number: 11513960
Abstract: A data storage device includes a first memory device; a second memory device including a fetch region configured to store data evicted from the first memory device and a prefetch region divided into a plurality of sections; storage; and a controller configured to control the first memory device, the second memory device, and the storage. The controller may include a memory manager configured to select prefetch data having a set section size from the storage, load the selected prefetch data into the prefetch region and update the prefetch data based on a data read hit ratio of each of the plurality of sections.
Type: Grant
Filed: January 13, 2021
Date of Patent: November 29, 2022
Assignee: SK hynix Inc.
Inventor: Da Eun Song
-
Patent number: 11487460
Abstract: In some embodiments, a storage system comprises at least one processor coupled to memory. The processor is configured to obtain a write operation that comprises first data associated with a logical data device and to store the first data in a first entry of a log-structured array (LSA). The at least one processor is configured to invalidate a second entry based at least in part on the storage of the first data in the first entry. The second entry comprises second data associated with the logical data device that was stored in the second entry prior to obtaining the write operation. The at least one processor is configured to determine that a first indication in LSA metadata associated with the LSA indicates that the invalidated second entry comprises data that is awaiting replication and to defer reclamation of the second entry based at least in part on the determination.
Type: Grant
Filed: December 16, 2020
Date of Patent: November 1, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Itay Keller, Dan Aharoni
-
Patent number: 11449230
Abstract: An information handling system may have a long short-term memory (LSTM) that receives Input/Output (I/O) parameters, and produces a prediction output by operation of a recurrent neural network (RNN). An I/O optimizer provides the I/O parameters to the LSTM and receives the prediction output from the LSTM. The I/O optimizer may include a manager module configured to provide control signals to control gates for controlling application of the I/O parameters and the prediction output, and a collector module configured to collect the I/O parameters.
Type: Grant
Filed: March 7, 2019
Date of Patent: September 20, 2022
Assignee: Dell Products L.P.
Inventors: Chandrashekar Nelogal, Arunava Das Gupta, Niladri Bhattacharya
-
Patent number: 11429538
Abstract: Provided herein may be a memory system and a method of operating the same. The memory system may include a host configured to generate and output a host command and a host address and to receive and store host map data, a controller configured to store map data, generate an internal command in response to the host command, and map the host address to an internal address based on the map data, and a memory device configured to perform an operation in response to the internal command and the internal address, wherein the controller is configured to load, when the map data corresponding to the host address is not stored in the controller, new map data into a map data storage area storing map data that is identical to the host map data.
Type: Grant
Filed: June 28, 2019
Date of Patent: August 30, 2022
Assignee: SK hynix Inc.
Inventor: Eu Joon Byun
-
Patent number: 11392499
Abstract: Various implementations described herein relate to systems and methods for dynamically managing buffers of a storage device, including receiving, by a controller of the storage device from a host, information indicative of a frequency by which data stored in the storage device is accessed, and in response to receiving the information determining, by the controller, the order by which read buffers of the storage device are allocated for a next read command. The NAND read count of virtual Word-Lines (WLs) is also used to cache more frequently accessed WLs, thus proactively reducing read disturb and consequently increasing NAND reliability and NAND life.
Type: Grant
Filed: September 18, 2020
Date of Patent: July 19, 2022
Assignee: KIOXIA CORPORATION
Inventors: Saswati Das, Manish Kadam, Neil Buxton
-
Patent number: 11392509
Abstract: Example storage control systems and methods are described. In one implementation, a storage drive controller includes a non-volatile memory subsystem that processes multiple commands. The storage drive controller also includes a controller memory buffer (CMB) memory management unit coupled to the non-volatile memory subsystem. The CMB memory management unit manages CMB-related tasks including caching and storage of data associated with the storage drive controller.
Type: Grant
Filed: August 18, 2020
Date of Patent: July 19, 2022
Assignee: PETAIO INC.
Inventors: Changyou Xu, Fan Yang, Peirong Ji, Lingqi Zeng
-
Patent number: 11356491
Abstract: A content delivery server may provide content to a requesting client device using a streamlined HTTP enhancement proxy delivery technique. For example, an HTTP proxy server may receive a request for video content or a fragment of video content from a client device. The request may be associated with a timeout scheduled to occur if no content has been received after a specified amount of time. The server may then transmit a request for the content to a remote server, such as an upstream cache server in the proxy server's CDN. When the proxy server receives a portion of the requested content from the remote server, the proxy server begins transmitting the portion to the client device before the requested content has been completely received and buffered. The client device may then begin receiving data from the proxy server before timeout has occurred.
Type: Grant
Filed: December 27, 2018
Date of Patent: June 7, 2022
Assignee: Comcast Cable Communications, LLC
Inventor: Joseph Yongxiang Chen
-
Patent number: 11341055
Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for storage management. According to an example implementation of the present disclosure, a method for storage management includes: determining a state of cached data stored in an initial cache space of a storage system including a plurality of cache disks, the state indicating that a size of the cached data does not match a size of the initial cache space; determining, based on the state, a target cache space of the storage system; and storing at least a part of the cached data into the target cache space to change the size of the initial cache space. Therefore, the management performance can be improved, and the storage costs can be reduced.
Type: Grant
Filed: June 29, 2020
Date of Patent: May 24, 2022
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Cheng Wang, Bing Liu
-
Patent number: 11243716
Abstract: A memory system which includes a memory pool having a plurality of memory units and a controller suitable for controlling the plurality of memory units, wherein the controller includes a translation unit suitable for translating a system address into a local address within the memory pool, a threshold decision unit suitable for dynamically changing a threshold based on a number of accesses to each local address for data within the memory pool, a data attribute determination unit suitable for determining an attribute of data associated with the translated local address based on the threshold and the number of accesses to the translated local address, and a data input/output unit suitable for controlling a memory unit associated with a new local address among the plurality of memory units based on the attribute of the data.
Type: Grant
Filed: September 16, 2019
Date of Patent: February 8, 2022
Assignee: SK hynix Inc.
Inventor: Chang-Min Kwak
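The threshold-based attribute determination described here can be sketched as follows. The abstract does not say how the threshold changes; the mean-access-count rule and the "hot"/"cold" attribute names below are assumptions of this sketch.

```python
class AttributeTracker:
    """Sketch: classify data as hot or cold from per-address access counts,
    with a threshold that adapts to the observed mean count (one plausible
    reading of "dynamically changing a threshold")."""

    def __init__(self, initial_threshold=4):
        self.counts = {}  # local address -> number of accesses
        self.threshold = initial_threshold

    def record_access(self, local_addr):
        self.counts[local_addr] = self.counts.get(local_addr, 0) + 1
        # Dynamic threshold: track the mean access count across addresses.
        self.threshold = max(1, sum(self.counts.values()) // len(self.counts))

    def attribute(self, local_addr):
        # Attribute is determined from the threshold and the address's count.
        return "hot" if self.counts.get(local_addr, 0) >= self.threshold else "cold"
```

The data input/output unit could then, for instance, place "hot" data in faster memory units of the pool.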
-
Patent number: 11232025
Abstract: Storage management is enabled. An example method comprises: receiving an update request for a target object stored in a first storage block to write the updated target object into a second storage block different from the first storage block; determining a candidate object associated with the target object using a search tree, the search tree indicating a hierarchical relation among a plurality of objects, wherein a first node corresponding to the target object and a second node corresponding to the candidate object share a same index node in the search tree; determining whether the candidate object was updated during a past predetermined time period; and in response to the candidate object not being updated during the past predetermined time period, moving the candidate object from a third storage block into a fourth storage block different from the third storage block.
Type: Grant
Filed: August 30, 2019
Date of Patent: January 25, 2022
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Lu Lei, Young Yangchun Wu
-
Patent number: 11221965
Abstract: In a cache memory used for communication between a host and a memory, the cache memory may include a plurality of cache sets, each comprising: a valid bit; N dirty bits; a tag; and N data sets respectively corresponding to the N dirty bits and each including data of a data chunk size substantially identical to a data chunk size of the host, wherein a data chunk size of the memory is N times as large as the data chunk size of the host, where N is an integer greater than or equal to 2.
Type: Grant
Filed: February 27, 2019
Date of Patent: January 11, 2022
Assignee: SK hynix Inc.
Inventors: Seung-Gyu Jeong, Dong-Gun Kim, Jung-Hyun Kwon, Young-Suk Moon
-
Patent number: 11144475
Abstract: A computer program product, system, and method for managing adding of accessed tracks in cache to a most recently used end of a cache list. A cache list for the cache has a least recently used (LRU) end and a most recently used (MRU) end. Tracks in the cache are indicated in the cache list. A track in the cache indicated on the cache list is accessed. A determination is made as to whether a track cache residency time since the accessed track was last accessed while in the cache list is within a region of lowest track cache residency times. A flag is set for the accessed track indicating to move the track to the MRU end in response to determining that the track cache residency time of the accessed track is within the region of lowest track cache residency times. After setting the flag, the accessed track remains at its current position in the cache list until it is accessed again.
Type: Grant
Filed: August 16, 2019
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
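The deferred re-MRU flag in this abstract can be sketched with a list plus a flag set: a track accessed while near the MRU end (lowest residency time) is flagged in place instead of being moved, and when demotion reaches it at the LRU end the flag sends it back to the MRU end. The region fraction and the demotion interface are assumptions of this sketch.

```python
class ResidencyFlaggedList:
    """Sketch of the re-MRU flag: tracks accessed while in the
    lowest-residency region (near the MRU end) are flagged instead of
    moved; the flag is honored when demotion reaches them."""

    def __init__(self, region_fraction=0.25):
        self.tracks = []   # index 0 = LRU end, last index = MRU end
        self.flags = set()
        self.region_fraction = region_fraction

    def add(self, track):
        self.tracks.append(track)  # new tracks enter at the MRU end

    def access(self, track):
        # Lowest-residency region = the tracks nearest the MRU end.
        region = max(1, int(len(self.tracks) * self.region_fraction))
        cutoff = len(self.tracks) - region
        if self.tracks.index(track) >= cutoff:
            self.flags.add(track)  # flag it; the track stays in place
        else:
            self.tracks.remove(track)
            self.tracks.append(track)  # normal move to the MRU end

    def demote(self):
        track = self.tracks.pop(0)  # take the track at the LRU end
        if track in self.flags:
            self.flags.discard(track)
            self.tracks.append(track)  # flagged: re-add at the MRU end
            return None
        return track  # actually demoted from cache
```

Deferring the move avoids list manipulation (and its lock contention) for tracks that were accessed again almost immediately.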
-
Patent number: 11144639
Abstract: Provided are a computer program product, system, and method for determining whether to destage write data in cache to storage based on whether the write data has malicious data. Write data for a storage is cached in a cache. A determination is made as to whether the write data in the cache comprises random data according to a randomness criteria. The write data in the cache is destaged to the storage in response to determining that the write data does not comprise random data according to the randomness criteria. The write data is processed as malicious data after determining that the write data comprises random data according to the randomness criteria.
Type: Grant
Filed: March 4, 2019
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Matthew G. Borlick, Lokesh M. Gupta, Carol S. Mellgren, John G. Thompson
-
Patent number: 11138118
Abstract: The sizes of cache partitions, in a partitioned cache, are dynamically adjusted by determining, for each request, how many cache misses will occur in connection with implementing the request against the cache partition. The cache partition associated with the current request is increased in size by the number of cache misses and one or more other cache partitions is decreased in size causing cache evictions to occur from the other cache partitions rather than from the current cache partition. The other cache partitions, that are to be decreased in size, may be determined by ranking the cache partitions according to frequency of use and selecting the least frequently used cache partition to be reduced in size.
Type: Grant
Filed: January 13, 2020
Date of Patent: October 5, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Hugo de Oliveira Barbalho, Jonas Furtado Dias, Vinícius Michel Gottin
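The miss-driven repartitioning described here can be sketched in a few lines: grow the requesting partition by its miss count and shrink the least frequently used other partition by the same amount. The bookkeeping interface below is an assumption; the abstract does not specify how misses or use frequency are measured.

```python
from collections import Counter

class PartitionedCache:
    """Sketch of miss-driven repartitioning: the requesting partition grows
    by its miss count; the least frequently used other partition shrinks."""

    def __init__(self, sizes):
        self.sizes = dict(sizes)  # partition name -> size in cache lines
        self.use_freq = Counter() # partition name -> number of requests

    def handle_request(self, partition, misses):
        self.use_freq[partition] += 1
        if misses == 0:
            return
        # Rank the *other* partitions by use frequency; shrink the least used.
        others = [p for p in self.sizes if p != partition and self.sizes[p] > 0]
        if not others:
            return
        victim = min(others, key=lambda p: self.use_freq[p])
        taken = min(misses, self.sizes[victim])
        self.sizes[victim] -= taken
        self.sizes[partition] += taken
```

Shifting capacity this way makes evictions fall on cold partitions instead of the partition that is currently missing.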
-
Patent number: 11119679
Abstract: Data blocks of a memory sub-system that have been accessed by a host system can be determined. An access pattern associated with the data blocks by the host system can be determined. A spatial characteristic for each respective pair of the data blocks of the memory sub-system can be received. A data graph can be generated with nodes that are based on the access pattern associated with the data blocks of the memory sub-system and edge values between the nodes that are based on the spatial characteristic for each respective pair of the data blocks of the memory sub-system.
Type: Grant
Filed: August 2, 2019
Date of Patent: September 14, 2021
Assignee: Micron Technology, Inc.
Inventors: Anirban Ray, Samir Mittal, Gurpreet Anand
-
Patent number: 11119921
Abstract: State machine generation for a multi-buffer electronic system can include receiving, using a processor, a user input specifying a reader policy and a number of a plurality of buffers used by a reader and a writer of the multi-buffer electronic system. A state machine can be generated as a data structure. The state machine has a plurality of states determined based on the number of the plurality of buffers and the reader policy. The state machine allocates different buffers of the plurality of buffers to the reader in temporally accurate order over time. Each state can specify an allocation from the plurality of buffers to the reader and the writer. A state machine description including one or more program code components can be generated, where the one or more program code components may be used in an implementation of the reader and an implementation of the writer.
Type: Grant
Filed: August 24, 2020
Date of Patent: September 14, 2021
Assignee: Xilinx, Inc.
Inventor: Uday M. Hegde
-
Patent number: 11070650
Abstract: A method, computer program product, and a computing system are provided for de-duplicating remote procedure calls at a client. In an implementation, the method may include generating a plurality of local pending remote procedure calls. The method may also include identifying a set of duplicate remote procedure calls among the plurality of remote procedure calls. The method may also include associating each remote procedure call within the set of duplicate remote procedure calls with one another. The method may also include executing a remote procedure call of the set of duplicate remote procedure calls. The method may further include providing a response for the remote procedure call of the set of duplicate remote procedure calls with the other remote procedure calls of the set of duplicate remote procedure calls.
Type: Grant
Filed: March 4, 2020
Date of Patent: July 20, 2021
Assignee: International Business Machines Corporation
Inventors: John T. Kohl, Shailaja S. Golikeri
-
Patent number: 11049570
Abstract: A method for dynamically altering a writes-per-day classification of multiple storage drives is disclosed. In one embodiment, such a method monitors, within a storage environment, an amount of overprovisioning utilized by multiple storage drives. Each storage drive has a writes-per-day classification associated therewith. Based on the amount of overprovisioning, the method periodically modifies the writes-per-day classification of the storage drives. The method then reorganizes the storage drives within various storage groups (e.g., RAID arrays, storage tiers, workloads, etc.) based on their writes-per-day classification. For example, the method may place, as much as possible, storage drives of the same writes-per-day classification within the same storage groups. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: June 26, 2019
Date of Patent: June 29, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Matthew G. Borlick, Karl A. Nielsen, Micah Robison
-
Patent number: 11048667
Abstract: A method for improving asynchronous data replication between a primary storage system and a secondary storage system is disclosed. In one embodiment, such a method includes monitoring, in a cache of the primary storage system, unmirrored data elements needing to be mirrored, but that have not yet been mirrored, from the primary storage system to the secondary storage system. The method maintains an LRU list designating an order in which data elements are demoted from the cache. The method determines whether a data element at an LRU end of the LRU list is an unmirrored data element. In the event the data element at the LRU end of the LRU list is an unmirrored data element, the method moves the data element to an MRU end of the LRU list. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 28, 2020
Date of Patent: June 29, 2021
Assignee: International Business Machines Corporation
Inventors: Gail Spear, Lokesh M. Gupta, Kevin J. Ash, David B. Schreiber, Kyler A. Anderson
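The recycle-unmirrored-elements rule in this abstract fits in a short sketch: before demoting the element at the LRU end, check whether it still awaits mirroring to the secondary, and if so send it back to the MRU end. The write/mirror interface below is an assumption of this sketch.

```python
class ReplicationAwareLRU:
    """Sketch: an LRU list that refuses to demote data elements that have
    not yet been mirrored to the secondary storage system."""

    def __init__(self):
        self.lru_list = []       # index 0 = LRU end, last index = MRU end
        self.unmirrored = set()  # elements not yet copied to the secondary

    def write(self, element):
        if element in self.lru_list:
            self.lru_list.remove(element)
        self.lru_list.append(element)   # writes enter at the MRU end
        self.unmirrored.add(element)    # a new write always needs mirroring

    def mark_mirrored(self, element):
        self.unmirrored.discard(element)

    def demote(self):
        element = self.lru_list.pop(0)  # element at the LRU end
        if element in self.unmirrored:
            self.lru_list.append(element)  # move to MRU end: keep it cached
            return None
        return element  # safe to demote: already mirrored
```

This keeps unmirrored data in cache so the asynchronous mirroring pass can still read it without a disk access.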
-
Patent number: 11030089
Abstract: A portion of a logical block address to physical block address (“L2P”) translation map may be identified. A last snapshot of the portion of the L2P translation map may be identified. One or more write operations may be determined, where the write operations are associated with logical block addresses of the portion of the L2P translation map. The write operations may have been performed after the last snapshot of the portion of the L2P translation map was stored. An address on the portion of the L2P translation map may be updated by a processing device based on the determined one or more write operations and the last snapshot of the portion of the L2P translation map.
Type: Grant
Filed: September 28, 2018
Date of Patent: June 8, 2021
Assignee: Micron Technology, Inc.
Inventors: Daniel A. Boals, Byron D. Harris, Karl D. Schuh, Amy L. Wohlschlegel
-
Patent number: 10990541
Abstract: A controller controls an operation of a semiconductor memory device. The controller includes a cache buffer, a request analyzer, and a cache controller. The cache buffer stores multiple cache data. The request analyzer generates request information including information on a size of read data to be read. The cache controller determines an eviction policy of the multiple cache data, based on the size of the read data in the request information.
Type: Grant
Filed: July 25, 2019
Date of Patent: April 27, 2021
Assignee: SK hynix Inc.
Inventor: Eu Joon Byun
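One way the read size could select an eviction policy, sketched below: treat large reads as likely sequential streams and evict the newest entry so older, reusable entries survive, while small reads use plain oldest-first eviction. The size threshold and the large-read rule are assumptions of this sketch; the abstract only states that the eviction policy depends on the read size.

```python
from collections import OrderedDict

LARGE_READ_THRESHOLD = 128 * 1024  # hypothetical 128 KiB cutoff

class SizeAwareCacheBuffer:
    """Sketch: choose the eviction policy from the size of the incoming read.
    Large (likely sequential) reads evict the newest entry; small reads
    evict the oldest entry (plain LRU-style behavior)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order: oldest -> newest

    def cache(self, key, data, read_size):
        if len(self.entries) >= self.capacity:
            evict_newest = read_size >= LARGE_READ_THRESHOLD
            self.entries.popitem(last=evict_newest)
        self.entries[key] = data
```

Evicting the newest entry during a large sequential read keeps streaming data from flushing the whole buffer.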
-
Patent number: 10949109Abstract: An expansion cartridge (200) and a method for deduplicating the data chunks stored at a client device (100) using the expansion cartridge (200), (300) are claimed herein. As per the invention, the expansion cartridge (200) is attachable, externally, to client devices (100) carrying the electronic data files to be transferred, wherein the expansion cartridge (200) is characterized by a file management component (220), a chunk management component (240), a storage component (260), and a mirroring component (280), and wherein, the expansion cartridge (200) on being attached with the client devices (100) interfaces with a client side data historian (125) and a client side processor (150) in the client device (100) using interfacing options, including without limitation, Small Computer System Interfaces (SCSI), Fibre Channel (FC) Interface, Ethernet Interface, Advanced Technology Attachment (ATA) Interface or a combination thereof.Type: GrantFiled: June 19, 2018Date of Patent: March 16, 2021Assignee: ARC Document Solutions, LLCInventors: Rahul Roy, Srinivasa Rao Mukkamala, Himadri Majumder, Dipali Bhattacharya
-
Patent number: 10901916Abstract: Provided are a computer program product, system, and method for managing adding of accessed tracks to a cache list based on accesses to different regions of the cache list. A cache list has a least recently used (LRU) end and a most recently used (MRU) end. A determination is made of a high access region of tracks from the MRU end of the cache list based on a number of accesses to the tracks in the high access region. A flag is set for an accessed track to indicate that the accessed track is to be placed at the MRU end upon processing of the accessed track at the LRU end, in response to determining that the accessed track is in the high access region. After the flag is set, the accessed track remains at its current position in the cache list until it is next accessed.Type: GrantFiled: August 16, 2019Date of Patent: January 26, 2021Assignee: International Business Machines CorporationInventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
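The deferred-MRU mechanism here — a track accessed while inside the high access region near the MRU end is only flagged, and the actual move to the MRU end happens later, when the track reaches the LRU end — can be sketched like this. The region size and data structures are illustrative assumptions:

```python
from collections import deque

class LazyMRUList:
    """Cache list where re-accessed tracks near the MRU end are
    flagged instead of moved; the move to the MRU end is deferred
    until demotion processing reaches them (a sketch)."""

    def __init__(self, high_region_size=2):
        self.tracks = deque()   # index 0 = LRU end, index -1 = MRU end
        self.flags = {}
        self.high_region_size = high_region_size

    def add(self, track):
        self.tracks.append(track)
        self.flags[track] = False

    def access(self, track):
        pos_from_mru = len(self.tracks) - 1 - self.tracks.index(track)
        if pos_from_mru < self.high_region_size:
            # In the high access region: just set the flag; the track
            # stays at its current position for now.
            self.flags[track] = True
        else:
            self.tracks.remove(track)
            self.tracks.append(track)   # normal move to the MRU end

    def demote(self):
        """Process the LRU end: flagged tracks go back to the MRU end
        instead of being demoted."""
        while self.tracks:
            track = self.tracks.popleft()
            if self.flags.pop(track):
                self.tracks.append(track)
                self.flags[track] = False
            else:
                return track
        return None
```

The benefit of deferring the move is that hot tracks near the MRU end do not churn the list on every access; the list is only touched when demotion processing reaches them anyway.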
-
Patent number: 10891167Abstract: A method of protecting software in a computer system includes defining a memory fractionation configuration for an application software program in the computer system, fractionating at least one page of the application software program into fractions according to the memory fractionation configuration, and running the application in such a manner that, at any particular point in time when the application is running, at least a first one of the fractions is stored in a manner that is not accessible from a user space or a kernel space of the computer system.Type: GrantFiled: December 20, 2016Date of Patent: January 12, 2021Assignee: Siege Technologies, LLCInventor: Joseph James Sharkey
-
Patent number: 10872038Abstract: A system comprises a memory, a plurality of memory banks, and an organizer. The memory is configured to store elements of a matrix, wherein the elements are distributed into overlapping subgroups and each shares at least one element of the matrix with another overlapping subgroup. The plurality of memory banks is configured to store the overlapping subgroups, wherein the subgroups are distributed among the memory banks using a circular shifted pattern. The organizer is configured to read specific ones of the overlapping subgroups in the plurality of memory banks in a specified pattern associated with transposing the matrix.Type: GrantFiled: September 30, 2019Date of Patent: December 22, 2020Assignee: Facebook, Inc.Inventors: Krishnakumar Narayanan Nair, Ehsan Khish Ardestani Zadeh, Olivia Wu, Yuchen Hao
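A common form of the circular-shifted placement this abstract describes assigns the subgroup at (row, col) to bank (row + col) mod B, so that a row read and a column read (the transposed order) each touch every bank exactly once, with no bank conflicts. The formula is an illustrative assumption, not the patented mapping:

```python
def bank_of(row, col, num_banks):
    """Circular-shift placement: subgroup (row, col) -> bank
    (row + col) % num_banks (one standard conflict-free scheme)."""
    return (row + col) % num_banks

def banks_touched(indices, num_banks):
    """Set of distinct banks hit by a sequence of (row, col) reads."""
    return {bank_of(r, c, num_banks) for r, c in indices}
```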
-
Patent number: 10838870Abstract: The described technology is generally directed towards caching and aggregated write operations based on predicted patterns of data transfer operations. According to an embodiment, a system can comprise a memory that can store computer executable components, and a processor that can execute the computer executable components stored in the memory. The components can comprise a pattern identifying component to identify a first pattern of data transfer operations performed on a data store, resulting in an identified first pattern, based on monitored data transfer operations. The components can further comprise a pattern predicting component to predict a second pattern of future data transfer operations performed on the data store, resulting in a predicted second pattern, based on the identified first pattern. The components can further comprise a host adapter to generate a data transfer operation to be performed on the data store based on the predicting the second pattern.Type: GrantFiled: April 17, 2019Date of Patent: November 17, 2020Assignee: EMC IP HOLDING COMPANY LLCInventors: Malak Alshawabkeh, Steven John Ivester, Ramesh Doddaiah, Kaustubh S. Sahasrabudhe
-
Patent number: 10802895Abstract: A method and an apparatus for determining a usage level of a memory device to notify a running application to perform memory reduction operations selected based on the memory usage level are described. An application calls APIs (Application Programming Interfaces) integrated with the application code in the system to perform memory reduction operations. A memory usage level is determined according to a memory usage status received from the kernel of a system. A running application is associated with application priorities ranking multiple running applications statically or dynamically. Selecting memory reduction operations and notifying a running application are based on application priorities. Alternatively, a running application may determine a mode of operation to directly reduce memory usage in response to a notification for reducing memory usage without using API calls to other software.Type: GrantFiled: October 16, 2018Date of Patent: October 13, 2020Assignee: Apple Inc.Inventors: Matthew G. Watson, James Michael Magee
-
Patent number: 10776275Abstract: A method comprises a cache manager receiving reference attributes associated with network data and selecting a replacement data location of a cache to store cache-line data associated with the network data. The replacement data location is selected based on the reference attributes and an order of reference states stored in a replacement stack of the cache. The stored reference states are associated with respective cached-data stored in the cache and based on reference attributes associated with respective cached-data. The reference states are stored in the replacement stack based on a set of the reference attributes and the stored reference states. In response to receiving reference attributes, the cache manager can modify a stored reference state, determine a second order of the state locations, and store a reference state in the replacement stack based on the second order. A system can comprise a network computing element having a cache, a cache manager, and a replacement stack.Type: GrantFiled: January 31, 2019Date of Patent: September 15, 2020Assignee: International Business Machines CorporationInventors: Brian W. Thompto, Bernard C. Drerup, Mohit S. Karve
-
Patent number: 10761988Abstract: Aspects of the present disclosure relate to an apparatus comprising a data array having locality-dependent latency characteristics such that an access to an open unit of the data array has a lower latency than an access to a closed unit of the data array. Set associative cache indexing circuitry determines, in response to a request for data associated with a target address, a cache set index. Mapping circuitry identifies, in response to the index, a set of data array locations corresponding to the index, according to a mapping in which a given unit of the data array comprises locations corresponding to a plurality of consecutive indices, and at least two locations of the set of locations corresponding to the same index are in different units of the data array. Cache access circuitry accesses said data from one of the set of data array locations.Type: GrantFiled: September 26, 2018Date of Patent: September 1, 2020Assignee: Arm LimitedInventors: Radhika Sanjeev Jagtap, Nikos Nikoleris, Andreas Lars Sandberg, Stephan Diestelhorst
-
Patent number: 10691607Abstract: Disclosed are a method and a device for increasing the performance of processes through cache splitting in a computing device using a plurality of cores. According to the present invention, a cache splitting method for performing a cache splitting in a computing device comprises the steps of: identifying, among a plurality of processes being executed, a process generating a cache flooding; and controlling the process generating the cache flooding such that the process uses a cache of a limited size.Type: GrantFiled: July 29, 2016Date of Patent: June 23, 2020Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB FoundationInventors: Jinkyu Koo, Hyeonsang Eom, Myung Sun Kim, Hanul Sung
-
Patent number: 10684946Abstract: A method may include: partitioning data on an on-chip and/or an off-chip storage medium into different data blocks according to a pre-determined data partitioning principle, wherein data with a reuse distance less than a pre-determined distance threshold value is partitioned into the same data block; and a data indexing step for successively loading different data blocks to at least one on-chip processing unit according to a pre-determined ordinal relation of a replacement policy, wherein repeated data in a loaded data block is subjected to on-chip repetitive addressing. Data with a reuse distance less than a pre-determined distance threshold value is partitioned into the same data block, and the data partitioned into the same data block can be loaded on a chip once for storage, and is then used as many times as possible, so that the access is more efficient.Type: GrantFiled: August 9, 2016Date of Patent: June 16, 2020Assignee: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCEInventors: Qi Guo, Tianshi Chen, Yunji Chen
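Reuse distance — the number of distinct addresses touched between two consecutive accesses to the same address — is the quantity driving the partitioning above. A minimal sketch of computing it from a trace and grouping low-distance data together; the two-way split is a simplification of the multi-block partitioning the abstract describes:

```python
def reuse_distances(trace):
    """LRU stack (reuse) distance of each re-access: the number of
    distinct addresses touched since the previous access to the same
    address. First-time accesses have no distance."""
    stack, dists = [], {}
    for addr in trace:
        if addr in stack:
            d = len(stack) - 1 - stack.index(addr)
            dists.setdefault(addr, []).append(d)
            stack.remove(addr)
        stack.append(addr)      # most recent address at the end
    return dists

def partition_by_reuse(trace, threshold):
    """Group addresses whose minimum reuse distance is below the
    threshold into one block, the rest into another (a sketch)."""
    dists = reuse_distances(trace)
    hot = {a for a, ds in dists.items() if min(ds) < threshold}
    cold = set(trace) - hot
    return hot, cold
```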
-
Patent number: 10664393Abstract: A memory includes a plurality of pages used as a cache area. A processor allocates a first storage resource to a first link indicating a first page group of the plurality of pages, and a second storage resource to a second link indicating a second page group of the plurality of pages. The processor uses the first and the second links, and processes access requests for accessing to the first and the second storage resources, in parallel.Type: GrantFiled: September 20, 2018Date of Patent: May 26, 2020Assignee: FUJITSU LIMITEDInventors: Takuro Kumabe, Motohiro Sakai, Keima Abe
-
Patent number: 10656945Abstract: Executing a Next Instruction Access Intent instruction by a computer. The processor obtains an access intent instruction indicating an access intent. The access intent is associated with an operand of a next sequential instruction. The access intent indicates usage of the operand by one or more instructions subsequent to the next sequential instruction. The computer executes the access intent instruction. The computer obtains the next sequential instruction. The computer executes the next sequential instruction, whose execution comprises, based on the access intent, adjusting one or more cache behaviors for the operand of the next sequential instruction.Type: GrantFiled: June 15, 2012Date of Patent: May 19, 2020Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Christian Jacobi, Chung-Lung Kevin Chum, Timothy J. Slegel, Gustav E. Sittmann, III, Cynthia Sittmann
-
Patent number: 10649901Abstract: A set-associative cache memory includes a plurality of ways and a plurality of congruence classes. Each of the plurality of congruence classes includes a plurality of members each belonging to a respective one of the plurality of ways. In the cache memory, a data structure records a history of an immediately previous N ways from which cache lines have been evicted. In response to receipt of a memory access request specifying a target address, a selected congruence class among a plurality of congruence classes is selected based on the target address. At least one member of the selected congruence class is removed as a candidate for selection for victimization based on the history recorded in the data structure, and a member from among the remaining members of the selected congruence class is selected. The cache memory then evicts the victim cache line cached in the selected member of the selected congruence class.Type: GrantFiled: August 3, 2017Date of Patent: May 12, 2020Assignee: International Business Machines CorporationInventors: Bernard Drerup, Guy L. Guthrie, Jeffrey Stuecheli, Phillip Williams
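The filtering step here — removing, as victim candidates, members of the selected congruence class whose ways appear in the history of the immediately previous N evictions — can be sketched as follows. The random choice among the surviving candidates stands in for the cache's real, unspecified selection policy:

```python
from collections import deque
import random

class WayHistoryVictimPicker:
    """Victim selection that excludes members belonging to the last N
    evicted ways (a sketch; the final pick is a stand-in policy)."""

    def __init__(self, num_ways, history_len):
        self.num_ways = num_ways
        self.history = deque(maxlen=history_len)  # last N evicted ways

    def pick_victim(self, rng=random):
        candidates = [w for w in range(self.num_ways)
                      if w not in self.history]
        if not candidates:          # degenerate case: all ways excluded
            candidates = list(range(self.num_ways))
        victim = rng.choice(candidates)
        self.history.append(victim)  # record for future exclusions
        return victim
```

With 4 ways and a history of 3, each eviction is forced onto the one way not evicted in the previous three rounds, spreading evictions evenly across the set.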
-
Patent number: 10630530Abstract: Embodiments of the present application provide a cache method, where the cache method includes: receiving, from the cache core server, information about a Transmission Control Protocol (TCP) flow; determining, according to the information, whether the cache edge server stores content corresponding to the information; sending a migrate-out request to the cache core server based on that the cache edge server stores the content corresponding to the information; receiving a migrate-out response from the cache core server upon the sending of the migrate-out request; performing a TCP connection to user equipment according to the migrate-out response; and reading content corresponding to the connection from storage of the cache edge server according to a byte quantity, sent by the cache core server, of the content, and sending the content to the user equipment.Type: GrantFiled: June 25, 2017Date of Patent: April 21, 2020Assignee: HUAWEI TECHNOLOGIES CO., LTD.Inventors: Zanfeng Yang, Tao Song, Xinyu Ge, Pinhua Zhao, Jianzhong Yu, Bo Zhou, Wentao Wang
-
Patent number: 10621104Abstract: Examples herein involve a variable cache. An example variable cache controller obtains cache lines corresponding to accesses of a non-volatile memory of a system, monitors access history of the non-volatile memory, determines a number of distinct objects accessed in the access history during a time period from the object information, and sets a size of a variable cache of the system based on the number of distinct objects accessed in the access history during the time period.Type: GrantFiled: September 25, 2015Date of Patent: April 14, 2020Assignee: Hewlett Packard Enterprise Development LPInventors: Pengcheng Li, Dhruva Chakrabarti
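The sizing rule can be sketched as counting the distinct objects accessed in the recent window and scaling the cache accordingly. The scaling factor and bounds are illustrative assumptions; the abstract only says the size is set based on the distinct-object count:

```python
def set_variable_cache_size(access_history, window, lines_per_object=4,
                            min_lines=64, max_lines=4096):
    """Size a variable cache from the number of distinct objects
    accessed in the most recent window of the history (a sketch).
    `access_history` is a list of (object_id, offset) accesses."""
    recent = access_history[-window:]
    distinct = len({obj for obj, _offset in recent})
    # Clamp the proposed size to the cache's supported range.
    return max(min_lines, min(max_lines, distinct * lines_per_object))
```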