Entry Replacement Strategy Patents (Class 711/133)
-
Patent number: 11556478
Abstract: A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written-back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
Type: Grant
Filed: October 30, 2020
Date of Patent: January 17, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Frank R. Dropps
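The counter-plus-ordering mechanism this abstract claims can be sketched in a few lines. This is a minimal illustrative model, not the patented circuitry; all class and method names are assumptions:

```python
from collections import OrderedDict

class DirtyLineTracker:
    """Sketch of a dirty-line counter plus write-back ordering."""

    def __init__(self):
        self.count = 0              # dirty cache line counter
        self.order = OrderedDict()  # write-back order: address -> data

    def mark_dirty(self, addr, data):
        # A new dirty line joins the ordering and bumps the counter.
        if addr not in self.order:
            self.count += 1
        self.order[addr] = data
        self.order.move_to_end(addr)

    def write_back_oldest(self):
        # Writing back the oldest dirty line decrements the counter.
        addr, data = self.order.popitem(last=False)
        self.count -= 1
        return addr, data

    def evict(self, addr):
        # An evicted dirty line is located in the ordering and removed,
        # which is the role the abstract assigns to the mapping circuitry.
        if addr in self.order:
            del self.order[addr]
            self.count -= 1
```

The key point of the claim is that eviction removes a line from the *middle* of the write-back ordering, which is why a mapping (here, the dict keys) is needed alongside the ordered list.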
-
Patent number: 11550732
Abstract: A method for maintaining statistics for data elements in a cache is disclosed. The method maintains a heterogeneous cache comprising a higher performance portion and a lower performance portion. The method maintains, within the lower performance portion, a ghost cache containing statistics for data elements that are currently contained in the heterogeneous cache, and data elements that have been demoted from the heterogeneous cache within a specified time interval. The method calculates a size of the ghost cache based on an amount of frequently accessed data that is stored in backend storage volumes behind the heterogeneous cache. The method alters the size of the ghost cache as the amount of frequently accessed data changes. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 22, 2020
Date of Patent: January 10, 2023
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
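The two ideas in this abstract, statistics that outlive demotion for a time window, and a ghost-cache size tied to the amount of hot backend data, can be modeled compactly. A minimal sketch; the retention window, entry size, and all names are assumptions, not values from the patent:

```python
import time

class GhostCache:
    """Sketch: statistics survive demotion for a retention window, and
    capacity tracks the amount of frequently accessed backend data."""

    RETENTION_SECONDS = 300.0  # illustrative "specified time interval"

    def __init__(self, hot_data_bytes, bytes_per_entry=4096):
        self.stats = {}  # element id -> (access_count, demoted_at or None)
        self.capacity = hot_data_bytes // bytes_per_entry

    def record_access(self, key):
        count, _ = self.stats.get(key, (0, None))
        self.stats[key] = (count + 1, None)  # element is resident again

    def record_demotion(self, key, now=None):
        now = time.monotonic() if now is None else now
        if key in self.stats:
            count, _ = self.stats[key]
            self.stats[key] = (count, now)   # keep stats, stamp demotion time

    def expire(self, now=None):
        # Drop statistics only after the retention window has elapsed.
        now = time.monotonic() if now is None else now
        self.stats = {k: v for k, v in self.stats.items()
                      if v[1] is None or now - v[1] < self.RETENTION_SECONDS}

    def resize(self, hot_data_bytes, bytes_per_entry=4096):
        # Grow or shrink as the amount of frequently accessed data changes.
        self.capacity = hot_data_bytes // bytes_per_entry
```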
-
Patent number: 11544216
Abstract: Example tiered storage systems, storage devices, and methods provide intelligent data access across tiered storage systems. An example system can comprise one or more computing devices, a file system, an object storage system comprising an object storage, and a data tiering application. The data tiering application is executable by one or more computing devices to perform operations comprising determining, using machine learning logic, a cluster of associated files stored in the file system; and archiving the cluster of associated files from the file system to the object storage coupled for electronic communication to the file system via a computer network.
Type: Grant
Filed: April 25, 2019
Date of Patent: January 3, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Sanhita Sarkar, Kannan J. Somangili, Shanker Valipireddy
-
Patent number: 11520519
Abstract: Provided herein may be a storage device and a method of operating the same. A memory controller may include a command processor configured to generate a flush command in response to a flush request and determine flush data chunks to be stored, a write operation controller configured to control memory devices to perform a first program operation of storing flush data chunks, and to perform a second program operation of storing data corresponding to a write request that is input later than the flush request, regardless of whether a response to the flush command has been provided to a host, and a flush response controller configured to, when the first program operation is completed, provide a response to the flush command to the host depending on whether responses to flush commands, input earlier than the flush command, have been provided to the host.
Type: Grant
Filed: July 16, 2019
Date of Patent: December 6, 2022
Assignee: SK hynix Inc.
Inventors: Byung Jun Kim, Eu Joon Byun, Hye Mi Kang
-
Patent number: 11513947
Abstract: Embodiments of the present disclosure relate to establishing and verifying an index file. The method for establishing an index file includes: in response to receiving a data block to be stored, determining first verification information for verifying the data block and a first storage address for storing the data block. This method further includes: based on the first verification information, determining an index entry for the data block and a second storage address for storing the index entry, wherein the index entry includes the first verification information and the first storage address, and the index entry will be included in the index file. This method further includes: based on the index entry and the second storage address, determining second verification information. This method further includes: based on the second verification information and historical verification information for the index file, determining third verification information for verifying the index file.
Type: Grant
Filed: May 31, 2020
Date of Patent: November 29, 2022
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Haitao Li, Jie Liu, Jian Wen, Chao Lin
-
Patent number: 11500784
Abstract: A method is provided that includes searching tags in a tag group comprised in a tagged memory system for an available tag line during a clock cycle, wherein the tagged memory system includes a plurality of tag lines having respective tags and wherein the tags are divided into a plurality of non-overlapping tag groups, and searching tags in a next tag group of the plurality of tag groups for an available tag line during a next clock cycle when the searching in the tag group does not find an available tag line.
Type: Grant
Filed: August 4, 2021
Date of Patent: November 15, 2022
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventor: Sureshkumar Govindaraj
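The one-group-per-clock-cycle search the abstract describes can be simulated in software. A hedged sketch: the function signature and the flat boolean representation of tag availability are illustrative choices, not from the patent:

```python
def find_available_tag(tag_free, group_size):
    """Sketch: search one non-overlapping tag group per simulated clock
    cycle, advancing to the next group only when the current group has
    no available tag line.

    tag_free is a list of booleans, one per tag line; groups are
    consecutive slices of group_size tags."""
    groups = [range(i, min(i + group_size, len(tag_free)))
              for i in range(0, len(tag_free), group_size)]
    for cycle, group in enumerate(groups):
        for tag in group:            # all tags of one group in one cycle
            if tag_free[tag]:
                return cycle, tag    # (cycles consumed, tag line found)
    return None                      # no tag line available anywhere
```

Splitting the tags into groups bounds how many comparators must fire per cycle, trading worst-case latency (one cycle per group) for hardware cost.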
-
Patent number: 11494306
Abstract: Systems and methods are disclosed including a first memory component, a second memory component having a lower access latency than the first memory component and acting as a cache for the first memory component, and a processing device operatively coupled to the first and second memory components. The processing device can perform operations including receiving a data access operation and, responsive to determining that a data structure includes an indication of an outstanding data transfer of data associated with a physical address of the data access operation, determining whether an operation to copy the data, associated with the physical address, from the first memory component to the second memory component is scheduled to be executed. The processing device can further perform operations including determining to delay a scheduling of an execution of the data access operation until the operation to copy the data is executed.
Type: Grant
Filed: August 26, 2020
Date of Patent: November 8, 2022
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Horia C. Simionescu, Chung Kuang Chin, Paul Stonelake, Narasimhulu Dharanikumar Kotte
-
Patent number: 11487665
Abstract: A first read request for data stored at a non-volatile memory is received by a primary storage controller. The data is programmed from the non-volatile memory to a first cache of the primary storage controller, the first cache to store the data over a first time range. A second read request is received for the data. In response to receiving the second read request for the data, the data is programmed to a second cache to store the data over a second time range that is greater than the first time range. A notification is transmitted to a secondary storage controller, the notification including information associated with the programming of the data to the second cache.
Type: Grant
Filed: August 27, 2019
Date of Patent: November 1, 2022
Assignee: Pure Storage, Inc.
Inventors: Riley Thomasson, Manpreet Singh, Mohit Gupta, Joshua Freilich
-
Patent number: 11487452
Abstract: In various embodiments, an electronic device may include a display, a memory including a first space storing no data and a second space storing data, and a processor. The processor may be configured to control the electronic device to: receive an input for inputting a setting value for a fast data storage mode of the memory, to allocate a predetermined size of a free space of a file system of the electronic device as a temporary memory space for the fast data storage mode based on the setting value for the fast data storage mode, to control the memory to allocate a predetermined size of the first space as a borrowed space for the fast data storage mode corresponding to the size of the temporary memory space, to recognize occurrence of an event for starting data storage through the fast data storage mode, and to control the memory to perform the data storage using the borrowed space through the fast data storage mode in response to the occurrence of the event.
Type: Grant
Filed: February 27, 2020
Date of Patent: November 1, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Changheun Lee, Sungdo Moon
-
Patent number: 11461236
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for allocation in a victim cache system. An example apparatus includes a first cache storage, a second cache storage, a cache controller coupled to the first cache storage and the second cache storage and operable to receive a memory operation that specifies an address, determine, based on the address, that the memory operation evicts a first set of data from the first cache storage, determine that the first set of data is unmodified relative to an extended memory, and cause the first set of data to be stored in the second cache storage.
Type: Grant
Filed: May 22, 2020
Date of Patent: October 4, 2022
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
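The allocation rule claimed here, only clean (unmodified) evictions go to the victim cache, is easy to model. A minimal sketch with an illustrative FIFO first cache; the class, capacity, and dirty-tracking details are assumptions:

```python
class VictimCacheController:
    """Sketch: an eviction from the first cache is allocated into the
    second (victim) cache only when the evicted data is unmodified
    relative to extended memory."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.first = {}   # address -> (data, dirty); insertion-ordered
        self.victim = {}  # address -> data

    def access(self, addr, new_data):
        # On a fill that exceeds capacity, evict the oldest first-cache line.
        if addr not in self.first and len(self.first) >= self.capacity:
            victim_addr, (data, dirty) = next(iter(self.first.items()))
            del self.first[victim_addr]
            if not dirty:                        # clean vs. extended memory
                self.victim[victim_addr] = data  # allocate into victim cache
        self.first[addr] = (new_data, False)
```

Skipping allocation for dirty lines matters because a dirty eviction must be written back to memory anyway; the victim cache can then be reserved for clean data that would otherwise be lost entirely.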
-
Patent number: 11463511
Abstract: Software-based data planes for network function virtualization may use a modular approach in which network functions are implemented as modules that can be composed into service chains. Infrastructures that allow these modules to share central processing unit resources are particularly appealing since they support multi-tenancy or diverse service chains applied to different traffic classes. Systems, methods, and apparatuses introduce schemes for load balancing considering central processing unit utilization of a next hop device when processing a packet that uses a service chain.
Type: Grant
Filed: December 17, 2018
Date of Patent: October 4, 2022
Assignees: AT&T Intellectual Property I, L.P., The George Washington University
Inventors: Abhigyan, Wei Zhang, Timothy Wood
-
Patent number: 11449235
Abstract: The present technology relates to an electronic device. According to the present technology, a storage device having an improved operation speed may include a nonvolatile memory device, a main memory configured to temporarily store data related to controlling the nonvolatile memory device, and a memory controller configured to control the nonvolatile memory device and the main memory under control of an external host. The main memory may aggregate and process a number of write transactions having continuous addresses, among write transactions received from the memory controller, equal to a burst length unit of the main memory.
Type: Grant
Filed: August 12, 2020
Date of Patent: September 20, 2022
Assignee: SK hynix Inc.
Inventors: Do Hun Kim, Kwang Sun Lee
-
Patent number: 11444769
Abstract: A method of authenticating sensor data includes receiving, by at least a temporal attester, sensor data, calculating, by the at least a temporal attester, a current time, generating, by the at least a temporal attester, a secure timestamp generated as a function of the current time, and transmitting, by the at least a temporal attester and to at least a verifier, a temporally attested sensor signal including the secure timestamp.
Type: Grant
Filed: July 2, 2019
Date of Patent: September 13, 2022
Assignee: Ares Technologies, Inc.
Inventor: Christian Wentz
-
Patent number: 11436016
Abstract: A technique for determining whether a register value should be written to an operand cache or whether the register value should remain in and not be evicted from the operand cache is provided. The technique includes executing an instruction that accesses an operand that comprises the register value, performing one or both of a lookahead technique and a prediction technique to determine whether the register value should be written to an operand cache or whether the register value should remain in and not be evicted from the operand cache, and based on the determining, updating the operand cache.
Type: Grant
Filed: December 4, 2019
Date of Patent: September 6, 2022
Assignee: Advanced Micro Devices, Inc.
Inventors: Anthony T. Gutierrez, Bradford M. Beckmann, Marcus Nathaniel Chow
-
Patent number: 11409672
Abstract: A memory module includes at least two memory devices. Each of the memory devices perform verify operations after attempted writes to their respective memory cores. When a write is unsuccessful, each memory device stores information about the unsuccessful write in an internal write retry buffer. The write operations may have only been unsuccessful for one memory device and not any other memory devices on the memory module. When the memory module is instructed, both memory devices on the memory module can retry the unsuccessful memory write operations concurrently. Both devices can retry these write operations concurrently even though the unsuccessful memory write operations were to different addresses.
Type: Grant
Filed: May 12, 2020
Date of Patent: August 9, 2022
Assignee: Rambus Inc.
Inventors: Hongzhong Zheng, Brent Haukness
-
Patent number: 11403263
Abstract: Various embodiments disclose a method for maintaining file versions in volatile memory. The method includes storing, in volatile memory for at least a first portion of a first sync interval, a first version of a file that is not modifiable during the at least the first portion of the first sync interval. The method also includes storing, in volatile memory for at least a second portion of the first sync interval, a second version of the file that is modifiable during the at least the second portion of the first sync interval. The method also includes subsequent to the first sync interval, replacing in nonvolatile memory, a third version of the file with the first version of the file stored in volatile memory. Further, the method includes marking the second version of the file as not modifiable during at least a first portion of a second sync interval.
Type: Grant
Filed: June 5, 2019
Date of Patent: August 2, 2022
Assignee: NETFLIX, INC.
Inventors: John David Blair, Anders Grindal Bakken
-
Patent number: 11402997
Abstract: The present technology relates to an electronic device. According to the present technology, a storage device having an improved operation speed may include a nonvolatile memory device, a main memory configured to temporarily store data related to controlling the nonvolatile memory device, and a memory controller configured to control the nonvolatile memory device and the main memory under control of an external host. The main memory may aggregate and process a number of write transactions having continuous addresses, among write transactions received from the memory controller, equal to a burst length unit of the main memory.
Type: Grant
Filed: August 12, 2020
Date of Patent: August 2, 2022
Assignee: SK hynix Inc.
Inventors: Do Hun Kim, Kwang Sun Lee
-
Patent number: 11403229
Abstract: Methods, apparatus, systems and articles of manufacture to facilitate atomic operation in victim cache are disclosed. An example system includes a first cache storage to store a first set of data; a second cache storage to store a second set of data that has been evicted from the first cache storage; and a storage queue coupled to the first cache storage and the second cache storage, the storage queue including: an arithmetic component to: receive the second set of data from the second cache storage in response to a memory operation; and perform an arithmetic operation on the second set of data to produce a third set of data; and an arbitration manager to store the third set of data in the second cache storage.
Type: Grant
Filed: May 22, 2020
Date of Patent: August 2, 2022
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
-
Patent number: 11385935
Abstract: Various embodiments provide an electronic device and a method, the electronic device comprising: a memory; a first processor; a second processor which has attributes different from those of the first processor; and a control unit, wherein the control unit is configured to identify a task loaded into the memory, select which of the first processor and the second processor is to execute the task, on the basis of attribute information corresponding to a user interaction associated with the task, and allocate the task to the selected processor. Other embodiments are also possible.
Type: Grant
Filed: July 24, 2020
Date of Patent: July 12, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kiljae Kim, Jaeho Kim, Daehyun Cho
-
Patent number: 11386006
Abstract: Embodiments of the present disclosure generally relate to a target device handling overlap write commands. In one embodiment, a target device includes a non-volatile memory and a controller coupled to the non-volatile memory. The controller includes a random accumulated buffer, a sequential accumulated buffer, and an overlap accumulated buffer. The controller is configured to receive a new write command, classify the new write command, and write data associated with the new write command to one of the random accumulated buffer, the sequential accumulated buffer, or the overlap accumulated buffer. Once the overlap accumulated buffer becomes available, the controller first flushes to the non-volatile memory the data in the random accumulated buffer and the sequential accumulated buffer that was received prior in sequence to the data in the overlap accumulated buffer. The controller then flushes the available overlap accumulated buffer, ensuring that new write commands override prior write commands.
Type: Grant
Filed: July 9, 2020
Date of Patent: July 12, 2022
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventor: Shay Benisty
-
Patent number: 11379382
Abstract: A method for demoting a selected storage element from a cache memory includes storing favored and non-favored storage elements within a higher performance portion and lower performance portion of the cache memory. The favored storage elements are retained in the cache memory longer than the non-favored storage elements. The method maintains a first favored LRU list and a first non-favored LRU list, associated with the favored and non-favored storage elements stored within the higher performance portion of the cache. The method selects a favored or non-favored storage element to be demoted from the higher performance portion of the cache memory according to life expectancy and residency of the oldest favored and non-favored storage elements in the first LRU lists. The method demotes the selected storage element from the higher performance portion of the cache to the lower performance portion of the cache, or to the data storage devices, according to a cache demotion policy.
Type: Grant
Filed: December 8, 2020
Date of Patent: July 5, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew G. Borlick
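The selection step, comparing the residency of the oldest favored and non-favored entries against their respective life expectancies, can be sketched as follows. The ratio-based comparison and all parameter names are assumptions made for illustration; the patent does not specify this exact formula:

```python
from collections import deque

def pick_demotion(favored, non_favored, now, favored_life, non_favored_life):
    """Sketch: each list is a deque of (element, insert_time) with the
    oldest entry at the left. Demote whichever oldest entry has most
    exceeded its life expectancy; giving favored elements a longer life
    expectancy keeps them cached longer."""
    candidates = []
    if favored:
        elem, t = favored[0]
        candidates.append(((now - t) / favored_life, favored, elem))
    if non_favored:
        elem, t = non_favored[0]
        candidates.append(((now - t) / non_favored_life, non_favored, elem))
    if not candidates:
        return None
    _, lst, elem = max(candidates, key=lambda c: c[0])
    lst.popleft()
    return elem
```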
-
Patent number: 11379427
Abstract: A method for improving asynchronous data replication between a primary storage system and a secondary storage system is disclosed. In one embodiment, such a method includes monitoring, in a cache of the primary storage system, unmirrored data elements needing to be mirrored, but that have not yet been mirrored, from the primary storage system to the secondary storage system. The method maintains a regular LRU list designating an order in which data elements are demoted from the cache. The method determines whether a data element at an LRU end of the regular LRU list is an unmirrored data element. In the event the data element at the LRU end of the regular LRU list is an unmirrored data element, the method moves the data element to a transfer-pending LRU list dedicated to unmirrored data elements in the cache. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 28, 2020
Date of Patent: July 5, 2022
Assignee: International Business Machines Corporation
Inventors: Gail Spear, Lokesh M. Gupta, Kyler A. Anderson, David B. Schreiber, Kevin J. Ash
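The check at the LRU end reduces to a short loop: if the oldest element is already mirrored it can be demoted, otherwise it is parked on the transfer-pending list so demotion cannot discard data the secondary has not yet received. A minimal sketch with illustrative names:

```python
from collections import deque

def demote_one(regular_lru, transfer_pending, mirrored):
    """Sketch: regular_lru is a deque with the LRU end at the left;
    mirrored is the set of elements already copied to the secondary
    storage system. Returns the element actually demoted, or None."""
    while regular_lru:
        elem = regular_lru.popleft()
        if elem in mirrored:
            return elem               # safe to demote from the cache
        transfer_pending.append(elem) # park unmirrored data instead
    return None
```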
-
Patent number: 11373188
Abstract: A co-processing fraud risk scoring system for scoring electronic payment transactions for potential fraud is described. Additionally, a method and a computer-readable storage medium for scoring electronic payment transactions for potential fraud are described.
Type: Grant
Filed: January 28, 2019
Date of Patent: June 28, 2022
Assignee: MASTERCARD INTERNATIONAL INCORPORATED
Inventors: Srinivasarao Nidamanuri, Daryl Hurt
-
Patent number: 11354306
Abstract: One or more client threads are executed. One or more processing threads corresponding to the one or more client threads are executed. The processing threads are configurable to generate statistical information for each database query statement processed by the corresponding client thread. The statistical information is generated from the processing threads. The statistical information is stored in chunks of memory managed via a plurality of queues. The chunks of memory containing the statistics are analyzed. Outlier statements are filtered based on the statistics. Non-outlier statements are stored by a storage device.
Type: Grant
Filed: September 9, 2019
Date of Patent: June 7, 2022
Assignee: salesforce.com, Inc.
Inventor: Mark Wilding
-
Patent number: 11347400
Abstract: The present technology relates to an electronic device. According to the present technology, a storage device having an improved operation speed may include a nonvolatile memory device, a main memory configured to temporarily store data related to controlling the nonvolatile memory device, and a memory controller configured to control the nonvolatile memory device and the main memory under control of an external host. The main memory may aggregate and process a number of write transactions having continuous addresses, among write transactions received from the memory controller, equal to a burst length unit of the main memory.
Type: Grant
Filed: August 12, 2020
Date of Patent: May 31, 2022
Assignee: SK hynix Inc.
Inventors: Do Hun Kim, Kwang Sun Lee
-
Patent number: 11340830
Abstract: Methods, systems, and devices for memory buffer management and bypass are described. Data corresponding to a page size of a memory array may be received at a virtual memory bank of a memory device, and a value of a counter associated with the virtual memory bank may be incremented. Upon determining that a value of the counter has reached a threshold value, the data may be communicated from the virtual memory bank to a buffer of the same memory device. For instance, the counter may be incremented based on the virtual memory bank receiving an access command from a host device.
Type: Grant
Filed: July 17, 2020
Date of Patent: May 24, 2022
Assignee: Micron Technology, Inc.
Inventors: Robert Nasry Hasbun, Dean D. Gans, Sharookh Daruwalla
-
Patent number: 11340795
Abstract: A snapshot lookup table (SLT) and snapshot pointer structure(s) (SPSs) may be provided for a logical data unit (LSU), each SPS entry corresponding to an LSU data portion and a physical storage location at which data is stored for the data portion for a particular snapshot. A current lookup table (CLT) for a current time may be provided for an LSU, including an entry for each LSU data that points to a respective entry of an SPS. Each time a first write following the creation of a snapshot is made to an LSU data portion, the corresponding CLT entry may be updated to point to the SPS entry that was updated to point to an LSU track table entry. To create a snapshot, a snapshot lookup table (SLT) is created for each snapshot, and the contents of the CLT are copied to the newly created SLT.
Type: Grant
Filed: May 28, 2020
Date of Patent: May 24, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Jeffrey Wilson, Michael Ferrari, Mark J. Halstead, Sandeep Chandrashekara
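The table relationship described here, a CLT that is copied into an SLT at snapshot time, with first-writes repointing CLT entries at fresh SPS entries, can be sketched as plain lists and dicts. This is a loose illustrative model of the pointer flow only; the data structures and names are assumptions, not the patented layout:

```python
class SnapshotTables:
    """Sketch: the CLT points each data portion at an SPS entry; creating
    a snapshot copies the CLT into a new SLT; the first write after a
    snapshot redirects the CLT entry to a fresh SPS entry."""

    def __init__(self, n_portions):
        self.sps = []                      # SPS entries: physical locations
        self.clt = [None] * n_portions     # portion index -> SPS entry index
        self.slts = {}                     # snapshot id -> frozen CLT copy
        self.written_since_snapshot = set()

    def create_snapshot(self, snap_id):
        # Creating a snapshot is just a copy of the CLT into a new SLT.
        self.slts[snap_id] = list(self.clt)
        self.written_since_snapshot.clear()

    def write(self, portion, location):
        if portion not in self.written_since_snapshot:
            # First write after a snapshot: allocate a new SPS entry and
            # repoint the CLT, leaving the old entry for the SLT to reference.
            self.sps.append(location)
            self.clt[portion] = len(self.sps) - 1
            self.written_since_snapshot.add(portion)
        else:
            self.sps[self.clt[portion]] = location
```

The payoff is that snapshot creation costs one table copy, while redirect-on-first-write preserves each snapshot's view without copying data.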
-
Patent number: 11343284
Abstract: In various embodiments, a data map generation system is configured to receive a request to generate a privacy-related data map for particular computer code, and, at least partially in response to the request, determine a location of the particular computer code, automatically obtain the particular computer code based on the determined location, and analyze the particular computer code to determine privacy-related attributes of the particular computer code, where the privacy-related attributes indicate types of personal information that the particular computer code collects or accesses. The system may be further configured to generate and display a data map of the privacy-related attributes to a user.
Type: Grant
Filed: May 31, 2021
Date of Patent: May 24, 2022
Assignee: OneTrust, LLC
Inventors: Kabir A. Barday, Mihir S. Karanjkar, Steven W. Finch, Ken A. Browne, Nathan W. Heard, Aakash H. Patel, Jason L. Sabourin, Richard L. Daniel, Dylan D. Patton-Kuhl, Jonathan Blake Brannon
-
Patent number: 11334495
Abstract: A data processing apparatus is provided. It includes cache circuitry to store a plurality of items, each having an associated indicator. Processing circuitry executes instructions using at least some of the plurality of items. Fill circuitry inserts a new item into the cache circuitry. Eviction circuitry determines which of the plurality of items is to be a victim item based on the indicator, and evicts the victim item from the cache circuitry. Detection circuitry detects a state of the processing circuitry at a time that the new item is inserted into the cache circuitry, and sets the indicator in dependence on the state.
Type: Grant
Filed: August 23, 2019
Date of Patent: May 17, 2022
Assignee: Arm Limited
Inventors: Joseph Michael Pusdesris, Yasuo Ishii
-
Patent number: 11334663
Abstract: Embodiments are directed to a computer-implemented method for determining whether a program has been modified. The method can include determining that a first instance of the program is loaded in main memory. The method can further include determining a starting memory location of the first instance of the program. A second instance of the program is loaded into main memory. The second instance of the program is loaded such that memory references in the second instance of the program are resolved as if the second instance were loaded at the starting memory location of the first instance of the program. The first instance of the program is compared with the second instance of the program.
Type: Grant
Filed: July 19, 2017
Date of Patent: May 17, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: John R. Eells, Mark A. Nelson
-
Patent number: 11334664
Abstract: Embodiments are directed to a computer-implemented method for determining whether a program has been modified. The method can include determining that a first instance of the program is loaded in main memory. The method can further include determining a starting memory location of the first instance of the program. A second instance of the program is loaded into main memory. The second instance of the program is loaded such that memory references in the second instance of the program are resolved as if the second instance were loaded at the starting memory location of the first instance of the program. The first instance of the program is compared with the second instance of the program.
Type: Grant
Filed: November 7, 2017
Date of Patent: May 17, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: John R. Eells, Mark A. Nelson
-
Patent number: 11314691
Abstract: A method for improving asynchronous data replication between a primary storage system and a secondary storage system maintains a cache in the primary storage system. The cache includes a higher performance portion and a lower performance portion. The method monitors, in the cache, unmirrored data elements needing to be mirrored, but that have not yet been mirrored, from the primary storage system to the secondary storage system. The method maintains a regular LRU list designating an order in which data elements are demoted from the cache. The method determines whether a data element at an LRU end of the regular LRU list is an unmirrored data element. In the event the data element at the LRU end is an unmirrored data element, the method moves the data element from the higher performance portion to the lower performance portion. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 28, 2020
Date of Patent: April 26, 2022
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kevin J. Ash, Kyler A. Anderson
-
Patent number: 11301393
Abstract: A data storage device may include a storage; and a controller, wherein the controller comprises: an address translator configured to generate multiple map data, each including a physical address of the storage corresponding to a logical address and multiple meta data for the multiple map data respectively; a descriptor cache manager configured to add new meta data to a storage area of a descriptor cache, the storage area for the new meta data being physically continuous with a storage area in which last meta data, of the multiple meta data, is stored and assign a head pointer and a tail pointer to select positions in the descriptor cache; a map cache manager configured to store the multiple map data in a map cache; and a map search component configured to search the descriptor cache according to a search range determined by the head pointer and the tail pointer.
Type: Grant
Filed: October 2, 2019
Date of Patent: April 12, 2022
Assignee: SK hynix Inc.
Inventor: Joung Young Lee
-
Patent number: 11287995
Abstract: Techniques for managing object pools within computer storage are disclosed. In some embodiments, a runtime environment determines, for a respective object pool allocated in volatile or non-volatile storage of a computing system, whether a new object has been added to the respective object pool within a threshold timeframe. Responsive to determining that a new object has not been added to the respective object pool within the threshold timeframe, the runtime environment removes a subset of one or more objects from the respective object pool. In some embodiments, the runtime environment monitors a set of one or more garbage collection metrics. Responsive to determining that the set of one or more garbage collection metrics have satisfied one or more thresholds, the runtime environment identifies and removes idle objects from at least one object pool.
Type: Grant
Filed: May 18, 2020
Date of Patent: March 29, 2022
Assignee: Oracle International Corporation
Inventor: Nathan Luther Reynolds
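The first embodiment, trim a pool when nothing new has been added within a threshold timeframe, can be sketched directly. The timeframe, trim fraction, and all names are illustrative assumptions:

```python
class ObjectPool:
    """Sketch: if no object has been added within the idle timeframe,
    the pool is over-provisioned and a subset of objects is removed."""

    IDLE_TIMEFRAME = 60.0  # seconds; illustrative threshold

    def __init__(self):
        self.objects = []
        self.last_added = 0.0

    def add(self, obj, now):
        self.objects.append(obj)
        self.last_added = now

    def trim(self, now, fraction=0.5):
        # Remove a subset of objects when the pool has been quiet too long.
        if now - self.last_added > self.IDLE_TIMEFRAME and self.objects:
            keep = max(1, int(len(self.objects) * (1 - fraction)))
            removed, self.objects = self.objects[keep:], self.objects[:keep]
            return removed
        return []
```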
-
Patent number: 11281594
Abstract: A method for maintaining statistics for data elements in a cache is disclosed. The method maintains a heterogeneous cache comprising a higher performance portion and a lower performance portion. The method maintains, within the lower performance portion, a ghost cache containing statistics for data elements that are currently contained in the heterogeneous cache, and data elements that have been demoted from the heterogeneous cache within a specified time interval. The method maintains updates to the statistics in an update area within the higher performance portion. The method determines whether the updates have reached a specified threshold and, in the event the updates have reached the specified threshold, flushes the updates from the update area to the ghost cache to update the statistics. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 22, 2020
Date of Patent: March 22, 2022
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
-
Patent number: 11281587
Abstract: A method for managing a cache memory of a storage system, the method may include receiving, by a controller of the storage system, an access request related to a data unit; wherein the receiving occurs while (a) the cache memory stores a group of oldest cached data units, and (b) the data unit is stored in a memory module of the storage system that differs from the cache memory; determining, by the controller, a caching category of the data unit; refraining from caching the data unit in the cache memory when a hit score of the caching category of the data unit is lower than a hit score of the group of oldest cached data units; and caching the data unit in the cache memory when the hit score of the caching category of the data unit is higher than the hit score of the group of oldest cached data units; wherein the hit score of the caching category of the data unit is indicative of a probability of a cache hit per data unit of the caching category.
Type: Grant
Filed: May 22, 2018
Date of Patent: March 22, 2022
Assignee: INFINIDAT LTD.
Inventor: Yechiel Yochai
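The admission test reduces to a comparison of two hit scores. A minimal sketch, assuming hit scores are tracked as hits per data unit of each category (class and parameter names are illustrative):

```python
class HitScoreAdmission:
    """Sketch of category-based cache admission: a data unit is cached only
    if the hit score of its caching category exceeds the hit score of the
    group of oldest cached data units (all names are assumptions)."""

    def __init__(self, category_hit_scores, oldest_group_hit_score):
        self.scores = category_hit_scores      # category -> hits per data unit
        self.oldest_score = oldest_group_hit_score

    def should_cache(self, category):
        # Admit only categories more likely to hit than the eviction candidates.
        return self.scores.get(category, 0.0) > self.oldest_score
```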
-
Patent number: 11275688
Abstract: A processing system includes a plurality of compute units, with each compute unit having an associated first cache of a plurality of first caches, and a second cache shared by the plurality of compute units. The second cache operates to manage transfers of cachelines between the first caches of the plurality of first caches such that when multiple candidate first caches contain a valid copy of a requested cacheline, the second cache selects the candidate first cache having the shortest total path from the second cache to the candidate first cache and from the candidate first cache to the compute unit issuing a request for the requested cacheline.
Type: Grant
Filed: December 2, 2019
Date of Patent: March 15, 2022
Assignee: Advanced Micro Devices, Inc.
Inventors: Sriram Srinivasan, John Kelley, Matthew Schoenwald
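The selection rule is a minimization over the two path segments the abstract names. A sketch with assumed distance tables:

```python
def select_source_cache(candidates, dist_l2_to_l1, dist_l1_to_requester):
    """Sketch: among candidate first caches holding a valid copy, pick the
    one minimizing total path length: second cache -> candidate first cache
    plus candidate first cache -> requesting compute unit (names assumed)."""
    return min(candidates,
               key=lambda c: dist_l2_to_l1[c] + dist_l1_to_requester[c])
```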
-
Patent number: 11275690
Abstract: Techniques are disclosed for transferring a message between a sender agent and a receiver agent via a shared memory having a main memory and a cache. Feedback data indicative of a number of read messages in the shared memory is generated by the receiver agent. The feedback data is sent from the receiver agent to the sender agent. A number of unread messages in the shared memory is estimated by the sender agent based on the number of read messages. A threshold for implementing a caching policy is set by the sender agent based on the feedback data. The message is designated as cacheable if the number of unread messages is less than the threshold and as non-cacheable if the number of unread messages is greater than the threshold. The message is written to the shared memory based on the designation.
Type: Grant
Filed: August 17, 2020
Date of Patent: March 15, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Michael Zuzovski, Ofer Naaman, Adi Habusha
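The sender-side decision reduces to estimating the backlog from receiver feedback and comparing it to the threshold. A sketch (function and parameter names are assumptions; the abstract leaves the equal-to-threshold case unspecified, so this sketch treats it as non-cacheable):

```python
def designate_message(total_sent, read_count, threshold):
    """Sketch: estimate unread messages from the receiver's read-count
    feedback and mark the next message cacheable only while the backlog
    stays below the sender's threshold (names assumed)."""
    unread = total_sent - read_count       # sender-side backlog estimate
    return "cacheable" if unread < threshold else "non-cacheable"
```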
-
Patent number: 11243715
Abstract: A memory controller for processing a flush request includes: a request message controller configured to generate a command or control signal in response to a request of a host; a delay time determiner configured to, when the request is a current flush request, generate delay information based on a number of write requests received between the previously received flush request and the current flush request; and a response message controller configured to generate a flush response corresponding to the current flush request based on the delay information.
Type: Grant
Filed: July 16, 2019
Date of Patent: February 8, 2022
Assignee: SK hynix Inc.
Inventor: Sang Hune Jung
-
Patent number: 11237741
Abstract: An electronic device and a control method for controlling a memory are provided.
Type: Grant
Filed: February 13, 2019
Date of Patent: February 1, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyun Joon Cha, Chulmin Kim, Jaewon Kim, Taeho Kim, Sooyong Suk, Yongtaek Lee
-
Patent number: 11226778
Abstract: Techniques manage metadata. Such techniques involve: in response to receiving a request for accessing metadata in a first page, determining, from a plurality of storage units including pages for storing metadata, a storage unit where the first page is located, the plurality of storage units including a first storage unit and a second storage unit, an access speed of the second storage unit exceeding an access speed of the first storage unit; accessing, from the determined storage unit, the first page for metadata; in response to the first page being accessed from the first storage unit, determining whether hotness of the first page exceeds a threshold level; and in response to the hotness of the first page exceeding the threshold level, transferring the first page from the first storage unit to the second storage unit. Accordingly, such techniques can improve the efficiency for accessing the metadata.
Type: Grant
Filed: March 17, 2020
Date of Patent: January 18, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Zhenhua Zhao, Sihang Xia, Changyu Feng, Xinlei Xu
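The promotion rule can be sketched with two dictionaries standing in for the slower first storage unit and faster second storage unit; the hotness metric here is a bare access counter, which is an assumption (the patent does not define it in the abstract):

```python
def access_metadata_page(page, tiers, hotness, threshold):
    """Sketch: serve a metadata page from whichever tier holds it; when a
    page on the slow tier gets hot enough, promote it to the fast tier.
    'fast'/'slow' stand in for the second/first storage units (names assumed)."""
    hotness[page] = hotness.get(page, 0) + 1
    if page in tiers["slow"] and hotness[page] > threshold:
        tiers["fast"][page] = tiers["slow"].pop(page)   # promote hot page
    src = "fast" if page in tiers["fast"] else "slow"
    return tiers[src][page], src
```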
-
Patent number: 11210234
Abstract: A processor includes a cache having two or more test regions and a larger non-test region. The processor further includes a cache controller that applies different cache replacement policies to the different test regions of the cache, and a performance monitor that measures performance metrics for the different test regions, such as a cache hit rate at each test region. Based on the performance metrics, the cache controller selects a cache replacement policy for the non-test region, such as selecting the replacement policy associated with the test region having the better performance metrics among the different test regions. The processor deskews the memory access measurements in response to a difference in the amount of accesses to the different test regions exceeding a threshold.
Type: Grant
Filed: October 31, 2019
Date of Patent: December 28, 2021
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Paul Moyer, John Kelley
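The selection step (a form of what the literature calls set dueling) is a comparison of per-region hit rates. A minimal sketch, with the statistics layout assumed:

```python
def pick_replacement_policy(test_regions):
    """Sketch of test-region policy selection: choose for the non-test
    region the policy whose test region shows the higher hit rate.
    The hits/accesses field names are assumptions."""
    def hit_rate(stats):
        return stats["hits"] / stats["accesses"] if stats["accesses"] else 0.0
    return max(test_regions, key=lambda policy: hit_rate(test_regions[policy]))
```

The deskew step the abstract mentions would normalize these counters when one test region receives disproportionately more accesses than the other.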
-
Patent number: 11200173
Abstract: Techniques are disclosed relating to controlling cache size and priority of data stored in the cache using machine learning techniques. A software cache may store data for a plurality of different user accounts using one or more hardware storage elements. In some embodiments, a machine learning module generates, based on access patterns to the software cache, a control value that specifies a size of the cache and generates time-to-live values for entries in the cache. In some embodiments, the system evicts data based on the time-to-live values. The disclosed techniques may reduce cache access times and/or improve cache hit rate.
Type: Grant
Filed: November 23, 2020
Date of Patent: December 14, 2021
Assignee: PayPal, Inc.
Inventor: Shanmugasundaram Alagumuthu
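The eviction side of this scheme is plain TTL expiry. A sketch in which the TTL is supplied per entry at insert time (the patent derives it from a learned model; all names here are assumptions):

```python
import time

class TtlCache:
    """Sketch of TTL-driven eviction: each entry carries a time-to-live and
    is lazily evicted once it has expired (names and layout assumed)."""

    def __init__(self):
        self._data = {}          # key -> (value, expiry timestamp)

    def put(self, key, value, ttl_s, now=None):
        now = time.monotonic() if now is None else now
        self._data[key] = (value, now + ttl_s)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self._data.get(key)
        if item is None or item[1] <= now:
            self._data.pop(key, None)    # evict the expired entry lazily
            return None
        return item[0]
```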
-
Patent number: 11200281
Abstract: A technique for caching evidence for answering questions in a cache memory of a data processing system (that is configured to answer questions) includes receiving a first question. The first question is analyzed to identify a first set of characteristics of the first question. A first set of evidence for answering the first question is loaded into the cache memory. A second question is received. The second question is analyzed to identify a second set of characteristics of the second question. A portion of the first set of evidence, whose expected usage in answering the second question is below a determined threshold, is unloaded from the cache memory.
Type: Grant
Filed: March 24, 2016
Date of Patent: December 14, 2021
Assignee: International Business Machines Corporation
Inventors: Corville O. Allen, Bernadette A. Carter, Rahul Ghosh, Joseph N. Kozhaya
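The unload step compares each cached evidence item's expected usage for the new question against a threshold. A sketch with assumed names; evidence items with no usage estimate are treated as zero here, which is an assumption:

```python
def unload_low_value_evidence(cache, expected_usage, threshold):
    """Sketch: when a new question arrives, unload cached evidence whose
    expected usage in answering it falls below the threshold (names assumed)."""
    unloaded = [doc for doc in list(cache)
                if expected_usage.get(doc, 0.0) < threshold]
    for doc in unloaded:
        cache.remove(doc)
    return unloaded
```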
-
Patent number: 11182213
Abstract: Managing applications in light of user use habits is disclosed. An indication that an application has been switched from a foreground environment to a background environment is detected. User historical use information associated with the application is accessed. Based at least in part on the user historical use information, it is determined that the application is not likely to be switched to the foreground environment. In response to the determination that the application is not likely to be switched to the foreground environment, one or more resources associated with the application are recycled.
Type: Grant
Filed: September 26, 2018
Date of Patent: November 23, 2021
Assignee: BANMA ZHIXING NETWORK (HONGKONG) CO., LIMITED
Inventor: Bo Qiang
-
Patent number: 11184457
Abstract: Systems and techniques for information-centric network data cache management are described herein. A demand metric may be calculated for a content item requested from an information-centric network (ICN). A resistance metric may be calculated for each cache node of a set of cache nodes in the ICN based on the demand metric. A topology of the set of cache nodes may be evaluated to identify a transmission cost for each cache node of the set of cache nodes. An influencer node may be selected from the set of cache nodes based on the resistance metric for the influencer node and the transmission cost for the influencer node. The content item may be cached in a data cache of the influencer node.
Type: Grant
Filed: June 27, 2019
Date of Patent: November 23, 2021
Assignee: Intel Corporation
Inventors: Ned M. Smith, Srikathyayani Srikanteswara, Kathiravetpillai Sivanesan, Eve M. Schooler, Satish Chandra Jha, Stepan Karpenko, Zongrui Ding, S M Iftekharul Alam, Yi Zhang, Kuilin Clark Chen, Gabriel Arrobo Vidal, Qian Li, Maria Ramirez Loaiza
-
Patent number: 11176201
Abstract: A technique for caching evidence for answering questions in a cache memory of a data processing system (that is configured to answer questions) includes receiving a first question. The first question is analyzed to identify a first set of characteristics of the first question. A first set of evidence for answering the first question is loaded into the cache memory. A second question is received. The second question is analyzed to identify a second set of characteristics of the second question. A portion of the first set of evidence, whose expected usage in answering the second question is below a determined threshold, is unloaded from the cache memory.
Type: Grant
Filed: October 7, 2014
Date of Patent: November 16, 2021
Assignee: International Business Machines Corporation
Inventors: Corville O. Allen, Bernadette A. Carter, Rahul Ghosh, Joseph N. Kozhaya
-
Patent number: 11163698
Abstract: A method for improving cache hit ratios for selected volumes when using synchronous I/O is disclosed. In one embodiment, such a method includes establishing, in cache, a first set of non-favored storage elements from non-favored storage areas. The method further establishes, in the cache, a second set of favored storage elements from favored storage areas. The method calculates a life expectancy for the non-favored storage elements to reside in the cache prior to eviction. The method further executes an eviction policy for the cache wherein the favored storage elements are maintained in the cache for longer than the life expectancy of the non-favored storage elements. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: May 12, 2019
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Beth A. Peterson, Kevin J. Ash, Kyler A. Anderson
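One way the eviction policy could work, sketched with assumed field names; the fallback to plain oldest-first when no non-favored element has exceeded its life expectancy is an assumption, since the abstract does not specify that case:

```python
import time

def pick_eviction_victim(entries, life_expectancy_s, now=None):
    """Sketch: non-favored entries that have lived past the computed life
    expectancy are evicted first; favored entries are kept in cache longer
    (field names assumed)."""
    now = time.monotonic() if now is None else now
    expired = [e for e in entries
               if not e["favored"] and now - e["cached_at"] >= life_expectancy_s]
    if expired:
        return min(expired, key=lambda e: e["cached_at"])   # oldest expired first
    return min(entries, key=lambda e: e["cached_at"])       # fall back to LRU
```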
-
Patent number: 11163699
Abstract: Techniques are provided for managing a least recently used cache using a linked list with a reduced memory footprint. A cache manager receives an I/O request comprising a target address, wherein the cache manager manages a cache memory having a maximum allocated amount of cache entries, and a linked list having a maximum allocated amount of list elements which is less than the maximum allocated amount of cache entries. If the target address does correspond to a cache entry, the cache manager accesses the cache entry to obtain the cache data from cache memory, removes a list element from the linked list, which corresponds to the accessed cache entry, selects an existing cache entry which currently does not have a corresponding list element in the linked list, and adds a list element to a head position of the linked list which corresponds to the selected cache entry.
Type: Grant
Filed: March 30, 2020
Date of Patent: November 2, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Itay Keller, Zohar Lapidot, Neta Peleg
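A rough sketch of the hit path described in the abstract, using an `OrderedDict` in place of the linked list. How the replacement list element is selected is not specified in the abstract, so this sketch just picks the first untracked cached key; all names are assumptions:

```python
from collections import OrderedDict

class ReducedLruTracker:
    """Sketch of the reduced-footprint recency list: the list holds fewer
    elements than the cache has entries. On a cache hit, the hit entry's
    list element (if any) is removed, and a list element for some currently
    untracked cache entry is added at the head (simplified; names assumed)."""

    def __init__(self, max_list_elements):
        self.max_elems = max_list_elements
        self.order = OrderedDict()          # tracked keys; first = head

    def on_cache_hit(self, key, cached_keys):
        self.order.pop(key, None)           # remove the hit entry's element
        for k in cached_keys:               # pick an entry with no element
            if k != key and k not in self.order:
                self.order[k] = True
                self.order.move_to_end(k, last=False)   # place at head
                break
        while len(self.order) > self.max_elems:
            self.order.popitem(last=True)   # keep the footprint bounded
```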
-
Patent number: 11157418
Abstract: A method for improving cache hit ratios dedicates, within a cache, a portion of the cache to prefetched data elements. The method maintains a high priority LRU list designating an order in which high priority prefetched data elements are demoted, and a low priority LRU list designating an order in which low priority prefetched data elements are demoted. The method calculates, for the high priority LRU list, a first score based on a first priority and a first cache hit metric. The method calculates, for the low priority LRU list, a second score based on a second priority and a second cache hit metric. The method demotes, from the cache, a prefetched data element from the high priority LRU list or the low priority LRU list depending on which of the first score and the second score is lower. A corresponding system and computer program product are also disclosed.
Type: Grant
Filed: February 9, 2020
Date of Patent: October 26, 2021
Assignee: International Business Machines Corporation
Inventors: Lokesh M. Gupta, Matthew G. Borlick, Beth A. Peterson, Kyler A. Anderson
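The demotion choice can be sketched as a score comparison between the two LRU lists. The abstract does not give the score formula, so the `priority * hit_rate` product below is an assumption, as are all field names:

```python
def choose_demotion(high, low):
    """Sketch: each prefetch LRU list gets a score from its priority weight
    and cache-hit metric; demote the oldest element of the lower-scoring
    list (score formula and field names are assumptions)."""
    def score(lst):
        return lst["priority"] * lst["hit_rate"]
    victim_list = high if score(high) < score(low) else low
    return victim_list["lru_order"][0]   # oldest element of the chosen list
```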