With Age List, e.g., Queue, MRU-LRU List, Etc. (EPO) Patents (Class 711/E12.072)
  • Patent number: 11907128
    Abstract: A technique for managing a storage system involves determining, in response to a first write operation on a first data block on a persistent storage device, whether a first group of data corresponding to the first data block is included in a cache; updating the first group of data in the cache if it is determined that the first group of data is included in the cache; and adding the first group of data to an associated data set of the cache to serve as a first record. Accordingly, such a technique can associatively manage different types of cached data corresponding to a data block, thereby optimizing the system performance.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: February 20, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Ming Zhang, Chen Gong, Qiaosheng Zhou
  • Patent number: 11893269
    Abstract: A memory system includes a memory device and a controller. The memory device includes plural storage regions including plural non-volatile memory cells. The plural storage regions have different data input/output speeds. The controller is coupled to the memory device via at least one data path. The controller performs a readahead operation in response to a read request input from an external device, determines a data attribute regarding readahead data, obtained by the readahead operation, based on a time difference between reception of the read request and completion of the readahead operation, and stores the readahead data in one of the plural storage regions based on the data attribute.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: February 6, 2024
    Assignee: SK hynix Inc.
    Inventors: Jun Hee Ryu, Kwang Jin Ko, Young Pyo Joo
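    Illustrative sketch (not from the patent): a minimal Python rendering of the latency-based placement decision described in the abstract above. The threshold value, the region objects, and the perform_readahead callable are hypothetical stand-ins; which data attribute maps to which storage region is likewise an assumption.

      import time

      LATENCY_THRESHOLD_S = 0.002   # assumed cut-off for a "fast" readahead

      def handle_read_request(read_request, perform_readahead,
                              fast_region, slow_region):
          """Run the readahead and place its data by the observed time difference."""
          received_at = time.monotonic()                   # read request received
          readahead_data = perform_readahead(read_request)
          completed_at = time.monotonic()                  # readahead completed
          # The data attribute is derived from the request-to-completion gap.
          if completed_at - received_at <= LATENCY_THRESHOLD_S:
              fast_region.store(readahead_data)            # assumed mapping
          else:
              slow_region.store(readahead_data)
          return readahead_data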
  • Patent number: 11829302
    Abstract: Examples described herein relate to proactively prefetching nodes from a storage device into a cache. In some examples, a least recently used (LRU) list used to evict nodes from a cache is traversed, the nodes referencing data blocks storing data in a storage device, the nodes being leaf nodes in a tree representing a file system; for each traversed node, a stride length is determined as the absolute value of the difference between the block offset value of the traversed node and the block offset value of the previously traversed node, the stride length is compared to a proximity threshold, and a sequential access pattern counter is updated based at least in part on the comparison; and nodes are proactively prefetched from the storage device into the cache when the sequential access pattern counter indicates a detected pattern of sequential accesses to nodes.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: November 28, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Annmary Justine Koomthanam, Jothivelavan Sivashanmugam
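    Illustrative sketch (not from the patent): the stride-length check over LRU-ordered leaf nodes, in Python. The proximity threshold, trigger value, counter decay, and node fields are assumed for the example.

      PROXIMITY_THRESHOLD = 8   # assumed largest stride still treated as sequential
      PATTERN_TRIGGER = 4       # assumed counter value that triggers prefetching

      def detect_sequential_pattern(lru_nodes):
          """Walk LRU-ordered leaf nodes and count near-sequential block offsets."""
          counter = 0
          previous_offset = None
          for node in lru_nodes:                        # nodes reference data blocks
              if previous_offset is not None:
                  stride = abs(node.block_offset - previous_offset)
                  if stride <= PROXIMITY_THRESHOLD:
                      counter += 1                      # looks sequential
                  else:
                      counter = max(counter - 1, 0)     # assumed decay on a jump
              previous_offset = node.block_offset
          return counter >= PATTERN_TRIGGER             # True => proactively prefetch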
  • Patent number: 11797452
    Abstract: Various implementations described herein relate to systems and methods for dynamically managing buffers of a storage device, including receiving, by a controller of the storage device from a host, information indicative of a frequency by which data stored in the storage device is accessed, and, in response to receiving the information, determining, by the controller, the order by which read buffers of the storage device are allocated for a next read command. The NAND read counts of virtual Word-Lines (WLs) are also used to cache more frequently accessed WLs, thus proactively reducing read disturb and consequently increasing NAND reliability and NAND life.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: October 24, 2023
    Assignee: KIOXIA CORPORATION
    Inventors: Saswati Das, Manish Kadam, Neil Buxton
  • Patent number: 11758203
    Abstract: Devices, computer-readable media, and methods for making a cache admission decision regarding a video chunk are described. For instance, a processing system including at least one processor may obtain a request for a first chunk of a first video, determine that the first chunk is not stored in a cache, and apply, in response to the determining that the first chunk is not stored in the cache, a classifier to predict whether the first chunk will be re-requested within a time horizon, where the classifier is trained in accordance with a set of features associated with a plurality of chunks of a plurality of videos. When it is predicted via the classifier that the first chunk will be re-requested within the time horizon, the processing system may store the first chunk in the cache.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: September 12, 2023
    Assignees: AT&T Intellectual Property I, L.P., University of Southern California
    Inventors: Shuai Hao, Subhabrata Sen, Emir Halepovic, Zahaib Akhtar, Ramesh Govindan, Yaguang Li
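    Illustrative sketch (not from the patent): a classifier-gated cache admission in Python. The feature set, the classifier callable, and the LRU eviction shown here are stand-ins, not the training features or cache policy of the patent.

      from collections import OrderedDict

      def admit_chunk(cache: OrderedDict, chunk_id, chunk_bytes, features,
                      classifier, capacity=1000):
          """Store the chunk only if the classifier predicts a re-request in time."""
          if chunk_id in cache:
              cache.move_to_end(chunk_id)       # already cached: refresh recency
              return cache[chunk_id]
          if classifier(features):              # hypothetical trained predicate
              cache[chunk_id] = chunk_bytes
              if len(cache) > capacity:
                  cache.popitem(last=False)     # evict least recently used chunk
          return chunk_bytes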
  • Patent number: 11726922
    Abstract: Methods, systems, and computer program products for memory protection in hypervisor environments are provided herein. A method includes maintaining, by a memory management layer of a hypervisor environment, a blockchain-based hash chain associated with a page table of the memory management layer, the page table corresponding to a plurality of memory pages; and verifying, by the memory management layer, content obtained in connection with a read operation for a given one of the plurality of memory pages based at least in part on hashes maintained for the given memory page in the blockchain-based hash chain.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: August 15, 2023
    Assignee: International Business Machines Corporation
    Inventors: Akshar Kaul, Krishnasuri Narayanam, Ken Kumar, Pankaj S. Dayama
  • Patent number: 11567878
    Abstract: An apparatus to facilitate data cache security is disclosed. The apparatus includes a cache memory to store data; and prefetch hardware to pre-fetch data to be stored in the cache memory, including a cache set monitor hardware to determine critical cache addresses to monitor to determine processes that retrieve data from the cache memory; and pattern monitor hardware to monitor cache access patterns to the critical cache addresses to detect potential side-channel cache attacks on the cache memory by an attacker process.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: January 31, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek Basak, Erdem Aktas
  • Patent number: 11561712
    Abstract: The present technology relates to an electronic device. According to the present technology, a storage device having an improved physical address obtainment speed may include a nonvolatile memory device configured to store map data including a plurality of map segments including mapping information, and a volatile memory device including a first map cache area temporarily storing the map data configured by map entries each corresponding to one logical address, and a second map cache area temporarily storing the map data configured by map indexes each corresponding to a plurality of logical addresses.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: January 24, 2023
    Assignee: SK hynix Inc.
    Inventor: Eu Joon Byun
  • Patent number: 11467964
    Abstract: A system includes a first counter configured to increment or decrement in response to a triggering event. The first counter is sized to overflow. The system also includes a second counter configured to increment or decrement in response to a triggering event. The first counter and the second counter are merged to form a third counter in response to detecting an overflow triggering event for the first counter. A merge bit indicative of whether the first counter and the second counter are merged changes value in response to merging the first counter and the second counter.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: October 11, 2022
    Assignee: Marvell Asia Pte Ltd
    Inventors: Nagesh Bangalore Lakshminarayana, Pranith Kumar Denthumdas, Rabin Sugumar
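    Illustrative sketch (not from the patent): merging two narrow counters into one wider counter when the first overflows, in Python. The counter width and the bit layout of the merged counter are assumptions.

      class MergeableCounters:
          WIDTH = 4                               # assumed per-counter width in bits

          def __init__(self):
              self.first = 0                      # narrow counter
              self.second = 0                     # narrow counter
              self.merge_bit = False              # set once the two are merged

          def increment_first(self):
              if not self.merge_bit and self.first == (1 << self.WIDTH) - 1:
                  # Overflow triggering event: form a third, wider counter by
                  # concatenating second (high bits) and first (low bits).
                  self.first = (self.second << self.WIDTH) | self.first
                  self.second = 0
                  self.merge_bit = True           # merge bit changes value
              self.first += 1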
  • Patent number: 11394690
    Abstract: A hypervisor receives an outbound network packet from a first virtual machine for secure communication with a second virtual machine, wherein the network packet includes source and destination logical network addresses, a first payload, and a network packet integrity value determined using a cryptographic session key for a current secure session between the first and second virtual machines. The hypervisor transforms the outbound network packet by replacing the logical network addresses with current physical network addresses and subsequently recalculating the network packet integrity value. The transformed outbound network packet is then transmitted onto a network for delivery to the second virtual machine.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: July 19, 2022
    Inventors: Bogdan Cosmin Chifor, Andrei Ion Bunghez
  • Patent number: 8930630
    Abstract: The present disclosure relates to a cache memory controller for controlling a set-associative cache memory, in which two or more blocks are arranged in the same set, the cache memory controller including a content modification status monitoring unit for monitoring whether some of the blocks arranged in the same set of the cache memory have been modified in contents, and a cache block replacing unit for replacing a block, which has not been modified in contents, if some of the blocks arranged in the same set have been modified in contents.
    Type: Grant
    Filed: September 2, 2009
    Date of Patent: January 6, 2015
    Assignee: Sejong University Industry Academy Cooperation Foundation
    Inventor: Gi Ho Park
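    Illustrative sketch (not from the patent): victim selection in a set-associative cache that prefers a block whose contents have not been modified, in Python. The LRU fallback when every block is dirty is an assumption.

      def choose_victim(cache_set):
          """cache_set: blocks of one set, ordered least- to most-recently used."""
          for block in cache_set:              # monitor content modification status
              if not block.dirty:              # contents not modified
                  return block                 # cheap to replace: no write-back
          return cache_set[0]                  # all modified: assumed LRU fallback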
  • Patent number: 8856278
    Abstract: A system and method for storage and retrieval of pervasive and mobile content is provided. The system may comprise a controller and a plurality of storage devices. The plurality of storage devices may include a first storage device located in a first geographic location and a second storage device located in a second geographic location. The controller may be operably connected to each storage device. The controller may also be capable of locating a first storage device containing data and transferring the data between the first storage device and a second storage device. The second storage device may be capable of transferring data to a host, which may be operably connected to the second storage device.
    Type: Grant
    Filed: November 16, 2005
    Date of Patent: October 7, 2014
    Assignee: NetApp, Inc.
    Inventor: Manu Rohani
  • Patent number: 8769195
    Abstract: A save control section included in a storage apparatus continuously performs writeback by which a data group is read out from a plurality of storage sections of the storage apparatus and by which the data group is saved in a data group storage section of the storage apparatus, or staging by which a data group saved in the data group storage section is distributed and stored in the plurality of storage sections according to storage areas of the data group storage section which store a plurality of data groups. An output section of the storage apparatus outputs in block a data group including the data stored in each of the plurality of storage sections. The data group storage section has the storage areas for storing a data group.
    Type: Grant
    Filed: January 19, 2011
    Date of Patent: July 1, 2014
    Assignee: Fujitsu Limited
    Inventors: Hidenori Yamada, Takashi Kawada, Yoshinari Shinozaki, Shinichi Nishizono, Koji Uchida
  • Patent number: 8533425
    Abstract: A shared resource management system and method are described. In one embodiment, a shared resource management system facilitates age based miss replay. In one exemplary implementation, a shared resource management system includes a plurality of engines, a shared resource, and a shared resource management unit. The plurality of engines perform processing. The shared resource supports the processing. The shared resource management unit handles multiple outstanding miss requests.
    Type: Grant
    Filed: November 1, 2006
    Date of Patent: September 10, 2013
    Assignee: Nvidia Corporation
    Inventor: Lingfeng Yuan
  • Patent number: 8499117
    Abstract: A method for writing and reading data in memory cells, comprises the steps of: defining a virtual memory, defining write commands and read commands of data (DT) in the virtual memory, providing a first nonvolatile physical memory zone (A1), providing a second nonvolatile physical memory zone (A2), and, in response to a write command of an initial data, searching for a first erased location in the first memory zone, writing the initial data (DT1a) in the first location (PB1(DPP0)), and writing, in the metadata (DSC0) an information (DS(PB1)) allowing the first location to be found and an information (LPA, DS(PB1)) forming a link between the first location and the location of the data in the virtual memory.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: July 30, 2013
    Assignee: STMicroelectronics (Rousset) SAS
    Inventor: Hubert Rousseau
  • Patent number: 8495333
    Abstract: A system including a communication interface, a memory, and a processor. The communication interface is configured to receive data. The memory is divided into a first retention region and a second retention region, wherein the first retention region is configured to store data for a first predetermined period of time, and the second retention region is configured to store data for a second predetermined period of time. The processor is configured to i) initially store, within the first retention region of the memory, the data that is received, and ii) in response to the data that is received having been stored in the first retention region of the memory for a time limit that exceeds the first predetermined period of time, transfer the data that is received from the first retention region of the memory to the second retention region of the memory.
    Type: Grant
    Filed: July 13, 2012
    Date of Patent: July 23, 2013
    Assignee: Marvell International Ltd.
    Inventors: Mark Montierth, Randall Briggs, Douglas Keithley, David Bartle
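    Illustrative sketch (not from the patent): a two-region memory where data that outstays the first region's retention period is transferred to the second region, in Python. The retention time and the periodic sweep are assumptions.

      import time

      FIRST_RETENTION_S = 60.0        # assumed retention period of the first region

      class RetentionMemory:
          def __init__(self):
              self.first_region = {}   # key -> (data, time stored)
              self.second_region = {}

          def store(self, key, data):
              self.first_region[key] = (data, time.monotonic())  # initial placement

          def sweep(self):
              """Move data that exceeded the first region's retention period."""
              now = time.monotonic()
              for key in list(self.first_region):
                  data, stored_at = self.first_region[key]
                  if now - stored_at > FIRST_RETENTION_S:
                      self.second_region[key] = (data, now)      # transfer
                      del self.first_region[key]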
  • Publication number: 20120303904
    Abstract: Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.
    Type: Application
    Filed: May 21, 2012
    Publication date: November 29, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Kenneth W. Todd
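    Illustrative sketch (not from the publication): one plausible reading of how the inclusive and exclusive lists decide whether a track demoted from the first cache is promoted to the second cache, in Python. The promotion rule shown is an assumption based only on the abstract.

      def on_demote_from_first_cache(track, inclusive_list, exclusive_list,
                                     second_cache):
          if track.id in inclusive_list:
              # Already held in both caches: no copy needed, the track now
              # lives only in the second cache.
              inclusive_list.discard(track.id)
              exclusive_list.add(track.id)
          elif track.id not in exclusive_list:
              # Not yet in the second cache: promote the demoted track there.
              second_cache[track.id] = track.data
              exclusive_list.add(track.id)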
  • Patent number: 8312241
    Abstract: Within a serial buffer, request packets are written to available memory blocks of a memory buffer, which are identified by a free buffer pointer list. When a request packet is written to a memory block, the memory block is removed from the free buffer pointer list, and added to a used buffer pointer list. Memory blocks in the used buffer pointer list are read, thereby transmitting the associated request packets from the serial buffer. When a request packet is read from a memory block, the memory block is removed from the used buffer pointer list and added to a request buffer pointer list. If a corresponding response packet is received within a timeout period, the memory block is transferred from the request buffer pointer list to the free buffer pointer list. Otherwise, the memory block is transferred from the request buffer pointer list to the used buffer pointer list.
    Type: Grant
    Filed: March 6, 2008
    Date of Patent: November 13, 2012
    Assignee: Integrated Device Technology, Inc.
    Inventors: Chi-Lie Wang, Jason Z. Mo
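    Illustrative sketch (not from the patent): the free, used, and request buffer pointer lists of the serial buffer, in Python. Deques are used for readability; how timeouts are detected is left to the caller and is an assumption.

      from collections import deque

      class SerialBufferPointers:
          def __init__(self, num_blocks):
              self.free = deque(range(num_blocks))  # blocks available for writing
              self.used = deque()                   # blocks holding queued packets
              self.request = deque()                # blocks awaiting a response

          def write_request_packet(self):
              block = self.free.popleft()           # remove from free list
              self.used.append(block)               # add to used list
              return block

          def transmit_next(self):
              block = self.used.popleft()           # packet read and transmitted
              self.request.append(block)            # now waiting for a response
              return block

          def response_received(self, block):
              self.request.remove(block)
              self.free.append(block)               # block can be reused

          def response_timed_out(self, block):
              self.request.remove(block)
              self.used.append(block)               # request will be sent again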
  • Patent number: 8230197
    Abstract: Imaging devices incorporating semi-volatile memory are described herein. According to various embodiments, a communication interface may receive image data that is stored in a NAND flash memory device divided into three regions. Other embodiments may be described and claimed.
    Type: Grant
    Filed: February 15, 2012
    Date of Patent: July 24, 2012
    Assignee: Marvell International Ltd.
    Inventors: Mark D. Montierth, Randall D. Briggs, Douglas G. Keithley, David A. Bartle
  • Patent number: 8122220
    Abstract: Imaging devices incorporating semi-volatile memory are described herein. According to various embodiments, a communication interface may receive image data that is stored in a semi-volatile NAND flash memory device divided into three regions. Other embodiments may be described and claimed.
    Type: Grant
    Filed: December 14, 2007
    Date of Patent: February 21, 2012
    Assignee: Marvell International Ltd.
    Inventors: Mark D. Montierth, Randall D. Briggs, Douglas G. Keithley, David A. Bartle
  • Patent number: 8117396
    Abstract: Methods and apparatuses provide a multi-level buffer cache having queues corresponding to different priority levels of queuing within the buffer cache. One or more data blocks are buffered in the buffer cache. In one embodiment, an initial level of queue is identified for a data block to be buffered in the buffer cache. The initial level of queue can be modified higher or lower depending on a value of a cache property associated with the data block. In one embodiment, the data block is monitored for data access in a queue, and the data block is aged and moved to higher level(s) of queuing based on rules for the data block. The rules can apply to the queue in which the data block is buffered, to a data type of the data block, or to a logical partition to which the data block belongs.
    Type: Grant
    Filed: October 10, 2006
    Date of Patent: February 14, 2012
    Assignee: Network Appliance, Inc.
    Inventors: Robert L. Fair, Matti A. Vanninen
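    Illustrative sketch (not from the patent): a multi-level buffer cache with one queue per priority level, in Python. The number of levels and the promotion rule (two hits at the current level) are assumptions standing in for the per-queue, per-data-type rules the abstract describes.

      from collections import OrderedDict

      NUM_LEVELS = 3

      class MultiLevelCache:
          def __init__(self):
              self.queues = [OrderedDict() for _ in range(NUM_LEVELS)]

          def insert(self, block_id, data, initial_level=0):
              self.queues[initial_level][block_id] = {"data": data, "hits": 0}

          def access(self, block_id):
              for level, queue in enumerate(self.queues):
                  if block_id in queue:
                      entry = queue.pop(block_id)
                      entry["hits"] += 1
                      # Assumed aging rule: move the block to a higher-priority
                      # queue after repeated accesses at its current level.
                      new_level = min(level + (entry["hits"] >= 2), NUM_LEVELS - 1)
                      if new_level != level:
                          entry["hits"] = 0
                      self.queues[new_level][block_id] = entry
                      return entry["data"]
              return None                            # cache miss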
  • Patent number: 7996621
    Abstract: According to embodiments of the invention, a step value and a step-interval cache coherency protocol may be used to update and invalidate data stored within cache memory. A step value may be an integer value and may be stored within a cache directory entry associated with data in the memory cache. Upon reception of a cache read request, along with the normal address comparison to determine if the data is located within the cache, a current step value may be compared with the stored step value to determine if the data is current. If the step values match, the data may be current and a cache hit may occur. However, if the step values do not match, the requested data may be provided from another source. Furthermore, an application may update the current step value to invalidate old data stored within the cache and associated with a different step value.
    Type: Grant
    Filed: July 12, 2007
    Date of Patent: August 9, 2011
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey Douglas Brown, Russell Dean Hoover, Eric Oliver Mejdrich, Kenneth Michael Valk
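    Illustrative sketch (not from the patent): the step-value validity check on a cache read, in Python. The directory layout and the refill path are assumptions.

      class StepCache:
          def __init__(self):
              self.current_step = 0
              self.entries = {}          # address -> (data, stored step value)

          def invalidate_all(self):
              self.current_step += 1     # old entries become stale implicitly

          def read(self, address, fetch_from_memory):
              entry = self.entries.get(address)
              if entry is not None and entry[1] == self.current_step:
                  return entry[0]        # step values match: cache hit
              data = fetch_from_memory(address)     # stale or missing: refill
              self.entries[address] = (data, self.current_step)
              return data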
  • Patent number: 7970997
    Abstract: A program section layout method capable of improving space efficiency of a cache memory. A grouping unit groups program sections into section groups so that the total size of the program sections composing each section group does not exceed cache memory size. A layout optimization unit optimizes the layout of section group storage regions by combining each section group and a program section that does not belong to any section groups or by combining section groups while keeping the ordering relations of the program sections composing each section group.
    Type: Grant
    Filed: December 23, 2004
    Date of Patent: June 28, 2011
    Assignee: Fujitsu Semiconductor Limited
    Inventor: Manabu Watanabe
  • Publication number: 20110066785
    Abstract: A memory management system and method include and use a cache buffer (such as a table look-aside buffer, TLB), a memory mapping table, a scratchpad cache, and a memory controller. The cache buffer is configured to store a plurality of data structures. The memory mapping table is configured to store a plurality of addresses of the data structures. The scratchpad cache is configured to store the base address of the data structures. The memory controller is configured to control reading and writing in the cache buffer and the scratchpad cache. The components are operable together under control of the memory controller to facilitate effective searching of the data structures in the memory management system.
    Type: Application
    Filed: January 27, 2010
    Publication date: March 17, 2011
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: Jian Li, Jiin Lai, Shan-Na Pang, Zhi-Qiang Hui, Di Dai
  • Patent number: 7840751
    Abstract: Apparatus and method for command queue management of back watered requests. A selected request is released from a command queue, and further release of requests from the queue is interrupted when a total number of subsequently completed requests reaches a predetermined threshold.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: November 23, 2010
    Assignee: Seagate Technology LLC
    Inventors: Clark Edward Lubbers, Robert Michael Lester
  • Publication number: 20100191925
    Abstract: A method, system, and computer program product for managing modified metadata in a storage controller cache pursuant to a recovery action by a processor in communication with a memory device is provided. A count of modified metadata tracks for a storage rank is compared against a predetermined criterion. If the predetermined criterion is met, a storage volume having the storage rank is designated with a metadata invalidation flag to defer metadata invalidation of the modified metadata tracks until after the recovery action is performed.
    Type: Application
    Filed: January 28, 2009
    Publication date: July 29, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lawrence Carter Blount, Lokesh Mohan Gupta, Carol Santich Mellgren, Kenneth Wayne Todd
  • Publication number: 20100169588
    Abstract: A method and system write data to a memory device, including writing data to varying types of physical write blocks. The method includes receiving a request to write data for a logical block address within an LBA range to the memory device. Depending on whether the quantity of valid data in the memory device meets a predetermined criterion, the data is written to a specific chaotic block, a general chaotic block, or a mapped block. The mapped block is assigned for writing data for the LBA range, the specific chaotic block is assigned for writing data for contiguous LBA ranges including the LBA range, and the general chaotic block is assigned for writing data for any LBA range. Lower fragmentation and write amplification ratios may result from using this method and system.
    Type: Application
    Filed: December 30, 2008
    Publication date: July 1, 2010
    Inventor: Alan W. Sinclair
  • Patent number: 7689776
    Abstract: Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well.
    Type: Grant
    Filed: June 6, 2005
    Date of Patent: March 30, 2010
    Assignees: Kabushiki Kaisha Toshiba, International Business Machines Corporation
    Inventors: Takeki Osanai, Kimberly Fernsler
  • Patent number: 7664916
    Abstract: Methods and apparatuses are provided for use with smartcards or other like shared computing resources. A global smartcard cache is maintained on one or more computers to reduce the burden on the smartcard. The global smartcard cache data is associated with a freshness indicator that is compared to the current freshness indicator from the smartcard to verify that the cached item data is current.
    Type: Grant
    Filed: January 6, 2004
    Date of Patent: February 16, 2010
    Assignee: Microsoft Corporation
    Inventors: Daniel C. Griffin, Eric C. Perlin, Klaus U. Schutz
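    Illustrative sketch (not from the patent): a freshness-checked cache in front of a slow shared resource such as a smartcard, in Python. The freshness query and the card interface are hypothetical stand-ins.

      class FreshnessCache:
          def __init__(self, card):
              self.card = card               # slow shared resource (smartcard)
              self.items = {}                # item_id -> cached item data
              self.cached_freshness = None

          def read_item(self, item_id):
              current = self.card.read_freshness()   # cheap freshness query
              if current != self.cached_freshness:
                  self.items.clear()         # card state changed: drop stale data
                  self.cached_freshness = current
              if item_id not in self.items:
                  self.items[item_id] = self.card.read_item(item_id)  # slow path
              return self.items[item_id]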
  • Publication number: 20090228667
    Abstract: A method to perform a least recently used (LRU) algorithm for a co-processor is described. In order to directly use instructions of a core processor and to directly access a main storage by virtual addresses of said core processor, the co-processor comprises a TLB for virtual-to-absolute address translations plus a dedicated memory storage also including said TLB, wherein said TLB consists of at least two zones which can be assigned in a flexible manner, more than one at a time. Said method to perform an LRU algorithm is characterized in that one or more zones are replaced dependent on an actual compression service call (CMPSC) instruction.
    Type: Application
    Filed: March 6, 2009
    Publication date: September 10, 2009
    Applicant: International Business Machines Corporation
    Inventors: Thomas Koehler, Siegmund Schlechter
  • Publication number: 20090193205
    Abstract: A method of regeneration of a recording state of digital data stored in a node of a data network, the method including the steps of classifying files stored in the node, periodically writing a digital file from the node to a temporary memory, the temporary memory being a component of said node, and writing the digital file from the temporary memory to the same node.
    Type: Application
    Filed: July 2, 2008
    Publication date: July 30, 2009
    Applicant: ATM S.A.
    Inventor: Jerzy Piotr Walczak
  • Publication number: 20090055595
    Abstract: Provided are a method, system, and article of manufacture for adjusting parameters used to prefetch data from storage into cache. Data units are added from a storage to a cache, wherein requested data from the storage is returned from the cache. A degree of prefetch is processed indicating a number of data units to prefetch into the cache. A trigger distance is processed indicating a prefetched trigger data unit in the cache. The number of data units indicated by the degree of prefetch is prefetched in response to processing the trigger data unit. The degree of prefetch and the trigger distance are adjusted based on a rate at which data units are accessed from the cache.
    Type: Application
    Filed: August 22, 2007
    Publication date: February 26, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Binny Sher Gill, Luis Angel Daniel Bathen, Steven Robert Lowe, Thomas Charles Jarvis
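    Illustrative sketch (not from the publication): adapting the degree of prefetch and the trigger distance to the observed cache access rate, in Python. The step sizes, bounds, and the rate comparison are assumptions, not values from the abstract.

      class PrefetchTuner:
          def __init__(self):
              self.degree = 4             # data units prefetched per trigger
              self.trigger_distance = 2   # position of the trigger data unit

          def adjust(self, access_rate, prefetch_rate):
              """access_rate: units consumed per second from the cache;
              prefetch_rate: units the storage can stage per second."""
              if access_rate > prefetch_rate:
                  # Cache drains faster than it refills: prefetch more, earlier.
                  self.degree = min(self.degree + 1, 64)
                  self.trigger_distance = min(self.trigger_distance + 1, self.degree)
              elif access_rate < prefetch_rate / 2:
                  # Prefetch is well ahead of demand: back off to save bandwidth.
                  self.degree = max(self.degree - 1, 1)
                  self.trigger_distance = max(self.trigger_distance - 1, 0)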
  • Patent number: 7409461
    Abstract: Methods and systems consistent with the present invention provide broadband subscribers with dynamic, automatic routing information for accessing services offered by a service provider network (35) and/or content provider networks (40, 45). Client software on the subscriber's computer (10) retrieves a file such as a HyperText Markup Language (HTML) document from a predetermined server containing connection-oriented routing information for gaining access to various network services. The client software parses the HTML document and extracts the routing information. The client software then uses this routing information to populate and manipulate the routing table of the subscriber computer (10).
    Type: Grant
    Filed: November 6, 2002
    Date of Patent: August 5, 2008
    Assignee: Efficient Networks, Inc.
    Inventors: Akkamapet P. Sundarraj, James R. Pickering, Douglas Moe, Melvin Paul Perinchery