Of Parts Of Caches, E.g., Directory Or Tag Array, Etc. (EPO) Patents (Class 711/E12.042)
  • Patent number: 11775174
    Abstract: Systems, methods, and computer-readable media for handling I/O operations in a storage system are described herein. An example method includes assigning each of a plurality of storage devices to one of a plurality of tiers; imposing a hierarchy on the tiers; creating a logical volume by reserving a portion of a storage capacity for the logical volume without allocating the portion of the storage capacity to the logical volume; and assigning the logical volume to one of a plurality of volume priority categories. The method includes receiving a write I/O operation directed to a logical unit of the logical volume; and allocating physical storage space for the logical unit of the logical volume in response to the write I/O operation. The physical storage space is located in one or more storage devices. The method includes writing data associated with the write I/O operation to the one or more storage devices.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: October 3, 2023
    Assignee: Amzetta Technologies, LLC
    Inventors: Paresh Chatterjee, Vijayarankan Muthirisavengopal, Sharon Samuel Enoch, Senthilkumar Ramasamy
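    Illustrative sketch: the allocate-on-write (thin-provisioning) flow this abstract describes can be modeled in a few lines of Python; the Tier/ThinVolume/StoragePool names, the priority rule, and the unit-based accounting are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch of tiered, thin-provisioned allocate-on-write.
# All names (Tier, ThinVolume, StoragePool) are hypothetical.

class Tier:
    def __init__(self, name, rank, capacity_units):
        self.name = name
        self.rank = rank              # position in the tier hierarchy (0 = fastest)
        self.free_units = capacity_units

class ThinVolume:
    def __init__(self, name, reserved_units, priority):
        self.name = name
        self.reserved_units = reserved_units  # reserved, but not yet allocated
        self.priority = priority              # volume priority category
        self.allocations = {}                 # logical unit -> tier name

class StoragePool:
    def __init__(self, tiers):
        # Impose a hierarchy on the tiers: lower rank = preferred.
        self.tiers = sorted(tiers, key=lambda t: t.rank)

    def handle_write(self, volume, logical_unit, data):
        """Allocate physical space for the logical unit only when it is
        first written, then write the data."""
        if logical_unit not in volume.allocations:
            tier = self._pick_tier(volume)
            tier.free_units -= 1
            volume.allocations[logical_unit] = tier.name
        # The actual device write is elided in this sketch.
        return volume.allocations[logical_unit]

    def _pick_tier(self, volume):
        # Higher-priority volumes get first pick of the faster tiers.
        candidates = self.tiers if volume.priority == "high" else reversed(self.tiers)
        for tier in candidates:
            if tier.free_units > 0:
                return tier
        raise RuntimeError("pool exhausted")

pool = StoragePool([Tier("ssd", 0, 4), Tier("hdd", 1, 100)])
vol = ThinVolume("vol0", reserved_units=10, priority="high")
print(pool.handle_write(vol, logical_unit=0, data=b"payload"))  # -> "ssd"
```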
  • Patent number: 9026734
    Abstract: According to one embodiment, a memory system includes: a memory area; a transfer processing unit that stores write data received from a host apparatus in the memory area; a delete notification buffer that accumulates delete notifications; and a delete notification processing unit. The delete notification processing unit collectively reads out a plurality of delete notifications from the delete notification buffer and classifies the read-out notifications by unit area. It then sequentially executes, for each unit area, processing that collectively invalidates the write data related to the one or more delete notifications classified into that unit area; when processing a given unit area, it invalidates all write data stored in that unit area after first copying the write data that is not to be invalidated to another unit area.
    Type: Grant
    Filed: December 6, 2011
    Date of Patent: May 5, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Daisuke Hashimoto
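    Illustrative sketch: a hypothetical Python model of the batched delete-notification handling described above, in which notifications are buffered, classified per unit area, and each unit area is invalidated wholesale after its surviving data is copied elsewhere; all names and the block/page layout are assumptions.

```python
from collections import defaultdict

# Illustrative model: each "unit area" (block) maps page -> data; a delete
# notification names a (block, page) whose data can be invalidated.

class DeleteNotificationProcessor:
    def __init__(self, blocks):
        self.blocks = blocks          # block id -> {page: data}
        self.buffer = []              # accumulated delete notifications

    def notify_delete(self, block_id, page):
        self.buffer.append((block_id, page))

    def process(self, spare_block_id):
        # Read out the accumulated notifications and classify them per unit area.
        per_block = defaultdict(set)
        for block_id, page in self.buffer:
            per_block[block_id].add(page)
        self.buffer.clear()

        for block_id, doomed_pages in per_block.items():
            block = self.blocks[block_id]
            # Copy the data that is NOT being invalidated to another unit area...
            survivors = {p: d for p, d in block.items() if p not in doomed_pages}
            self.blocks[spare_block_id].update(survivors)
            # ...then invalidate everything in the original unit area at once.
            block.clear()

blocks = {0: {0: b"a", 1: b"b", 2: b"c"}, 1: {}}
proc = DeleteNotificationProcessor(blocks)
proc.notify_delete(0, 0)
proc.notify_delete(0, 2)
proc.process(spare_block_id=1)
print(blocks)   # block 0 emptied; page 1's data survives in block 1
```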
  • Patent number: 8838897
    Abstract: Technologies are generally described for exploiting program phase behavior to duplicate most recently and/or frequently accessed tag entries in a Tag Replication Buffer (TRB) to protect the information integrity of tag arrays in a processor cache. The reliability/effectiveness of microprocessor cache performance may be further improved by capturing/duplicating tags of dirty cache lines, exploiting the fact that detected error-corrupted clean cache lines can be recovered by L2 cache. A deterministic TRB replacement triggered early write-back scheme may provide full duplication and recovery of single-bit errors for tags of dirty cache lines.
    Type: Grant
    Filed: June 29, 2011
    Date of Patent: September 16, 2014
    Assignee: New Jersey Institute of Technology
    Inventors: Jie Hu, Shuai Wang
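    Illustrative sketch: a minimal Python model of a Tag Replication Buffer that duplicates the most recently touched dirty-line tags and triggers an early write-back when a duplicate is replaced; the capacity, replacement policy, and callback interface are assumptions, not the patented design.

```python
from collections import OrderedDict

class TagReplicationBuffer:
    """Keeps duplicates of the most recently touched dirty-line tags."""
    def __init__(self, capacity, write_back):
        self.capacity = capacity
        self.entries = OrderedDict()   # cache line id -> duplicated tag
        self.write_back = write_back   # called when a dirty line loses its duplicate

    def duplicate(self, line_id, tag):
        # (Re)insert as most recently used.
        self.entries.pop(line_id, None)
        self.entries[line_id] = tag
        if len(self.entries) > self.capacity:
            # Deterministic replacement: evicting a duplicate triggers an
            # early write-back so the line no longer needs tag protection.
            victim, _ = self.entries.popitem(last=False)
            self.write_back(victim)

    def recover(self, line_id, corrupted_tag):
        # On a detected tag error, restore from the duplicate if present.
        return self.entries.get(line_id, corrupted_tag)

written_back = []
trb = TagReplicationBuffer(capacity=2, write_back=written_back.append)
trb.duplicate(("set3", 0), tag=0x1A)
trb.duplicate(("set7", 1), tag=0x2B)
trb.duplicate(("set9", 2), tag=0x3C)       # evicts ("set3", 0) -> early write-back
print(written_back)                         # [('set3', 0)]
print(hex(trb.recover(("set7", 1), 0xFF)))  # 0x2b recovered from the duplicate
```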
  • Patent number: 8762641
    Abstract: A method is described for use when a cache is accessed. Before all valid array entries are validated, a valid array entry is read when a data array entry is accessed. If the valid array entry is a first array value, access to the cache is treated as being invalid and the data array entry is reloaded. If the valid array entry is a second array value, a tag array entry is compared with an address to determine if the data array entry is valid or invalid. A valid control register contains a first control value before all valid array entries are validated and a second control value after all valid array entries are validated. After the second control value is established, reads of the valid array are disabled and the tag array entry is compared with the address to determine if a data array entry is valid or invalid.
    Type: Grant
    Filed: March 12, 2009
    Date of Patent: June 24, 2014
    Assignee: Qualcomm Incorporated
    Inventor: Arthur Joseph Hoane, Jr.
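    Illustrative sketch: a hedged Python model of the lazy valid-array scheme described above, where a valid control register selects between consulting the per-entry valid array and relying solely on the tag compare once every entry has been validated; register encodings and the reload interface are assumptions.

```python
class LazilyValidatedCache:
    INVALID, VALID = 0, 1            # per-entry valid array values
    VALIDATING, VALIDATED = 0, 1     # valid control register values

    def __init__(self, num_entries):
        self.valid = [self.INVALID] * num_entries   # valid array
        self.tags = [None] * num_entries            # tag array
        self.data = [None] * num_entries            # data array
        self.control = self.VALIDATING              # valid control register

    def access(self, index, address, reload):
        if self.control == self.VALIDATING:
            # Valid array is still being populated: consult it on every access.
            if self.valid[index] == self.INVALID:
                # Treat the access as invalid and reload the data entry.
                self.tags[index], self.data[index] = address, reload(address)
                self.valid[index] = self.VALID
                if all(v == self.VALID for v in self.valid):
                    self.control = self.VALIDATED   # stop reading the valid array
                return self.data[index]
        # Either the entry was already valid, or validation is complete:
        # an ordinary tag compare decides hit vs. miss.
        if self.tags[index] != address:
            self.tags[index], self.data[index] = address, reload(address)
        return self.data[index]

cache = LazilyValidatedCache(num_entries=2)
fetch = lambda addr: f"line@{addr:#x}"
cache.access(0, 0x100, fetch)
cache.access(1, 0x140, fetch)
print(cache.control)                 # 1: valid-array reads now disabled
print(cache.access(0, 0x100, fetch)) # hit decided by tag compare alone
```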
  • Patent number: 8700858
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Grant
    Filed: May 16, 2012
    Date of Patent: April 15, 2014
    Assignee: Intel Corporation
    Inventor: Sanjeev N. Trika
  • Patent number: 8694728
    Abstract: Miss rate curves are constructed in a resource-efficient manner so that they can be constructed and memory management decisions can be made while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced.
    Type: Grant
    Filed: November 9, 2010
    Date of Patent: April 8, 2014
    Assignee: VMware, Inc.
    Inventors: Carl A. Waldspurger, Rajesh Venkatasubramanian, Alexander Thomas Garthwaite, Yury Baskakov, Puneet Zaroo
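    Illustrative sketch: the resource-efficient idea above can be approximated with a classic stack-distance histogram maintained only for a sampled subset of pages; this Python sketch uses that well-known technique, and the sampling choice and miss-ratio formula are simplifying assumptions, not VMware's implementation.

```python
from collections import Counter

class SampledMissRateCurve:
    def __init__(self, sampled_pages):
        self.sampled = set(sampled_pages)   # only these pages are traced
        self.lru_stack = []                 # most recently used at the end
        self.distances = Counter()          # stack distance -> count
        self.cold_misses = 0

    def access(self, page):
        if page not in self.sampled:
            return                          # untraced pages cost nothing
        if page in self.lru_stack:
            depth = len(self.lru_stack) - self.lru_stack.index(page)
            self.distances[depth] += 1
            self.lru_stack.remove(page)
        else:
            self.cold_misses += 1
        self.lru_stack.append(page)

    def miss_ratio(self, cache_pages):
        """Estimated miss ratio if the cache held `cache_pages` sampled pages."""
        total = sum(self.distances.values()) + self.cold_misses
        hits = sum(c for d, c in self.distances.items() if d <= cache_pages)
        return 1.0 - hits / total if total else 0.0

mrc = SampledMissRateCurve(sampled_pages=[1, 2, 3])
for p in [1, 2, 3, 1, 2, 3, 1]:
    mrc.access(p)
print([round(mrc.miss_ratio(size), 2) for size in (1, 2, 3)])  # [1.0, 1.0, 0.43]
```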
  • Patent number: 8677050
    Abstract: According to one aspect of the present disclosure, a method and technique for using processor registers for extending a cache structure is disclosed. The method includes identifying a register of a processor, identifying a cache to extend, allocating the register as an extension of the cache, and setting an address of the register as corresponding to an address space in the cache.
    Type: Grant
    Filed: November 12, 2010
    Date of Patent: March 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Wen-Tzer T. Chen, Diane G. Flemming, William A. Maron, Mysore S. Srinivas, David B. Whitworth
  • Patent number: 8484411
    Abstract: A method and system for accessing a dynamic random access memory (DRAM) is provided. A memory controller includes a content addressable memory (CAM) based decision control module for determining a next best access request for the DRAM. The CAM based decision control module includes a CAM access storage module for storing access requests, a next access table module for storing the next best access request, and a decision logic module for determining the next best access request based on results from the CAM access storage module and the next access table module. Further, the memory controller includes a DRAM access control interface for implementing signaling required to access the DRAM. The method includes storing access requests in a CAM access storage module. The method includes determining which of the stored access requests is a next best access request. Further, the method includes processing the next best access request.
    Type: Grant
    Filed: December 31, 2008
    Date of Patent: July 9, 2013
    Assignee: Synopsys Inc.
    Inventors: Raghavan Menon, Raj Mahajan
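    Illustrative sketch: a Python model of the decision flow described above, with pending accesses held in an associative store and simple decision logic choosing the next best request (here: prefer a row-buffer hit, then the oldest request); the scoring policy is an assumption for illustration only.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass
class AccessRequest:
    bank: int
    row: int
    is_write: bool
    seq: int = field(default_factory=count().__next__)  # arrival order

class CamScheduler:
    """Associative store of pending requests plus next-best decision logic."""
    def __init__(self):
        self.pending = []        # plays the role of the CAM access storage
        self.open_rows = {}      # bank -> currently open row (next-access state)

    def enqueue(self, request):
        self.pending.append(request)

    def next_best(self):
        if not self.pending:
            return None
        # Prefer row-buffer hits; break ties by age (lower seq = older).
        def score(r):
            row_hit = self.open_rows.get(r.bank) == r.row
            return (0 if row_hit else 1, r.seq)
        best = min(self.pending, key=score)
        self.pending.remove(best)
        self.open_rows[best.bank] = best.row   # model the row being left open
        return best

sched = CamScheduler()
sched.enqueue(AccessRequest(bank=0, row=5, is_write=False))
sched.enqueue(AccessRequest(bank=1, row=9, is_write=True))
sched.enqueue(AccessRequest(bank=0, row=5, is_write=True))
print(sched.next_best())   # oldest request first (no rows open yet)
print(sched.next_best())   # then the bank-0 row-5 write: a row-buffer hit
```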
  • Publication number: 20130097387
    Abstract: Aspects of various embodiments are directed to memory circuits, such as cache memory circuits. In accordance with one or more embodiments, cache-access to data blocks in memory is controlled as follows. In response to a cache miss for a data block having an associated address on a memory access path, data is fetched for storage in the cache (and serving the request), while one or more additional lookups are executed to identify candidate locations to store data. An existing set of data is moved from a target location in the cache to one of the candidate locations, and the address of the one of the candidate locations is associated with the existing set of data. Data in this candidate location may, for example, thus be evicted. The fetched data is stored in the target location and the address of the target location is associated with the fetched data.
    Type: Application
    Filed: October 15, 2012
    Publication date: April 18, 2013
    Applicant: The Board of Trustees of the Leland Stanford Junior University
    Inventor: The Board of Trustees of the Leland Stanford Juni
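    Illustrative sketch: a hedged Python model of the miss-handling flow above, in which additional lookups yield candidate locations, the occupant of the target location is relocated to one of them (and its address re-associated), and the fetched block takes the target slot; the direct mapping and candidate selection are illustrative assumptions.

```python
class RelocatingCache:
    """Primary target slot plus extra candidate slots (illustrative)."""
    def __init__(self, num_slots, extra_lookups=2):
        self.num_slots = num_slots
        self.extra_lookups = extra_lookups
        self.slots = [None] * num_slots      # slot -> (address, data) or None
        self.where = {}                      # address -> slot currently holding it

    def _target(self, address):
        # Primary location for the address (a simple direct mapping here).
        return address % self.num_slots

    def _candidates(self, address):
        # Stand-in for the additional lookups that identify alternative
        # locations; a real design might use secondary hash functions.
        t = self._target(address)
        return [(t + k) % self.num_slots for k in range(1, self.extra_lookups + 1)]

    def access(self, address, fetch):
        if address in self.where:                        # hit
            return self.slots[self.where[address]][1]
        target = self._target(address)                   # miss: data is fetched
        occupant = self.slots[target]
        if occupant is not None:
            new_home = self._candidates(address)[0]
            displaced = self.slots[new_home]
            if displaced is not None:
                del self.where[displaced[0]]             # candidate's old data is evicted
            self.slots[new_home] = occupant              # move the existing set of data...
            self.where[occupant[0]] = new_home           # ...and re-associate its address
        data = fetch(address)
        self.slots[target] = (address, data)             # fetched data takes the target slot
        self.where[address] = target
        return data

cache = RelocatingCache(num_slots=4)
cache.access(3, fetch=lambda a: f"block{a}")
print(cache.access(7, fetch=lambda a: f"block{a}"))      # 7 also maps to slot 3: relocation
```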
  • Publication number: 20130031309
    Abstract: A cache memory associated with a main memory and a processor capable of executing a dataflow processing task, includes a plurality of disjoint storage segments, each associated with a distinct data category. A first segment is dedicated to input data originating from a dataflow consumed by the processing task. A second segment is dedicated to output data originating from a dataflow produced by the processing task. A third segment is dedicated to global constants, corresponding to data available in a single memory location to multiple instances of the processing task.
    Type: Application
    Filed: October 5, 2012
    Publication date: January 31, 2013
    Applicant: Commissariat a l'Energie Atomique et aux Energies Alternatives
    Inventor: Commissariat a l'Energie Atomique et aux Energie
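    Illustrative sketch: a minimal Python model of the segment-per-category cache described above, with disjoint segments for input data, output data, and global constants so pressure in one category never evicts another category's lines; segment sizes and FIFO-style replacement are assumptions.

```python
from collections import OrderedDict

class SegmentedDataflowCache:
    # Disjoint segments, each dedicated to one data category.
    CATEGORIES = ("input", "output", "constant")

    def __init__(self, lines_per_segment):
        self.segments = {c: OrderedDict() for c in self.CATEGORIES}
        self.capacity = lines_per_segment

    def put(self, category, address, data):
        seg = self.segments[category]
        seg[address] = data
        seg.move_to_end(address)
        if len(seg) > self.capacity:
            # Pressure from one category can never evict another category's
            # lines, because the segments are disjoint.
            seg.popitem(last=False)

    def get(self, category, address):
        return self.segments[category].get(address)

cache = SegmentedDataflowCache(lines_per_segment=2)
cache.put("constant", 0x10, "filter coefficients")   # shared by all task instances
cache.put("input", 0x100, "sample 0")
cache.put("input", 0x104, "sample 1")
cache.put("input", 0x108, "sample 2")                # evicts sample 0, not the constants
print(cache.get("constant", 0x10))                   # 'filter coefficients'
print(cache.get("input", 0x100))                     # None: evicted within its own segment
```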
  • Patent number: 8341353
    Abstract: A system and method to access data from a portion of a level two memory or from a level one memory is disclosed. In a particular embodiment, the system includes a level one cache and a level two memory. A first portion of the level two memory is coupled to an input port and is addressable in parallel with the level one cache.
    Type: Grant
    Filed: January 14, 2010
    Date of Patent: December 25, 2012
    Assignee: QUALCOMM Incorporated
    Inventors: Suresh K. Venkumahanti, Christopher Edward Koob, Lucian Codrescu
  • Patent number: 8250305
    Abstract: Systems, methods, and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array, the method including: clearing cache directories associated with the processor to an initial state; obtaining a selected directory state from a control register preloaded by a service processor; in response to the control register containing the desired cache state, sending load commands with an address and data; loading cache lines and cache line directory entries into the cache; and storing the specified data in the corresponding cache line.
    Type: Grant
    Filed: March 19, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Gary E. Strait, Deanna P. Dunn, Michael F. Fee, Pak-kin Mak, Robert J. Sonnelitter, III
  • Patent number: 8219755
    Abstract: In one embodiment, a cache comprises a tag memory and a comparator. The tag memory is configured to store tags of cache blocks stored in the cache, and is configured to output at least one tag responsive to an index corresponding to an input address. The comparator is coupled to receive the tag and a tag portion of the input address, and is configured to compare the tag to the tag portion to generate a hit/miss indication. The comparator comprises dynamic circuitry, and is coupled to receive a control signal which, when asserted, is defined to force a first result on the hit/miss indication independent of whether or not the tag portion matches the tag. The comparator also comprises circuitry coupled to receive the control signal and configured to inhibit a state change on an output of the dynamic circuitry during an evaluate phase of the dynamic circuitry to produce the first result responsive to an assertion of the control signal.
    Type: Grant
    Filed: August 1, 2011
    Date of Patent: July 10, 2012
    Assignee: Apple Inc.
    Inventor: Brian J. Campbell
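    Illustrative sketch: the logical behavior of the forced hit/miss indication (ignoring the dynamic-circuit implementation) reduces to a few lines of Python; whether the forced first result is a hit or a miss is left as a parameter, since the abstract does not say.

```python
def hit_miss(stored_tag, address_tag, force, forced_result=False):
    """Model of a tag comparator whose output can be forced by a control signal.

    When `force` is asserted, the comparator's output is held so the hit/miss
    indication takes `forced_result` regardless of whether the tags match.
    """
    if force:
        return forced_result
    return stored_tag == address_tag

print(hit_miss(0x3A, 0x3A, force=False))   # True: ordinary tag match
print(hit_miss(0x3A, 0x3A, force=True))    # False: result forced despite the match
```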
  • Patent number: 8195891
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Grant
    Filed: March 30, 2009
    Date of Patent: June 5, 2012
    Assignee: Intel Corporation
    Inventor: Sanjeev N. Trika
  • Patent number: 8015356
    Abstract: In one embodiment, a cache comprises a tag memory and a comparator. The tag memory is configured to store tags of cache blocks stored in the cache, and is configured to output at least one tag responsive to an index corresponding to an input address. The comparator is coupled to receive the tag and a tag portion of the input address, and is configured to compare the tag to the tag portion to generate a hit/miss indication. The comparator comprises dynamic circuitry, and is coupled to receive a control signal which, when asserted, is defined to force a first result on the hit/miss indication independent of whether or not the tag portion matches the tag. The comparator also comprises circuitry coupled to receive the control signal and configured to inhibit a state change on an output of the dynamic circuitry during an evaluate phase of the dynamic circuitry to produce the first result responsive to an assertion of the control signal.
    Type: Grant
    Filed: July 1, 2005
    Date of Patent: September 6, 2011
    Assignee: Apple Inc.
    Inventor: Brian J. Campbell
  • Patent number: 7930479
    Abstract: A system and method for caching and retrieving from cache transaction content elements. Metadata is stored in cache to describe content elements of a transaction, a data retrieval device determines, based on the metadata, whether cache contains a complete copy of a transaction associated with a requested content element, and the data retrieval device returns the requested content element from cache if the complete copy of the associated transaction is in cache.
    Type: Grant
    Filed: April 29, 2004
    Date of Patent: April 19, 2011
    Assignee: SAP AG
    Inventor: Noam Barda
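    Illustrative sketch: a hypothetical Python model of the retrieval rule above, where per-transaction metadata lists the transaction's content elements and a cached element is returned only if the whole transaction is present in cache; class and method names are illustrative.

```python
class TransactionCache:
    def __init__(self):
        self.metadata = {}   # transaction id -> set of element ids it comprises
        self.elements = {}   # element id -> (transaction id, content)

    def store(self, txn_id, element_ids, contents):
        # Metadata describing the transaction's content elements is cached too.
        self.metadata[txn_id] = set(element_ids)
        for eid, content in zip(element_ids, contents):
            self.elements[eid] = (txn_id, content)

    def get(self, element_id):
        """Return the element from cache only if its whole transaction is cached."""
        if element_id not in self.elements:
            return None                                  # go to the backing store
        txn_id, content = self.elements[element_id]
        complete = self.metadata[txn_id].issubset(self.elements)
        return content if complete else None

cache = TransactionCache()
cache.store("txn-1", ["hdr", "body"], ["<header>", "<body>"])
print(cache.get("body"))        # '<body>': the whole transaction is cached
del cache.elements["hdr"]       # simulate a partially evicted transaction
print(cache.get("body"))        # None: fall back to the data source
```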
  • Patent number: 7711902
    Abstract: A memory system is provided comprising a memory controller, a level 1 (L1) cache including L1 tag memory and L1 data memory, and a level 2 (L2) cache coupled to the L1 cache, the L2 cache including an L2 tag memory having a plurality of L2 tag entries and an L2 data memory having a plurality of L2 data entries. There are more L2 tag entries than L2 data entries. In response to receiving a tag and associated data, if no L2 tag entry with a corresponding L2 data entry is available, and if a first tag in a first L2 tag entry (with associated first data in a first L2 data entry) has a more recent or duplicate value of the first data in the L1 data memory, the memory controller moves the first tag to a second L2 tag entry that does not have a corresponding L2 data entry, vacates the first L2 tag entry and the first L2 data entry, and stores the received tag in the first L2 tag entry and the received data in the first L2 data entry.
    Type: Grant
    Filed: April 7, 2006
    Date of Patent: May 4, 2010
    Assignee: Broadcom Corporation
    Inventor: Fong Pong
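    Illustrative sketch: a simplified Python model of an L2 with more tag entries than data entries, where a block whose data is duplicated (or newer) in L1 gives up its L2 data entry but keeps a tag-only entry; the structures and victim selection are assumptions, not Broadcom's design.

```python
class AsymmetricL2:
    def __init__(self, data_slots, extra_tag_slots):
        self.data_capacity = data_slots
        self.tag_capacity = data_slots + extra_tag_slots   # more tags than data
        self.data = {}        # tag -> cache block (tag entries that hold data)
        self.tag_only = set() # tags tracked without an L2 data entry

    def insert(self, tag, block, l1_has_copy):
        """Insert a new (tag, data) pair, demoting a tag to tag-only if needed."""
        if len(self.data) >= self.data_capacity:
            # Find a resident block whose data also lives (possibly newer) in L1:
            # its L2 data entry can be vacated without losing information.
            victim = next((t for t in self.data if l1_has_copy(t)), None)
            if victim is None:
                victim = next(iter(self.data))      # fallback: plain eviction
            else:
                self.tag_only.add(victim)           # keep a data-less tag entry
            del self.data[victim]
        self.data[tag] = block
        self.tag_only.discard(tag)

l1_contents = {0xA0}
l2 = AsymmetricL2(data_slots=1, extra_tag_slots=1)
l2.insert(0xA0, b"old block", l1_has_copy=l1_contents.__contains__)
l2.insert(0xB0, b"new block", l1_has_copy=l1_contents.__contains__)
print(sorted(l2.data), sorted(l2.tag_only))   # [176] [160]: 0xA0 kept as a tag-only entry
```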
  • Patent number: 7698509
    Abstract: A multiprocessing node has a plurality of point-to-point connected microprocessors. Each of the microprocessors is also point-to-point connected to a filter. In response to a local cache miss, a microprocessor issues a broadcast for the requested data to the filter. The filter, using memory that stores a copy of the tags of data stored in the local cache memories of each of the microprocessors, relays the broadcast to those microprocessors having copies of the requested data. If the snoop filter memory indicates that none of the microprocessors have a copy of the requested data, the snoop filter may either (i) cancel the broadcast and issue a message back to the requesting microprocessor, or (ii) relay the broadcast to a connected multiprocessing node.
    Type: Grant
    Filed: July 13, 2004
    Date of Patent: April 13, 2010
    Assignee: Oracle America, Inc.
    Inventors: Michael J. Koster, Christopher L. Johnson, Brian W. O'Krafka
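    Illustrative sketch: a Python model of the snoop-filter behavior above, relaying a broadcast only to processors whose tag copies show the requested line, forwarding to a neighboring node when one is configured, and otherwise cancelling; the data structures are assumptions.

```python
class SnoopFilter:
    def __init__(self, tag_copies, neighbor=None):
        # tag_copies: processor id -> set of cached addresses (copies of their tags)
        self.tag_copies = tag_copies
        self.neighbor = neighbor        # optional connected multiprocessing node

    def broadcast(self, requester, address):
        """Relay a data request only where it can be satisfied."""
        holders = [cpu for cpu, tags in self.tag_copies.items()
                   if cpu != requester and address in tags]
        if holders:
            return ("relay", holders)
        if self.neighbor is not None:
            return ("forward", self.neighbor)      # try the next node
        return ("cancel", requester)               # nobody has it: tell the requester

filt = SnoopFilter({"cpu0": {0x40, 0x80}, "cpu1": {0x80}, "cpu2": set()})
print(filt.broadcast("cpu2", 0x80))   # ('relay', ['cpu0', 'cpu1'])
print(filt.broadcast("cpu2", 0xC0))   # ('cancel', 'cpu2')
```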
  • Publication number: 20090198901
    Abstract: A computer system includes a main memory for storing a large amount of data, a cache memory that can be accessed at a higher speed than the main memory, a memory replacement controller for controlling the replacement of data between the main memory and the cache memory, and a memory controller capable of allocating one or more divided portions of the cache memory to each process unit. The memory replacement controller stores priority information for each process unit, and replaces lines of the cache memory based on a replacement algorithm taking the priority information into consideration, wherein the divided portions of the cache memory are allocated so that the storage area is partially shared between process units, after which the allocated amounts of cache memory are changed automatically.
    Type: Application
    Filed: October 8, 2008
    Publication date: August 6, 2009
    Inventor: Yoshihiro Koga
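    Illustrative sketch: a minimal Python model of replacement that takes per-process priority into account, roughly in the spirit of the abstract above; the single shared line pool, the LRU ordering, and the victim rule are simplifying assumptions.

```python
class PriorityPartitionedCache:
    def __init__(self, total_lines, priorities):
        self.total_lines = total_lines
        self.priorities = priorities     # process unit -> priority (higher = keep longer)
        self.lines = []                  # list of (process unit, address), oldest first

    def access(self, process, address):
        if (process, address) in self.lines:
            self.lines.remove((process, address))     # refresh LRU position
        elif len(self.lines) >= self.total_lines:
            # Replacement algorithm that takes the priority information into
            # account: evict the oldest line of the lowest-priority process.
            victim = min(self.lines, key=lambda pa: self.priorities[pa[0]])
            self.lines.remove(victim)
        self.lines.append((process, address))

cache = PriorityPartitionedCache(total_lines=3, priorities={"video": 2, "logger": 1})
for proc, addr in [("video", 0x0), ("logger", 0x100), ("video", 0x40), ("video", 0x80)]:
    cache.access(proc, addr)
print(cache.lines)   # the logger's line was chosen as victim despite being newer than 0x0
```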
  • Publication number: 20090172449
    Abstract: Disclosed herein are approaches to reducing a guardband (margin) used for minimum voltage supply (Vcc) requirements for memory such as cache.
    Type: Application
    Filed: December 26, 2007
    Publication date: July 2, 2009
    Inventors: Ming Zhang, Chris Wilkerson, Greg Taylor, Randy J. Aksamlt, James Tschanz
  • Publication number: 20090070532
    Abstract: A system and method for using a single test case to test each sector within multiple congruence classes is presented. A test case generator builds a test case for accessing each sector within a congruence class. Since a congruence class spans multiple congruence pages, the test case generator builds the test case over multiple congruence pages in order for the test case to test the entire congruence class. During design verification and validation, a test case executor modifies a congruence class identifier (e.g., patches a base register), which forces the test case to test a specific congruence class. By incrementing the congruence class identifier after each execution of the test case, the test case executor is able to test each congruence class in the cache using a single test case.
    Type: Application
    Filed: September 11, 2007
    Publication date: March 12, 2009
    Inventors: Vinod Bussa, Shubhodeep Roy Choudhury, Manoj Dusanapudi, Sunil Suresh Hatti, Shakti Kapoor, Batchu Naga Venkata Satyanarayana
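    Illustrative sketch: a small Python model of the verification loop described above, where one test case touches every sector of a congruence class and the executor re-runs it while incrementing the congruence-class identifier (here, a patched base value); the cache geometry and address arithmetic are assumptions.

```python
# Illustrative cache geometry (assumed, not from the patent).
NUM_CLASSES = 8          # congruence classes (sets)
SECTORS_PER_CLASS = 4    # sectors (ways) tested per class
LINE_BYTES = 64

def test_case(base_register, touch):
    """One generated test case: access every sector of a single congruence class.

    `base_register` encodes the congruence-class identifier; the executor
    patches it between runs instead of generating a new test case.
    """
    for sector in range(SECTORS_PER_CLASS):
        # Same class index, different tags -> different sectors of the class.
        address = base_register + sector * NUM_CLASSES * LINE_BYTES
        touch(address)

touched = []
for congruence_class in range(NUM_CLASSES):          # the executor's outer loop
    base = congruence_class * LINE_BYTES             # "patch the base register"
    test_case(base, touched.append)

print(len(touched), len(set(touched)))   # 32 distinct addresses: every sector of every class
```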
  • Publication number: 20080275850
    Abstract: An appropriate tag is assigned to an image in a comparatively simple fashion. An image of interest to be tagged is selected and tags that have already been assigned to the selected image of interest are displayed in a present-tag display area. Tags having a high frequency of appearance are extracted from among tags that have been assigned to images having tags identical with the tags that have already been assigned to the image of interest, these images being taken from among images that have been stored in an image database. The extracted tags are displayed in a tag candidate display area as candidate tags. Since the tags displayed in the tag candidate display area often are tags related to the selected image of interest, they are tags suitable for assignment to the image of interest.
    Type: Application
    Filed: March 13, 2008
    Publication date: November 6, 2008
    Inventor: Arito ASAI
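    Illustrative sketch: a hypothetical Python version of the candidate-tag selection above, counting how often other tags co-occur with the image's existing tags across the image database and proposing the most frequent ones; the function name and data layout are assumptions.

```python
from collections import Counter

def candidate_tags(present_tags, image_database, top_n=3):
    """Suggest tags for an image based on the tags it already has.

    `image_database` maps image id -> set of assigned tags.
    """
    present = set(present_tags)
    counts = Counter()
    for tags in image_database.values():
        if present & tags:                      # image shares a tag with the image of interest
            counts.update(tags - present)       # its other tags become candidates
    return [tag for tag, _ in counts.most_common(top_n)]

db = {
    "img1": {"beach", "sunset", "family"},
    "img2": {"beach", "surf"},
    "img3": {"mountain", "snow"},
    "img4": {"beach", "sunset"},
}
print(candidate_tags(["beach"], db))   # ['sunset', 'family', 'surf'] (frequency order)
```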
  • Publication number: 20080133843
    Abstract: In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory to store a cache block being cached or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled.
    Type: Application
    Filed: November 30, 2006
    Publication date: June 5, 2008
    Inventors: Ruchi Wadhawan, Jason M. Kassoff, George Kong Yiu
  • Publication number: 20080133834
    Abstract: A method for handling a storage request on a serial fabric, comprising formatting an address for communication on the serial fabric into a plurality of fields, including a field comprising at least one set selection bit and a field comprising at least one tag bit. The address is communicated on the serial fabric with the field comprising the at least one set selection bit communicated first.
    Type: Application
    Filed: December 5, 2006
    Publication date: June 5, 2008
    Inventors: Blaine D. Gaither, Verna Knapp
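    Illustrative sketch: a brief Python model of the address formatting above, splitting an address into a set-selection field and a tag field and serializing the set-selection bits first so the receiver can begin its set lookup early; the field widths are assumptions.

```python
SET_BITS = 6     # illustrative field widths, not from the patent
TAG_BITS = 26

def format_for_serial_fabric(address):
    """Split an address into (set, tag) fields, set-selection field first."""
    set_field = address & ((1 << SET_BITS) - 1)
    tag_field = address >> SET_BITS
    # Communicated on the fabric with the set-selection bits first, so the
    # receiving cache can start indexing its set before the tag arrives.
    return [("set", set_field), ("tag", tag_field)]

def parse_from_serial_fabric(fields):
    parts = dict(fields)
    return (parts["tag"] << SET_BITS) | parts["set"]

addr = 0x12345678
fields = format_for_serial_fabric(addr)
print(fields)                                   # set-selection field delivered first
print(hex(parse_from_serial_fabric(fields)))    # 0x12345678 round-trips
```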