Of Parts of Caches, e.g., Directory or Tag Array, etc. (EPO) Patents (Class 711/E12.042)
-
Patent number: 11775174
Abstract: Systems, methods, and computer-readable media for handling I/O operations in a storage system are described herein. An example method includes assigning each of a plurality of storage devices to one of a plurality of tiers; imposing a hierarchy on the tiers; creating a logical volume by reserving a portion of a storage capacity for the logical volume without allocating the portion of the storage capacity to the logical volume; and assigning the logical volume to one of a plurality of volume priority categories. The method includes receiving a write I/O operation directed to a logical unit of the logical volume, and allocating physical storage space for the logical unit of the logical volume in response to the write I/O operation. The physical storage space is located in one or more storage devices. The method includes writing data associated with the write I/O operation to the one or more storage devices.
Type: Grant
Filed: October 12, 2020
Date of Patent: October 3, 2023
Assignee: Amzetta Technologies, LLC
Inventors: Paresh Chatterjee, Vijayarankan Muthirisavengopal, Sharon Samuel Enoch, Senthilkumar Ramasamy
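The allocate-on-first-write behavior described in the abstract can be illustrated with a small sketch. This is not the patented implementation; the structure names, tier count, and sizing below are invented, and the "priority category" is reduced to a preferred starting tier.

```c
/* Hypothetical sketch of allocate-on-first-write ("thin provisioning")
 * across prioritized tiers; names and sizing are illustrative only. */
#include <stdio.h>

#define NUM_TIERS      3   /* tier 0 = fastest, highest in the hierarchy */
#define UNITS_PER_TIER 4   /* physical units available per tier          */
#define VOLUME_UNITS   8   /* logical units reserved for the volume      */

typedef struct {
    int tier;              /* tier the unit was carved from, -1 = unallocated */
    int phys_index;        /* slot within that tier                           */
} mapping_t;

static int free_units[NUM_TIERS] = { UNITS_PER_TIER, UNITS_PER_TIER, UNITS_PER_TIER };
static mapping_t volume_map[VOLUME_UNITS];   /* reserved, but nothing allocated yet */

/* Allocate physical space only when a write arrives, starting from the
 * tier preferred by the volume's priority category. */
static int write_io(int logical_unit, int preferred_tier)
{
    if (volume_map[logical_unit].tier >= 0)
        return 0;                             /* already backed by physical space */
    for (int t = preferred_tier; t < NUM_TIERS; t++) {
        if (free_units[t] > 0) {
            free_units[t]--;
            volume_map[logical_unit].tier = t;
            volume_map[logical_unit].phys_index = UNITS_PER_TIER - free_units[t] - 1;
            printf("LU %d -> tier %d slot %d\n", logical_unit, t,
                   volume_map[logical_unit].phys_index);
            return 0;
        }
    }
    return -1;                                /* all tiers exhausted */
}

int main(void)
{
    for (int i = 0; i < VOLUME_UNITS; i++)
        volume_map[i].tier = -1;              /* reservation without allocation */
    write_io(2, 0);                           /* high-priority volume: try tier 0 first */
    write_io(5, 1);                           /* lower-priority volume: start at tier 1 */
    return 0;
}
```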
-
Patent number: 9026734
Abstract: According to one embodiment, a memory system includes: a memory area; a transfer processing unit that stores write data received from a host apparatus in the memory area; a delete notification buffer that accumulates delete notifications; and a delete notification processing unit. The delete notification processing unit collectively reads out a plurality of delete notifications from the delete notification buffer and classifies the read-out delete notifications by unit area. The delete notification processing unit sequentially executes, for each unit area, processing that collectively invalidates the write data related to the one or more delete notifications classified in the same unit area; when processing one unit area, it invalidates all write data stored in that unit area after copying the write data that is not to be invalidated to another unit area.
Type: Grant
Filed: December 6, 2011
Date of Patent: May 5, 2015
Assignee: Kabushiki Kaisha Toshiba
Inventor: Daisuke Hashimoto
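A rough sketch of the batching idea follows: drain the buffered delete notifications, group them by the erase unit they fall into, and visit each unit once. This is illustrative only (not Toshiba's firmware); the unit geometry and names are assumptions, and the copy-out of still-valid data is reduced to a comment.

```c
/* Illustrative sketch of draining a delete-notification buffer and grouping
 * notifications by unit area so each unit is processed once. */
#include <stdio.h>

#define UNIT_SIZE 256             /* logical sectors per unit area              */
#define NUM_UNITS 8
#define BUF_CAP   16              /* capacity of the delete-notification buffer */

typedef struct { unsigned lba; unsigned len; } trim_t;

static trim_t buf[BUF_CAP];
static int buf_count;

static void queue_trim(unsigned lba, unsigned len)
{
    if (buf_count < BUF_CAP)
        buf[buf_count++] = (trim_t){ lba, len };
}

/* Read out all queued notifications, classify them per unit area, then walk
 * the units sequentially; real firmware would also copy still-valid data out
 * of the unit before invalidating it. */
static void process_trims(void)
{
    int hits_per_unit[NUM_UNITS] = { 0 };

    for (int i = 0; i < buf_count; i++) {
        unsigned unit = buf[i].lba / UNIT_SIZE;
        if (unit < NUM_UNITS)
            hits_per_unit[unit]++;
    }
    for (unsigned u = 0; u < NUM_UNITS; u++) {
        if (hits_per_unit[u] == 0)
            continue;
        printf("unit %u: invalidate data for %d notification(s)\n",
               u, hits_per_unit[u]);
    }
    buf_count = 0;                /* buffer drained */
}

int main(void)
{
    queue_trim(10, 4);
    queue_trim(300, 8);
    queue_trim(20, 2);            /* same unit area as the first notification */
    process_trims();
    return 0;
}
```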
-
Patent number: 8838897
Abstract: Technologies are generally described for exploiting program phase behavior to duplicate the most recently and/or frequently accessed tag entries in a Tag Replication Buffer (TRB) to protect the information integrity of tag arrays in a processor cache. The reliability and effectiveness of the cache may be further improved by capturing and duplicating the tags of dirty cache lines, exploiting the fact that clean cache lines detected as error-corrupted can be recovered from the L2 cache. A deterministic TRB-replacement-triggered early write-back scheme may provide full duplication and recovery of single-bit errors for the tags of dirty cache lines.
Type: Grant
Filed: June 29, 2011
Date of Patent: September 16, 2014
Assignee: New Jersey Institute of Technology
Inventors: Jie Hu, Shuai Wang
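A behavioral sketch of the replication idea, under stated assumptions: a small buffer keeps duplicate copies of the tags of dirty lines so a corrupted tag can be recovered, while clean lines are simply refetched from L2. The buffer size, FIFO replacement, and field names are invented for illustration.

```c
/* Rough sketch of a tag replication buffer (TRB) for dirty-line tags. */
#include <stdio.h>

#define TRB_ENTRIES 4

typedef struct { int valid; int set; int way; unsigned tag_copy; } trb_entry_t;

static trb_entry_t trb[TRB_ENTRIES];
static int trb_next;                       /* simple FIFO replacement */

/* Duplicate the tag of a line that just became dirty. */
static void trb_capture(int set, int way, unsigned tag)
{
    trb[trb_next] = (trb_entry_t){ 1, set, way, tag };
    trb_next = (trb_next + 1) % TRB_ENTRIES;
}

/* On a detected tag error in a dirty line, try to recover the tag from the
 * replica; return 1 on success. */
static int trb_recover(int set, int way, unsigned *tag_out)
{
    for (int i = 0; i < TRB_ENTRIES; i++) {
        if (trb[i].valid && trb[i].set == set && trb[i].way == way) {
            *tag_out = trb[i].tag_copy;
            return 1;
        }
    }
    return 0;                              /* not duplicated: needs write-back/ECC instead */
}

int main(void)
{
    unsigned t;
    trb_capture(3, 1, 0xABCD);
    if (trb_recover(3, 1, &t))
        printf("recovered tag 0x%X for set 3 way 1\n", t);
    return 0;
}
```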
-
Patent number: 8762641
Abstract: A method is described for use when a cache is accessed. Before all valid array entries are validated, a valid array entry is read when a data array entry is accessed. If the valid array entry is a first array value, access to the cache is treated as being invalid and the data array entry is reloaded. If the valid array entry is a second array value, a tag array entry is compared with an address to determine if the data array entry is valid or invalid. A valid control register contains a first control value before all valid array entries are validated and a second control value after all valid array entries are validated. After the second control value is established, reads of the valid array are disabled and the tag array entry is compared with the address to determine if a data array entry is valid or invalid.
Type: Grant
Filed: March 12, 2009
Date of Patent: June 24, 2014
Assignee: Qualcomm Incorporated
Inventor: Arthur Joseph Hoane, Jr.
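The lookup flow can be modeled in a few lines of C. This is a behavioral sketch with invented names: while the valid-control register holds its first value, the per-entry valid bit decides whether a data entry can be trusted; once every entry has been validated, the register flips and valid-array reads are skipped entirely.

```c
/* Behavioral model of the valid-array / valid-control-register lookup flow. */
#include <stdio.h>
#include <stdint.h>

#define NUM_ENTRIES 8

static uint8_t  valid_array[NUM_ENTRIES];  /* 0 = first value (invalid), 1 = second */
static uint32_t tag_array[NUM_ENTRIES];
static int      valid_ctrl_reg;            /* 0 before full validation, 1 after */

/* Returns 1 on hit, 0 on miss (caller reloads the data entry). */
static int cache_lookup(int index, uint32_t addr_tag)
{
    if (valid_ctrl_reg == 0) {             /* still validating: consult the valid array */
        if (valid_array[index] == 0)
            return 0;                      /* treat as invalid, reload */
    }
    /* Either validated globally or the per-entry bit was set:
     * the tag comparison alone decides hit vs. miss. */
    return tag_array[index] == addr_tag;
}

int main(void)
{
    tag_array[2] = 0x55;
    valid_array[2] = 1;
    printf("hit=%d\n", cache_lookup(2, 0x55));   /* per-entry valid bit path */
    valid_ctrl_reg = 1;                          /* all entries validated */
    printf("hit=%d\n", cache_lookup(5, 0x77));   /* valid array no longer read */
    return 0;
}
```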
-
Patent number: 8700858
Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
Type: Grant
Filed: May 16, 2012
Date of Patent: April 15, 2014
Assignee: Intel Corporation
Inventor: Sanjeev N. Trika
-
Patent number: 8694728
Abstract: Miss rate curves are constructed in a resource-efficient manner so that they can be constructed and memory management decisions can be made while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced.
Type: Grant
Filed: November 9, 2010
Date of Patent: April 8, 2014
Assignee: VMware, Inc.
Inventors: Carl A. Waldspurger, Rajesh Venkatasubramanian, Alexander Thomas Garthwaite, Yury Baskakov, Puneet Zaroo
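A simplified sketch of the sampling idea: track only a small subset of pages in an LRU list, record the reuse (stack) distance of each tracked access, and use the resulting histogram to approximate a miss-rate curve. All parameters are invented, and the retrace timer mentioned in the abstract is omitted.

```c
/* Reuse-distance histogram over a sampled page set, approximating a miss-rate curve. */
#include <stdio.h>

#define TRACKED  4                     /* number of sampled pages kept in the LRU list */
#define MAX_DIST (TRACKED + 1)

static int  lru[TRACKED];              /* lru[0] = most recently used page id, -1 = empty */
static long histogram[MAX_DIST];       /* histogram[d] = accesses with stack distance d   */

static void access_page(int page)
{
    int depth = -1;
    for (int i = 0; i < TRACKED; i++)
        if (lru[i] == page) { depth = i; break; }

    if (depth < 0)
        histogram[MAX_DIST - 1]++;     /* cold / beyond-tracked access */
    else
        histogram[depth]++;

    /* Move the page to the front of the LRU list. */
    int last = (depth < 0) ? TRACKED - 1 : depth;
    for (int i = last; i > 0; i--)
        lru[i] = lru[i - 1];
    lru[0] = page;
}

int main(void)
{
    for (int i = 0; i < TRACKED; i++) lru[i] = -1;
    int trace[] = { 1, 2, 1, 3, 2, 1, 4, 1 };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        access_page(trace[i]);

    /* Misses for a cache of size c = accesses whose stack distance >= c. */
    long total = 0;
    for (int d = 0; d < MAX_DIST; d++) total += histogram[d];
    for (int c = 1; c <= TRACKED; c++) {
        long misses = 0;
        for (int d = c; d < MAX_DIST; d++) misses += histogram[d];
        printf("cache size %d pages -> miss rate %.2f\n", c, (double)misses / total);
    }
    return 0;
}
```

The histogram yields an approximate miss rate for any cache size up to the tracked set, which is the data the abstract describes feeding into memory-management decisions.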
-
Patent number: 8677050
Abstract: According to one aspect of the present disclosure, a method and technique for using processor registers for extending a cache structure is disclosed. The method includes identifying a register of a processor, identifying a cache to extend, allocating the register as an extension of the cache, and setting an address of the register as corresponding to an address space in the cache.
Type: Grant
Filed: November 12, 2010
Date of Patent: March 18, 2014
Assignee: International Business Machines Corporation
Inventors: Wen-Tzer T. Chen, Diane G. Flemming, William A. Maron, Mysore S. Srinivas, David B. Whitworth
-
Patent number: 8484411
Abstract: A method and system for accessing a dynamic random access memory (DRAM) is provided. A memory controller includes a content addressable memory (CAM) based decision control module for determining the next best access request for the DRAM. The CAM-based decision control module includes a CAM access storage module for storing access requests, a next access table module for storing the next best access request, and a decision logic module for determining the next best access request based on results from the CAM access storage module and the next access table module. Further, the memory controller includes a DRAM access control interface for implementing the signaling required to access the DRAM. The method includes storing access requests in a CAM access storage module, determining which of the stored access requests is the next best access request, and processing the next best access request.
Type: Grant
Filed: December 31, 2008
Date of Patent: July 9, 2013
Assignee: Synopsys Inc.
Inventors: Raghavan Menon, Raj Mahajan
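The abstract does not say how "next best" is decided, so the toy model below assumes one common criterion as an example: prefer a pending request that hits the currently open row of its bank. The CAM is modeled as a plain array scan; all field names are illustrative.

```c
/* Toy "next best request" selection over a pool of pending DRAM requests. */
#include <stdio.h>

#define NUM_BANKS   4
#define MAX_PENDING 8

typedef struct { int valid; int bank; int row; } req_t;

static req_t pending[MAX_PENDING];
static int open_row[NUM_BANKS] = { -1, -1, -1, -1 };   /* -1 = no row open */

/* Return index of the chosen request, or -1 if none pending. */
static int next_best(void)
{
    int fallback = -1;
    for (int i = 0; i < MAX_PENDING; i++) {
        if (!pending[i].valid) continue;
        if (open_row[pending[i].bank] == pending[i].row)
            return i;                      /* row hit: cheapest to serve next */
        if (fallback < 0) fallback = i;    /* oldest non-hit as fallback */
    }
    return fallback;
}

int main(void)
{
    pending[0] = (req_t){ 1, 0, 7 };
    pending[1] = (req_t){ 1, 1, 3 };
    open_row[1] = 3;                       /* bank 1 already has row 3 open */
    int pick = next_best();
    printf("serving request %d (bank %d, row %d)\n",
           pick, pending[pick].bank, pending[pick].row);
    pending[pick].valid = 0;
    return 0;
}
```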
-
Publication number: 20130097387
Abstract: Aspects of various embodiments are directed to memory circuits, such as cache memory circuits. In accordance with one or more embodiments, cache access to data blocks in memory is controlled as follows. In response to a cache miss for a data block having an associated address on a memory access path, data is fetched for storage in the cache (and to serve the request), while one or more additional lookups are executed to identify candidate locations in which to store data. An existing set of data is moved from a target location in the cache to one of the candidate locations, and the address of that candidate location is associated with the existing set of data; data in the candidate location may, for example, be evicted. The fetched data is stored in the target location, and the address of the target location is associated with the fetched data.
Type: Application
Filed: October 15, 2012
Publication date: April 18, 2013
Applicant: The Board of Trustees of the Leland Stanford Junior University
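A minimal sketch of the relocation idea, assuming two hash functions as the "additional lookup" (the publication's actual indexing scheme may differ): on a miss, the block is fetched into the target location, but whatever lives there is first moved to an alternate candidate location instead of being evicted outright.

```c
/* Victim relocation to a candidate location on a cache miss (illustrative only). */
#include <stdio.h>

#define SETS 8

typedef struct { int valid; unsigned addr; } line_t;

static line_t cache[SETS];

static unsigned hash1(unsigned a) { return a % SETS; }
static unsigned hash2(unsigned a) { return (a / SETS + 3) % SETS; }   /* alternate index */

static void handle_miss(unsigned addr)
{
    unsigned target = hash1(addr);

    if (cache[target].valid) {
        /* Additional lookup: find a candidate location for the existing data. */
        unsigned cand = hash2(cache[target].addr);
        if (cand != target) {
            cache[cand] = cache[target];           /* re-associate old data with cand */
            printf("moved 0x%X from set %u to set %u\n",
                   cache[target].addr, target, cand);
        }
        /* else: the displaced block is simply evicted */
    }
    cache[target] = (line_t){ 1, addr };           /* store fetched data in the target */
}

int main(void)
{
    handle_miss(0x10);     /* maps to set 0 */
    handle_miss(0x40);     /* also set 0: previous occupant is relocated */
    return 0;
}
```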
-
Publication number: 20130031309
Abstract: A cache memory associated with a main memory and a processor capable of executing a dataflow processing task includes a plurality of disjoint storage segments, each associated with a distinct data category. A first segment is dedicated to input data originating from a dataflow consumed by the processing task. A second segment is dedicated to output data originating from a dataflow produced by the processing task. A third segment is dedicated to global constants, corresponding to data available in a single memory location to multiple instances of the processing task.
Type: Application
Filed: October 5, 2012
Publication date: January 31, 2013
Applicant: Commissariat a l'Energie Atomique et aux Energies Alternatives
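As a sketch only, the segmentation can be thought of as routing each data category to its own disjoint region of the cache. The three categories follow the abstract; the segment sizes and the placement function are made up.

```c
/* Routing data to disjoint cache segments by category (illustrative). */
#include <stdio.h>

typedef enum { INPUT_DATA, OUTPUT_DATA, GLOBAL_CONSTANT } category_t;

typedef struct { unsigned base; unsigned size; } segment_t;

/* Disjoint segments of one cache, one per data category. */
static const segment_t segment[3] = {
    [INPUT_DATA]      = { 0x0000, 0x4000 },   /* consumed dataflow  */
    [OUTPUT_DATA]     = { 0x4000, 0x4000 },   /* produced dataflow  */
    [GLOBAL_CONSTANT] = { 0x8000, 0x2000 },   /* shared constants   */
};

/* Pick the segment-relative slot for an address of a given category. */
static unsigned place(category_t cat, unsigned addr)
{
    return segment[cat].base + (addr % segment[cat].size);
}

int main(void)
{
    printf("input    0x1234 -> cache offset 0x%X\n", place(INPUT_DATA, 0x1234));
    printf("output   0x1234 -> cache offset 0x%X\n", place(OUTPUT_DATA, 0x1234));
    printf("constant 0x10   -> cache offset 0x%X\n", place(GLOBAL_CONSTANT, 0x10));
    return 0;
}
```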
-
Patent number: 8341353
Abstract: A system and method to access data from a portion of a level two memory or from a level one memory is disclosed. In a particular embodiment, the system includes a level one cache and a level two memory. A first portion of the level two memory is coupled to an input port and is addressable in parallel with the level one cache.
Type: Grant
Filed: January 14, 2010
Date of Patent: December 25, 2012
Assignee: QUALCOMM Incorporated
Inventors: Suresh K. Venkumahanti, Christopher Edward Koob, Lucian Codrescu
-
Patent number: 8250305
Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array. The method includes clearing the cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by the service processor and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache line.
Type: Grant
Filed: March 19, 2008
Date of Patent: August 21, 2012
Assignee: International Business Machines Corporation
Inventors: Gary E. Strait, Deanna P. Dunn, Michael F. Fee, Pak-kin Mak, Robert J. Sonnelitter, III
-
Patent number: 8219755
Abstract: In one embodiment, a cache comprises a tag memory and a comparator. The tag memory is configured to store tags of cache blocks stored in the cache, and is configured to output at least one tag responsive to an index corresponding to an input address. The comparator is coupled to receive the tag and a tag portion of the input address, and is configured to compare the tag to the tag portion to generate a hit/miss indication. The comparator comprises dynamic circuitry, and is coupled to receive a control signal which, when asserted, is defined to force a first result on the hit/miss indication independent of whether or not the tag portion matches the tag. The comparator also comprises circuitry coupled to receive the control signal and configured to inhibit a state change on an output of the dynamic circuitry during an evaluate phase of the dynamic circuitry to produce the first result responsive to an assertion of the control signal.
Type: Grant
Filed: August 1, 2011
Date of Patent: July 10, 2012
Assignee: Apple Inc.
Inventor: Brian J. Campbell
-
Patent number: 8195891
Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
Type: Grant
Filed: March 30, 2009
Date of Patent: June 5, 2012
Assignee: Intel Corporation
Inventor: Sanjeev N. Trika
-
Patent number: 8015356
Abstract: In one embodiment, a cache comprises a tag memory and a comparator. The tag memory is configured to store tags of cache blocks stored in the cache, and is configured to output at least one tag responsive to an index corresponding to an input address. The comparator is coupled to receive the tag and a tag portion of the input address, and is configured to compare the tag to the tag portion to generate a hit/miss indication. The comparator comprises dynamic circuitry, and is coupled to receive a control signal which, when asserted, is defined to force a first result on the hit/miss indication independent of whether or not the tag portion matches the tag. The comparator also comprises circuitry coupled to receive the control signal and configured to inhibit a state change on an output of the dynamic circuitry during an evaluate phase of the dynamic circuitry to produce the first result responsive to an assertion of the control signal.
Type: Grant
Filed: July 1, 2005
Date of Patent: September 6, 2011
Assignee: Apple Inc.
Inventor: Brian J. Campbell
-
Patent number: 7930479
Abstract: A system and method for caching and retrieving from cache transaction content elements. Metadata is stored in cache to describe the content elements of a transaction; a data retrieval device determines, based on the metadata, whether the cache contains a complete copy of the transaction associated with a requested content element, and returns the requested content element from cache if the complete copy of the associated transaction is in cache.
Type: Grant
Filed: April 29, 2004
Date of Patent: April 19, 2011
Assignee: SAP AG
Inventor: Noam Barda
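A sketch of the completeness check, with invented identifiers: per-transaction metadata records how many elements the transaction contains and how many are currently cached, and an element is served from cache only when the whole transaction is present.

```c
/* Transaction-completeness check before serving an element from cache. */
#include <stdio.h>
#include <string.h>

#define MAX_TXN 4

typedef struct {
    char id[16];
    int  total_elements;     /* metadata: elements the transaction contains */
    int  cached_elements;    /* metadata: elements currently held in cache  */
} txn_meta_t;

static txn_meta_t meta[MAX_TXN];

static const txn_meta_t *find_txn(const char *id)
{
    for (int i = 0; i < MAX_TXN; i++)
        if (strcmp(meta[i].id, id) == 0)
            return &meta[i];
    return NULL;
}

/* Return 1 if the element may be answered from cache, 0 to fall back to the source. */
static int serve_from_cache(const char *txn_id)
{
    const txn_meta_t *m = find_txn(txn_id);
    return m && m->cached_elements == m->total_elements;   /* complete copy only */
}

int main(void)
{
    meta[0] = (txn_meta_t){ "order-42", 3, 3 };
    meta[1] = (txn_meta_t){ "order-43", 5, 2 };
    printf("order-42 from cache: %d\n", serve_from_cache("order-42"));  /* 1 */
    printf("order-43 from cache: %d\n", serve_from_cache("order-43"));  /* 0 */
    return 0;
}
```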
-
Patent number: 7711902
Abstract: A memory system is provided comprising a memory controller, a level 1 (L1) cache including L1 tag memory and L1 data memory, and a level 2 (L2) cache coupled to the L1 cache, the L2 cache including an L2 tag memory having a plurality of L2 tag entries and an L2 data memory having a plurality of L2 data entries. There are more L2 tag entries than L2 data entries. In response to receiving a tag and associated data, if no L2 tag entry with a corresponding L2 data entry is available, and if a first tag in a first L2 tag entry (with associated first data in a first L2 data entry) has a more recent or duplicate value of the first data in the L1 data memory, the memory controller moves the first tag to a second L2 tag entry that does not have a corresponding L2 data entry, vacates the first L2 tag entry and the first L2 data entry, and stores the received tag in the first L2 tag entry and the received data in the first L2 data entry.
Type: Grant
Filed: April 7, 2006
Date of Patent: May 4, 2010
Assignee: Broadcom Corporation
Inventor: Fong Pong
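A much-simplified sketch of the tag/data decoupling (all names and the L1-duplication check are invented stand-ins, not Broadcom's mechanism): the L2 has more tag slots than data slots, so when every data slot is busy, a line whose data also lives in L1 gives up its data slot and keeps only a tag-only entry.

```c
/* Demoting an L1-duplicated line to a tag-only L2 entry to free a data slot. */
#include <stdio.h>

#define DATA_SLOTS 2
#define TAG_SLOTS  4            /* more tag entries than data entries */

typedef struct { int valid; unsigned tag; int data_slot; /* -1 = tag-only */ } l2_tag_t;

static l2_tag_t l2_tags[TAG_SLOTS];
static int data_used[DATA_SLOTS];

/* Stand-in for the L1 duplication check: pretend even tags are also in L1. */
static int duplicated_in_l1(unsigned tag) { return (tag & 1) == 0; }

static void l2_insert(unsigned tag)
{
    int slot = -1;
    for (int d = 0; d < DATA_SLOTS; d++)
        if (!data_used[d]) { slot = d; break; }

    if (slot < 0) {
        /* No free data slot: demote a line whose data is duplicated in L1. */
        for (int t = 0; t < TAG_SLOTS; t++) {
            if (l2_tags[t].valid && l2_tags[t].data_slot >= 0 &&
                duplicated_in_l1(l2_tags[t].tag)) {
                slot = l2_tags[t].data_slot;
                l2_tags[t].data_slot = -1;          /* becomes a tag-only entry */
                printf("tag 0x%X demoted to tag-only\n", l2_tags[t].tag);
                break;
            }
        }
    }
    for (int t = 0; t < TAG_SLOTS; t++) {
        if (!l2_tags[t].valid) {
            l2_tags[t] = (l2_tag_t){ 1, tag, slot }; /* new line takes the freed slot */
            if (slot >= 0) data_used[slot] = 1;
            return;
        }
    }
}

int main(void)
{
    l2_insert(0x10);    /* gets data slot 0 */
    l2_insert(0x21);    /* gets data slot 1 */
    l2_insert(0x33);    /* no free data slot: 0x10 demoted, 0x33 reuses slot 0 */
    return 0;
}
```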
-
Patent number: 7698509
Abstract: A multiprocessing node has a plurality of point-to-point connected microprocessors. Each of the microprocessors is also point-to-point connected to a filter. In response to a local cache miss, a microprocessor issues a broadcast for the requested data to the filter. The filter, using memory that stores a copy of the tags of data stored in the local cache memories of each of the microprocessors, relays the broadcast to those microprocessors having copies of the requested data. If the snoop filter memory indicates that none of the microprocessors have a copy of the requested data, the snoop filter may either (i) cancel the broadcast and issue a message back to the requesting microprocessor, or (ii) relay the broadcast to a connected multiprocessing node.
Type: Grant
Filed: July 13, 2004
Date of Patent: April 13, 2010
Assignee: Oracle America, Inc.
Inventors: Michael J. Koster, Christopher L. Johnson, Brian W. O'Krafka
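The relay decision can be sketched directly from the abstract: the filter keeps a copy of each node's cached tags, forwards a broadcast only to nodes that hold the requested block, and otherwise cancels it and answers the requester. The structures and sizes below are illustrative only, and option (ii) of the abstract (relaying to another node) is omitted.

```c
/* Snoop filter relay decision based on duplicated tag copies. */
#include <stdio.h>

#define NODES         4
#define TAGS_PER_NODE 4

static unsigned tag_copy[NODES][TAGS_PER_NODE];   /* filter's copy of each node's tags */

static int node_has(int node, unsigned addr)
{
    for (int i = 0; i < TAGS_PER_NODE; i++)
        if (tag_copy[node][i] == addr)
            return 1;
    return 0;
}

/* Returns the number of nodes the broadcast was relayed to. */
static int snoop_broadcast(int requester, unsigned addr)
{
    int relayed = 0;
    for (int n = 0; n < NODES; n++) {
        if (n == requester) continue;
        if (node_has(n, addr)) {
            printf("relay request for 0x%X to node %d\n", addr, n);
            relayed++;
        }
    }
    if (relayed == 0)
        printf("no sharer: cancel broadcast, reply to node %d\n", requester);
    return relayed;
}

int main(void)
{
    tag_copy[2][0] = 0x1000;                 /* node 2 caches the block */
    snoop_broadcast(0, 0x1000);              /* relayed only to node 2 */
    snoop_broadcast(0, 0x2000);              /* no sharer anywhere */
    return 0;
}
```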
-
Publication number: 20090198901
Abstract: A computer system includes a main memory for storing a large amount of data, a cache memory that can be accessed at a higher speed than the main memory, a memory replacement controller for controlling the replacement of data between the main memory and the cache memory, and a memory controller capable of allocating one or more divided portions of the cache memory to each process unit. The memory replacement controller stores priority information for each process unit and replaces lines of the cache memory based on a replacement algorithm that takes the priority information into consideration. The divided portions of the cache memory are allocated so that the storage area is partially shared between process units, after which the allocated amounts of cache memory are changed automatically.
Type: Application
Filed: October 8, 2008
Publication date: August 6, 2009
Inventor: Yoshihiro Koga
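One way a priority-aware replacement algorithm can work is sketched below; this is an illustrative policy, not the one claimed in the publication. Each line remembers the priority of the process unit that filled it, and the victim is the oldest line among those with the lowest priority.

```c
/* Priority-aware victim selection within one cache set (illustrative policy). */
#include <stdio.h>

#define WAYS 4

typedef struct { int valid; int priority; int age; int owner; } line_t;

static line_t set[WAYS];

static int pick_victim(void)
{
    int victim = 0;
    for (int w = 1; w < WAYS; w++) {
        if (!set[w].valid) return w;                       /* free way wins outright */
        if (set[w].priority <  set[victim].priority ||     /* lower priority loses   */
           (set[w].priority == set[victim].priority &&
            set[w].age > set[victim].age))                 /* then older line loses  */
            victim = w;
    }
    return victim;
}

int main(void)
{
    set[0] = (line_t){ 1, 2, 5, 10 };
    set[1] = (line_t){ 1, 1, 2, 11 };
    set[2] = (line_t){ 1, 1, 9, 11 };
    set[3] = (line_t){ 1, 3, 1, 12 };
    printf("victim way = %d\n", pick_victim());   /* way 2: priority 1, oldest */
    return 0;
}
```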
-
Publication number: 20090172449
Abstract: Disclosed herein are approaches to reducing the guardband (margin) used for minimum supply voltage (Vcc) requirements of memory such as cache.
Type: Application
Filed: December 26, 2007
Publication date: July 2, 2009
Inventors: Ming Zhang, Chris Wilkerson, Greg Taylor, Randy J. Aksamlt, James Tschanz
-
Publication number: 20090070532
Abstract: A system and method for using a single test case to test each sector within multiple congruence classes is presented. A test case generator builds a test case for accessing each sector within a congruence class. Since a congruence class spans multiple congruence pages, the test case generator builds the test case over multiple congruence pages in order for the test case to test the entire congruence class. During design verification and validation, a test case executor modifies a congruence class identifier (e.g., patches a base register), which forces the test case to test a specific congruence class. By incrementing the congruence class identifier after each execution of the test case, the test case executor is able to test each congruence class in the cache using a single test case.
Type: Application
Filed: September 11, 2007
Publication date: March 12, 2009
Inventors: Vinod Bussa, Shubhodeep Roy Choudhury, Manoj Dusanapudi, Sunil Suresh Hatti, Shakti Kapoor, Batchu Naga Venkata Satyanarayana
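A sketch of the re-targeting idea: because the congruence class (set) index comes from fixed address bits, patching a base address by one "class stride" moves the same test case to the next class. The cache geometry below is an assumption for illustration, not the geometry used in the publication.

```c
/* Re-running one test case against every congruence class by stepping the base address. */
#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE    64u                /* bytes per cache line            */
#define NUM_CLASSES  128u               /* congruence classes in the cache */
#define CLASS_STRIDE LINE_SIZE          /* +64 bytes -> next class index   */

static unsigned congruence_class(uintptr_t addr)
{
    return (unsigned)((addr / LINE_SIZE) % NUM_CLASSES);
}

/* Stand-in for the generated test case: would touch one sector per way. */
static void run_test_case(uintptr_t base)
{
    printf("test case targets class %u\n", congruence_class(base));
}

int main(void)
{
    uintptr_t base = 0x100000;
    for (unsigned c = 0; c < NUM_CLASSES; c++)
        run_test_case(base + c * CLASS_STRIDE);   /* "patch the base register" */
    return 0;
}
```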
-
Publication number: 20080275850
Abstract: An appropriate tag is assigned to an image in comparatively simple fashion. An image of interest to be tagged is selected and tags that have already been assigned to the selected image of interest are displayed in a present-tag display area. Tags having a high frequency of appearance are extracted from among tags that have been assigned to images having tags identical with the tags that have already been assigned to the image of interest, these images being taken from among images that have been stored in an image database. The extracted tags are displayed in a tag candidate display area as candidate tags. Since the tags displayed in the tag candidate display area often are tags related to the selected image of interest, they are tags suitable for assignment to the image of interest.
Type: Application
Filed: March 13, 2008
Publication date: November 6, 2008
Inventor: Arito ASAI
-
Publication number: 20080133843
Abstract: In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory either to store a cache block being cached or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled.
Type: Application
Filed: November 30, 2006
Publication date: June 5, 2008
Inventors: Ruchi Wadhawan, Jason M. Kassoff, George Kong Yiu
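A minimal sketch of one data-entry pool serving the two uses named in the abstract. The names and allocation policy are invented: an entry either holds a cached block or stages pass-through data that is only waiting to be retransmitted on the interface.

```c
/* Dual-purpose data entries: cached block vs. staged-for-retransmission data. */
#include <stdio.h>

#define ENTRIES 4

typedef enum { FREE, CACHED_BLOCK, STAGED_FOR_RETRANSMIT } entry_use_t;

typedef struct { entry_use_t use; unsigned addr; } data_entry_t;

static data_entry_t entry[ENTRIES];

/* Grab any free entry and record what it is being used for. */
static int allocate_entry(entry_use_t use, unsigned addr)
{
    for (int i = 0; i < ENTRIES; i++) {
        if (entry[i].use == FREE) {
            entry[i] = (data_entry_t){ use, addr };
            return i;
        }
    }
    return -1;                      /* no entry available */
}

int main(void)
{
    int a = allocate_entry(CACHED_BLOCK, 0x100);          /* normal cache fill   */
    int b = allocate_entry(STAGED_FOR_RETRANSMIT, 0x200); /* staging, not cached */
    printf("cache fill in entry %d, staged data in entry %d\n", a, b);
    return 0;
}
```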
-
Publication number: 20080133834
Abstract: A method for handling a storage request on a serial fabric comprises formatting an address for communication on the serial fabric into a plurality of fields, including a field comprising at least one set selection bit and a field comprising at least one tag bit. The address is communicated on the serial fabric with the field comprising the at least one set selection bit communicated first.
Type: Application
Filed: December 5, 2006
Publication date: June 5, 2008
Inventors: Blaine D. Gaither, Verna Knapp
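A sketch of splitting an address into fields and emitting the set-selection field first on the link. The field widths are assumptions, not the publication's actual layout; the point of sending the set bits first is that the receiver can begin its set lookup before the tag bits arrive.

```c
/* Field-ordered address transmission: set-selection bits go out first. */
#include <stdio.h>

#define SET_BITS    6                           /* set selection field width */
#define OFFSET_BITS 6                           /* block offset field width  */

static void send_field(const char *name, unsigned value, int bits)
{
    printf("sending %-6s : %u (%d bits)\n", name, value, bits);
}

static void send_address(unsigned addr)
{
    unsigned set = (addr >> OFFSET_BITS) & ((1u << SET_BITS) - 1);
    unsigned tag =  addr >> (OFFSET_BITS + SET_BITS);
    unsigned off =  addr & ((1u << OFFSET_BITS) - 1);

    /* Set-selection bits first so the target can start the set lookup early. */
    send_field("set", set, SET_BITS);
    send_field("tag", tag, 32 - SET_BITS - OFFSET_BITS);
    send_field("offset", off, OFFSET_BITS);
}

int main(void)
{
    send_address(0x12345678);
    return 0;
}
```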