Using Clearing, Invalidating, Or Resetting Means (EPO) Patents (Class 711/E12.022)
  • Publication number: 20100250856
    Abstract: A system and method for data allocation in a shared cache memory of a computing system are contemplated. Each cache way of a shared set-associative cache is accessible to multiple sources, such as one or more processor cores, a graphics processing unit (GPU), an input/output (I/O) device, or multiple different software threads. A shared cache controller enables or disables access separately to each of the cache ways based upon the corresponding source of a received memory request. One or more configuration and status registers (CSRs) store encoded values used to alter accessibility to each of the shared cache ways. The control of the accessibility of the shared cache ways via altering stored values in the CSRs may be used to create a pseudo-RAM structure within the shared cache and to progressively reduce the size of the shared cache during a power-down sequence while the shared cache continues operation.
    Type: Application
    Filed: March 27, 2009
    Publication date: September 30, 2010
    Inventors: Jonathan Owen, Guhan Krishnan, Carl D. Dietz, Douglas Richard Beard, William K. Lewchuk, Alexander Branover
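The way-masking idea in the abstract above can be illustrated with a short sketch. This is not the patented implementation; the class, method names, and the simple first-enabled-way fill are hypothetical, standing in for the CSR-controlled per-source way enables the abstract describes.

```python
# Illustrative sketch: a set-associative cache where a per-source mask
# register (modeling a CSR) controls which ways each requester may use.
class SharedSetAssociativeCache:
    def __init__(self, num_sets=4, num_ways=8):
        self.num_sets = num_sets
        self.num_ways = num_ways
        # tags[set][way] -> cached tag or None
        self.tags = [[None] * num_ways for _ in range(num_sets)]
        # way_mask[source] -> bitmask of ways that source may allocate into
        self.way_mask = {}

    def set_csr(self, source, mask):
        """Model a CSR write: alter which ways 'source' may allocate into."""
        self.way_mask[source] = mask

    def allocate(self, source, addr):
        """Fill 'addr' into a way permitted for 'source'; return the way index."""
        s = addr % self.num_sets
        tag = addr // self.num_sets
        mask = self.way_mask.get(source, (1 << self.num_ways) - 1)
        for w in range(self.num_ways):
            if mask & (1 << w):
                self.tags[s][w] = tag  # simple fill; no replacement policy here
                return w
        raise RuntimeError("no way enabled for source")

cache = SharedSetAssociativeCache()
cache.set_csr("core0", 0b00001111)  # core0 restricted to ways 0-3
cache.set_csr("gpu",   0b11110000)  # GPU restricted to ways 4-7
assert cache.allocate("core0", 0x40) < 4
assert cache.allocate("gpu", 0x40) >= 4
```

Disabling all mask bits for every source except one would carve out the pseudo-RAM region the abstract mentions; clearing bits progressively models shrinking the cache during power-down.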
  • Publication number: 20100250830
    Abstract: A system, method, and computer program product are provided for hardening data stored on a solid state disk. In operation, it is determined whether a solid state disk is to be powered off. Furthermore, data stored on the solid state disk is hardened if it is determined that the solid state disk is to be powered off.
    Type: Application
    Filed: March 27, 2009
    Publication date: September 30, 2010
    Inventor: Ross John Stenfort
  • Publication number: 20100250855
    Abstract: A computer-readable recording medium storing a data storage program, a method, and a computer are provided. The computer includes a cache table having an address area for storing an address and a user data area for storing user data corresponding to the address, and executes operations including: reading difference data at a specified address from a recording medium, delta-decoding the read difference data, and determining the decompressed user data to be the read user data; writing the read user data in the user data area of the cache table, and a corresponding address in the address area, when a size of the user data obtained by the delta-decoding is equal to or less than a threshold value; and obtaining difference data between user data requested to be written and the corresponding cached user data, and writing the difference data.
    Type: Application
    Filed: November 25, 2009
    Publication date: September 30, 2010
    Applicant: FUJITSU LIMITED
    Inventors: Takashi Watanabe, Yasuo Noguchi, Kazutaka Ogihara, Masahisa Tamura, Yoshihiro Tsuchiya, Tetsutaro Maruyama, Tatsuo Kumano
  • Publication number: 20100250833
    Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.
    Type: Application
    Filed: March 30, 2009
    Publication date: September 30, 2010
    Inventor: Sanjeev N. Trika
  • Publication number: 20100241812
    Abstract: Data from a shared memory (12) is processed with a plurality of processing units (11). Access to a data object is controlled by execution of acquire and release instructions for the data object, and wherein each processing unit (11) comprises a processor (10) and a cache circuit (14) for caching data from the shared memory (12). Instructions to access the data object in each processor (10) are executed only between completing execution of the acquire instruction for the data object, and execution of the release instruction for the data object in the processor (10). Execution of the acquire instruction is completed only upon detection that none of the processors (10) has previously executed an acquire instruction for the data object without subsequently completing execution of a release instruction for the data object.
    Type: Application
    Filed: October 14, 2008
    Publication date: September 23, 2010
    Applicant: NXP B.V.
    Inventor: Marco Jan Gerrit Bekooij
  • Publication number: 20100235577
    Abstract: A data processing system includes a plurality of processing units coupled by an interconnect fabric. In response to a data request, a victim cache line is selected for castout from a first lower level cache of a first processing unit, and a target lower level cache of one of the plurality of processing units is selected based upon architectural proximity of the target lower level cache to a home system memory to which the address of the victim cache line is assigned. The first processing unit issues on the interconnect fabric a lateral castout (LCO) command that identifies the victim cache line to be castout from the first lower level cache and indicates that the target lower level cache is an intended destination. In response to a coherence response indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in the target lower level cache.
    Type: Application
    Filed: December 19, 2008
    Publication date: September 16, 2010
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Michael S. Siegel, William J. Starke, Derek E. Williams
  • Publication number: 20100235579
    Abstract: A data processing apparatus, and method of managing at least one cache within such an apparatus, are provided. The data processing apparatus has at least one processing unit for executing a sequence of instructions, with each such processing unit having a cache associated therewith, each cache having a plurality of cache lines for storing data values for access by the associated processing unit when executing the sequence of instructions. Identification logic is provided which, for each cache, monitors data traffic within the data processing apparatus and based thereon generates a preferred for eviction identification identifying one or more of the data values as preferred for eviction. Cache maintenance logic is then arranged, for each cache, to implement a cache maintenance operation during which selection of one or more data values for eviction from that cache is performed having regard to any preferred for eviction identification generated by the identification logic for data values stored in that cache.
    Type: Application
    Filed: September 18, 2006
    Publication date: September 16, 2010
    Inventors: Stuart David Biles, Richard Roy Grisenthwaite, David Hennah Mansell
  • Publication number: 20100235585
    Abstract: System(s) and method(s) are provided for caching data in a consolidated network repository of information available to mobile and non-mobile networks, and network management systems. Data can be cached in response to request(s) for a data element or request(s) for an update to a data element and in accordance with a cache retention protocol that establishes a versioning protocol and a set of timers that determine a period to elapse prior to removal of a version of the cached data element. Updates to a cached data element can be effected if an integrity assessment determines that recordation of an updated version of the data element preserves operational integrity of one or more network components or services. The assessment is based on integrity logic that establishes a set of rules that evaluate operational integrity of a requested update to a data element. Retention protocol and integrity logic are configurable.
    Type: Application
    Filed: October 30, 2009
    Publication date: September 16, 2010
    Applicant: AT&T MOBILITY II LLC
    Inventor: Sangar Dowlatkhah
  • Publication number: 20100235584
    Abstract: A victim cache line having a data-invalid coherence state is selected for castout from a first lower level cache of a first processing unit. The first processing unit issues on an interconnect fabric a lateral castout (LCO) command identifying the victim cache line to be castout from the first lower level cache, indicating the data-invalid coherence state, and indicating that a lower level cache is an intended destination of the victim cache line. In response to a coherence response to the LCO command indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in a second lower level cache of a second processing unit in the data-invalid coherence state.
    Type: Application
    Filed: March 11, 2009
    Publication date: September 16, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guy L. Guthrie, Hien M. Le, Alvan W. Ng, Michael S. Siegel, Derek E. Williams, Phillip G. Williams
  • Publication number: 20100235576
    Abstract: A victim cache memory includes a cache array, a cache directory of contents of the cache array, and a cache controller that controls operation of the victim cache memory. The cache controller, responsive to receiving a castout command identifying a victim cache line castout from another cache memory, causes the victim cache line to be held in the cache array. If the other cache memory is a higher level cache in the cache hierarchy of the processor core, the cache controller marks the victim cache line in the cache directory so that it is less likely to be evicted by a replacement policy of the victim cache, and otherwise, marks the victim cache line in the cache directory so that it is more likely to be evicted by the replacement policy of the victim cache.
    Type: Application
    Filed: December 16, 2008
    Publication date: September 16, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Guy L. Guthrie, Alvan W. Ng, Michael S. Siegel, William J. Starke, Derek E. Williams, Phillip G. Williams
  • Publication number: 20100228922
    Abstract: A method and system are provided to perform background evictions of cache memory lines. In one embodiment of the invention, when a processor of a system determines that the occupancy rate of its bus interface is between a low and a high threshold, the processor performs evictions of cache memory lines that are dirty. In another embodiment of the invention, the processor performs evictions of the dirty cache memory lines when a timer between each periodic clock interrupt of an operating system has expired. By performing background evictions of dirty cache memory lines, the number of dirty cache memory lines required to be evicted before the processor changes its state from a high power state to a low power state is reduced.
    Type: Application
    Filed: March 9, 2009
    Publication date: September 9, 2010
    Inventor: Deepak Limaye
  • Publication number: 20100228921
    Abstract: A system and method for cache hit management.
    Type: Application
    Filed: March 4, 2009
    Publication date: September 9, 2010
    Inventors: Adi Grossman, Omri Shacham
  • Publication number: 20100211744
    Abstract: Efficient techniques are described for tracking a potential invalidation of a data cache entry in a data cache for which coherency is required. Coherency information is received that indicates a potential invalidation of a data cache entry. The coherency information is retained in association with the data cache entry to track the potential invalidation of that entry. The retained coherency information is kept separate from the state bits that are utilized in cache access operations. An invalidate bit, associated with a data cache entry, may be utilized to represent a potential invalidation of the data cache entry. The invalidate bit is set in response to the coherency information, to track the potential invalidation of the data cache entry. A valid bit associated with the data cache entry is cleared in response to the active invalidate bit and a memory synchronization command. The set invalidate bit is cleared after the valid bit has been cleared.
    Type: Application
    Filed: February 19, 2009
    Publication date: August 19, 2010
    Applicant: QUALCOMM INCORPORATED
    Inventors: Michael W. Morrow, James Norris Dieffenderfer
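The invalidate-bit bookkeeping described in the abstract above can be modeled in a few lines. This is a toy sketch, not Qualcomm's implementation; the bit names come from the abstract, everything else is assumed for illustration.

```python
# Toy model: coherency traffic sets a per-entry invalidate bit without
# disturbing the state bits used on the normal access path; a later
# memory-synchronization command turns pending invalidations into real ones.
class Entry:
    def __init__(self):
        self.valid = True
        self.invalidate = False  # kept separate from the access-path state bits

def snoop_potential_invalidate(entry):
    entry.invalidate = True      # track the potential invalidation for now

def memory_sync(entry):
    if entry.invalidate:
        entry.valid = False      # apply the pending invalidation
        entry.invalidate = False # clear after the valid bit is cleared

e = Entry()
snoop_potential_invalidate(e)
assert e.valid is True           # entry still usable before the sync point
memory_sync(e)
assert e.valid is False and e.invalidate is False
```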
  • Publication number: 20100205367
    Abstract: A non-volatile memory location in a disk drive is utilized to store data residing in a write-cache upon receiving a flush-cache command from a host computer. If a subsequent flush-cache command is not issued within a predetermined time period, any data residing in the write-cache and stored in the non-volatile memory location that has not yet been written to its correct location on disk will be written to its correct location on disk.
    Type: Application
    Filed: February 9, 2009
    Publication date: August 12, 2010
    Inventors: Richard M. Ehrlich, Andre Hall
  • Publication number: 20100205602
    Abstract: A thread scheduling mechanism is provided that flexibly enforces performance isolation of multiple threads to alleviate the effect of anti-cooperative execution behavior with respect to a shared resource, for example, hoarding a cache or pipeline, using the hardware capabilities of simultaneous multi-threaded (SMT) or multi-core processors. Given a plurality of threads running on at least two processors in at least one functional processor group, the occurrence of a rescheduling condition indicating anti-cooperative execution behavior is sensed, and, if present, at least one of the threads is rescheduled such that the first and second threads no longer execute in the same functional processor group at the same time.
    Type: Application
    Filed: April 26, 2010
    Publication date: August 12, 2010
    Applicant: VMWARE, INC.
    Inventors: John R. Zedlewski, Carl A. Waldspurger
  • Publication number: 20100191916
    Abstract: A method and a system for utilizing least recently used (LRU) bits and presence bits in selecting cache-lines for eviction from a lower level cache in a processor-memory sub-system. A cache back invalidation (CBI) logic utilizes LRU bits to evict only cache-lines within an LRU group, following a cache miss in the lower level cache. In addition, the CBI logic uses presence bits to (a) indicate whether a cache-line in a lower level cache is also present in a higher level cache and (b) evict only cache-lines in the lower level cache that are not present in a corresponding higher level cache. However, when the lower level cache-line selected for eviction is also present in any higher level cache, CBI logic invalidates the cache-line in the higher level cache. The CBI logic appropriately updates the values of presence bits and LRU bits, following evictions and invalidations.
    Type: Application
    Filed: January 23, 2009
    Publication date: July 29, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ganesh Balakrishnan, Anil Krishna
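The victim-selection idea in the abstract above can be sketched briefly. This is a hypothetical illustration, not the CBI hardware: the "oldest half of the set" grouping rule and the dict field names are assumptions.

```python
# Sketch: within the LRU group of a set, prefer lines whose presence bit says
# they are NOT also cached at a higher level; if every LRU candidate is
# present above, pick one anyway and back-invalidate the higher-level copy.
def select_victim(lines):
    """lines: list of dicts with 'lru_rank' (higher = older) and
    'present_above'. Returns (victim_index, invalidate_above_flag)."""
    ranked = sorted(range(len(lines)), key=lambda i: -lines[i]["lru_rank"])
    lru_group = ranked[: max(1, len(lines) // 2)]  # assumed grouping rule
    for i in lru_group:
        if not lines[i]["present_above"]:
            return i, False        # evict without touching the higher cache
    return lru_group[0], True      # oldest line; must back-invalidate

lines = [
    {"lru_rank": 3, "present_above": True},
    {"lru_rank": 2, "present_above": False},
    {"lru_rank": 1, "present_above": False},
    {"lru_rank": 0, "present_above": True},
]
assert select_victim(lines) == (1, False)
```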
  • Publication number: 20100191917
    Abstract: Administering registered virtual addresses in a hybrid computing environment that includes a host computer and an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining, by an operating system, a watch list of ranges of currently registered virtual addresses; upon a change in physical to virtual address mappings of a particular range of virtual addresses falling within the ranges included in the watch list, notifying the system level message passing module by the operating system of the change; and updating, by the system level message passing module, a cache of ranges of currently registered virtual addresses to reflect the change in physical to virtual address mappings.
    Type: Application
    Filed: January 23, 2009
    Publication date: July 29, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles J. Archer, Gary R. Ricard
  • Publication number: 20100185816
    Abstract: A mechanism which allows pages of flash memory to be read directly into cache. The mechanism enables different cache line sizes for different cache levels in a cache hierarchy, and optionally, multiple line size support, simultaneously or as an initialization option, in the highest level (largest/slowest) cache. Such a mechanism improves performance and reduces cost for some applications.
    Type: Application
    Filed: January 21, 2009
    Publication date: July 22, 2010
    Inventors: William F. Sauber, Mitchell Markow
  • Publication number: 20100185820
    Abstract: A data processing device is disclosed that includes multiple processing cores, where each core is associated with a corresponding cache. When a processing core is placed into a first sleep mode, the data processing device initiates a first phase. If any cache probes are received at the processing core during the first phase, the cache probes are serviced. At the end of the first phase, the cache corresponding to the processing core is flushed, and subsequent cache probes are not serviced at the cache. Because it does not service the subsequent cache probes, the processing core can therefore enter another sleep mode, allowing the data processing device to conserve additional power.
    Type: Application
    Filed: January 21, 2009
    Publication date: July 22, 2010
    Applicant: ADVANCED MICRO DEVICES, INC.
    Inventors: William A. Hughes, Kiran K. Bondalapati, Vydhyanathan Kalyanasundharam, Kevin M. Lepak, Benjamin T. Sander
  • Publication number: 20100185819
    Abstract: A first cache simultaneously broadcasts, in a single message, a request for a cache line and a request to accept a future related evicted cache line to multiple other caches. Each of the multiple other caches evaluate their occupancy to derive an occupancy value that reflects their ability to accept the future related evicted cache line. In response to receiving a requested cache line, the first cache evicts the related evicted cache line to the cache with the highest occupancy value.
    Type: Application
    Filed: January 16, 2009
    Publication date: July 22, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy H. Heil, Russell D. Hoover, Charles L. Johnson, Steven P. Vanderwiel
  • Publication number: 20100180081
    Abstract: A data processing system includes a processor, a unit that includes a multi-level cache, a prefetch system and a memory. The data processing system can operate in a first mode and a second mode. The prefetch system can change behavior in response to a desired power consumption policy set by an external agent or automatically via hardware based on on-chip power/performance thresholds.
    Type: Application
    Filed: January 15, 2009
    Publication date: July 15, 2010
    Inventors: Pradip Bose, Alper Buyuktosunoglu, Miles Robert Dooley, Michael Stephen Floyd, David Scott Ray, Bruce Joseph Ronchetti
  • Publication number: 20100179865
    Abstract: Music can be broadcast from a radio station and recorded onto a cache of a personal electronic device, such as a portable digital music player. The recording can occur such that there is segmenting of music into different cache portions based upon classification. Instead of playing music from the radio station, music can be played from the cache to ensure high quality and desirable variety. Different rules can be used to govern which music is played as well as how music should be removed from the cache. In addition, targeted advertisements can be used that relate to the music in the cache as well as a user location.
    Type: Application
    Filed: January 9, 2009
    Publication date: July 15, 2010
    Applicant: QUALCOMM Incorporated
    Inventors: Patrik N. Lundqvist, Guilherme K. Hoefel, Robert S. Daley, Jack B. Steenstra
  • Publication number: 20100180084
    Abstract: A new “held” (“H”) cache-coherency state is introduced for directory-based multiprocessor systems. Using the held state enables embodiments of the present invention to track sharers that have a shared copy of a cache line after a directory runs out of space for holding information that identifies processors that have received shared copies of the cache line (e.g., pointers to sharers of the cache line). In these embodiments, when a directory entry is full, the system provides subsequent shared copies of the cache line to sharers in the held state and tracks the identity of the held-copy owners in a data field in the entry for the cache line in a home node.
    Type: Application
    Filed: January 13, 2009
    Publication date: July 15, 2010
    Applicant: SUN MICROSYSTEMS, INC.
    Inventor: Robert E. Cypher
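The overflow-to-held behavior described above can be sketched as a tiny directory model. This is an illustrative assumption-laden toy, not Sun's design: the pointer limit, field names, and state letters are placeholders for the mechanism the abstract outlines.

```python
# Sketch: a directory entry holds a fixed number of precise sharer pointers;
# once full, later sharers receive their copy in a "held" (H) state and are
# tracked in a separate data field of the home entry.
class DirectoryEntry:
    MAX_POINTERS = 2  # hypothetical pointer capacity

    def __init__(self):
        self.sharers = []         # precise sharer pointers
        self.held_owners = set()  # held-copy owners, tracked in the data field

    def add_sharer(self, cpu):
        """Return the coherence state granted to 'cpu' for its copy."""
        if len(self.sharers) < self.MAX_POINTERS:
            self.sharers.append(cpu)
            return "S"            # ordinary shared state
        self.held_owners.add(cpu)
        return "H"                # held state once the pointers run out

entry = DirectoryEntry()
assert entry.add_sharer(0) == "S"
assert entry.add_sharer(1) == "S"
assert entry.add_sharer(2) == "H"
assert entry.held_owners == {2}
```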
  • Publication number: 20100180083
    Abstract: A cache memory having enhanced performance and security features is provided. The cache memory includes a data array storing a plurality of data elements, a tag array storing a plurality of tags corresponding to the plurality of data elements, and an address decoder which permits dynamic memory-to-cache mapping to provide enhanced security of the data elements, as well as enhanced performance. The address decoder receives a context identifier and a plurality of index bits of an address passed to the cache memory, and determines whether a matching value exists in a line number register. The line number registers allow for dynamic memory-to-cache mapping, and their contents can be modified as desired. Methods for accessing and replacing data in a cache memory are also provided, wherein a plurality of index bits and a plurality of tag bits are received at the cache memory.
    Type: Application
    Filed: December 8, 2009
    Publication date: July 15, 2010
    Inventors: Ruby B. Lee, Zhenghong Wang
  • Publication number: 20100174854
    Abstract: A method and system for extending the life span of a flash memory device. The flash memory device is dynamically configurable to store data in the single bit per cell (SBC) storage mode or the multiple bit per cell (MBC) mode, such that both SBC data and MBC data co-exist within the same memory array. One or more tag bits stored in each page of the memory is used to indicate the type of storage mode used for storing the data in the corresponding subdivision, where a subdivision can be a bank, block or page. A controller monitors the number of program-erase cycles corresponding to each page for selectively changing the storage mode in order to maximize lifespan of any subdivision of the multi-mode flash memory device.
    Type: Application
    Filed: December 10, 2009
    Publication date: July 8, 2010
    Applicant: MOSAID TECHNOLOGIES INCORPORATED
    Inventor: Jin-Ki Kim
  • Publication number: 20100161905
    Abstract: In one embodiment, a system comprises a plurality of agents coupled to an interconnect and a cache coupled to the interconnect. The plurality of agents are configured to cache data. A first agent of the plurality of agents is configured to initiate a transaction on the interconnect by transmitting a memory request, and other agents of the plurality of agents are configured to snoop the memory request from the interconnect. The other agents provide a response in a response phase of the transaction on the interconnect. The cache is configured to detect a hit for the memory request and to provide data for the transaction to the first agent prior to the response phase and independent of the response.
    Type: Application
    Filed: March 1, 2010
    Publication date: June 24, 2010
    Inventors: Brian P. Lilly, Sridhar P. Subramanian, Ramesh Gunna
  • Publication number: 20100153651
    Abstract: A method of using memory is provided, in which a processor receives an indication that a first piece of metadata associated with a set of backup data is required during a block based backup and/or restore. The processor is used to retrieve from a metadata store a set of metadata that includes the first piece of metadata and one or more additional pieces of metadata stored in the metadata store in a location adjacent to a first location in which the first piece of metadata is stored, without first determining whether the one or more additional pieces of metadata are currently required. The retrieved set of metadata is stored in a cache.
    Type: Application
    Filed: February 8, 2010
    Publication date: June 17, 2010
    Inventor: Ajay Pratap Singh Kushwah
  • Publication number: 20100153652
    Abstract: Embodiments disclosed herein provide a cache management system comprising a cache and a cache manager that can poll cached assets at different frequencies based on their relative activity status and independent of other applications. In one embodiment, the cache manager may maintain one or more lists, each corresponding to a polling layer associated with a particular polling schedule or frequency. Cached assets may be added to or removed from a list or they may be promoted or demoted to a different list, thereby changing their polling frequency. By polling less active files at a lower frequency than more active files, significant system resources can be saved, thereby increasing overall system speed and performance. Additionally, because a cache manager according to embodiments disclosed herein does not require detailed contextual information about the files that it is managing, such a cache manager can be easily implemented with any cache.
    Type: Application
    Filed: December 9, 2009
    Publication date: June 17, 2010
    Applicant: Vignette Corporation
    Inventors: David Thomas, Scott Wells
  • Publication number: 20100153650
    Abstract: A cache memory includes a cache array including a plurality of congruence classes each containing a plurality of cache lines, where each cache line belongs to one of multiple classes which include at least a first class and a second class. The cache memory also includes a cache directory of the cache array that indicates class membership. The cache memory further includes a cache controller that selects a victim cache line for eviction from a congruence class. If the congruence class contains a cache line belonging to the second class, the cache controller preferentially selects as the victim cache line a cache line of the congruence class belonging to the second class based upon access order. If the congruence class contains no cache line belonging to the second class, the cache controller selects as the victim cache line a cache line belonging to the first class based upon access order.
    Type: Application
    Filed: December 16, 2008
    Publication date: June 17, 2010
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Thomas L. Jeremiah, William L. McNeil, Piyush C. Patel, William J. Starke, Jeffrey A. Stuecheli
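The class-preferential eviction policy in the abstract above reduces to a short selection rule. The sketch below is a minimal illustration under stated assumptions: the tuple layout and the "lower timestamp = accessed longer ago" convention are hypothetical, not taken from the patent.

```python
# Sketch: prefer evicting a second-class line (by access order) when one
# exists in the congruence class; otherwise fall back to the first class.
def choose_victim(congruence_class):
    """congruence_class: list of (line_id, klass, last_access); lower
    last_access means accessed longer ago. Returns the victim line_id."""
    second = [l for l in congruence_class if l[1] == 2]
    pool = second if second else [l for l in congruence_class if l[1] == 1]
    return min(pool, key=lambda l: l[2])[0]  # oldest line in the chosen class

cc = [("a", 1, 5), ("b", 2, 9), ("c", 2, 3), ("d", 1, 1)]
assert choose_victim(cc) == "c"   # oldest second-class line wins
cc_no2 = [("a", 1, 5), ("d", 1, 1)]
assert choose_victim(cc_no2) == "d"
```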
  • Publication number: 20100146214
    Abstract: Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well.
    Type: Application
    Filed: February 18, 2010
    Publication date: June 10, 2010
    Inventors: Takeki Osanai, Kimberly Fernsler
  • Publication number: 20100138613
    Abstract: The invention relates to a method for improving caching efficiency in a computing device. It utilises metadata, which describes attributes of the data to which it relates, to determine an appropriate caching strategy for the data. The caching strategy may be based on the type of the data, and/or on the expected access pattern of the data.
    Type: Application
    Filed: June 22, 2009
    Publication date: June 3, 2010
    Applicant: Nokia Corporation
    Inventor: Jason Parker
  • Patent number: 7725661
    Abstract: Management of a Cache is provided by differentiating data based on attributes associated with the data and reducing storage bottlenecks. The Cache differentiates and manages data using a state machine with a plurality of states. The Cache may use data patterns and statistics to retain frequently used data in the cache longer. The Cache uses content or attributes to differentiate and retain data longer. Further, the Cache may provide status and statistics to a data flow manager that determines which data to cache and which data to pipe directly through, or to switch cache policies dynamically, thus avoiding some of the cache overhead. The Cache may also place clean and dirty data in separate states to enable more efficient Cache mirroring and flush.
    Type: Grant
    Filed: March 25, 2008
    Date of Patent: May 25, 2010
    Assignee: Plurata Technologies, Inc.
    Inventors: Wei Liu, Steven H. Kahle
  • Publication number: 20100125708
    Abstract: A recursive logical partition real memory map mechanism is provided for use in address translation. The mechanism, which is provided in a data processing system, receives a first address based on an address submitted from a process of a currently active logical partition. The first address is translated into a second address using a recursive logical partition real memory (RLPRM) map data structure for the currently active logical partition. The memory is accessed using the second address. The RLPRM map data structure provides a plurality of translation table pointers, each translation table pointer pointing to a separate page table for a separate level of virtualization in the data processing system with the data processing system supporting multiple levels of virtualization.
    Type: Application
    Filed: November 17, 2008
    Publication date: May 20, 2010
    Applicant: International Business Machines Corporation
    Inventors: William E. Hall, Guerney D.H. Hunt, Paul A. Karger, Mark F. Mergen, David R. Safford
  • Publication number: 20100122013
    Abstract: A data structure for enforcing consistent per-physical page cacheability attributes is disclosed. The data structure is used with a method for enforcing consistent per-physical page cacheability attributes, which maintains memory coherency within a processor addressing memory, such as by comparing a desired cacheability attribute of a physical page address in a PTE against an authoritative table that indicates the current cacheability status. This comparison can be made at the time the PTE is inserted into a TLB. When the comparison detects a mismatch between the desired cacheability attribute of the page and the page's current cacheability status, corrective action can be taken to transition the page into the desired cacheability state.
    Type: Application
    Filed: January 15, 2010
    Publication date: May 13, 2010
    Inventors: Alexander C. Klaiber, David Dunn
  • Publication number: 20100122035
    Abstract: A spiral cache memory provides reduction in access latency for frequently-accessed values by self-organizing to always move a requested value to a front-most central storage element of the spiral. The occupant of the central location is swapped backward, which continues backward through the spiral until an empty location is swapped-to, or the last displaced value is cast out of the last location in the spiral. The elements in the spiral may be cache memories or single elements. The resulting cache memory is self-organizing and for the one-dimensional implementation has a worst-case access time proportional to N, where N is the number of tiles in the spiral. A k-dimensional spiral cache has a worst-case access time proportional to N^(1/k). Further, a spiral cache system provides a basis for a non-inclusive system of cache memory, which reduces the amount of space and power consumed by a cache memory of a given size.
    Type: Application
    Filed: November 13, 2008
    Publication date: May 13, 2010
    Applicant: International Business Machines Corporation
    Inventors: Volker Strumpen, Matteo Frigo
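The one-dimensional move-to-front behavior described above can be modeled as a list where index 0 is the front-most tile: a requested value moves to the front, displaced occupants swap backward until an empty slot absorbs them, and a full spiral casts out its last occupant. This is an illustrative sketch under those assumptions, not the patented hardware design.

```python
def spiral_access(spiral, value):
    """Move `value` to the front (index 0), swapping occupants backward.
    Empty tiles are None. Returns the value cast out of the last tile,
    or None if nothing is evicted."""
    if value in spiral:
        spiral.remove(value)   # hit: lift the value out of its tile
        spiral.append(None)    # its old tile becomes an empty slot
    carried = value
    for i in range(len(spiral)):
        carried, spiral[i] = spiral[i], carried
        if carried is None:    # swapped into an empty location: done
            return None
    return carried             # displaced out of the last tile
```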
  • Publication number: 20100115205
    Abstract: A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated. A metric is calculated for the data block as a function of the residency time. The metric may further be calculated as a function of the data block size. One or more data blocks stored in cache memory are evaluated by comparing a respective metric of the one or more data blocks with the metric of the data block to be stored. A determination is then made to either store the data block on the disk drive or flush the one or more data blocks from the cache memory and store the data block in the cache memory. In this manner, the cache memory may be used more efficiently by storing smaller data blocks with shorter residency times and flushing larger data blocks with longer residency times from the cache memory.
    Type: Application
    Filed: November 3, 2008
    Publication date: May 6, 2010
    Inventors: Douglas Brown, John Mark Morris
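The admission decision described above can be sketched as follows: each block gets a metric computed from its estimated residency time and size, and a new block is cached only if flushing worse-scoring resident blocks frees enough room. The metric form (`residency * size`) and all names are illustrative assumptions; the publication does not specify this exact function.

```python
def metric(residency_time, size):
    # One plausible form: cost grows with both the block size and the
    # time the block would occupy the cache.
    return residency_time * size

def admit(cache, block, capacity):
    """Admit `block` (dict with 'id', 'residency', 'size') if flushing
    worse-scoring cached blocks frees enough room; return True if cached,
    False if the block should go to the disk drive instead."""
    m = metric(block['residency'], block['size'])
    worse = [b for b in cache if metric(b['residency'], b['size']) > m]
    freed = sum(b['size'] for b in worse)
    used = sum(b['size'] for b in cache)
    if used - freed + block['size'] <= capacity:
        for b in worse:
            cache.remove(b)    # flush the larger/longer-resident blocks
        cache.append(block)
        return True
    return False
```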
  • Patent number: 7702855
    Abstract: A processing device employs a stack memory in a region of an external memory. The processing device has a stack pointer register to store a current top address for the stack memory. One of several techniques is used to determine which portion or portions of the external memory correspond to the stack region. A more efficient memory policy is implemented, whereby pushes to the stack do not have to read data from the external memory into a cache, and whereby pops from the stack do not cause stale stack data to be written back from the cache to the external memory.
    Type: Grant
    Filed: August 11, 2005
    Date of Patent: April 20, 2010
    Assignee: Cisco Technology, Inc.
    Inventors: Jonathan Rosen, Earl T. Cohen
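The policy above hinges on classifying an address as inside or outside the stack region. A minimal sketch of that classification, assuming a downward-growing stack between two hypothetical bounds (`stack_base`, `stack_limit`):

```python
def in_stack_region(addr, stack_base, stack_limit):
    """True if addr lies in the stack region; the stack is assumed to
    grow downward from stack_base toward stack_limit."""
    return stack_limit <= addr < stack_base

def needs_fill_on_write(addr, stack_base, stack_limit):
    # A push to the stack can allocate its cache line without reading
    # stale data in from external memory; other writes fill as usual.
    return not in_stack_region(addr, stack_base, stack_limit)
```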
  • Publication number: 20100088472
    Abstract: A data processing system is provided. The data processing system includes a plurality of processors and a cache memory shared by the plurality of processors, in which a cache line is divided into a plurality of partial writable regions. The plurality of processors are given exclusive access rights to the partial writable regions.
    Type: Application
    Filed: December 8, 2009
    Publication date: April 8, 2010
    Applicant: Fujitsu Limited
    Inventor: Masaki Ukai
  • Publication number: 20100082907
    Abstract: The present invention provides a system for and a method of data cache management. In accordance with an embodiment of the present invention, a method of cache management is provided. A request for access to data is received. A sample value is assigned to the request, the sample value being randomly selected according to a probability distribution. The sample value is compared to another value. The data is selectively stored in the cache based on results of the comparison.
    Type: Application
    Filed: October 1, 2008
    Publication date: April 1, 2010
    Inventors: Vinay Deolalikar, Kave Eshghi
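The probabilistic admission step above can be sketched directly: each request draws a random sample and the data is cached only when the sample clears a threshold. Names and the uniform distribution are illustrative assumptions; an injectable `rng` makes the sketch testable.

```python
import random

def maybe_cache(cache, key, data, threshold=0.5, rng=random.random):
    """Store `data` in `cache` only when the randomly drawn sample
    exceeds `threshold`; return True if the data was cached."""
    sample = rng()                 # sample from a probability distribution
    if sample > threshold:         # compare the sample to another value
        cache[key] = data
        return True
    return False
```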
  • Publication number: 20100082905
    Abstract: Methods and apparatus relating to disabling one or more cache portions during low voltage operations are described. In some embodiments, one or more extra bits may be used for a portion of a cache that indicate whether the portion of the cache is capable of operating at or below Vccmin levels. Other embodiments are also described and claimed.
    Type: Application
    Filed: September 30, 2008
    Publication date: April 1, 2010
    Inventors: Christopher Wilkerson, Muhammad M. Khellah, Vivek De, Ming Zhang, Jaume Abella, Javier Carretero Casado, Pedro Chaparro Monferrer, Xavier Vera, Antonio Gonzalez
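The per-portion extra bit described above can be modeled as a simple bit vector: in low-voltage mode, only portions whose bit indicates Vccmin capability remain enabled. A minimal sketch with illustrative names:

```python
def usable_portions(vccmin_ok_bits, low_voltage):
    """Return indices of the cache portions enabled in the current mode.
    vccmin_ok_bits[i] is truthy if portion i can operate at or below
    Vccmin; outside low-voltage mode every portion is usable."""
    if not low_voltage:
        return list(range(len(vccmin_ok_bits)))
    return [i for i, ok in enumerate(vccmin_ok_bits) if ok]
```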
  • Publication number: 20100077153
    Abstract: Computer implemented method, system and computer usable program code for cache management. A cache is provided, wherein the cache is viewed as a sorted array of data elements, wherein a top position of the array is a most recently used position of the array and a bottom position of the array is a least recently used position of the array. A memory access sequence is provided, and a training operation is performed with respect to a memory access of the memory access sequence to determine a type of memory access operation to be performed with respect to the memory access. Responsive to a result of the training operation, a cache replacement operation is performed using the determined memory access operation with respect to the memory access.
    Type: Application
    Filed: September 23, 2008
    Publication date: March 25, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Roch Georges Archambault, Shimin Cui, Chen Ding, Yaoqing Gao, Xiaoming Gu, Raul Esteban Silvera, Chengliang Zhang
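The sorted-array view above (top = most recently used, bottom = least recently used) can be sketched as a list; the boolean parameter below is a simplified stand-in for the training operation's choice of which replacement operation to apply to a given access.

```python
def access(cache, value, capacity, insert_at_top=True):
    """Access `value` in a cache viewed as a sorted array: index 0 is the
    MRU position, the last index the LRU position. `insert_at_top` plays
    the role of the trained per-access decision. Returns the evicted
    value on a miss that overflows capacity, else None."""
    if value in cache:
        cache.remove(value)
        cache.insert(0, value)                 # promote hit to MRU
        return None
    victim = cache.pop() if len(cache) >= capacity else None
    cache.insert(0 if insert_at_top else len(cache), value)
    return victim
```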
  • Patent number: 7685384
    Abstract: A system and method for performing real-time replication of data across a network is provided. A mirroring engine receives a write request from a host application operating on a source computer. The mirroring engine compares data in the write request with corresponding data stored in memory. If data in the write request differs from stored data, the mirroring engine processes the write request. Processing involves computing a data signature across data in the write request and associating the signature with a transaction number and a status byte. The transaction number is used to uniquely identify the data signature and can be used to ensure that the signature is properly handled if it is received, for example, out of order. The status byte contains information used for handling the data signature and transaction number as well as information identifying how the data signature was computed.
    Type: Grant
    Filed: January 5, 2005
    Date of Patent: March 23, 2010
    Assignee: GlobalSCAPE, Inc.
    Inventor: Tsachi Chuck Shavit
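The processing path above can be sketched as: skip writes whose data matches what is stored, and for changed writes produce a (signature, transaction number, status) triple. SHA-256, the counter, and the status constant are illustrative stand-ins for whatever the mirroring engine actually computes.

```python
import hashlib
from itertools import count

_txn = count(1)   # monotonically increasing transaction numbers

def process_write(stored, addr, data):
    """Process a write request against `stored` (addr -> bytes).
    Returns (signature, transaction_number, status_byte) if the data
    differs from what is stored, else None (no replication needed)."""
    if stored.get(addr) == data:
        return None                        # unchanged: nothing to mirror
    stored[addr] = data
    sig = hashlib.sha256(data).hexdigest() # data signature over the write
    status = 0x01                          # e.g. "sha256 used, in order"
    return sig, next(_txn), status
```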
  • Publication number: 20100070715
    Abstract: An apparatus, system, and method are disclosed for deduplicating storage cache data. A storage cache partition table has at least one entry associating a specified storage address range with one or more specified storage partitions. A deduplication module creates an entry in the storage cache partition table wherein the specified storage partitions contain identical data to one another within the specified storage address range, thus requiring only one copy of the identical data to be cached in a storage cache. A read module accepts a storage address within a storage partition of a storage subsystem, locates an entry wherein the specified storage address range contains the storage address, and, if such an entry is found, determines whether the storage partition is among the one or more specified storage partitions.
    Type: Application
    Filed: September 18, 2008
    Publication date: March 18, 2010
    Inventors: Rod D. Waltermann, Mark Charles Davis
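The partition-table lookup described above can be sketched as a scan for an entry whose address range covers the requested address and whose partition set includes the requesting partition; a hit means one cached copy serves all listed partitions. Table layout and names are illustrative.

```python
def lookup(table, partition, addr):
    """table: list of ((start, end), partitions) entries, end exclusive.
    Return the matching entry if (partition, addr) is covered by one,
    else None."""
    for (start, end), partitions in table:
        if start <= addr < end and partition in partitions:
            return (start, end), partitions
    return None
```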
  • Publication number: 20100070701
    Abstract: Embodiments of the invention provide techniques for ensuring that the contents of a non-volatile memory device may be relied upon as accurately reflecting data stored on disk storage across a power transition such as a reboot. For example, some embodiments of the invention provide techniques for determining whether the cache contents and/or disk contents are modified during a power transition, causing cache contents to no longer accurately reflect data stored in disk storage. Further, some embodiments provide techniques for managing cache metadata during normal (“steady state”) operations and across power transitions, ensuring that cache metadata may be efficiently accessed and reliably saved and restored across power transitions.
    Type: Application
    Filed: November 14, 2008
    Publication date: March 18, 2010
    Applicant: Microsoft Corporation
    Inventors: Mehmet Iyigun, Yevgeniy Bak, Michael Fortin, David Fields, Cenk Ergan, Alexander Kirshenbaum
  • Publication number: 20100057994
    Abstract: A device and method for controlling caches are disclosed, comprising a decoder configured to decode additional information of datasets retrievable from a memory, wherein the decoded additional information controls whether particular ones of the datasets are to be stored in a cache.
    Type: Application
    Filed: August 29, 2008
    Publication date: March 4, 2010
    Applicant: Infineon Technologies AG
    Inventor: Jens Barrenscheen
  • Publication number: 20100049921
    Abstract: Systems and methods for distributed shared caching in a clustered file system, wherein coordination between the distributed caches, including their coherency and concurrency management, is performed at the granularity of data segments rather than files. As a consequence, this caching system and method provides enhanced performance in an environment of intensive access patterns to shared files.
    Type: Application
    Filed: August 25, 2008
    Publication date: February 25, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lior Aronovich, Ron Asher
  • Publication number: 20100037026
    Abstract: A method and a device for cache memory refill control are disclosed.
    Type: Application
    Filed: August 8, 2008
    Publication date: February 11, 2010
    Applicant: Infineon Technologies AG
    Inventors: Remi Hardy, Vincent Rezard
  • Publication number: 20100030972
    Abstract: Device, system and method of accessing data stored in a memory. For example, a device may include a memory to store a plurality of data items to be accessed by a processor; a cache manager to manage a cache within the memory, the cache including a plurality of pointer entries, wherein each pointer entry includes an identifier of a respective data item and a pointer to an address of the data item; and a search module to receive from the cache manager an identifier of a requested data item, search the plurality of pointer entries for the identifier of the requested data item and, if a pointer entry is detected whose data item identifier matches the identifier of the requested data item, provide the cache manager with the pointer from the detected entry. Other embodiments are described and claimed.
    Type: Application
    Filed: July 29, 2008
    Publication date: February 4, 2010
    Applicant: Entropic Communications, Inc.
    Inventor: Ilia Greenblat
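The search module described above reduces to a scan of (identifier, pointer) entries that hands the pointer back on a match. A minimal sketch with illustrative names:

```python
def find_pointer(entries, wanted_id):
    """entries: list of (identifier, pointer) pairs, one per pointer
    entry in the cache. Return the pointer whose entry identifier
    matches `wanted_id`, or None if no entry matches."""
    for ident, ptr in entries:
        if ident == wanted_id:
            return ptr
    return None
```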
  • Publication number: 20100030970
    Abstract: Improving cache performance in a data processing system is provided. A cache controller monitors a counter associated with a cache. The cache controller determines whether the counter indicates that a plurality of non-dedicated cache sets within the cache should operate as spill cache sets or receive cache sets. The cache controller sets the plurality of non-dedicated cache sets to spill an evicted cache line to an associated cache set in another cache in the event of a cache miss in response to an indication that the plurality of non-dedicated cache sets should operate as the spill cache sets. The cache controller sets the plurality of non-dedicated cache sets to receive an evicted cache line from another cache set in the event of the cache miss in response to an indication that the plurality of non-dedicated cache sets should operate as the receive cache sets.
    Type: Application
    Filed: August 1, 2008
    Publication date: February 4, 2010
    Applicant: International Business Machines Corporation
    Inventor: Moinuddin K. Qureshi
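The counter-driven behavior above can be sketched as a mode selector plus an eviction handler: above a threshold the non-dedicated sets spill evicted lines to an associated set in another cache, below it they act as receivers. The threshold and names are illustrative assumptions.

```python
def set_mode(counter, threshold=0):
    """Decide whether the non-dedicated cache sets should spill evicted
    lines to a peer cache or receive lines spilled by a peer."""
    return 'spill' if counter > threshold else 'receive'

def on_eviction(mode, line, peer_cache):
    """In spill mode an evicted line moves to the associated set in the
    peer cache; in receive mode it is dropped, leaving room for lines
    spilled by the peer."""
    if mode == 'spill':
        peer_cache.append(line)
    return peer_cache
```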
  • Publication number: 20100023699
    Abstract: A system and a method are described whereby a data cache enables an efficient design of a usage analyzer for monitoring subscriber access to a communications network. By exploiting the speed advantages of cache memory, as well as adopting innovative data loading and retrieval choices, significant performance improvements in the time required to access the necessary data records can be realized.
    Type: Application
    Filed: July 22, 2008
    Publication date: January 28, 2010
    Applicant: Bridgewater Systems Corp.
    Inventors: Timothy James Reidel, Li Zou