Partitioned Cache Patents (Class 711/129)
  • Patent number: 7716720
    Abstract: The present invention is directed to a system for providing a trusted environment for untrusted computing systems. The system may include a HAC subsystem managing shared resources and a trusted bus switch for controlling a COTS processor to access the shared resources. The shared resources such as memory and several I/O resources reside on the trusted side of the trusted bus switch. Alternatively, the system may include a SCM as an add-on module to an untrusted host environment. Only authenticated applications including COTS OS execute on the SCM while untrusted applications execute on the untrusted host environment. The SCM may control secure resource access from the untrusted host through a plug-in module interface. All secure resources may be maintained on the trusted side of the plug-in module interface.
    Type: Grant
    Filed: June 17, 2005
    Date of Patent: May 11, 2010
    Assignee: Rockwell Collins, Inc.
    Inventors: James A. Marek, David S. Hardin, Raymond A. Kamin, III, Steven E. Koenck, Allen P. Mass
  • Patent number: 7698496
    Abstract: A cache miss judger detects a cache miss when a cache access is executed. An entry region judger determines which of a plurality of entry regions, each constituted by one or more cache entries in the cache memory, is accessed by each cache access, using at least part of the index that selects a cache line in the cache memory. A cache miss counter counts the number of cache misses detected by the cache miss judger in each entry region corresponding to the cache accesses.
    Type: Grant
    Filed: January 31, 2007
    Date of Patent: April 13, 2010
    Assignee: Panasonic Corporation
    Inventor: Genichiro Matsuda
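The per-region miss counting this abstract describes can be modeled in a few lines of Python. This is only a behavioral sketch of the idea (the patent describes hardware); the class, a direct-mapped organization, and all names are invented for illustration:

```python
class RegionMissCounter:
    """Behavioral model: part of the cache index selects an entry region,
    and each miss increments that region's counter."""

    def __init__(self, num_lines=64, num_regions=4):
        self.num_lines = num_lines
        self.lines_per_region = num_lines // num_regions
        self.tags = [None] * num_lines          # direct-mapped tag array
        self.miss_counts = [0] * num_regions    # one counter per entry region

    def access(self, address):
        index = address % self.num_lines        # index selects a cache line
        tag = address // self.num_lines
        region = index // self.lines_per_region # upper index bits pick the region
        if self.tags[index] != tag:             # the "cache miss judger"
            self.miss_counts[region] += 1       # count the miss for this region
            self.tags[index] = tag
            return False                        # miss
        return True                             # hit
```

The per-region counters can then reveal which part of the cache a workload is thrashing.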
  • Patent number: 7698506
    Abstract: A technique for partially offloading, from a main cache in a storage server, the storage of cache tags for data blocks in a victim cache of the storage server, is described. The technique includes storing, in the main cache, a first subset of the cache tag information for each of the data blocks, and storing, in a victim cache of the storage server, a second subset of the cache tag information for each of the data blocks. This technique avoids the need to store the second subset of the cache tag information in the main cache.
    Type: Grant
    Filed: April 26, 2007
    Date of Patent: April 13, 2010
    Assignee: Network Appliance, Inc.
    Inventors: Robert L. Fair, William P. McGovern, Thomas C. Holland, Jason Sylvain
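The tag-splitting idea in this abstract can be sketched as follows. This is a simplified model under assumptions of my own (which fields belong to each subset, and all class and method names, are hypothetical): the main cache keeps only a minimal lookup mapping, while the rest of the tag metadata lives with the data in the victim cache.

```python
class VictimCache:
    """Holds evicted blocks plus the second subset of their tag info."""
    def __init__(self):
        self.slots = []                      # each slot: (tag2_metadata, data)

    def insert(self, metadata, data):
        self.slots.append((metadata, data))
        return len(self.slots) - 1           # slot id handed back to main cache

    def lookup(self, slot_id):
        return self.slots[slot_id]

class MainCache:
    """Keeps only the first subset of tag info: block id -> victim slot."""
    def __init__(self, victim):
        self.victim = victim
        self.tag1 = {}

    def evict_to_victim(self, block_id, metadata, data):
        self.tag1[block_id] = self.victim.insert(metadata, data)

    def read(self, block_id):
        slot = self.tag1.get(block_id)
        if slot is None:
            return None                      # miss in both caches
        return self.victim.lookup(slot)      # full tag info stored victim-side
```

The point of the split is that the main cache's memory footprint per victim-cached block shrinks to just the small first subset.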
  • Patent number: 7693883
    Abstract: A system to delete a data volume may include storage of a plurality of data pages of the data volume of a data area into a cache, prevention of writing of data pages to the data volume, and designation of each of the plurality of data pages in the cache as modified. The system may also include writing of all data pages in the cache that are designated as modified to a respective location in one or more other data volumes of the data area, and updating, for each of the written data pages, a converter page of the cache to associate the written data page with its respective location in the one or more other data volumes.
    Type: Grant
    Filed: January 30, 2006
    Date of Patent: April 6, 2010
    Assignee: SAP AG
    Inventors: Henrik Hempelmann, Torsten Strahl
  • Patent number: 7694094
    Abstract: A transaction method manages the storing of persistent data to be stored in at least one memory region of a non-volatile memory device before the execution of update operations that involve portions of the persistent data. Values of the persistent data are stored in a transaction stack that includes a plurality of transaction entries before the beginning of the update operations so that the memory regions involved in such an update are restored in a consistent state if an unexpected event occurs. A push extreme instruction reads from the memory cells a remaining portion of the persistent data that is not involved in the update operation, and stores the remaining portion in a subset of the transaction entries. The push extreme instruction is executed instead of a push instruction when the restoring of the portion of persistent data is not required after the unexpected event. The restoring corresponds to the values that the persistent data had before the beginning of the update operations.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: April 6, 2010
    Assignee: Incard S.A.
    Inventors: Paolo Sepe, Luca Di Cosmo, Enrico Musella
  • Publication number: 20100082906
    Abstract: In some embodiments, a processor-based system includes a processor, a system memory coupled to the processor, a mass storage device, a cache memory located between the system memory and the mass storage device, and code stored on the processor-based system to cause the processor-based system to utilize the cache memory. The code may be configured to cause the processor-based system to preferentially use only a selected size of the cache memory to store cache entries having less than or equal to a selected number of cache hits. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: September 30, 2008
    Publication date: April 1, 2010
    Inventors: Glenn Hinton, Dale Juenemann, R. Scott Tetrick
  • Patent number: 7689612
    Abstract: A query of a meta-object facility repository that includes transient data being modified or processed in memory and persisted data can be received. Thereafter, portions of the received query can be executed on partitions associated with the persisted data and other portions of the received query can be executed on partitions of the repository associated with the transient data to generate a combined correct query result set. Related apparatus, systems, methods, and articles are also described.
    Type: Grant
    Filed: April 19, 2007
    Date of Patent: March 30, 2010
    Assignee: SAP AG
    Inventors: Simon P. Helsen, Stephan J. Lange
  • Publication number: 20100077151
    Abstract: A computer system includes a data cache supported by a copy-back buffer and pre-allocation request stack. A programmable trigger mechanism inspects each store operation made by the processor to the data cache to see if a next cache line should be pre-allocated. If the store operation memory address occurs within a range defined by START and END programmable registers, then the next cache line that includes a memory address within that defined by a programmable STRIDE register is requested for pre-allocation. Bunches of pre-allocation requests are organized and scheduled by the pre-allocation request stack, and will take their turns to allow the cache lines being replaced to be processed through the copy-back buffer. By the time the processor gets to doing the store operation in the next cache line, such cache line has already been pre-allocated and there will be a cache hit, thus saving stall cycles.
    Type: Application
    Filed: January 24, 2008
    Publication date: March 25, 2010
    Applicant: NXP, B.V.
    Inventor: Jan Willem Van De Waerdt
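The trigger logic in this abstract is concrete enough to sketch: a store whose address falls in the [START, END) window queues a pre-allocation request for the cache line STRIDE bytes ahead. The line size and register names below are taken from the abstract; everything else is an invented behavioral model:

```python
class PreAllocTrigger:
    """Inspects each store; if it hits the programmed window, requests
    pre-allocation of the next cache line STRIDE bytes ahead."""
    LINE = 64                                # assumed cache-line size in bytes

    def __init__(self, start, end, stride):
        self.START, self.END, self.STRIDE = start, end, stride
        self.requests = []                   # the pre-allocation request stack

    def on_store(self, addr):
        if self.START <= addr < self.END:
            nxt = (addr + self.STRIDE) // self.LINE * self.LINE
            if nxt not in self.requests:     # schedule each line at most once
                self.requests.append(nxt)
```

By the time the processor's stores reach that next line, the line has already been allocated and the store hits, avoiding stall cycles.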
  • Publication number: 20100070715
    Abstract: An apparatus, system, and method are disclosed for deduplicating storage cache data. A storage cache partition table has at least one entry associating a specified storage address range with one or more specified storage partitions. A deduplication module creates an entry in the storage cache partition table wherein the specified storage partitions contain identical data to one another within the specified storage address range thus requiring only one copy of the identical data to be cached in a storage cache. A read module accepts a storage address within a storage partition of a storage subsystem, to locate an entry wherein the specified storage address range contains the storage address, and to determine whether the storage partition is among the one or more specified storage partitions if such an entry is found.
    Type: Application
    Filed: September 18, 2008
    Publication date: March 18, 2010
    Inventors: Rod D. Waltermann, Mark Charles Davis
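The partition-table lookup this abstract describes can be sketched directly. The table layout and names below are hypothetical; the sketch shows only the read-side check of whether a (partition, address) pair can be served from the single shared cached copy:

```python
class DedupCacheTable:
    """Storage cache partition table: each entry maps an address range to
    the set of partitions known to hold identical data there."""

    def __init__(self):
        self.entries = []                    # (range_lo, range_hi, partitions)

    def add_entry(self, lo, hi, partitions):
        self.entries.append((lo, hi, set(partitions)))

    def shared_read(self, partition, address):
        """True if this read can be served from the deduplicated copy."""
        for lo, hi, parts in self.entries:
            if lo <= address < hi and partition in parts:
                return True
        return False
```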
  • Patent number: 7664916
    Abstract: Methods and apparatuses are provided for use with smartcards or other like shared computing resources. A global smartcard cache is maintained on one or more computers to reduce the burden on the smartcard. The global smartcard cache data is associated with a freshness indicator that is compared to the current freshness indicator from the smartcard to verify that the cached item data is current.
    Type: Grant
    Filed: January 6, 2004
    Date of Patent: February 16, 2010
    Assignee: Microsoft Corporation
    Inventors: Daniel C. Griffin, Eric C. Perlin, Klaus U. Schutz
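The freshness-indicator comparison is the heart of this abstract and is easy to model: each cached item remembers the card's freshness counter at caching time, and a lookup re-checks the card's current counter before trusting the cache. The `Card` stub and all names are invented for the sketch:

```python
class Card:
    """Stand-in smartcard: any write bumps a freshness counter."""
    def __init__(self):
        self.freshness = 0
        self.store = {}

    def read_item(self, name):               # the slow path we want to avoid
        return self.store[name]

    def write_item(self, name, data):
        self.store[name] = data
        self.freshness += 1                  # indicates cached copies are stale

class SmartcardCache:
    """Host-side cache validated against the card's freshness counter."""
    def __init__(self, card):
        self.card = card
        self.items = {}                      # name -> (freshness_at_cache, data)

    def read(self, name):
        current = self.card.freshness        # cheap counter read
        cached = self.items.get(name)
        if cached and cached[0] == current:
            return cached[1]                 # fresh: skip the slow card read
        data = self.card.read_item(name)
        self.items[name] = (current, data)
        return data
```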
  • Publication number: 20100030971
    Abstract: A cache system is provided that can dynamically change its cache capacity across a plurality of divided memory areas. The cache system includes a line counter that counts the number of effective lines for each memory area, an effective line being a cache line in which valid cache data is stored. The cache data to be invalidated when the cache capacity is changed is selected based on the number of effective lines counted by the line counter.
    Type: Application
    Filed: May 20, 2009
    Publication date: February 4, 2010
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Hiroyuki USUI
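The line-counter mechanism can be sketched as follows. The selection policy shown (shrink by flushing the area with the fewest effective lines, since it is cheapest to invalidate) is one plausible reading of the abstract, and all names are invented:

```python
class LineCounterCache:
    """Tracks effective (valid) lines per memory area; the counters guide
    which area to invalidate when capacity is reduced."""

    def __init__(self, areas, lines_per_area):
        self.valid = [[False] * lines_per_area for _ in range(areas)]
        self.counters = [0] * areas          # the per-area line counters

    def fill(self, area, line):
        if not self.valid[area][line]:
            self.valid[area][line] = True
            self.counters[area] += 1

    def shrink(self):
        """Reduce capacity by one area; pick the one with fewest valid lines."""
        victim = self.counters.index(min(self.counters))
        self.valid[victim] = [False] * len(self.valid[victim])
        self.counters[victim] = 0
        return victim
```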
  • Publication number: 20100017568
    Abstract: In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory to store a cache block being cached, or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled.
    Type: Application
    Filed: September 24, 2009
    Publication date: January 21, 2010
    Inventors: Ruchi Wadhawan, Jason M. Kassoff, George Kong Yiu
  • Patent number: 7650386
    Abstract: A computing device having at least two partitions, and a method of communicating between partitions, are disclosed, wherein each partition comprises at least one address area readable but not writable from the other of the at least two partitions. In one embodiment one partition sends to the other partition a request for information, which information resides in the other partition in an address area not accessible to the one partition; the other partition copies the information to an address area accessible to the one partition, and the one partition reads the information from the accessible address area. In another embodiment the at least one accessible address area of each partition includes a data area and a consumer pointer indicating the position to which that partition has read the data area in another partition.
    Type: Grant
    Filed: July 29, 2004
    Date of Patent: January 19, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Larry N. McMahan, Gary Belgrave Gostin, Joe P. Cowan, Michael R. Krause
  • Patent number: 7650466
    Abstract: A method of managing cache partitions provides a first pointer for higher priority writes and a second pointer for lower priority writes, and uses the first pointer to delimit the lower priority writes. For example, locked writes have greater priority than unlocked writes, and a first pointer may be used for locked writes, and a second pointer may be used for unlocked writes. The first pointer is advanced responsive to making locked writes, and its advancement thus defines a locked region and an unlocked region. The second pointer is advanced responsive to making unlocked writes. The second pointer also is advanced (or retreated) as needed to prevent it from pointing to locations already traversed by the first pointer. Thus, the pointer delimits the unlocked region and allows the locked region to grow at the expense of the unlocked region.
    Type: Grant
    Filed: September 21, 2005
    Date of Patent: January 19, 2010
    Assignee: QUALCOMM Incorporated
    Inventors: Brian Michael Stempel, James Norris Dieffenderfer, Jeffrey Todd Bridges, Thomas Andrew Sartorius, Rodney Wayne Smith, Robert Douglas Clancy, Victor Roberts Augsburg
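The two-pointer partitioning can be sketched as a small model. The wrap-around behavior and slot semantics below are my own simplifying assumptions (the abstract only fixes the pointer-advancement rules), and all names are invented:

```python
class TwoPointerCache:
    """First pointer advances on locked writes and delimits the locked
    region; the second pointer is pushed ahead of it so unlocked writes
    never land on locations the first pointer has traversed."""

    def __init__(self, size):
        self.size = size
        self.locked_ptr = 0                  # first pointer (higher priority)
        self.unlocked_ptr = 0                # second pointer (lower priority)

    def write_locked(self):
        slot = self.locked_ptr
        self.locked_ptr += 1                 # locked region grows
        if self.unlocked_ptr < self.locked_ptr:
            self.unlocked_ptr = self.locked_ptr  # keep it out of locked region
        return slot

    def write_unlocked(self):
        slot = self.unlocked_ptr
        self.unlocked_ptr += 1
        if self.unlocked_ptr >= self.size:   # wrap, but only within the
            self.unlocked_ptr = self.locked_ptr  # unlocked region
        return slot
```

Note how the locked region grows at the expense of the unlocked one, exactly as the abstract describes.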
  • Patent number: 7644232
    Abstract: A cache method and a cache system for storing file's data in memory blocks divided from a cache memory are disclosed. The cache method for storing data of a file in memory blocks divided from a cache memory includes: receiving a request command for first data of a first file from an application; retrieving a memory block that stores the first data; setting reference information which indicates that the memory block storing the first data is referred to by the application; transmitting the first data stored in the memory block to the application; and resetting the reference information for the memory block that stores the first data.
    Type: Grant
    Filed: August 11, 2006
    Date of Patent: January 5, 2010
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Min-woo Jung, Chun-un Kang, Ki-won Kwak
  • Patent number: 7644235
    Abstract: In a cache tag integrated on an SRAM with a memory cache, laser fuses are programmed to indicate which, if any, tag subarrays in the cache tag are not functioning properly. In addition, the burst length of the SRAM is increased to reduce the number of tag subarrays necessary for operation of the cache tag so any nonfunctional tag subarrays are no longer necessary. In accordance with the indications from the programmed laser fuses and the increased burst length, logic circuitry disables any nonfunctional tag subarrays, leaving only functional tag subarrays to provide tag functionality for the memory cache. As a result, an SRAM that is typically scrapped as a result of nonfunctional tag subarrays can, instead, be recovered for sale and subsequent use.
    Type: Grant
    Filed: August 3, 2006
    Date of Patent: January 5, 2010
    Assignee: Micron Technology, Inc.
    Inventor: Joseph T. Pawlowski
  • Patent number: 7644252
    Abstract: A multiprocessor system includes a plurality of microprocessors configured to operate on a plurality of operating systems, respectively, and a memory section configured to have a plurality of memory spaces respectively allocated to the plurality of microprocessors. Each of the plurality of microprocessors may include a translation look-aside buffer (TLB) and a page table register. The TLB stores a copy of at least a part of data of one of the plurality of memory spaces corresponding to the microprocessor, and the copy includes a relation of each of virtual addresses of a virtual address space and a corresponding physical address of a physical address space as the memory space. The page table register refers to the TLB in response to an execution virtual address generated based on an application program to be executed by the microprocessor to determine an execution physical address corresponding to the execution virtual address.
    Type: Grant
    Filed: October 31, 2007
    Date of Patent: January 5, 2010
    Assignee: NEC Corporation
    Inventor: Eiichiro Kawaguchi
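The TLB behavior in this abstract is the standard virtual-to-physical translation path, which a short sketch can make concrete. The page size and names are assumptions; the page table here stands in for the per-processor memory space the abstract describes:

```python
class TLB:
    """Caches a subset of a page table: virtual page number -> physical
    page number. A miss falls back to the full page table."""
    PAGE = 4096                              # assumed page size

    def __init__(self, page_table):
        self.page_table = page_table         # the memory space's full mapping
        self.entries = {}                    # the cached copy (the TLB proper)

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, self.PAGE)
        if vpn not in self.entries:          # TLB miss: consult the page table
            self.entries[vpn] = self.page_table[vpn]
        return self.entries[vpn] * self.PAGE + offset
```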
  • Publication number: 20090328022
    Abstract: Systems, methods and media for updating CRTM code in a computing machine are disclosed. In one embodiment, the CRTM code initially resides in ROM and updated CRTM code is stored in a staging area of the ROM. A logical partition of the L2 cache may be created to store a heap, a stack, and a data store. The data store holds updated CRTM code copied to the L2 cache. When a computing system is started, it first executes the CRTM code. The CRTM code checks the staging area of the ROM to determine whether updated CRTM code is present. If so, the CRTM code is copied into the L2 cache to be executed from there. The executing CRTM code then loads the updated code into the cache, verifies its signature, and copies the updated code over the location of the current CRTM code in the cache.
    Type: Application
    Filed: June 26, 2008
    Publication date: December 31, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sean P. Brogan, Sumeet Kochar
  • Patent number: 7640398
    Abstract: A memory circuit and a method of operating a flash or EEPROM device that has two levels of internal cache. A memory device having a memory array, sense amplifiers, a data register, cache, an input-output circuit, and a control logic circuit is configured to output data while simultaneously reading data from the memory array to the data register or simultaneously copying data from the data register to a first level of internal cache. In addition, the memory device is configured to output data while simultaneously writing data from the data register to the memory array.
    Type: Grant
    Filed: July 11, 2005
    Date of Patent: December 29, 2009
    Assignee: Atmel Corporation
    Inventor: Vijay P. Adusumilli
  • Patent number: 7636810
    Abstract: In one embodiment, a system includes a main memory including a compression cache to store uncompressed data, where the compression cache is organized as a sectored cache having on-die associated tags. On a tag match to an associated tag, a hit signal is sent to a memory controller coupled to the main memory to schedule an uncompressed data access from the compression cache. A compressed memory may be present to store a plurality of compressed data. Also, a higher priority may be assigned to read operations of the compressed memory in comparison to other operations to the compressed memory. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: December 22, 2009
    Assignee: Intel Corporation
    Inventor: Siva Ramakrishnan
  • Publication number: 20090313437
    Abstract: In an IPTV network, one or more caches may be provided at the network nodes for storing video content in order to reduce bandwidth requirements. Cache functions such as cache effectiveness and cacheability may be defined and optimized to determine optimal partitioning of cache memory for caching the unicast services of the IPTV network.
    Type: Application
    Filed: August 18, 2009
    Publication date: December 17, 2009
    Applicant: ALCATEL-LUCENT USA INC.
    Inventors: Lev B. Sofman, Bill Krogfoss, Anshul Agrawal
  • Publication number: 20090313436
    Abstract: A cache region can be created in a cache in response to receiving a cache region creation request from an application. A storage request from the application can identify the cache region and one or more objects to be stored in the cache region. Those objects can be stored in the cache region in response to receiving the storage request.
    Type: Application
    Filed: May 14, 2009
    Publication date: December 17, 2009
    Applicant: Microsoft Corporation
    Inventors: Muralidhar Krishnaprasad, Anil Nori, Subramanian Muralidhar, Sudhir Mohan Jorwekar, Lakshmi Suresh Goduguluru
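The create-then-store flow of this abstract maps naturally onto a small API sketch. Method names and the error behavior are invented for illustration; the abstract specifies only that region creation precedes storage requests that name the region:

```python
class RegionCache:
    """Applications first create a named cache region, then store and
    fetch objects within it."""

    def __init__(self):
        self.regions = {}

    def create_region(self, name):
        self.regions.setdefault(name, {})    # the region creation request

    def put(self, region, key, obj):
        if region not in self.regions:
            raise KeyError(f"region {region!r} was never created")
        self.regions[region][key] = obj      # storage request names the region

    def get(self, region, key):
        return self.regions[region].get(key)
```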
  • Patent number: 7627718
    Abstract: A processor having multiple cores and multiple cache segments, each core associated with one of the cache segments, the cache segments interconnected by a data communication ring, and logic to disallow operation of the ring at a startup event and to execute an initialization sequence at one or more of the cores so that each of those cores operates using its associated cache segment as a read-write memory during the initialization sequence.
    Type: Grant
    Filed: December 13, 2006
    Date of Patent: December 1, 2009
    Assignee: Intel Corporation
    Inventors: Vincent J. Zimmer, Michael A. Rothman
  • Patent number: 7627714
    Abstract: An apparatus, system, and method are disclosed for preventing write starvation in a storage controller with access to low performance storage devices. A storage device allocation module is included to assign a storage device write cache limit for each storage device accessible to a storage controller. The storage device write cache limit comprises a maximum amount of write cache of the storage controller available to a storage device for a write operation. At least one storage device comprises a low performance storage device and a total amount of storage available to the storage devices comprises an amount greater than a total storage capacity of the write cache. A low performance write cache limit module is included to set a low performance write cache limit. The low performance write cache limit comprises an amount of write cache available for use by the at least one low performance storage device for a write operation.
    Type: Grant
    Filed: August 22, 2006
    Date of Patent: December 1, 2009
    Assignee: International Business Machines Corporation
    Inventors: Kevin John Ash, Matthew Joseph Kalos, Robert Akira Kubo
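The two limits in this abstract (a per-device write cache limit plus a shared cap for low-performance devices) can be sketched as an admission check. The accounting scheme and names are assumptions; only the two-limit structure comes from the abstract:

```python
class WriteCacheLimiter:
    """Admits a write only if the device stays under its own limit and,
    for low-performance devices, the shared low-performance cap."""

    def __init__(self, low_perf_limit):
        self.low_perf_limit = low_perf_limit   # shared cap for slow devices
        self.device_limit = {}                 # per-device write cache limit
        self.low_perf = set()
        self.used = {}                         # bytes of write cache in use

    def add_device(self, dev, limit, low_performance=False):
        self.device_limit[dev] = limit
        self.used[dev] = 0
        if low_performance:
            self.low_perf.add(dev)

    def try_write(self, dev, nbytes):
        if self.used[dev] + nbytes > self.device_limit[dev]:
            return False                       # device hit its own limit
        if dev in self.low_perf:
            low_total = sum(self.used[d] for d in self.low_perf)
            if low_total + nbytes > self.low_perf_limit:
                return False                   # slow devices share one cap
        self.used[dev] += nbytes
        return True
```

Bounding the slow devices' share is what prevents their slow destages from starving writes to faster devices.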
  • Publication number: 20090292880
    Abstract: A cache memory system controlled by an arbiter includes a memory unit having a cache memory whose capacity is changeable, and an invalidation processing unit that requests invalidation of data stored at a position where invalidation is performed when the capacity of the cache memory is changed in accordance with a change instruction. The invalidation processing unit includes an increasing/reducing processing unit that sets an index to be invalidated in accordance with a capacity before change and a capacity after change and requests the arbiter to invalidate the set index, and an index converter that selects either an index based on the capacity before change or an index based on the capacity after change associated with an access address from the arbiter, and the capacity of the cache memory can be changed while maintaining the number of ways of the cache memory.
    Type: Application
    Filed: April 30, 2009
    Publication date: November 26, 2009
    Inventor: Hiroyuki USUI
  • Patent number: 7624235
    Abstract: In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory to store a cache block being cached or to store data that is not being cache but is being staged for retransmission on an interface to which the cache is coupled.
    Type: Grant
    Filed: November 30, 2006
    Date of Patent: November 24, 2009
    Assignee: Apple Inc.
    Inventors: Ruchi Wadhawan, Jason M. Kassoff, George Kong Yiu
  • Publication number: 20090271573
    Abstract: A system and method for decreasing system management data access time. A system includes a device, a cache memory coupled to the device, and a cache memory refresh controller. The device provides system management information. The cache memory stores system management information. The system management information stored in the cache is partitioned into a first portion and a second portion. The cache refresh program refreshes the system management information stored in the cache memory. The first portion is refreshed after expiration of a predetermined refresh time interval. The second portion is refreshed when the second portion is accessed.
    Type: Application
    Filed: June 24, 2008
    Publication date: October 29, 2009
    Inventor: Shivkumar Kannan
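The split refresh policy in this abstract can be modeled compactly: one portion of the cached system-management data refreshes on a timer, the other refreshes lazily on access. Which keys belong to which portion, and all names, are assumptions for the sketch:

```python
import time

class SysInfoCache:
    """First portion: refreshed when the timer interval expires.
    Second portion: refreshed whenever it is accessed."""

    def __init__(self, device, interval, timed_keys):
        self.device = device                     # callable: key -> fresh value
        self.interval = interval                 # refresh timer for portion one
        self.timed_keys = set(timed_keys)        # the first portion
        self.data = {k: device(k) for k in self.timed_keys}
        self.last_timed_refresh = time.monotonic()

    def get(self, key):
        if time.monotonic() - self.last_timed_refresh >= self.interval:
            for k in self.timed_keys:            # portion one: timed refresh
                self.data[k] = self.device(k)
            self.last_timed_refresh = time.monotonic()
        if key not in self.timed_keys:           # portion two: refresh on access
            self.data[key] = self.device(key)
        return self.data[key]
```

Stable data (e.g. a model number) suits the timed portion; volatile data (e.g. a sensor reading) suits the on-access portion.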
  • Patent number: 7606977
    Abstract: A multi-threaded processor adapted to couple to external memory comprises a controller and data storage operated by the controller. The data storage comprises a first portion and a second portion, and wherein only one of the first or second portions is active at a time, the non-active portion being unusable. When the active portion does not have sufficient capacity for additional data to be stored therein, the other portion becomes the active portion. Upon a thread switch from a first thread to a second thread, only one of the first or second portions is cleaned to the external memory if one of the first or second portions does not contain valid data.
    Type: Grant
    Filed: July 25, 2005
    Date of Patent: October 20, 2009
    Assignee: Texas Instruments Incorporated
    Inventors: Jean-Philippe Lesot, Gilbert Cabillic
  • Publication number: 20090248986
    Abstract: A novel and useful mechanism enabling the partitioning of a normally shared L1 data cache into several independent caches, wherein each cache is dedicated to a specific data type. To further optimize performance, each individual L1 data cache is placed in relatively close physical proximity to its associated register files and functional unit. By implementing separate independent L1 data caches, the content-based data cache mechanism of the present invention increases the total size of the L1 data cache without increasing the time necessary to access data in the cache. Data compression and bus compaction techniques that are specific to a certain format can be applied to each individual cache with greater efficiency, since the data in each cache is of a uniform type.
    Type: Application
    Filed: March 26, 2008
    Publication date: October 1, 2009
    Inventors: Daniel Citron, Moshe Klausner
  • Patent number: 7596664
    Abstract: This invention describes a two-level cache management method for continuous media files on a proxy server. At the first level, the method reserves collapsed buffers in the cache for every active client attended by the proxy server. To save bandwidth and memory space, collapsed buffers can be concatenated and overlapped when their content belongs to the same continuous media file. The proxy collectively manages the collapsed buffers of each client, which cooperate by making their content available to the whole system, reducing traffic over the communication network and on the media-on-demand server. At the second level, the method allows proxy servers to cooperate among themselves by concatenating collapsed buffers when necessary, increasing the amount of available shared media in the cache and saving bandwidth both on the media-on-demand server and on the communication network backbone.
    Type: Grant
    Filed: September 30, 2002
    Date of Patent: September 29, 2009
    Assignee: COPPE/UFRJ
    Inventors: Edison Ishikawa, Cláudio Luis Amorim
  • Patent number: 7596665
    Abstract: The present invention provides a mechanism for a processor to write data to a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. The locking cache or other fast memory can be used as additional system memory. In an embodiment of the invention, the locking cache is one or more sets of ways, but not all of the sets or ways, of a multiple set associative cache.
    Type: Grant
    Filed: October 18, 2007
    Date of Patent: September 29, 2009
    Assignee: International Business Machines Corporation
    Inventors: Michael Norman Day, Charles Johns, Thuong Truong
  • Publication number: 20090240891
    Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array, the method including: clearing cache directories associated with the processor to an initial state; obtaining a selected directory state from a control register preloaded by a service processor; in response to the control register including the desired cache state, sending load commands with an address and data; loading cache lines and cache line directory entries into the cache; and storing the specified data in the corresponding cache line.
    Type: Application
    Filed: March 19, 2008
    Publication date: September 24, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gary E. Strait, Deanna P. Dunn, Michael F. Fee, Pak-kin Mak, Robert J. Sonnelitter, III
  • Publication number: 20090235014
    Abstract: A computing system and a memory device are provided. The memory device includes a first memory having a first storage capacity, a second memory having a second storage capacity greater than the first storage capacity, and a controller to provide an external host with an address space corresponding to a third storage capacity, the third storage capacity being less than a sum of the first storage capacity and second storage capacity, wherein the controller, where data requested from the external host is stored in the first memory, transmits the requested data to the external host from the first memory, and where the requested data is not stored in the first memory, transmits the requested data to the external host from the second memory.
    Type: Application
    Filed: July 21, 2008
    Publication date: September 17, 2009
    Inventors: Keun Soo YIM, Jae Cheol Son, Bong Young Chung
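The read path in this abstract (serve from the small fast first memory when the data is there, else fall back to the larger second memory, behind an address space smaller than the two combined) can be sketched as follows. The capacities and dictionary-based model are assumptions for illustration:

```python
class TieredDevice:
    """Controller exposing an address space smaller than the sum of its
    two memories; reads prefer the fast first memory."""

    def __init__(self, first_cap, second_cap, exposed_cap):
        assert exposed_cap < first_cap + second_cap
        self.first = {}                      # small, fast first memory
        self.second = {}                     # larger, denser second memory
        self.exposed_cap = exposed_cap       # the third storage capacity

    def read(self, addr):
        if addr >= self.exposed_cap:
            raise IndexError("beyond exposed address space")
        if addr in self.first:               # hit in the fast first memory
            return self.first[addr]
        return self.second.get(addr)         # fall back to second memory
```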
  • Patent number: 7590802
    Abstract: The present invention provides a mechanism of storing data transferred from an I/O device, a network, or a disk into a portion of a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. In an embodiment of the invention, a processor can write data to the cache or other fast memory without also writing it to main memory. The portion of the cache or other fast memory can be used as additional system memory.
    Type: Grant
    Filed: October 19, 2007
    Date of Patent: September 15, 2009
    Assignee: International Business Machines Corporation
    Inventors: Michael Norman Day, Charles Johns, Thuong Truong
  • Publication number: 20090228658
    Abstract: A management method for a cache memory of a storage apparatus including: at least one volume for storing data to be accessed from a computer via a network; and a cache memory, to which an area for holding the data to be stored in said at least one volume is allocated for every said at least one volume, including the steps of: referring to a relation between predetermined volumes; and allocating an area, in which the data to be stored in a volume are held, to said volume on the basis of the relation between said volumes.
    Type: Application
    Filed: May 19, 2009
    Publication date: September 10, 2009
    Inventors: Takayuki NAGAI, Masayuki Yamamoto, Masayasu Asano
  • Publication number: 20090228657
    Abstract: An apparatus includes a vector unit to process vector data, a cache memory including a plurality of cache lines that store divisional data sent from a main memory, the vector data having been divided into the divisional data according to the capacity of a cache line, and a cache controller to send all of the divisional data, as the vector data, to the vector unit after the cache lines have stored all of the divisional data constituting the vector data.
    Type: Application
    Filed: February 6, 2009
    Publication date: September 10, 2009
    Applicant: NEC Corporation
    Inventor: Takashi Hagiwara
  • Patent number: 7584327
    Abstract: Embodiments of the invention relate to a method and system for caching data in a multiple-core system with shared cache. According to the embodiments, data used by the cores may be classified as being of one of predetermined types. The classification may enable efficiencies to be realized by performing different types of handling corresponding to different data types. For example, data classified as likely to be re-used may be stored in a shared cache, in a region of the shared cache that is closest to a core using the data. By storing the data this way, access time and energy consumption may be reduced if the data is subsequently retrieved for use by the core.
    Type: Grant
    Filed: December 30, 2005
    Date of Patent: September 1, 2009
    Assignee: Intel Corporation
    Inventors: Yen-Kuang Chen, Christopher J. Hughes
  • Patent number: 7581066
    Abstract: One embodiment of the invention employs techniques for providing isolation for exclusivity of operation. Isolation may exist between different applications and/or between different threads or virtual machines of the same application. In one embodiment, using a lock helps to ensure that operations are executed exclusive of each other.
    Type: Grant
    Filed: April 29, 2005
    Date of Patent: August 25, 2009
    Assignee: SAP AG
    Inventors: Dirk Marwinski, Petio G. Petev
  • Patent number: 7577854
    Abstract: An information storage apparatus capable of setting the number and sizes of partitioned areas resulting from partitioning a memory area based on a user's intention is provided. For this purpose, an information storage apparatus having a plurality of partitioned areas of different security levels in a memory area is provided with an area control section that controls addresses of partitioned areas in the memory area, an area update condition control section that controls update conditions when the number or sizes of partitioned areas are updated, an area update decision section that decides whether a partition request requesting updating of the number or sizes of partitioned areas satisfies the update conditions and an area update section that executes, when the partition request satisfies the update conditions, updating of the partitioned areas in the memory area according to the partition request.
    Type: Grant
    Filed: August 6, 2004
    Date of Patent: August 18, 2009
    Assignee: Panasonic Corporation
    Inventors: Masamoto Tanabiki, Kazunori Inoue, Hayashi Ito
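A minimal sketch of the area-update decision above, assuming the update conditions can be expressed as a predicate over the current and requested partition layouts (all names are illustrative, not from the patent):

```python
def update_partitions(areas, request, update_conditions):
    """Apply a partition request only if it satisfies the update conditions.

    areas: list of (security_level, size) partitioned areas; request: the
    proposed replacement layout; update_conditions: predicate playing the role
    of the area update condition control section.
    """
    if not update_conditions(areas, request):
        return areas                 # request rejected, layout unchanged
    return list(request)             # request accepted, areas repartitioned
```

One plausible condition, used below, is that repartitioning may not change the total size of the memory area.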
  • Publication number: 20090198844
    Abstract: A programmable controller includes a CPU unit, a communication unit and peripheral units connected together through an internal bus. The communication unit has a bus master function and includes a cache memory for recording IO data stored in the memory of an input-output unit. When a message is received, it is judged whether the IO data stored in the memory of the input-output unit specified by the message have been updated or not. If the data have not been updated, a response is created based on the IO data stored in the cache memory. If the data have been updated, the input-output unit is accessed, the updated IO data are obtained, and a response is created based on the obtained IO data.
    Type: Application
    Filed: February 6, 2009
    Publication date: August 6, 2009
    Applicant: OMRON CORPORATION
    Inventor: Shinichiro Kawaguchi
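The message-handling path — answer from the cache when the unit's IO data is unchanged, otherwise re-read the unit — might look like the following sketch; the callback-based `IOCache` interface is an assumption for illustration, not the patent's design:

```python
class IOCache:
    """Caches IO data per unit; re-reads the unit only when its data changed."""
    def __init__(self, read_unit):
        self.read_unit = read_unit   # callback that accesses the input-output unit
        self.cache = {}              # unit id -> cached IO data
        self.dirty = set()           # units whose memory was updated since caching

    def mark_updated(self, unit_id):
        self.dirty.add(unit_id)

    def respond(self, unit_id):
        # If the unit's data is unchanged, create the response from the cache
        if unit_id in self.cache and unit_id not in self.dirty:
            return self.cache[unit_id]
        # Otherwise access the unit, refresh the cache, then respond
        data = self.read_unit(unit_id)
        self.cache[unit_id] = data
        self.dirty.discard(unit_id)
        return data
```

The point of the design is visible in the test: a second message for unchanged data never touches the internal bus.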
  • Publication number: 20090198901
    Abstract: A computer system includes a main memory for storing a large amount of data, a cache memory that can be accessed at a higher speed than the main memory, a memory replacement controller for controlling the replacement of data between the main memory and the cache memory, and a memory controller capable of allocating one or more divided portions of the cache memory to each process unit. The memory replacement controller stores priority information for each process unit, and replaces lines of the cache memory based on a replacement algorithm taking the priority information into consideration, wherein the divided portions of the cache memory are allocated so that the storage area is partially shared between process units, after which the allocated amounts of cache memory are changed automatically.
    Type: Application
    Filed: October 8, 2008
    Publication date: August 6, 2009
    Inventor: Yoshihiro Koga
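A replacement algorithm that takes per-process-unit priority into consideration, as this abstract describes, can be sketched as follows. The lowest-priority/least-recently-used victim rule is one plausible reading, not the patent's exact algorithm, and partition sharing is collapsed into a single line map:

```python
import itertools

class PriorityCache:
    """Fixed-size cache that evicts the lowest-priority, least-recent line first."""
    def __init__(self, capacity, priority_of):
        self.capacity = capacity
        self.priority_of = priority_of   # process unit -> priority (higher wins)
        self.clock = itertools.count()
        self.lines = {}                  # key -> (owning process unit, last-use tick)

    def access(self, key, owner):
        if key not in self.lines and len(self.lines) >= self.capacity:
            # Victim: lowest owner priority, ties broken by oldest use
            victim = min(self.lines,
                         key=lambda k: (self.priority_of(self.lines[k][0]),
                                        self.lines[k][1]))
            del self.lines[victim]
        self.lines[key] = (owner, next(self.clock))
```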
  • Patent number: 7565490
    Abstract: Circuits, methods, and apparatus that provide an L2 cache that services requests out of order. This L2 cache processes requests that are hits without waiting for data corresponding to requests that are misses to be returned from a graphics memory. A first auxiliary memory, referred to as a side pool, is used for holding subsequent requests for data at a specific address while a previous request for data at that address is serviced by a frame buffer interface and graphics memory. This L2 cache may also use a second auxiliary memory, referred to as a take pool, to store requests or pointers to data that is ready to be retrieved from an L2 cache.
    Type: Grant
    Filed: December 20, 2005
    Date of Patent: July 21, 2009
    Assignee: NVIDIA Corporation
    Inventors: Christopher D. S. Donham, John S. Montrym, Patrick R. Marchand
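The side-pool mechanism — servicing hits immediately while queuing repeat requests for an address whose miss is already in flight — might be modeled like this simplified sketch (the take pool and frame buffer interface are omitted; all names are illustrative):

```python
class L2Cache:
    """Hits are answered immediately; requests for an in-flight miss wait in a
    per-address side pool instead of stalling later, unrelated hits."""
    def __init__(self, backing):
        self.backing = backing       # address -> data, stands in for graphics memory
        self.lines = {}              # resident cache lines
        self.side_pool = {}          # address -> requesters waiting on that miss

    def request(self, addr, requester, deliver):
        if addr in self.lines:
            deliver(requester, self.lines[addr])    # hit: serviced out of order
        elif addr in self.side_pool:
            self.side_pool[addr].append(requester)  # miss already in flight
        else:
            self.side_pool[addr] = [requester]      # first miss: start the fetch

    def fill(self, addr, deliver):
        """Data returned from memory: cache it and drain the side pool."""
        self.lines[addr] = self.backing[addr]
        for requester in self.side_pool.pop(addr, []):
            deliver(requester, self.lines[addr])
```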
  • Patent number: 7565492
    Abstract: A method for managing a cache is disclosed. A context switch is identified. It is determined whether an application running after the context switch requires protection. Upon determining that the application requires protection, the cache is partitioned. According to an aspect of the present invention, a partitioned section of the cache is completely overwritten with data associated with the application. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 31, 2006
    Date of Patent: July 21, 2009
    Assignee: Intel Corporation
    Inventors: Francis X. Mckeen, Leena K. Puthiyedath, Ernie Brickell, James B. Crossland
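A toy model of that context-switch path: a protected application receives a partition whose previous contents are completely overwritten, so nothing from the prior occupant can leak through residual cache state. The half-cache partition size and the fill policy are illustrative assumptions:

```python
class ProtectedCache:
    """Cache model where a protected application gets a private partition
    whose previous contents are completely overwritten on the switch."""
    def __init__(self, size):
        self.shared = [None] * size      # whole cache, shared by default

    def context_switch(self, app, needs_protection, app_data=None):
        if not needs_protection:
            return self.shared           # unprotected apps keep sharing the cache
        # Partition: reserve half the cache for the protected application
        half = len(self.shared) // 2
        partition = [None] * half
        # Completely overwrite the partition with the application's own data
        for i in range(half):
            partition[i] = app_data[i % len(app_data)] if app_data else 0
        return partition
```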
  • Publication number: 20090182946
    Abstract: A method and system for optimizing resource usage in an information retrieval system. Meta information in query results describes data items identified by identifiers. A chunk of the identifiers and a set of meta information are loaded into a first cache and a second cache, respectively. A portion of the set of meta information is being viewed by a user. The portion describes a data item identified by an identifier included in the chunk and in a sub-chunk of identifiers that identifies data items described by the set of meta information. If a position of the identifier in the sub-chunk satisfies a first criterion, then a second set of meta information is preloaded into the second cache. If a position of the identifier in the chunk satisfies a second criterion, then a second chunk of the identifiers is preloaded into the first cache.
    Type: Application
    Filed: January 15, 2008
    Publication date: July 16, 2009
    Inventors: Nianjun Zhou, Dikran S. Meliksetian, Yang Sun, Chuan Yang
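The two preload criteria can be sketched as position thresholds within the current sub-chunk (triggering a meta-information preload into the second cache) and within the current chunk (triggering an identifier preload into the first cache). The 0.8 threshold and the function name are assumptions for illustration:

```python
def preload_decisions(pos_in_subchunk, subchunk_len, pos_in_chunk, chunk_len,
                      threshold=0.8):
    """Decide which caches to refill as the user nears the end of loaded data.

    Returns (preload_meta, preload_ids): preload the next set of meta
    information when the viewed item is late in the current sub-chunk, and
    the next chunk of identifiers when it is late in the current chunk.
    """
    preload_meta = pos_in_subchunk / subchunk_len >= threshold
    preload_ids = pos_in_chunk / chunk_len >= threshold
    return preload_meta, preload_ids
```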
  • Publication number: 20090182947
    Abstract: Embodiments of the present invention provide methods and systems for tuning the size of the cache. In particular, when a page fault occurs, non-resident page data is checked to determine if that page was previously accessed. If the page is found in the non-resident page data, an inter-reference distance for the faulted page is determined and the distance of the oldest resident page is determined. The size of the cache may then be tuned based on comparing the inter-reference distance of the newly faulted page relative to the distance of the oldest resident page.
    Type: Application
    Filed: March 20, 2009
    Publication date: July 16, 2009
    Applicant: RED HAT, INC.
    Inventor: Henri Han van RIEL
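One way to read the tuning rule: if the refaulted page's inter-reference distance is no greater than that of the oldest resident page, a slightly larger cache would have retained it, so the cache grows. A hedged sketch, where the single-page growth step is an assumption:

```python
def tune_cache_size(cache_size, faulted_distance, oldest_resident_distance,
                    step=1):
    """Grow the cache when a refaulted page was evicted 'too recently'.

    faulted_distance: inter-reference distance of the page that just faulted,
    taken from non-resident page tracking (None if never seen before);
    oldest_resident_distance: distance of the oldest page still resident.
    """
    if faulted_distance is None:
        return cache_size                # first-ever fault: no tuning signal
    if faulted_distance <= oldest_resident_distance:
        return cache_size + step         # a larger cache would have hit
    return cache_size
```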
  • Patent number: 7562190
    Abstract: A proximity interconnect module includes a plurality of processors operatively connected to a plurality of off-chip cache memories by proximity communication. Due to the high bandwidth capability of proximity interconnect, enhancements to the cache protocol to improve latency may be made despite resulting increased bandwidth consumption.
    Type: Grant
    Filed: June 17, 2005
    Date of Patent: July 14, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: Michael J. Koster, Brian W. O'Krafka
  • Publication number: 20090177842
    Abstract: A data processing system for processing at least one application is provided. The data processing system comprises a processor (100) for executing the application. The system furthermore comprises a cache memory (200) being associated to the processor (100) for caching data and/or instructions for the processor (100). The system furthermore comprises a memory unit (400) for storing data and/or instructions for the application. The memory unit (400) comprises a plurality of memory partitions (401-404). Data with similar data attributes are stored in the same memory partition (401-404). A predefined prefetching pattern is associated to each of the memory partitions (401-404).
    Type: Application
    Filed: February 26, 2007
    Publication date: July 9, 2009
    Applicant: NXP B.V.
    Inventor: Milind Manohar Kulkarni
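Partition-associated prefetching, as described above, reduces to looking up the accessed address's memory partition and emitting that partition's predefined pattern of offsets. A minimal sketch with illustrative lookup functions:

```python
def prefetch_addresses(addr, partition_of, pattern_of):
    """Return the addresses to prefetch after an access, using the prefetching
    pattern associated with the accessed address's memory partition.

    partition_of: address -> partition id; pattern_of: partition id -> list of
    address offsets (the partition's predefined prefetching pattern).
    """
    partition = partition_of(addr)
    return [addr + offset for offset in pattern_of(partition)]
```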
  • Patent number: 7558919
    Abstract: Described are techniques for determining a cache slot. A set of criteria for each of a plurality of families is received. A received data operation associated with a first of said plurality of families is obtained. It is determined, in accordance with the criteria associated with the received data operation, whether to allocate a cache slot in the cache for the received data operation. The criteria for the first family includes a minimum value and a maximum value used in determining a cache partition size range for the first family. The maximum value is used in determining a maximum cache partition size allowable for the first family.
    Type: Grant
    Filed: October 19, 2005
    Date of Patent: July 7, 2009
    Assignee: EMC Corporation
    Inventors: Yechiel Yochai, David Shadmon, Josef Ezra, Amnon Naamad, Lee W. Sapiro, Orit Levin-Michael
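A minimal sketch of slot allocation bounded by a per-family maximum partition size; the `(min, max)` criteria tuple and grant logic are illustrative, with the minimum read as a guaranteed floor rather than enforced here:

```python
def allocate_slot(family, partition_sizes, criteria):
    """Decide whether a data operation's family may take another cache slot.

    criteria maps family -> (min_slots, max_slots), the cache partition size
    range; partition_sizes maps family -> slots currently held.
    """
    held = partition_sizes.get(family, 0)
    lo, hi = criteria[family]
    if held >= hi:
        return False                     # family at its maximum partition size
    partition_sizes[family] = held + 1   # grant a slot within the range
    return True
```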
  • Patent number: 7558937
    Abstract: A disk array device having a plurality of hard disk units has a large-capacity memory mounted on a controller module which controls the whole device. The large-capacity memory has a system area managed by an OS and a cache area serving as a cache memory, and in addition, it has a table area which stores management/control information of the device and whose area size is changeable at an arbitrary instant. Therefore, it is possible to change the table area according to the state of the device while it is active, without powering the device off and on, so that an area not in use in the table area can be released for use as the cache memory. This makes it possible to appropriately vary the sizes of the table area and the cache area while the device is in operation, thereby realizing effective use of the large-capacity memory.
    Type: Grant
    Filed: January 31, 2005
    Date of Patent: July 7, 2009
    Assignee: Fujitsu Limited
    Inventors: Kazuo Nakashima, Osamu Kimura, Koji Uchida, Akihito Kobayashi
  • Publication number: 20090172449
    Abstract: Disclosed herein are approaches to reducing a guardband (margin) used for minimum voltage supply (Vcc) requirements for memory such as cache.
    Type: Application
    Filed: December 26, 2007
    Publication date: July 2, 2009
    Inventors: Ming Zhang, Chris Wilkerson, Greg Taylor, Randy J. Aksamit, James Tschanz