Multi-user, Multiprocessor, Multiprocessing Cache Systems (epo) Patents (Class 711/E12.023)
  • Publication number: 20090313435
    Abstract: In one embodiment, the present invention includes a directory to aid in maintaining control of a cache coherency protocol. The directory can be coupled to multiple caching agents via an interconnect, and be configured to store entries associated with cache lines. The directory also includes logic to determine a time delay before the directory can send a concurrent snoop request. Other embodiments are described and claimed.
    Type: Application
    Filed: June 13, 2008
    Publication date: December 17, 2009
    Inventors: Hariharan Thantry, Akhilesh Kumar, Seungjoon Park
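    The delayed concurrent snoop described above can be illustrated in a few lines. Below is a minimal C sketch of a directory entry plus a delay rule, assuming a hop-based interconnect latency; the names (dir_entry, compute_snoop_delay) and the specific rule are hypothetical, not taken from the patent.
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical directory entry: one per tracked cache line. */
    struct dir_entry {
        uint64_t tag;        /* cache-line address tag                     */
        uint8_t  sharers;    /* bitmask of caching agents holding the line */
        bool     pending;    /* a request for this line is still in flight */
    };

    /* Cycles the directory waits before sending a concurrent snoop for a
     * line with a request in flight; the per-hop cost is illustrative. */
    static uint32_t compute_snoop_delay(const struct dir_entry *e,
                                        uint32_t hops_to_requester,
                                        uint32_t cycles_per_hop)
    {
        if (!e->pending)
            return 0;                   /* no conflict: snoop immediately */
        return hops_to_requester * cycles_per_hop;
    }
    ```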
  • Publication number: 20090287885
    Abstract: Administering non-cacheable memory load instructions in a computing environment where cacheable data is produced and consumed in a coherent manner without harming performance of a producer, the environment including a hierarchy of computer memory that includes one or more caches backed by main memory, the caches controlled by a cache controller, at least one of the caches configured as a write-back cache. Embodiments of the present invention include receiving, by the cache controller, a non-cacheable memory load instruction for data stored at a memory address, the data treated by the producer as cacheable; determining, by the cache controller from a cache directory, whether the data is cached; if the data is cached, returning the data at the memory address from the write-back cache without affecting the write-back cache's state; and if the data is not cached, returning the data from main memory without affecting the write-back cache's state.
    Type: Application
    Filed: May 15, 2008
    Publication date: November 19, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jon K. Kriegel, Jamie R. Kuesel
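    The load path in the abstract reduces to a simple branch. A minimal C sketch, assuming hypothetical controller hooks (directory_lookup, cache_peek, main_memory_read) that stand in for hardware:
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    bool     directory_lookup(uint64_t addr);  /* is the line cached?        */
    uint64_t cache_peek(uint64_t addr);        /* read the cached copy without
                                                  touching LRU or dirty bits */
    uint64_t main_memory_read(uint64_t addr);

    /* Non-cacheable load of data the producer treats as cacheable: serve
     * from the write-back cache on a hit, from memory on a miss, and in
     * neither case change the cache's state. */
    uint64_t noncacheable_load(uint64_t addr)
    {
        if (directory_lookup(addr))
            return cache_peek(addr);
        return main_memory_read(addr);   /* no allocation on the miss */
    }
    ```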
  • Publication number: 20090271573
    Abstract: A system and method for decreasing system management data access time. A system includes a device, a cache memory coupled to the device, and a cache memory refresh controller. The device provides system management information, and the cache memory stores it, partitioned into a first portion and a second portion. The cache refresh controller refreshes the system management information stored in the cache memory: the first portion is refreshed after expiration of a predetermined refresh time interval, and the second portion is refreshed when it is accessed.
    Type: Application
    Filed: June 24, 2008
    Publication date: October 29, 2009
    Inventor: Shivkumar Kannan
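    The two refresh policies can be sketched directly. A minimal C illustration, assuming hypothetical hooks (refresh_portion1/2) for re-reading the device; the structure names are invented:
    ```c
    #include <time.h>

    struct smi_cache {
        time_t last_timed_refresh;
        time_t refresh_interval;   /* predetermined interval for portion 1 */
    };

    void refresh_portion1(struct smi_cache *c);  /* re-read from the device */
    void refresh_portion2(struct smi_cache *c);

    void on_tick(struct smi_cache *c)            /* timer-driven: portion 1 */
    {
        time_t now = time(NULL);
        if (now - c->last_timed_refresh >= c->refresh_interval) {
            refresh_portion1(c);
            c->last_timed_refresh = now;
        }
    }

    void on_access_portion2(struct smi_cache *c) /* access-driven: portion 2 */
    {
        refresh_portion2(c);
        /* ...then serve the freshly refreshed data... */
    }
    ```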
  • Publication number: 20090248986
    Abstract: A novel and useful mechanism enabling the partitioning of a normally shared L1 data cache into several independent caches, each dedicated to a specific data type. To further optimize performance, each individual L1 data cache is placed in relatively close physical proximity to its associated register files and functional unit. By implementing separate independent L1 data caches, the content-based data cache mechanism of the present invention increases the total size of the L1 data cache without increasing the time necessary to access data in the cache. Data compression and bus compaction techniques that are specific to a certain format can be applied to each individual cache with greater efficiency, since the data in each cache is of a uniform type.
    Type: Application
    Filed: March 26, 2008
    Publication date: October 1, 2009
    Inventors: Daniel Citron, Moshe Klausner
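    Routing by data type is the core of the mechanism. A minimal C sketch, where the data kinds, the l1_by_kind table, and select_l1 are all hypothetical; the physical placement next to register files is a layout property code cannot capture:
    ```c
    enum data_kind { KIND_INT, KIND_FLOAT, KIND_VECTOR, KIND_COUNT };

    struct l1_cache;                             /* opaque per-type cache */
    extern struct l1_cache *l1_by_kind[KIND_COUNT];

    /* Each access goes to the independent L1 partition dedicated to its
     * data type, so per-type compression can be applied inside each one. */
    static struct l1_cache *select_l1(enum data_kind k)
    {
        return l1_by_kind[k];
    }
    ```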
  • Publication number: 20090182940
    Abstract: A storage control system in which a first storage controller is connected to a storage device in a second storage controller, and the first storage controller is configured to be able to read and write data from/to the storage device in the second storage controller in response to a request from a host device connected to the first storage controller. The first storage controller includes: a controller for controlling data transmission and reception between the host device and the storage device in the second storage controller; and a cache memory for temporarily storing the data, wherein the controller sets a threshold value for the storage capacity in the cache memory assigned to the storage device according to the properties of the storage device.
    Type: Application
    Filed: December 29, 2008
    Publication date: July 16, 2009
    Inventors: Jun Matsuda, Mikio Fukuoka, Keishi Tamura
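    A property-driven threshold might look like the following C sketch; the latency property and the half/eighth split are invented examples, not values from the patent:
    ```c
    #include <stdint.h>

    struct storage_device {
        uint32_t avg_latency_us;  /* property of the external device */
    };

    /* Cache-capacity threshold assigned to the device in the first
     * controller's cache memory. */
    static uint64_t cache_threshold(const struct storage_device *d,
                                    uint64_t total_cache_bytes)
    {
        if (d->avg_latency_us > 1000)
            return total_cache_bytes / 2;  /* slow device: larger share  */
        return total_cache_bytes / 8;      /* fast device: smaller share */
    }
    ```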
  • Publication number: 20090177841
    Abstract: Techniques for maintaining consistent replicas of data are disclosed. By way of example, a method for managing copies of objects within caches, in a system including multiple caches, includes the following steps. Consistent copies of objects are maintained within the caches. A home cache for each object is maintained, wherein the home cache maintains information identifying other caches likely containing a copy of the object. In response to a request to update an object, the home cache for the object is contacted to identify other caches which might have copies of the object.
    Type: Application
    Filed: January 9, 2008
    Publication date: July 9, 2009
    Inventors: Judah M. Diament, Arun Kwangil Iyengar, Thomas A. Mikalsen, Isabelle Marie Rouvellou
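    The home-cache lookup on update is easy to sketch. A minimal C version, with hypothetical helpers (home_lookup, invalidate_copy) and a fixed-size holder list standing in for the real bookkeeping:
    ```c
    #include <stddef.h>

    #define MAX_CACHES 16

    struct home_record {
        int    likely_holders[MAX_CACHES]; /* caches likely holding a copy */
        size_t n_holders;
    };

    struct home_record *home_lookup(const char *object_id);
    void invalidate_copy(int cache_id, const char *object_id);

    /* On update, the home cache names the caches that might hold copies,
     * and each is invalidated before the new value is installed. */
    void update_object(const char *object_id)
    {
        struct home_record *h = home_lookup(object_id);
        for (size_t i = 0; i < h->n_holders; i++)
            invalidate_copy(h->likely_holders[i], object_id);
        /* ...then apply the update at the home cache... */
    }
    ```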
  • Publication number: 20090150619
    Abstract: A multiprocessor system 1 comprises a plurality of processors 21 to 25, a system bus 30 and a main system memory 40. Each processor 21 to 25 is connected to a respective cache memory 41 to 45, with each cache memory 41 to 45 in turn being connected to the system bus 30. The cache memories 41 to 45 store copies of data or instructions that are used frequently by the respective processors 21 to 25, thereby eliminating the need for the processors 21 to 25 to access the main system memory 40 during each read or write operation. Processor 25 is connected to a local memory 50 having a plurality of data blocks (not shown). According to the invention, the local memory 50 has a first port 51 for connection to its respective processor 25. In addition, the local memory 50 has a second port 52 connected to the system bus 30, thereby allowing one or more of the other processors 21 to 24 to access the local memory 50.
    Type: Application
    Filed: November 8, 2005
    Publication date: June 11, 2009
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS, N.V.
    Inventor: Jan Hoogerbrugge
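    A software model of the dual-ported local memory, with a mutex standing in for hardware port arbitration; all names are invented for illustration:
    ```c
    #include <pthread.h>
    #include <stdint.h>

    struct local_memory {
        uint8_t data[4096];
        pthread_mutex_t arb;   /* models arbitration between the two ports */
    };

    uint8_t cpu_port_read(struct local_memory *m, uint32_t addr) /* port 51 */
    {
        pthread_mutex_lock(&m->arb);
        uint8_t v = m->data[addr];       /* owning processor's direct path */
        pthread_mutex_unlock(&m->arb);
        return v;
    }

    void bus_port_write(struct local_memory *m, uint32_t addr, uint8_t v) /* port 52 */
    {
        pthread_mutex_lock(&m->arb);
        m->data[addr] = v;               /* another processor, via the bus */
        pthread_mutex_unlock(&m->arb);
    }
    ```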
  • Publication number: 20090119457
    Abstract: A multithreaded clustered microarchitecture with dynamic back-end assignment is presented. A processing system may include a plurality of instruction caches and front-end units each to process an individual thread from a corresponding one of the instruction caches, a plurality of back-end units, and an interconnect network to couple the front-end and back-end units. A method may include measuring a performance metric of a back-end unit, comparing the measurement to a first value, and reassigning, or not, the back-end unit according to the comparison. Computer systems according to embodiments of the invention may include: a random access memory; a system bus; and a processor having a plurality of instruction caches, a plurality of front-end units each to process an individual thread from a corresponding one of the instruction caches; a plurality of back-end units; and an interconnect network coupled to the plurality of front-end units and the plurality of back-end units.
    Type: Application
    Filed: January 9, 2009
    Publication date: May 7, 2009
    Applicant: INTEL CORPORATION
    Inventors: Fernando LATORRE, Jose GONZALEZ, Antonio GONZALEZ
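    The measure/compare/reassign loop can be shown compactly. A C sketch under invented hooks (measure_backend_metric, reassign_backend, most_loaded_frontend); the metric and threshold semantics are assumptions:
    ```c
    #include <stdint.h>

    uint32_t measure_backend_metric(int backend_id); /* e.g. IPC x100 */
    void     reassign_backend(int backend_id, int new_frontend_id);
    int      most_loaded_frontend(void);

    /* Compare the back-end's metric to a first value and reassign it,
     * or not, according to the comparison. */
    void rebalance(int backend_id, uint32_t threshold)
    {
        if (measure_backend_metric(backend_id) < threshold)
            reassign_backend(backend_id, most_loaded_frontend());
        /* otherwise the current front-end keeps the back-end */
    }
    ```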
  • Publication number: 20090106522
    Abstract: An electronic system is provided, including: powering a computing integrated circuit device having a first processor device and a second processor device; generating an address transform for the first processor device and the second processor device; operating software code having a first processor address for the first processor device and a second processor address for the second processor device, wherein the software code provides a display or actuates a mechanical device; mapping the first processor address with the address transform to the second processor address; and reconfiguring the address transform.
    Type: Application
    Filed: October 18, 2007
    Publication date: April 23, 2009
    Applicants: SONY CORPORATION, SONY ELECTRONICS INC.
    Inventor: Yosuke Muraki
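    One plausible shape for the address transform is a small table of remapping windows. A C sketch; the window structure, table size, and identity fallback are all assumptions:
    ```c
    #include <stdint.h>
    #include <stddef.h>

    struct xform_window { uint64_t p1_base, p2_base, size; };

    static struct xform_window xform[4];     /* the address transform */

    /* Map a first-processor address to the second processor's view. */
    uint64_t map_p1_to_p2(uint64_t a)
    {
        for (size_t i = 0; i < 4; i++) {
            const struct xform_window *w = &xform[i];
            if (a >= w->p1_base && a < w->p1_base + w->size)
                return w->p2_base + (a - w->p1_base);
        }
        return a;                            /* no window: identity */
    }

    /* Reconfiguring the transform is rewriting a window entry. */
    void reconfigure(size_t i, uint64_t p1, uint64_t p2, uint64_t sz)
    {
        xform[i] = (struct xform_window){ p1, p2, sz };
    }
    ```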
  • Publication number: 20090063775
    Abstract: The present invention provides a system and a method for a cache partitioning technique for application tasks based on scheduling information in multiprocessors. Cache partitioning is performed dynamically based on the pattern of task scheduling provided by the task scheduler (405). Execution behavior of the application tasks is obtained from the task scheduler (405), and partitions are allocated (415) only to the subset of application tasks that are going to be executed in the upcoming clock cycles. The present invention improves cache utilization by avoiding unnecessary reservation of cache partitions for application tasks during the entire duration of their execution.
    Type: Application
    Filed: September 20, 2006
    Publication date: March 5, 2009
    Inventors: Jeroen Molema, Wilko Westerhof, Bartele Henrik De Vries, Reinier Niels Lap, Olaf Martin De Jong, Bart-Jan Zwart, Johannes Rogier De Vrind
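    The schedule-driven allocation reduces to releasing stale partitions and granting new ones each scheduling window. A C sketch with an invented scheduler feed (sched_info) and partition hooks:
    ```c
    #include <stddef.h>

    #define N_PARTITIONS 8

    struct sched_info {                       /* fed by the task scheduler */
        int    upcoming_tasks[N_PARTITIONS];  /* run in the next window    */
        size_t n;
    };

    void allocate_partition(int task_id);
    void release_partition(int task_id);

    /* Partitions go only to tasks about to execute, rather than being
     * reserved for a task's entire execution. */
    void repartition(const struct sched_info *s,
                     const int *held, size_t n_held)
    {
        for (size_t i = 0; i < n_held; i++)
            release_partition(held[i]);       /* drop stale reservations */
        for (size_t i = 0; i < s->n; i++)
            allocate_partition(s->upcoming_tasks[i]);
    }
    ```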
  • Publication number: 20090024798
    Abstract: The invention provides a method of storing data in a computing device, the method including the steps of creating a memory file system in non-pageable kernel memory of the computing device, writing data to the memory file system, and transferring the written data to a pageable memory space allocated to a user process running on the computing device. An advantage of such a design is that, initially, the data of the memory-based file system can be kept in the non-pageable kernel memory, minimising the need to perform context switches. However, the data can be transferred to pageable memory when necessary, such that the amount of kernel memory used by the file system can be minimised.
    Type: Application
    Filed: July 16, 2008
    Publication date: January 22, 2009
    Applicant: Hewlett-Packard Development Company, L.P.
    Inventor: Alban Kit Kupar War Lyndem
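    A user-space analogue of the kernel design, using POSIX mlock to stand in for non-pageable kernel memory; the flow (write pinned, then demote to pageable) is the point, and the helper name is invented:
    ```c
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    void *write_then_demote(const void *src, size_t len)
    {
        void *pinned = malloc(len);
        if (!pinned) return NULL;
        mlock(pinned, len);            /* pin: models non-pageable memory */
        memcpy(pinned, src, len);      /* writes never page-fault here    */

        void *pageable = malloc(len);  /* models the user process's
                                          pageable space */
        if (pageable)
            memcpy(pageable, pinned, len);  /* transfer out when kernel
                                               memory must shrink */
        munlock(pinned, len);
        free(pinned);
        return pageable;
    }
    ```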
  • Publication number: 20090019266
    Abstract: With respect to memory access instructions contained in an internal representation program, an information processing apparatus generates a load cache instruction, a cache hit judgment instruction, and a cache miss instruction that is executed in correspondence with a result of a judgment process performed according to the cache hit judgment instruction. In a case where the internal representation program contains a plurality of memory access instructions having a possibility of causing accesses to mutually the same cache line in a cache memory, the information processing apparatus generates a combine instruction instructing that judgment results of the judgment processes that are performed according to the cache hit judgment instruction should be combined into one judgment result. The information processing apparatus outputs an output program that contains these instructions that have been generated.
    Type: Application
    Filed: February 26, 2008
    Publication date: January 15, 2009
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventor: Seiji MAEDA
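    The combine optimization is easiest to see for two accesses the compiler proves may share a cache line: one hit judgment then covers both. A C sketch with invented hit/miss hooks and an assumed 64-byte line:
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    #define LINE_BITS 6                    /* 64-byte line, an assumption */

    bool cache_hit(uint64_t line);         /* hit-judgment "instruction"  */
    void handle_miss(uint64_t line);       /* miss "instruction"          */

    void combined_access(uint64_t a, uint64_t b)
    {
        uint64_t la = a >> LINE_BITS, lb = b >> LINE_BITS;
        if (la == lb) {
            if (!cache_hit(la))            /* one combined judgment       */
                handle_miss(la);
        } else {
            if (!cache_hit(la)) handle_miss(la);
            if (!cache_hit(lb)) handle_miss(lb);
        }
        /* ...perform both loads... */
    }
    ```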
  • Publication number: 20090006758
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
    Type: Application
    Filed: September 9, 2008
    Publication date: January 1, 2009
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Publication number: 20090006759
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
    Type: Application
    Filed: September 9, 2008
    Publication date: January 1, 2009
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
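    The two publications above share one abstract, so a single illustration serves both. A software model of the interleaved fill, where each "beat" moves one portion of each line from its own bus, overlapping the two fills in time; portion and line sizes are assumptions:
    ```c
    #include <stdint.h>
    #include <string.h>

    #define PORTION 16     /* bytes delivered per bus per beat (assumed) */
    #define LINE    128    /* cache-line size (assumed)                  */

    void interleaved_fill(uint8_t line0[LINE], uint8_t line1[LINE],
                          const uint8_t *bus0, const uint8_t *bus1)
    {
        for (int beat = 0; beat < LINE / PORTION; beat++) {
            /* same beat: bus 0 feeds line 0 while bus 1 feeds line 1 */
            memcpy(line0 + beat * PORTION, bus0 + beat * PORTION, PORTION);
            memcpy(line1 + beat * PORTION, bus1 + beat * PORTION, PORTION);
        }
    }
    ```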
  • Publication number: 20080320223
    Abstract: A cache controller that writes data to a cache memory includes a first buffer unit that retains data flowing in from outside to be written to the cache memory, a second buffer unit that retains a data piece to be currently written to the cache memory, among pieces of the data retained in the first buffer unit, and a write controlling unit that controls writing of the data piece retained in the second buffer unit to the cache memory.
    Type: Application
    Filed: August 26, 2008
    Publication date: December 25, 2008
    Applicant: FUJITSU LIMITED
    Inventor: Masaki Ukai
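    The two-buffer structure maps onto a small state machine: data queues in the first buffer, and one piece at a time is promoted to the second buffer for the actual write. A C sketch with invented names:
    ```c
    #include <stdint.h>
    #include <stdbool.h>

    struct write_ctl {
        uint64_t stage1[8];    /* first buffer: data arrived from outside */
        int      head, count;
        uint64_t stage2;       /* second buffer: piece being written now  */
        bool     stage2_valid;
    };

    void cache_write(uint64_t piece);       /* the controlled array write */

    void drain_one(struct write_ctl *w)
    {
        if (!w->stage2_valid && w->count > 0) {
            w->stage2 = w->stage1[w->head]; /* promote into second buffer */
            w->head = (w->head + 1) % 8;
            w->count--;
            w->stage2_valid = true;
        }
        if (w->stage2_valid) {
            cache_write(w->stage2);
            w->stage2_valid = false;
        }
    }
    ```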
  • Publication number: 20080320230
    Abstract: Livelocks are prevented in multiple core processors by verifying that a data access request is still valid before sending messages to processor cores that may cause other data access requests to fail. A cache coherency manager receives data access requests from multiple processor cores. Upon receiving a data access request that may cause a livelock, the cache coherency manager first sends an intervention message back to the requesting processor core to confirm that this data access request will succeed. If the requesting processor core determines that the data access request is still valid, it directs the cache coherency manager to proceed with the data access request. The cache coherency manager may then send intervention messages to other processor cores to complete the data access request. If the requesting processor core determines that the data access request is invalid, it directs the cache coherency manager to abandon the data access request.
    Type: Application
    Filed: June 22, 2007
    Publication date: December 25, 2008
    Applicant: MIPS TECHNOLOGIES INC.
    Inventors: Sanjay Vishin, Ryan C. Kinter
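    The self-intervention check is a single guard before the fan-out. A C sketch of the manager's decision, with hypothetical message hooks:
    ```c
    #include <stdbool.h>

    bool core_confirm_request(int core, int req);  /* ask the requester  */
    void send_interventions(int req);              /* to the other cores */
    void abandon_request(int req);

    /* Verify the request is still valid before sending messages that
     * could make other cores' requests fail; this breaks the livelock. */
    void process_risky_request(int requester, int req)
    {
        if (core_confirm_request(requester, req))
            send_interventions(req);
        else
            abandon_request(req);
    }
    ```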
  • Publication number: 20080301368
    Abstract: Upon retrieving, after occurrence of replacement of a first cache, move-out (MO) data that is a write-back target, a second cache determines, based on data that is set in a control flag of a register, whether a new registration process of move-in (MI) data with respect to the recording position of the MO data is completed. Upon determining that the new registration process is not completed, the second cache cancels the new registration process to ensure that a request of the new registration process is not output to a pipeline.
    Type: Application
    Filed: August 7, 2008
    Publication date: December 4, 2008
    Applicant: Fujitsu Limited
    Inventors: Hiroyuki Kojima, Masaki Ukai
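    The gist is a flag check before write-back. A C sketch with an invented per-slot flag interface; the pipeline itself is abstracted away:
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    bool mi_registered(uint32_t slot);     /* control-flag data: has the new
                                              MI registration completed?   */
    void cancel_registration(uint32_t slot);
    void write_back(uint64_t mo_data);

    void handle_move_out(uint32_t slot, uint64_t mo_data)
    {
        if (!mi_registered(slot)) {
            cancel_registration(slot);     /* keep the request off the
                                              pipeline */
            return;
        }
        write_back(mo_data);               /* safe: registration is done */
    }
    ```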
  • Publication number: 20080155200
    Abstract: A processor includes a processor core coupled to an address translation storage structure. The address translation storage structure includes a plurality of entries, each corresponding to a memory page. Each entry also includes a physical address of a memory page, and a private page indication that indicates whether any other processors have an entry, in either a respective address translation storage structure or a respective cache memory, that maps to the memory page. The processor also includes a memory controller that may inhibit issuance of a probe message to other processors in response to receiving a write memory request to a given memory page. The write request includes a private page attribute that is associated with the private page indication, and indicates that no other processor has an entry, in either the respective address translation storage structure or the respective cache memory, that maps to the memory page.
    Type: Application
    Filed: December 21, 2006
    Publication date: June 26, 2008
    Inventor: Patrick N. Conway
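    The probe-inhibit decision hangs off one bit in the translation entry. A C sketch, with the entry layout and hooks invented:
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    struct tlb_entry {
        uint64_t vpn, pfn;
        bool     private_page;  /* no other processor maps this page */
    };

    void send_probe_broadcast(uint64_t pfn);
    void write_memory(uint64_t pfn, uint64_t data);

    /* Memory controller: a write carrying the private-page attribute
     * skips the coherence probe to the other processors. */
    void handle_write(const struct tlb_entry *e, uint64_t data)
    {
        if (!e->private_page)
            send_probe_broadcast(e->pfn);
        write_memory(e->pfn, data);
    }
    ```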
  • Publication number: 20080147975
    Abstract: A processor includes multiple cores and multiple cache segments, each core associated with one of the cache segments, the cache segments interconnected by a data communication ring, and logic to disallow operation of the ring at a startup event and to execute an initialization sequence at one or more of the cores, so that each of those cores operates using its associated cache segment as a read-write memory during the initialization sequence.
    Type: Application
    Filed: December 13, 2006
    Publication date: June 19, 2008
    Inventors: Vincent J. Zimmer, Michael A. Rothman
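    The startup flow is: hold the ring inoperative, run initialization out of each core's own cache segment, then enable the ring. A C-shaped sketch of that sequence; every hook name is hypothetical, since the real logic is hardware and firmware:
    ```c
    void disallow_ring(void);                  /* ring held inoperative */
    void allow_ring(void);
    void init_with_cache_as_ram(int core);     /* core's segment used as
                                                  read-write memory     */

    void multicore_startup(int n_cores)
    {
        disallow_ring();                       /* segments stay private */
        for (int c = 0; c < n_cores; c++)
            init_with_cache_as_ram(c);         /* no DRAM needed yet    */
        allow_ring();                          /* resume normal caching */
    }
    ```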
  • Publication number: 20080126710
    Abstract: The present invention discloses a method for processing cache data, used in a dual redundant server system having a console end and a redundant control end. The console end mirrors the cache data saved at the console end and sends the mirrored cache data to the redundant control end through a transmission unit. If the console end determines that the redundant control end cannot save the mirrored cache data, the console end flushes the cache data to a hard disk installed at the console end.
    Type: Application
    Filed: November 29, 2006
    Publication date: May 29, 2008
    Applicant: INVENTEC CORPORATION
    Inventor: Chih-Wei Chen
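    The console end's fallback is one conditional. A C sketch with hypothetical transport and disk hooks:
    ```c
    #include <stdbool.h>
    #include <stddef.h>

    bool send_mirror(const void *cache, size_t len);  /* to redundant end */
    void flush_to_local_disk(const void *cache, size_t len);

    /* Mirror the cache to the redundant control end; if the peer cannot
     * save it, flush the cache to the console end's hard disk instead. */
    void protect_cache(const void *cache, size_t len)
    {
        if (!send_mirror(cache, len))
            flush_to_local_disk(cache, len);
    }
    ```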