Partitioned Cache, E.g., Separate Instruction And Operand Caches, Etc. (EPO) Patents (Class 711/E12.046)
  • Publication number: 20100131715
    Abstract: An apparatus is provided for updating data within a business planning tool. The apparatus comprises a computer memory (22) arranged to store operational data in a plurality of line items (50), each line item (50) being arranged to represent operational data in data cells (52) occupying space in a plurality of dimensions (X, Y), and each line item (50) having data cells in a first dimension (Y) configured to represent the operational data in at least one hierarchy level, and having data cells in a second dimension (X) arranged to represent the respective operational data over at least one time period.
    Type: Application
    Filed: November 19, 2009
    Publication date: May 27, 2010
    Inventor: Michael Peter Gould
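    The line-item layout described in the abstract above can be pictured as a small two-dimensional grid per line item, with one axis for hierarchy levels and one for time periods. A minimal sketch under that reading; the class and method names (LineItem, set_cell, get_cell) are illustrative and not taken from the filing:

    ```python
    # Minimal sketch of a line item whose data cells span two dimensions:
    # hierarchy levels (Y) and time periods (X). Names are illustrative only.

    class LineItem:
        def __init__(self, hierarchy_levels, time_periods):
            self.hierarchy_levels = list(hierarchy_levels)   # first dimension (Y)
            self.time_periods = list(time_periods)           # second dimension (X)
            # One data cell per (level, period) pair, initialised to zero.
            self.cells = {
                (level, period): 0.0
                for level in hierarchy_levels
                for period in time_periods
            }

        def set_cell(self, level, period, value):
            self.cells[(level, period)] = value

        def get_cell(self, level, period):
            return self.cells[(level, period)]


    # Example: revenue broken down by a region hierarchy across quarters.
    item = LineItem(["total", "emea", "apac"], ["2009Q1", "2009Q2"])
    item.set_cell("emea", "2009Q1", 120.0)
    item.set_cell("apac", "2009Q1", 80.0)
    item.set_cell("total", "2009Q1",
                  item.get_cell("emea", "2009Q1") + item.get_cell("apac", "2009Q1"))
    print(item.get_cell("total", "2009Q1"))  # 200.0
    ```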
  • Patent number: 7698496
    Abstract: A cache miss judger judges a cache miss when a cache access is executed. An entry region judger judges which of a plurality of entry regions, each constituted by one or more cache entries in the cache memory, is accessed by each cache access, using at least a part of an index for selecting an arbitrary cache line in the cache memory. A cache miss counter counts the number of cache misses judged by the cache miss judger in the entry region corresponding to each cache access.
    Type: Grant
    Filed: January 31, 2007
    Date of Patent: April 13, 2010
    Assignee: Panasonic Corporation
    Inventor: Genichiro Matsuda
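    The per-region miss counting in the abstract above can be approximated in software: part of the index that selects a cache line also selects an entry region, and one counter is kept per region. A minimal sketch under those assumptions; the line size, region count, and mapping are illustrative, not taken from the patent:

    ```python
    from collections import defaultdict

    LINE_SIZE = 64        # bytes per cache line (illustrative)
    NUM_LINES = 256       # cache lines in a direct-mapped cache (illustrative)
    NUM_REGIONS = 4       # entry regions, each a contiguous group of cache lines

    cache_tags = [None] * NUM_LINES          # tag stored per line, None = empty
    miss_count = defaultdict(int)            # misses counted per entry region

    def access(address):
        """Simulate one cache access and count a miss against its entry region."""
        line_index = (address // LINE_SIZE) % NUM_LINES
        tag = address // (LINE_SIZE * NUM_LINES)
        # Part of the index selects the entry region the access falls into.
        region = line_index // (NUM_LINES // NUM_REGIONS)
        if cache_tags[line_index] != tag:    # cache miss judged
            miss_count[region] += 1
            cache_tags[line_index] = tag     # fill the line

    for addr in (0x0000, 0x1000, 0x2000, 0x4000):
        access(addr)
    print(dict(miss_count))   # {0: 2, 1: 1, 2: 1}
    ```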
  • Publication number: 20090328022
    Abstract: Systems, methods and media for updating CRTM code in a computing machine are disclosed. In one embodiment, the CRTM code initially resides in ROM and updated CRTM code is stored in a staging area of the ROM. A logical partition of L2 cache may be created to store a heap, a stack, and a data store. The data store holds updated CRTM code copied to the L2 cache. When the computing system is started, it first executes CRTM code. The CRTM code checks the staging area of the ROM to determine whether there is updated CRTM code. If so, the CRTM code is copied into the L2 cache to be executed from there. The CRTM code loads the updated code into the cache and verifies its signature. The CRTM code then copies the updated code into the cache location where the current CRTM code resides.
    Type: Application
    Filed: June 26, 2008
    Publication date: December 31, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sean P. Brogan, Sumeet Kochar
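    The update sequence in the abstract above (check a staging area, copy the update into the L2 cache partition, verify its signature, then overwrite the running CRTM image) can be sketched as straight-line logic. A minimal illustration with hypothetical helper names; signature verification is reduced to a digest comparison for brevity:

    ```python
    import hashlib

    # Illustrative in-memory stand-ins for the ROM and the L2 cache partition.
    rom = {
        "crtm": b"current CRTM code",
        "staging": b"updated CRTM code",          # empty bytes if no update is staged
        "staging_digest": hashlib.sha256(b"updated CRTM code").hexdigest(),
    }
    l2_cache = {}                                  # logical partition: heap/stack/data store

    def boot():
        # 1. Execute from the current CRTM image and check the staging area.
        if not rom["staging"]:
            return rom["crtm"]                     # no update staged, boot as usual
        # 2. Copy the running CRTM code and the staged update into the L2 cache.
        l2_cache["crtm"] = rom["crtm"]
        l2_cache["update"] = rom["staging"]
        # 3. Verify the update's signature (a digest comparison stands in here).
        if hashlib.sha256(l2_cache["update"]).hexdigest() != rom["staging_digest"]:
            return l2_cache["crtm"]                # verification failed: keep current code
        # 4. Replace the cached CRTM image with the verified update and run it.
        l2_cache["crtm"] = l2_cache["update"]
        return l2_cache["crtm"]

    print(boot())  # b'updated CRTM code'
    ```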
  • Publication number: 20090228657
    Abstract: An apparatus includes a vector unit to process vector data; a cache memory that includes a plurality of cache lines to store a plurality of divisional data sent from a main memory, the vector data having been divided into the divisional data according to the capacity of a cache line; and a cache controller to send all of the divisional data, as the vector data, to the vector unit after the cache lines have stored all of the divisional data constituting the vector data.
    Type: Application
    Filed: February 6, 2009
    Publication date: September 10, 2009
    Applicant: NEC Corporation
    Inventor: Takashi Hagiwara
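    The behaviour in the abstract above (split a vector into line-sized pieces, wait until every piece is resident, then deliver the whole vector to the vector unit) can be sketched as a small buffering controller. Names and sizes are illustrative, not from the filing:

    ```python
    CACHE_LINE_BYTES = 16   # illustrative cache line capacity

    def split_into_lines(vector_bytes):
        """Divide the vector according to the capacity of a cache line."""
        return [vector_bytes[i:i + CACHE_LINE_BYTES]
                for i in range(0, len(vector_bytes), CACHE_LINE_BYTES)]

    class CacheController:
        def __init__(self, expected_pieces):
            self.expected = expected_pieces
            self.lines = {}                        # cache lines holding divisional data

        def store_line(self, index, piece):
            self.lines[index] = piece
            # Only once every divisional piece has arrived is the vector forwarded.
            if len(self.lines) == self.expected:
                return b"".join(self.lines[i] for i in range(self.expected))
            return None                            # still waiting for more pieces

    vector = bytes(range(40))                      # 40-byte vector -> 3 divisional pieces
    pieces = split_into_lines(vector)
    ctrl = CacheController(len(pieces))
    result = None
    for i, piece in enumerate(pieces):             # pieces arrive from main memory
        result = ctrl.store_line(i, piece)
    print(result == vector)                        # True: vector unit gets the full vector
    ```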
  • Publication number: 20090198844
    Abstract: A programmable controller includes a CPU unit, a communication unit and peripheral units connected together through an internal bus. The communication unit has a bus master function and includes a cache memory for recording IO data stored in the memory of an input-output unit. When a message is received, it is judged whether the IO data stored in the memory of the input-output unit specified by the message have been updated. If the data are not updated, a response is created based on the IO data stored in the cache memory. If the data are updated, the input-output unit is accessed, the updated IO data are obtained, and a response is created based on the obtained IO data.
    Type: Application
    Filed: February 6, 2009
    Publication date: August 6, 2009
    Applicant: OMRON CORPORATION
    Inventor: Shinichiro Kawaguchi
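    The decision in the abstract above (answer from the communication unit's cache when the input-output unit's IO data is unchanged, otherwise fetch fresh data over the internal bus) amounts to a staleness check before a cached read. A minimal sketch with hypothetical class and method names:

    ```python
    class IOUnit:
        """Stand-in for an input-output unit on the internal bus."""
        def __init__(self, io_data):
            self.io_data = dict(io_data)
            self.updated = False            # set when the unit records new IO data

        def write(self, key, value):
            self.io_data[key] = value
            self.updated = True

        def read_all(self):
            self.updated = False            # reading refreshes the cached copy
            return dict(self.io_data)

    class CommunicationUnit:
        """Bus-master communication unit holding a cache of the IO data."""
        def __init__(self, io_unit):
            self.io_unit = io_unit
            self.cache = io_unit.read_all()

        def handle_message(self, key):
            if self.io_unit.updated:                    # IO data changed since last read
                self.cache = self.io_unit.read_all()    # access the unit over the bus
            return self.cache[key]                      # otherwise respond from cache

    unit = IOUnit({"sensor1": 0})
    comm = CommunicationUnit(unit)
    print(comm.handle_message("sensor1"))   # 0, served from the cache
    unit.write("sensor1", 42)
    print(comm.handle_message("sensor1"))   # 42, refreshed because the data was updated
    ```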
  • Publication number: 20090182946
    Abstract: A method and system for optimizing resource usage in an information retrieval system. Meta information in query results describes data items identified by identifiers. A chunk of the identifiers and a set of meta information are loaded into a first cache and a second cache, respectively. A portion of the set of meta information is being viewed by a user. The portion describes a data item identified by an identifier included in the chunk and in a sub-chunk of identifiers that identifies data items described by the set of meta information. If a position of the identifier in the sub-chunk satisfies a first criterion, then a second set of meta information is preloaded into the second cache. If a position of the identifier in the chunk satisfies a second criterion, then a second chunk of the identifiers is preloaded into the first cache.
    Type: Application
    Filed: January 15, 2008
    Publication date: July 16, 2009
    Inventors: Nianjun Zhou, Dikran S. Meliksetian, Yang Sun, Chuan Yang
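    The preloading rule in the abstract above hinges on where the currently viewed identifier sits: near the end of the sub-chunk, preload the next set of meta information; near the end of the chunk, preload the next chunk of identifiers. A minimal sketch in which the sizes, thresholds, and cache variables are purely illustrative:

    ```python
    SUBCHUNK_SIZE = 10        # identifiers described by one set of meta information
    CHUNK_SIZE = 100          # identifiers loaded into the first cache at a time
    PRELOAD_MARGIN = 2        # how close to the end triggers a preload (illustrative)

    first_cache = {"chunk_start": 0}      # holds the current chunk of identifiers
    second_cache = {"meta_start": 0}      # holds the current set of meta information

    def on_view(position_in_chunk):
        """Called when the user views the item at this position within the chunk."""
        position_in_subchunk = position_in_chunk % SUBCHUNK_SIZE
        # First criterion: nearing the end of the sub-chunk -> preload the next meta set.
        if position_in_subchunk >= SUBCHUNK_SIZE - PRELOAD_MARGIN:
            second_cache["meta_start"] = (position_in_chunk // SUBCHUNK_SIZE + 1) * SUBCHUNK_SIZE
            print("preload next meta set starting at", second_cache["meta_start"])
        # Second criterion: nearing the end of the chunk -> preload the next identifier chunk.
        if position_in_chunk >= CHUNK_SIZE - PRELOAD_MARGIN:
            first_cache["chunk_start"] += CHUNK_SIZE
            print("preload next identifier chunk starting at", first_cache["chunk_start"])

    for pos in (3, 8, 99):    # viewing positions within the current chunk
        on_view(pos)
    ```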
  • Publication number: 20090164730
    Abstract: A cache that supports sub-socket partitioning is discussed. Specifically, the cache supports different quality of service levels and victim cache line selection for a cache miss operation. The quality of service levels provide programmable ceiling and floor usage thresholds that enable different techniques for victim cache line selection.
    Type: Application
    Filed: November 7, 2008
    Publication date: June 25, 2009
    Inventors: Ajay Harikumar, Tessil Thomas, Biju Puthur Simon
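    The ceiling/floor idea in the abstract above can be read as: a partition may not grow past its ceiling share of a cache set, and may not be shrunk below its floor share when a victim is chosen. A minimal sketch of victim selection under those assumptions; the partition names, thresholds, and tie-breaking policy are illustrative, not taken from the filing:

    ```python
    # Each line in an 8-way set is tagged with the partition that currently owns it.
    # qos[partition] gives that partition's floor and ceiling line counts for the set.
    qos = {
        "part_a": {"floor": 2, "ceiling": 6},
        "part_b": {"floor": 2, "ceiling": 6},
    }

    def choose_victim(set_lines, requester):
        """Pick a victim line index for a miss by `requester`, respecting QoS limits."""
        counts = {p: sum(1 for owner in set_lines if owner == p) for p in qos}
        # If the requester is already at its ceiling, it must victimise its own lines.
        if counts[requester] >= qos[requester]["ceiling"]:
            candidates = [i for i, owner in enumerate(set_lines) if owner == requester]
        else:
            # Otherwise, prefer lines of partitions that sit above their floor usage.
            candidates = [i for i, owner in enumerate(set_lines)
                          if counts[owner] > qos[owner]["floor"]]
        victim = candidates[0]                 # real hardware would apply LRU here
        set_lines[victim] = requester
        return victim

    set_lines = ["part_a"] * 6 + ["part_b"] * 2   # part_a already at its ceiling
    print(choose_victim(set_lines, "part_a"))     # part_a must evict one of its own lines
    print(choose_victim(set_lines, "part_b"))     # victim comes from part_a (above its floor)
    ```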
  • Publication number: 20090119666
    Abstract: The present invention provides an apparatus for cooperative distributed task management in a storage subsystem with multiple controllers using cache locking. The present invention distributes a task across a set of controllers acting in a cooperative rather than a master/slave manner to perform discrete components of the subject task on an as-available basis. This minimizes the time required to perform incidental data manipulation tasks, thus reducing the duration of periods of degraded system performance.
    Type: Application
    Filed: January 5, 2009
    Publication date: May 7, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian Dennis McKean, Randall Alan Pare
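    The cooperative (non-master/slave) scheme in the abstract above can be pictured as controllers claiming discrete task components through a lock, each taking the next available piece. A minimal sketch using threads and a shared lock as a stand-in for the cache-locking primitive; all names are illustrative:

    ```python
    import threading

    task_components = list(range(12))     # discrete components of the subject task
    claimed_by = {}                       # component -> controller that performed it
    cache_lock = threading.Lock()         # stand-in for the cache-locking primitive

    def controller(name):
        """Each controller claims components on an as-available basis."""
        while True:
            with cache_lock:              # lock while claiming the next component
                if not task_components:
                    return
                component = task_components.pop(0)
            claimed_by[component] = name  # perform the component outside the lock

    controllers = [threading.Thread(target=controller, args=(f"ctrl{i}",)) for i in range(3)]
    for t in controllers:
        t.start()
    for t in controllers:
        t.join()
    print(sorted(claimed_by))             # all 12 components completed cooperatively
    ```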
  • Publication number: 20090083489
    Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first directory to access the first array slice while using a second directory to access the second array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In one embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. The cache array is arranged with rows and columns of cache sectors wherein a cache line is spread across sectors in different rows and columns, with a portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency.
    Type: Application
    Filed: December 1, 2008
    Publication date: March 26, 2009
    Inventors: Leo James Clark, James Stephen Fields, JR., Guy Lynn Guthrie, William John Starke
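    The structure in the abstract above, two array slices each fronted by its own directory but sharing one cache arbiter for the single access/command port, can be sketched as two lookup tables funnelling into one request queue. A minimal sketch with illustrative names and contents:

    ```python
    from collections import deque

    # Two directories, one per array slice; each maps an address tag to a slice location.
    directories = [
        {0x10: ("slice0", 3)},       # directory for array slice 0
        {0x20: ("slice1", 7)},       # directory for array slice 1
    ]
    cache_arbiter = deque()          # single arbiter feeding the single access/command port

    def directory_lookup(slice_id, tag):
        """Each directory resolves requests for its own slice, then queues an array access."""
        hit = directories[slice_id].get(tag)
        if hit is not None:
            cache_arbiter.append(hit)          # the cache arbiter serialises array accesses
        return hit is not None

    def drain_access_port():
        """The single access/command port services one arbitrated request at a time."""
        while cache_arbiter:
            slice_name, row = cache_arbiter.popleft()
            print(f"access {slice_name}, row {row}")

    directory_lookup(0, 0x10)        # hit in slice 0's directory
    directory_lookup(1, 0x20)        # hit in slice 1's directory
    drain_access_port()
    ```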
  • Publication number: 20080126824
    Abstract: A serial communications architecture for communicating between hosts and data store devices. The Storage Link architecture is specially adapted to support communications between multiple hosts and storage devices via a switching network, such as a storage area network. The Storage Link architecture specifies various communications techniques that can be combined to reduce the overall cost and increase the overall performance of communications. The Storage Link architecture may provide packet ordering based on packet type, dynamic segmentation of packets, asymmetric packet ordering, packet nesting, variable-sized packet headers, and the use of out-of-band symbols to transmit control information. The Storage Link architecture may also specify encoding techniques to optimize transitions and to ensure DC-balance.
    Type: Application
    Filed: July 25, 2007
    Publication date: May 29, 2008
    Applicant: Silicon Image, Inc.
    Inventors: Dongyun Lee, Yeshik Shin, David D. Lee, Deog-Kyoon Jeong, Shing Kong
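    One of the techniques named in the abstract above, packet ordering based on packet type, can be sketched as separate in-order queues per type so that short control packets are not stalled behind bulk data. This is a simplified illustration of the general idea, not the Storage Link specification:

    ```python
    from collections import deque

    # Packets are kept in order within their own type; types may be interleaved on the link.
    queues = {"control": deque(), "data": deque()}

    def enqueue(packet_type, payload):
        queues[packet_type].append(payload)

    def transmit():
        """Drain control packets ahead of data, preserving order within each type."""
        sent = []
        for packet_type in ("control", "data"):      # illustrative priority order
            while queues[packet_type]:
                sent.append((packet_type, queues[packet_type].popleft()))
        return sent

    enqueue("data", "block 0")
    enqueue("control", "flow-control update")
    enqueue("data", "block 1")
    print(transmit())
    # [('control', 'flow-control update'), ('data', 'block 0'), ('data', 'block 1')]
    ```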
  • Publication number: 20080126739
    Abstract: Methods, apparatus, and products are disclosed for parallel execution of operations for a partitioned binary radix tree (PBRT) that include: receiving, in a parallel computer, an operational entry for the PBRT, the PBRT comprising a plurality of logical pages that contain a plurality of entries, each logical page included in a tier and containing one or more subentries corresponding to the tier of the logical page containing the subentry, each entry being composed of a subentry from each logical page on an entry path; processing in parallel, on the parallel computer, each logical page in each tier, including: identifying a portion of the operational entry that corresponds to the tier of the logical page, and performing an operation on the logical page in dependence upon the identified portion of the operational entry for the tier; and selecting operation results from the logical pages on the entry path for the operational entry.
    Type: Application
    Filed: September 14, 2006
    Publication date: May 29, 2008
    Inventors: Charles J. Archer, Benjamin E. Lynam, Gary R. Ricard
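    The parallel lookup in the abstract above processes every logical page in every tier at once, each page matching only the portion of the operational entry that belongs to its tier, and the results along the entry path are then selected. A minimal sketch using a thread pool in place of a parallel computer; the page layout, entry split, and names are illustrative, not taken from the filing:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Each tier holds logical pages; each page stores subentries for its tier.
    tiers = [
        [{"id": "p0", "subentries": {"ca": "page-1"}}],                               # tier 0
        [{"id": "p1", "subentries": {"t": "page-2"}}, {"id": "px", "subentries": {}}],  # tier 1
        [{"id": "p2", "subentries": {"alog": "record-42"}}],                          # tier 2
    ]
    tier_portions = ["ca", "t", "alog"]      # how the entry "catalog" splits per tier

    def process_page(tier_index, page):
        """Match the tier's portion of the operational entry against one logical page."""
        portion = tier_portions[tier_index]
        return (page["id"], page["subentries"].get(portion))

    # Process every logical page in every tier in parallel.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(process_page, t, page)
                   for t, pages in enumerate(tiers) for page in pages]
        results = dict(f.result() for f in futures)

    # Select operation results only from the logical pages on the entry path.
    entry_path = ["p0", "p1", "p2"]
    print([results[p] for p in entry_path])  # ['page-1', 'page-2', 'record-42']
    ```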