Addressing Cache Memories Patents (Class 711/3)
  • Patent number: 7739478
    Abstract: A method is provided for pre-fetching data into a cache memory. A first cache-line address of each of a number of data requests from at least one processor is stored. A second cache-line address of a next data request from the processor is compared to the first cache-line addresses. If the second cache-line address is adjacent to one of the first cache-line addresses, data associated with a third cache-line address adjacent to the second cache-line address is pre-fetched into the cache memory, if not already present in the cache memory.
    Type: Grant
    Filed: March 8, 2007
    Date of Patent: June 15, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Judson E. Veazey, Blaine D. Gaither
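The adjacency test in this abstract can be sketched in a few lines. This is an illustrative model only, not the patented implementation: cache-line addresses are plain integers, and the history/cache sets are stand-ins for the stored first cache-line addresses and the cache contents.

```python
class PrefetchCache:
    """Prefetch a line adjacent to the current request when the request
    itself is adjacent to a previously recorded request."""

    def __init__(self):
        self.lines = set()      # cache-line addresses currently cached
        self.history = set()    # first cache-line addresses of past requests

    def access(self, addr):
        """Record a request; if addr is adjacent to a prior request,
        prefetch the next adjacent line unless it is already cached."""
        prefetched = None
        if (addr - 1) in self.history or (addr + 1) in self.history:
            # Continue in the direction the stream is moving.
            nxt = addr + 1 if (addr - 1) in self.history else addr - 1
            if nxt not in self.lines:
                self.lines.add(nxt)
                prefetched = nxt
        self.history.add(addr)
        self.lines.add(addr)
        return prefetched
```

A request to line 101 after line 100 triggers a prefetch of line 102; repeating the request prefetches nothing, since 102 is already present.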
  • Publication number: 20100122013
    Abstract: A data structure for enforcing consistent per-physical page cacheability attributes is disclosed. The data structure is used with a method for enforcing consistent per-physical page cacheability attributes, which maintains memory coherency within a processor addressing memory, such as by comparing a desired cacheability attribute of a physical page address in a page table entry (PTE) against an authoritative table that indicates the current cacheability status. This comparison can be made at the time the PTE is inserted into a translation lookaside buffer (TLB). When the comparison detects a mismatch between the desired cacheability attribute of the page and the page's current cacheability status, corrective action can be taken to transition the page into the desired cacheability state.
    Type: Application
    Filed: January 15, 2010
    Publication date: May 13, 2010
    Inventors: Alexander C. Klaiber, David Dunn
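The mismatch check described above can be sketched as a lookup against an authoritative table at TLB-insert time. The attribute names, the page-number keys, and the transition log are all illustrative assumptions, not the patent's actual data structure.

```python
class CacheabilityTable:
    """Authoritative map from physical page number to its current
    cacheability status, consulted when a PTE enters the TLB."""

    def __init__(self):
        self.state = {}          # ppn -> "cacheable" | "uncacheable"
        self.transitions = []    # corrective actions taken on mismatch

    def insert_tlb_entry(self, ppn, desired):
        current = self.state.get(ppn, desired)
        if current != desired:
            # Mismatch between the PTE's desired attribute and the page's
            # current status: take corrective action to transition the page.
            self.transitions.append((ppn, current, desired))
        self.state[ppn] = desired
        return desired
```

Inserting a PTE whose desired attribute disagrees with the table records a transition and moves the page into the desired state.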
  • Publication number: 20100122012
    Abstract: Systolic networks within a tiled storage array provide for movement of requested values to a front-most tile, while making space for the requested values at the front-most tile by moving other values away. A first and second information pathway provide different linear pathways through the tiles. The movement of other values, requests for values and responses to requests is controlled by clocking logic that governs movement on the first and second information pathways according to a systolic duty cycle. The first information pathway may be the move-to-front network of a spiral cache, crossing the spiral push-back network that forms the second information pathway. The systolic duty cycle may be a three-phase duty cycle, or a two-phase duty cycle may be provided if the storage tiles support a push-back swap operation.
    Type: Application
    Filed: December 17, 2009
    Publication date: May 13, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Fadi H. Gebara, Jeremy D. Schaub, Volker Strumpen
  • Patent number: 7712106
    Abstract: A method comprising generating a source chain for use in a development project, generating an identifier which is uniquely assigned to the source chain, and caching the source chain when it is not currently required in the development project. As execution of the development project continues, or during a subsequent project, if the source chain is required, it is retrieved from the cache.
    Type: Grant
    Filed: December 27, 2005
    Date of Patent: May 4, 2010
    Assignee: Microsoft Corporation
    Inventors: Daniel J. Miller, Eric H. Rudolph
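The cache-and-retrieve pattern here amounts to storing a built artifact under a uniquely assigned identifier. A minimal sketch, with `uuid4` standing in for whatever identifier scheme the actual system uses:

```python
import uuid

class SourceChainCache:
    """Cache source chains under unique identifiers so a later stage of the
    project (or a subsequent project) can retrieve them without rebuilding."""

    def __init__(self):
        self._store = {}

    def put(self, chain):
        chain_id = str(uuid.uuid4())   # identifier uniquely assigned to the chain
        self._store[chain_id] = chain
        return chain_id

    def get(self, chain_id):
        # Returns the cached chain, or None if it was never cached.
        return self._store.get(chain_id)
```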
  • Patent number: 7707287
    Abstract: A method and a system for improving Web hosting performance, enhancing content distribution and security on the Internet and stabilizing Web site connectivity, by means of creating a TCP terminating buffer around subscriber Web sites. A DNS agent diverts client requests to Web sites to a Virtual Host Accelerating (VHA) Site in closest proximity. The VHA Site comprises a set of physically identical computer units, and processing on those units is enhanced by means of a hardware device that accelerates database searches. The VHA determines whether the client request is of a permitted type and whether the request can be processed from recycled data. Both static and dynamic requests can be serviced from recycled material, and only in certain circumstances are requests forwarded to the Web sites by means of permanent open connections. In some cases SSL requests are also served from recycled material.
    Type: Grant
    Filed: March 22, 2002
    Date of Patent: April 27, 2010
    Assignee: F5 Networks, Inc.
    Inventors: Micha Shafir, Mark Shahaf
  • Patent number: 7698495
    Abstract: A computer system is set forth that includes a processor, general memory storage, and cache memory for temporarily storing selected data requested from the general memory storage. The computer system may also include file system software that is executed by the processor. The file system software may be used to manage the file data and the structure of the file system for files stored on the general memory storage. Management of the cache memory is placed under the control of cache management software. The cache management software is executed by the processor to manage the contents of the cache memory pursuant to cache hit and cache miss criteria. Sections of the cache memory are organized by the cache management software based on logical addresses of file data requested from the general memory storage.
    Type: Grant
    Filed: June 9, 2006
    Date of Patent: April 13, 2010
    Assignee: QNX Software Systems GmbH & Co. KG
    Inventor: Dan Dodge
  • Patent number: 7698496
    Abstract: A cache miss judger determines whether a cache miss occurs when a cache access is executed. An entry region judger determines which of a plurality of entry regions, each constituted by one or more cache entries in the cache memory, is accessed by each cache access, using at least part of the index that selects a cache line in the cache memory. A cache miss counter counts the number of cache misses judged by the cache miss judger in each of the entry regions corresponding to the cache accesses.
    Type: Grant
    Filed: January 31, 2007
    Date of Patent: April 13, 2010
    Assignee: Panasonic Corporation
    Inventor: Genichiro Matsuda
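The per-region counting can be sketched with a direct-mapped cache model in which the region is derived from the upper bits of the index. The 64-line, 4-region configuration below is an arbitrary choice for illustration:

```python
class RegionMissCounter:
    """Count cache misses separately for each entry region, where the
    region is selected by part of the index bits."""

    def __init__(self, num_lines=64, region_bits=2):
        self.cache = {}                       # index -> tag (direct-mapped)
        self.num_lines = num_lines
        # Use the top `region_bits` of the index to pick the region.
        self.region_shift = num_lines.bit_length() - 1 - region_bits
        self.misses = [0] * (1 << region_bits)

    def access(self, addr):
        index = addr % self.num_lines
        tag = addr // self.num_lines
        region = index >> self.region_shift
        if self.cache.get(index) != tag:      # miss judged on tag mismatch
            self.misses[region] += 1
            self.cache[index] = tag
            return False
        return True
```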
  • Publication number: 20100088457
    Abstract: A cache memory architecture, a method of operating a cache memory and a memory controller. In one embodiment, the cache memory architecture includes: (1) a segment memory configured to contain at least one most significant bit (MSB) of a main memory address, the at least one MSB being common to addresses in a particular main memory logical segment that includes the main memory address, (2) a tag memory configured to contain tags that include other bits of the main memory address and (3) combinatorial logic associated with the segment memory and the tag memory and configured to indicate a cache hit only when both the at least one most significant bit and the other bits match a requested main memory address.
    Type: Application
    Filed: October 3, 2008
    Publication date: April 8, 2010
    Applicant: Agere Systems Inc.
    Inventors: Allen B. Goodrich, Alex Rabinovitch, Assaf Rachlevski, Alex Shinkar
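The hit condition in this abstract factors the address match into two parts: the shared most-significant bits held in the segment memory and the remaining bits held per line in the tag memory. A sketch with illustrative bit widths (the real widths are of course implementation-specific):

```python
SEG_BITS, TAG_BITS, INDEX_BITS = 4, 8, 4   # illustrative field widths

def split(addr):
    """Split an address into (segment MSBs, tag, index)."""
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = (addr >> INDEX_BITS) & ((1 << TAG_BITS) - 1)
    seg = addr >> (INDEX_BITS + TAG_BITS)
    return seg, tag, index

class SegmentedCache:
    def __init__(self):
        self.segment = None    # MSBs common to the cached logical segment
        self.tags = {}         # index -> tag

    def fill(self, addr):
        seg, tag, index = split(addr)
        self.segment = seg
        self.tags[index] = tag

    def is_hit(self, addr):
        seg, tag, index = split(addr)
        # Combinatorial logic: a hit only when BOTH the segment MSBs and
        # the tag bits match the requested address.
        return seg == self.segment and self.tags.get(index) == tag
```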
  • Patent number: 7685354
    Abstract: A multiple-core processor providing flexible mapping of processor cores to cache banks. In one embodiment, a processor may include a cache including a number of cache banks. The processor may further include a number of processor cores configured to access the cache banks, as well as core/bank mapping logic coupled to the cache banks and processor cores. The core/bank mapping logic may be configurable to map a cache bank select portion of a memory address specified by a given one of the processor cores to any one of the cache banks.
    Type: Grant
    Filed: February 23, 2005
    Date of Patent: March 23, 2010
    Assignee: Sun Microsystems, Inc.
    Inventors: Ricky C. Hetherington, Manish K. Shah, Gregory F. Grohoski, Bikram Saha
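The flexible mapping can be modeled as a per-core translation table applied to the bank-select portion of the address. The 4-bank configuration and the modulo bank-select extraction below are illustrative assumptions:

```python
class CoreBankMapper:
    """Remap each core's bank-select bits through a configurable table,
    so any core can be steered to any cache bank."""

    def __init__(self, num_cores, num_banks):
        # Default: identity mapping for every core.
        self.table = {c: list(range(num_banks)) for c in range(num_cores)}
        self.num_banks = num_banks

    def remap(self, core, mapping):
        self.table[core] = mapping

    def bank_for(self, core, addr):
        select = addr % self.num_banks   # bank-select portion of the address
        return self.table[core][select]
```

With the default table, core 0's address 0x13 selects bank 3; after reconfiguring core 0's table, the same address can be steered to a different bank.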
  • Patent number: 7676631
    Abstract: A CPU 3 having a processor 1 and an internal data cache 7 is operated in combination with a dummy interface 13 which simulates the existence of an external memory 17 having the same address space as the cache memory 7 but which does not store data written to it. In this way, a conventional CPU can be operated without read/write access to an external memory in respect of at least part of its memory address space, and therefore with a higher performance resulting from faster memory access and reduced external memory requirements. The CPU 3 may be one of a set of CPU chips 20, 21 in a data processing system, one or more of those chips 20 optionally having read/write access to an external memory 23.
    Type: Grant
    Filed: August 5, 2002
    Date of Patent: March 9, 2010
    Assignee: Infineon Technologies AG
    Inventors: Taro Kamiko, Pramod Pandey
  • Patent number: 7669018
    Abstract: A method and apparatus for filtering memory probe activity for writes in a distributed shared memory computer. In one embodiment, the method may include initiating a first store operation to a cache data block stored in a first cache from a first processing node including the first cache and assigning a modified cache state to the cache data block in response to initiating the first store operation. The method may further include evicting the cache data block from the first cache subsequent to initiating the first store operation, storing the cache data block in a remote cache in response to the evicting, and assigning a remote directory state to a coherence directory entry corresponding to the cache data block in response to storing the cache data block in the remote cache, where the remote directory state is distinct from an invalid directory state.
    Type: Grant
    Filed: May 12, 2008
    Date of Patent: February 23, 2010
    Assignee: Globalfoundries Inc.
    Inventor: Patrick N. Conway
  • Patent number: 7640397
    Abstract: A memory has multiple memory rows 32 storing respective stored values. The stored values are divided into portions which may be shared by all stored values within the memory rows concerned. When such portions are so shared, then the comparison between an input value and the plurality of stored values can be performed using a base value stored within a base value register 30 rather than by reading the relevant portions of the memory rows. Thus, those relevant portions of the memory rows can be disabled and power saved.
    Type: Grant
    Filed: October 11, 2006
    Date of Patent: December 29, 2009
    Assignee: ARM Limited
    Inventors: Daren Croxford, Timothy Fawcett Milner
  • Patent number: 7636807
    Abstract: A storage apparatus using a nonvolatile memory as a cache and a mapping information recovering method for the storage apparatus are provided. The storage apparatus includes a mapping information storage module which stores in the nonvolatile memory mapping information of the nonvolatile memory and a first physical block address allocated when the mapping information is stored; a scan module which scans the first physical block address through a second physical block address allocated currently; and a mapping information recovery module which recovers the mapping information between the first physical block address and the second physical block address based on a result of the scan by the scan module.
    Type: Grant
    Filed: January 23, 2007
    Date of Patent: December 22, 2009
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Dong-kun Shin, Jang-hwan Kim, Jeong-eun Kim
  • Patent number: 7627734
    Abstract: A “virtual on-chip memory” that provides advantages as compared to an on-chip memory that utilizes a cache. In accordance with the invention, when a CPU attempts to access a memory address that is not on-chip, the access is aborted and the abort is handled at a page level. A single page table is utilized in which each entry constitutes an address in the virtual address space that will be mapped to a page of on-chip memory. The CPU obtains the missing data, updates the page table, and continues execution from the aborted point. Because aborts are handled at the page level rather than the line level, the virtual on-chip memory is less expensive to implement than a cache. Furthermore, critical real-time applications can be stored within a non-virtual portion of the memory space to ensure that they are not stalled.
    Type: Grant
    Filed: January 16, 2007
    Date of Patent: December 1, 2009
    Assignee: Broadcom Corporation
    Inventor: Sophie M. Wilson
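Page-level abort handling can be sketched as follows: an access to an unmapped page "aborts", the handler fetches the whole page and updates the single page table, and execution continues. The page size and dictionary-backed off-chip store are assumptions for illustration:

```python
PAGE_SIZE = 256   # illustrative page granularity

class VirtualOnChipMemory:
    """Handle misses at page granularity: one abort maps a whole page."""

    def __init__(self, backing):
        self.backing = backing    # off-chip storage: addr -> value
        self.page_table = {}      # virtual page number -> on-chip page data
        self.aborts = 0

    def read(self, addr):
        page = addr // PAGE_SIZE
        if page not in self.page_table:
            # Abort handled at the page level: fetch the entire page,
            # update the page table, then resume from the aborted access.
            self.aborts += 1
            base = page * PAGE_SIZE
            self.page_table[page] = [
                self.backing.get(base + i, 0) for i in range(PAGE_SIZE)
            ]
        return self.page_table[page][addr % PAGE_SIZE]
```

A second read within an already-mapped page causes no further abort, which is the cost advantage over line-granularity miss handling.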
  • Publication number: 20090292857
    Abstract: A cache memory unit temporarily stores data that has been stored in a main memory. The valid bits of the flag memory corresponding to the lines of the entries at the to-be-invalidated entry addresses are rewritten to indicate invalidation of those lines, so that the lines of the entries at the to-be-invalidated entry addresses are invalidated.
    Type: Application
    Filed: February 23, 2009
    Publication date: November 26, 2009
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Jun TANABE
  • Patent number: 7606807
    Abstract: A system is provided to improve performance of a storage system. The system comprises a multi-tier buffer cache. The buffer cache may include a global cache to store resources for servicing requests issued from one or more processes at the same time, a free cache to receive resources from the global cache and to store the received resources as free resources, and a local cache to receive free resources from the free cache and use them to store resources that can be accessed by a single process at a time. The system may further include a buffer cache manager to manage transferring resources from the global cache to the free cache and from the free cache to the local cache.
    Type: Grant
    Filed: February 14, 2006
    Date of Patent: October 20, 2009
    Assignee: Network Appliance, Inc.
    Inventors: Jason S. Sobel, Jonathan T. Wall
  • Patent number: 7603510
    Abstract: A semiconductor storage device including a first latch circuit for latching stored data and a storage cell part including a plurality of second latch circuits that operate with inverted logic from the first latch circuit and receives the stored data from the first latch circuit to output the received data using the second latch circuit selected in accordance with a selection signal.
    Type: Grant
    Filed: March 31, 2006
    Date of Patent: October 13, 2009
    Assignee: NEC Electronics Corporation
    Inventor: Satoshi Chiba
  • Patent number: 7596669
    Abstract: The present invention is related to a method and apparatus for managing memory in a network switch, wherein the method includes the steps of providing a memory that includes a plurality of memory locations configured to store data therein, and providing a memory address pool having a plurality of available memory addresses arranged therein, wherein each of the plurality of memory addresses corresponds to a specific memory location. The method further includes the steps of providing a memory address pointer, wherein the memory address pointer indicates a next available memory address in the memory address pool, and reading available memory addresses from the memory address pool using a last-in-first-out operation. The method also includes writing released memory addresses into the memory address pool and adjusting the position of the memory address pointer upon a read or write operation on the memory address pool.
    Type: Grant
    Filed: May 17, 2005
    Date of Patent: September 29, 2009
    Assignee: Broadcom Corporation
    Inventor: Joseph Herbst
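The pool described above is essentially a stack of free addresses with a pointer to the top. A minimal sketch of the last-in-first-out allocate/release discipline:

```python
class AddressPool:
    """LIFO pool of memory addresses with a pointer to the next
    available address."""

    def __init__(self, size):
        self.pool = list(range(size))   # all addresses initially available
        self.pointer = size - 1         # next available address (top of stack)

    def allocate(self):
        addr = self.pool[self.pointer]  # read using last in, first out
        self.pointer -= 1               # adjust pointer on read
        return addr

    def release(self, addr):
        self.pointer += 1               # adjust pointer on write
        self.pool[self.pointer] = addr  # write released address back
```

A just-released address is the next one handed out, which keeps recently used buffer locations hot.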
  • Patent number: 7596738
    Abstract: One embodiment of the present invention provides a system that determines the cause of a correctable memory error. First, the system detects a correctable error during an access to a memory location in a main memory by a first processor, wherein the correctable error is detected by error detection and correction circuitry. Next, the system reads tag bits for a cache line associated with the memory location, wherein the tag bits contain address information for the cache line, as well as state information indicating a coherency protocol state for the cache line. The system then tests the memory location by causing the first processor to perform read and write operations to the memory location to produce test results. Finally, the system uses the test results and the tag bits to determine the cause of the correctable error, if possible.
    Type: Grant
    Filed: November 17, 2004
    Date of Patent: September 29, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: Stephen A. Chessin, Tarik P. Soydan, Louis Y. Tsien
  • Patent number: 7594087
    Abstract: A method and system for accessing a non-volatile memory is disclosed. The method includes writing a first stream of data to a first block of a first region of a non-volatile memory and detecting a full condition of the first block of the first region. Further, the method includes identifying data to be copied from the first block of the first region and copying the identified data from the first block of the first region to a second block of the first region of the non-volatile memory. The method also includes writing a second stream of data to the second block of the first region and writing a third stream of data to a first block of a second region of the non-volatile memory. In addition, the method includes detecting a full condition of the first block of the second region, identifying data to be copied from the first block of the second region and copying the identified data from the first block of the second region to a second block of the second region of the non-volatile memory.
    Type: Grant
    Filed: January 19, 2006
    Date of Patent: September 22, 2009
    Assignee: Sigmatel, Inc.
    Inventors: Josef Zeevi, Grayson Dale Abbott, Richard Sanders, Glenn Reinhardt
  • Publication number: 20090235010
    Abstract: An address generating unit generates a series of access-destination addresses for a burst access to the external memory, starting from an initial address to be accessed, so that the number of bits inverted by each address change is smallest. A data processing unit reads data held in a data holding unit and writes it to the external memory in order of the access-destination addresses, or reads data from the external memory in order of the access-destination addresses and writes it to the data holding unit.
    Type: Application
    Filed: March 9, 2009
    Publication date: September 17, 2009
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Tomoya SUZUKI
  • Patent number: 7590792
    Abstract: Information containing the addresses at which cache misses are generated is read from a cache memory. The number of cache misses generated at each such address contained in the information is totalized. The addresses whose cache-miss counts are totalized are then divided by set. Further, from the plurality of cache-miss addresses belonging to the same set, the group of addresses whose cache-miss counts are equal or close is extracted.
    Type: Grant
    Filed: September 6, 2006
    Date of Patent: September 15, 2009
    Assignee: Panasonic Corporation
    Inventors: Masumi Yamaga, Takanori Miyashita, Koichi Katou
  • Publication number: 20090222625
    Abstract: A data processing apparatus and method are provided for detecting cache misses. The data processing apparatus has processing logic for executing a plurality of program threads, and a cache for storing data values for access by the processing logic. When access to a data value is required while executing a first program thread, the processing logic issues an access request specifying an address in memory associated with that data value, and the cache is responsive to the address to perform a lookup procedure to determine whether the data value is stored in the cache. Indication logic is provided which in response to an address portion of the address provides an indication as to whether the data value is stored in the cache, this indication being produced before a result of the lookup procedure is available, and the indication logic only issuing an indication that the data value is not stored in the cache if that indication is guaranteed to be correct.
    Type: Application
    Filed: September 13, 2005
    Publication date: September 3, 2009
    Inventors: Mrinmoy Ghosh, Emre Özer, Stuart David Biles
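Indication logic with this one-sided guarantee can be modeled by a counting filter indexed by an address portion: a zero count means the line definitely is not cached, while a nonzero count means only "maybe", so the full lookup proceeds. The filter shape below is an assumption for illustration, not the patent's circuit:

```python
class MissIndicator:
    """Early miss indication that is only issued when guaranteed correct."""

    def __init__(self, buckets=16):
        self.counts = [0] * buckets
        self.buckets = buckets

    def _bucket(self, addr):
        return addr % self.buckets   # the "address portion" used to index

    def on_fill(self, addr):
        self.counts[self._bucket(addr)] += 1

    def on_evict(self, addr):
        self.counts[self._bucket(addr)] -= 1

    def definitely_miss(self, addr):
        # Zero count => no cached line maps here => guaranteed miss.
        # Nonzero => possibly cached; defer to the full lookup.
        return self.counts[self._bucket(addr)] == 0
```

Aliasing addresses can only turn a true miss into a "maybe", never a cached line into a false miss, which preserves the guarantee.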
  • Patent number: 7584206
    Abstract: A method and apparatus that, when generating a multimedia file in which encoded data management information is placed at the head of the file, estimates the size needed for the management information in advance and generates a file in which an empty space the size of the estimated size is reserved at the head of the file, with the encoded data directly recorded behind the empty space. If the reserved empty space is insufficient, a new file that reserves a larger empty space is generated and the recorded encoded data is copied and the management information that is ultimately generated is inserted in the head of the file, thus shortening contents creation time and moreover reducing storage space for multimedia contents designed for quick reproduction.
    Type: Grant
    Filed: September 1, 2005
    Date of Patent: September 1, 2009
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tomomi Fukuoka, Masahiko Takaku
  • Patent number: 7581083
    Abstract: As shown in FIG. 1, an operation-processing device of the present invention comprises a register array (11) having plural registers for holding an arbitrary value based on a write address Aw and a write control signal Sw and outputting this value based on a read address Ar, an ALU (12) for performing operations on this value, a decoder (13) for decoding an operation instruction from an operation program AP for operating this ALU (12), and an instruction-execution-controlling portion (50) for controlling the register array (11) and the ALU (12) in order to execute this operation instruction, wherein this instruction-execution-controlling portion (50) selects one of the registers based on the operation instruction and performs register-to-register addressing processing that, based on a value held by this selected register, selects another register.
    Type: Grant
    Filed: March 26, 2003
    Date of Patent: August 25, 2009
    Assignee: Sony Corporation
    Inventor: Tomohisa Shiga
  • Publication number: 20090198865
    Abstract: In at least one embodiment, a method of data processing in a data processing system having a memory hierarchy includes a processor core executing a storage-modifying memory access instruction to determine a memory address. The processor core transmits to a cache memory within the memory hierarchy a storage-modifying memory access request including the memory address, an indication of a memory access type, and, if present, a partial cache line hint signaling access to less than all granules of a target cache line of data associated with the memory address. In response to the storage-modifying memory access request, the cache memory performs a storage-modifying access to all granules of the target cache line of data if the partial cache line hint is not present and performs a storage-modifying access to less than all granules of the target cache line of data if the partial cache line hint is present.
    Type: Application
    Filed: February 1, 2008
    Publication date: August 6, 2009
    Inventors: RAVI K. ARIMILLI, GUY L. GUTHRIE, WILLIAM J. STARKE, DEREK E. WILLIAMS
  • Publication number: 20090187695
    Abstract: An apparatus handles concurrent address translation cache misses, and hits under those misses, while maintaining command order based upon virtual channel. Commands are stored in a command processing unit that maintains their ordering. A command buffer index (CBI) is assigned to each address sent from the command processing unit to an address translation unit. When an address translation cache miss occurs, a memory fetch request is sent. The CBI is passed back to the command processing unit with a signal indicating that the fetch request has completed. The command processing unit uses the CBI to locate the command and address to be reissued to the address translation unit.
    Type: Application
    Filed: January 12, 2009
    Publication date: July 23, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John D. Irish, Chad B. McBride, Ibrahim A. Ouda, Andrew H. Wottreng
  • Publication number: 20090172243
    Abstract: In one embodiment, the present invention includes a translation lookaside buffer (TLB) to store entries each having a translation portion to store a virtual address (VA)-to-physical address (PA) translation and a second portion to store bits for a memory page associated with the VA-to-PA translation, where the bits indicate attributes of information in the memory page. Other embodiments are described and claimed.
    Type: Application
    Filed: December 28, 2007
    Publication date: July 2, 2009
    Inventors: David Champagne, Abhishek Tiwari, Wei Wu, Christopher J. Hughes, Sanjeev Kumar, Shih-Lien Lu
  • Patent number: 7555591
    Abstract: The disclosure is directed to a computational system including a processor, cache memory accessible to the processor, and a memory management unit accessible to the processor. The processor is configured to access a virtual memory space to perform a first task and is configured to access the virtual memory space to perform a second task. The virtual memory space references first and second sets of task instructions associated with the first and second tasks, respectively. The virtual memory space references non-instruction data associated with the first task. The cache memory is configured to store the first set of task instructions and the non-instruction data. The memory management unit is configured to determine the physical memory location of the second set of task instructions. The computational system is configured to not write the first set of task instructions and the non-instruction data to a physical location beyond the cache memory.
    Type: Grant
    Filed: April 29, 2005
    Date of Patent: June 30, 2009
    Assignee: Sigmatel, Inc.
    Inventors: Russell Alvin Schultz, David A. Moore, Matthew Henson
  • Publication number: 20090132749
    Abstract: Systems and methods are disclosed for pre-fetching data into a cache memory system. These systems and methods comprise retrieving a portion of data from a system memory and storing a copy of the retrieved portion of data in a cache memory. These systems and methods further comprise monitoring data that has been placed into pre-fetch memory.
    Type: Application
    Filed: September 19, 2008
    Publication date: May 21, 2009
    Applicant: STMicroelectronics (Research & Development) Limited
    Inventors: Andrew Michael Jones, Stuart Ryan
  • Publication number: 20090132750
    Abstract: The present disclosure provides systems and methods for a cache memory and a cache load circuit. The cache load circuit is capable of retrieving a portion of data from the system memory and of storing a copy of the retrieved portion of data in the cache memory. In addition, the systems and methods comprise a monitoring circuit for monitoring accesses to data in the system memory.
    Type: Application
    Filed: September 19, 2008
    Publication date: May 21, 2009
    Applicant: STMicroelectronics (Research & Development) Limited
    Inventors: Andrew Michael Jones, Stuart Ryan
  • Patent number: 7536692
    Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine threads.
    Type: Grant
    Filed: November 6, 2003
    Date of Patent: May 19, 2009
    Assignee: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Wilson Y. Liao, Prashant R. Chandra, Jeen-Yuan Miin, Yim Pun
  • Patent number: 7529878
    Abstract: The disclosure is directed to a computational system including a processor and a memory management unit accessible to the processor. The processor is configured to access a common virtual memory space to perform a first task of a plurality of tasks and is configured to access the common virtual memory space to perform a second task of the plurality of tasks. The common virtual memory space references a first set of instructions associated with the first task and references a second set of instructions associated with the second task. The memory management unit is configured to determine a physical memory location of at least one of the first and second sets of instructions when the associated first or second task is to be performed by the processor.
    Type: Grant
    Filed: April 29, 2005
    Date of Patent: May 5, 2009
    Assignee: Sigmatel, Inc.
    Inventors: Russell Alvin Schultz, David A. Moore, Matthew Henson
  • Patent number: 7526610
    Abstract: A memory cache comprising: a data sector having a sector ID, wherein the data sector stores a data entry; a primary directory having a primary directory entry, wherein the position of the primary directory entry is defined by a congruence class value and a way value; and a secondary directory having a secondary directory entry corresponding to the data sector, wherein the secondary directory entry includes a primary ID field corresponding to the way value and a sector ID field operative to identify the sector ID.
    Type: Grant
    Filed: March 20, 2008
    Date of Patent: April 28, 2009
    Assignee: International Business Machines Corporation
    Inventors: Philip G. Emma, Robert K. Montoye, Vijayalakshmi Srinivasan
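The two-level directory can be sketched with dictionaries: a primary entry located by (congruence class, way), and per-sector secondary entries recording the owning way so the sector can be traced back to its primary entry. The structure below is a simplification of the claim, not the actual hardware organization:

```python
class SectoredDirectory:
    """Primary directory indexed by (congruence class, way); secondary
    directory entries tie each data sector back to its primary entry."""

    def __init__(self):
        self.primary = {}     # (congruence_class, way) -> tag
        self.secondary = {}   # sector_id -> (way, congruence_class)

    def install(self, congruence_class, way, tag, sector_id):
        self.primary[(congruence_class, way)] = tag
        self.secondary[sector_id] = (way, congruence_class)

    def lookup_sector(self, sector_id):
        # Follow the secondary entry's primary ID field (the way value)
        # back to the primary directory entry.
        way, congruence_class = self.secondary[sector_id]
        return self.primary[(congruence_class, way)]
```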
  • Patent number: 7519470
    Abstract: A location-based caching system provides the ability for a mobile communication device to dynamically provide content related to a user's location. Content may comprise a series of map segments that anticipate the route traveled by a user of the mobile device. Other related content may also be provided, for example, point of interest information related to the route traveled. The system tracks a present location of the mobile device and predicts a future location of the mobile device. Based upon the prediction of future location, the caching module determines whether content related to the future location is presently stored on the mobile device. If appropriate content is not on the mobile device, the caching module retrieves the content from a content server via a network connection. The content information may be contextually selected based upon, for example, user preferences, movement information, and device state information.
    Type: Grant
    Filed: March 15, 2006
    Date of Patent: April 14, 2009
    Assignee: Microsoft Corporation
    Inventors: Goetz P Brasche, Robert Fesl, Wolfgang Manousek, Ivo W Salmre
  • Patent number: 7512837
    Abstract: A method for recovering lost cache capacity in a multi-core chip having at least one defective core, including identifying the cores contained in the chip that are viable and identifying at least one core contained in the chip that is defective. The method also includes identifying the cache memory local to the defective core and determining a redistribution of the cache resources local to the at least one defective core among the viable cores. The method also features dividing the cache memory local to the at least one defective core according to the redistribution determination and determining the address information associated with the cache memory local to the at least one defective core. The method also features providing the address information associated with the cache memory associated with the defective core to at least one of the viable cores, facilitating the supplementation of the cache memory local to the viable cores with the cache memory associated with the defective core.
    Type: Grant
    Filed: April 4, 2008
    Date of Patent: March 31, 2009
    Assignee: International Business Machines Corporation
    Inventors: Diane Flemming, Ghadier R. Gholami, Octavian F. Herescu, William A. Maron, Mysore M. Srinivas
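    The redistribution step above can be sketched in a few lines. The even round-robin split and the data layout are assumptions for illustration; the patent covers determining a redistribution, not this particular policy.

    ```python
    def redistribute(cores):
        """Divide the cache memory local to each defective core among the
        viable cores (round-robin split is an illustrative assumption)."""
        viable = [c for c in cores if c["ok"]]
        for c in cores:
            if not c["ok"]:
                # Hand the defective core's cache lines to viable cores,
                # giving each viable core the address information it needs.
                for i, line in enumerate(c["cache"]):
                    viable[i % len(viable)]["cache"].append(line)
                c["cache"] = []
        return cores
    ```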
  • Patent number: 7505168
    Abstract: Various distributed client side printing methods and systems are described. In at least some embodiments, printer control and the job handling functions are bifurcated or separated by allowing jobs, drivers, settings and the like to be controlled by the print server or some other entity, while allowing clients to communicate jobs directly to a printer. The various embodiments can improve server performance by performing rendering and maintaining rendered data on the client side, and not communicating the rendered data to the server, while at the same time, allowing print servers to control the printer.
    Type: Grant
    Filed: December 30, 2004
    Date of Patent: March 17, 2009
    Assignee: Microsoft Corporation
    Inventors: Andrew R. Simonds, Mark A. Lawrence, Timothy J. Lytle
  • Publication number: 20090049226
    Abstract: Deleting a data volume from a storage system and freeing its storage space to make it available for allocation to a new volume is accomplished by zeroing only the associated metadata for the tracks contained in the freed storage space, which is then reused in a new volume allocation; an attempt is then made by the new volume to read a first record R0 of a track. If the first record R0 is stale, a determination is made as to whether a first user record R1 of the volume is stale. If record R1 is stale, the metadata or track format description (TFD) is modified whereby the entire track is indicated as being uninitialized and the first record R0 is uninitialized. If record R1 is not stale, the first record R0 is regenerated and the TFD is modified whereby the entire track is indicated as being initialized.
    Type: Application
    Filed: August 13, 2007
    Publication date: February 19, 2009
    Applicant: IBM CORPORATION
    Inventors: Susan K. Candelaria, Kurt A. Lovrien, James D. Marwood, JR., Beth A. Peterson, Kenneth W. Todd
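    The R0/R1 decision above reduces to a small branch, sketched here under assumed field names; the track dictionary, `regenerate_r0`, and the TFD string values are illustrative, not the actual on-disk format.

    ```python
    def regenerate_r0(track):
        # Placeholder: rebuild record R0 from the track's geometry (assumption).
        return {"cylinder": track["cyl"], "head": track["head"]}

    def handle_r0_read(track):
        """On reading a stale R0, check R1: if R1 is also stale, mark the
        whole track uninitialized; otherwise regenerate R0 and mark the
        track initialized."""
        if not track["r0_stale"]:
            return track                      # nothing to do
        if track["r1_stale"]:
            track["tfd"] = "uninitialized"    # entire track uninitialized
            track["r0"] = None
        else:
            track["r0"] = regenerate_r0(track)
            track["tfd"] = "initialized"
        return track
    ```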
  • Patent number: 7490200
    Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first cache directory to access the first cache array slice while using a second cache directory to access the second cache array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In the illustrative embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. An address tag associated with a load request is transmitted from the processor core with a designated bit that associates the address tag with only one of the cache array slices whose corresponding directory determines whether the address tag matches a currently valid cache entry.
    Type: Grant
    Filed: February 10, 2005
    Date of Patent: February 10, 2009
    Assignee: International Business Machines Corporation
    Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, William John Starke
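    The slice-selection step above, where a designated bit of the address tag routes a load to one of the two cache directories, can be sketched as follows; the bit position and the dictionary-backed directories are assumptions.

    ```python
    SLICE_BIT = 6  # illustrative choice of the designated bit

    def select_slice(address_tag, directories):
        """Route a load request's address tag to exactly one of two cache
        directories using a single designated bit of the tag; the selected
        directory then checks for a currently valid entry."""
        slice_id = (address_tag >> SLICE_BIT) & 1
        return slice_id, directories[slice_id].get(address_tag)
    ```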
  • Patent number: 7480759
    Abstract: A cascaded interconnect system including a memory controller, one or more memory modules, an upstream memory bus and a downstream memory bus. The one or more memory modules include a first memory module with cache data. The memory modules and the memory controller are interconnected by a packetized multi-transfer interface via the downstream memory bus and the upstream memory bus. The first memory module and the memory controller are in direct communication via the upstream memory bus and the downstream memory bus.
    Type: Grant
    Filed: July 3, 2007
    Date of Patent: January 20, 2009
    Assignee: International Business Machines Corporation
    Inventors: Kevin C. Gower, Mark W. Kellogg, Warren E. Maule, Thomas B. Smith, III, Robert B. Tremaine
  • Patent number: 7478202
    Abstract: Described is a technique for maintaining local cache coherency between endpoints using the connecting message fabric. Processors in a data storage system communicate using the message fabric. Each processor is an endpoint having its own local cache storage in which portions of global memory may be locally cached. A write through caching technique is described. Each local cache line of data of each processor is either in an invalid or a shared state. When a write to global memory is performed by a processor (write miss or a write hit), the following are performed atomically: the global memory is updated, other processor's local cache lines of the data are invalidated, verification of invalidation is received by the processor, and the processor's local copy is updated. Other processors' cache lines are invalidated by transmission of an invalidate command by the processor. A processor updates its local cache lines upon the next read miss or write miss of the updated cacheable global memory.
    Type: Grant
    Filed: October 4, 2006
    Date of Patent: January 13, 2009
    Assignee: EMC Corporation
    Inventors: Brett D. Niver, Steven R. Chalmer, Steven T. McClure
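    The four-step write sequence in the abstract above can be sketched as a toy model. Message-fabric traffic is simulated with direct method calls, and local lines are simply present (shared) or absent (invalid); both simplifications are assumptions for illustration.

    ```python
    class WriteThroughEndpoint:
        """Toy write-through endpoint: on a write, update global memory,
        invalidate peers' local copies, verify the invalidations, then
        update the local copy."""

        def __init__(self, global_mem, peers):
            self.local = {}              # locally cached lines (shared state)
            self.global_mem = global_mem
            self.peers = peers           # other endpoints on the fabric

        def invalidate(self, line):
            self.local.pop(line, None)   # drop our copy, if any
            return True                  # acknowledge the invalidation

        def write(self, line, value):
            self.global_mem[line] = value                    # 1. global memory
            acks = [p.invalidate(line) for p in self.peers]  # 2. invalidate peers
            assert all(acks)                                 # 3. verify acks
            self.local[line] = value                         # 4. local copy
    ```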
  • Patent number: 7475219
    Abstract: In one embodiment, the present invention includes a method of accessing a cache memory to determine whether requested data is present. In this embodiment, the method may include indexing a cache with a first index corresponding to a first memory region size, and indexing the cache with a second index corresponding to a second memory region size. The second index may be used if the requested data is not found using the first index.
    Type: Grant
    Filed: August 27, 2004
    Date of Patent: January 6, 2009
    Assignee: Marvell International Ltd.
    Inventors: Dennis M. O'Connor, Stephen J. Strazdus
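    The two-index probe described above is easy to sketch: try the index derived from the smaller region size first, and fall back to the larger-region index on a miss. The bit widths and the dictionary-backed cache are illustrative assumptions.

    ```python
    def lookup(cache, addr, small_bits=12, large_bits=20):
        """Index the cache with a first index for the first (smaller) memory
        region size; if the data is not found, index again with a second
        index for the second (larger) region size."""
        first = addr >> small_bits       # first index
        if first in cache:
            return cache[first]
        second = addr >> large_bits      # second index, used only on a miss
        return cache.get(second)
    ```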
  • Patent number: 7475093
    Abstract: Caching architecture that facilitates translation between schema data and relational structures. A schema translation component consumes schema data (e.g., XML schema data) having a schema structure. The schema structure is shredded into tables. A validation component interfaces to a memory management interface to facilitate loading only the necessary components to perform instance validation. During validation, only parts of the schema that are used are loaded and cached. A schema cache stores the in-memory representation of the schema optimized for instance validation. The schema components are loaded from metadata into the cache memory as read-only objects such that multiple users can use the in-memory objects for validation.
    Type: Grant
    Filed: July 20, 2005
    Date of Patent: January 6, 2009
    Assignee: Microsoft Corporation
    Inventors: Dragan Tomic, Shankar Pal, Stanislav A. Oks, Jonathan D. Morrison, Mark C. Benvenuto
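    The load-only-what-is-used behavior above is essentially a lazy cache of read-only schema components, sketched here with assumed names; the loader callback stands in for reading components from metadata.

    ```python
    class SchemaCache:
        """During validation, load a schema component from metadata only on
        first use and keep the in-memory object cached for reuse by
        subsequent validations."""

        def __init__(self, load_component):
            self._load = load_component
            self._cache = {}

        def component(self, name):
            if name not in self._cache:
                self._cache[name] = self._load(name)  # load only parts used
            return self._cache[name]
    ```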
  • Patent number: 7472218
    Abstract: A system and method for recording trace data while conserving cache resources includes generating trace data and creating a cache line containing the trace data. The cache line is assigned a tag which corresponds to an intermediate address designated for processing the trace data. The cache line also contains embedded therein an actual address in memory for storing the trace data, which may include either a real address or a virtual address. The cache line may be received at the intermediate address and parsed to read the actual address. The trace data may then be written to a location in memory corresponding to the actual address. By routing trace data through a designated intermediate address, CPU cache may be conserved for other more important or more frequently accessed data.
    Type: Grant
    Filed: September 8, 2006
    Date of Patent: December 30, 2008
    Assignee: International Business Machines Corporation
    Inventors: Carol Spanel, Andrew Dale Walls
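    The indirection above, tagging a trace cache line with a designated intermediate address while embedding the actual destination in the line, can be sketched as follows; the tag value and line layout are assumptions.

    ```python
    INTERMEDIATE_TAG = 0xFEED  # designated intermediate address (illustrative)

    def emit_trace(trace_data, actual_addr):
        # Build a cache line tagged with the intermediate address and
        # embed the actual memory address for the trace data in the line.
        return {"tag": INTERMEDIATE_TAG, "dest": actual_addr, "data": trace_data}

    def drain_trace(line, memory):
        # Receiver at the intermediate address parses the line to read the
        # actual address, then writes the trace data there.
        assert line["tag"] == INTERMEDIATE_TAG
        memory[line["dest"]] = line["data"]
    ```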
  • Patent number: 7469318
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. The first data bus can be one of a plurality of data busses in a first data bus set, and the second data bus can be one of a plurality of data busses in a second data bus set.
    Type: Grant
    Filed: February 10, 2005
    Date of Patent: December 23, 2008
    Assignee: International Business Machines Corporation
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Patent number: 7467377
    Abstract: Methods and apparatus to manage bypassing of a first cache are disclosed. In one such method, a load instruction having an expected latency greater than or equal to a predetermined threshold is identified. A request is then made to schedule the identified load instruction to have a predetermined latency. The software program is then scheduled. An actual latency associated with the load instruction in the scheduled software program is then compared to the predetermined latency. If the actual latency is greater than or equal to the predetermined latency, the load instruction is marked to bypass the first cache.
    Type: Grant
    Filed: October 22, 2002
    Date of Patent: December 16, 2008
    Assignee: Intel Corporation
    Inventors: Youfeng Wu, Li-Ling Chen
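    The bypass decision above can be sketched as a pass over the scheduled loads; the threshold value and the record fields are assumptions made for the example.

    ```python
    LATENCY_THRESHOLD = 8  # cycles; illustrative predetermined latency

    def mark_bypass(loads):
        """Identify loads whose expected latency meets the threshold,
        request the predetermined latency for them, and after scheduling
        mark any whose actual latency is at least the requested latency
        to bypass the first cache."""
        for ld in loads:
            if ld["expected"] >= LATENCY_THRESHOLD:
                ld["requested"] = LATENCY_THRESHOLD
                ld["bypass_l1"] = ld["actual"] >= ld["requested"]
            else:
                ld["bypass_l1"] = False
        return loads
    ```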
  • Patent number: 7464240
    Abstract: A solid-state disk drive includes a first portion of solid-state memory of a volatile nature, a second portion of solid-state memory of a non-volatile nature, a controller for managing the memories, and a power subsystem for protecting data in volatile memory in the event of loss of power.
    Type: Grant
    Filed: May 23, 2006
    Date of Patent: December 9, 2008
    Assignee: Data Ram, Inc.
    Inventors: Jason Caulkins, Michael Richard Beyer
  • Patent number: 7464181
    Abstract: The classification system of a network device includes a cache in which a mapping between predefined characteristics of TCP/IP packets and associated actions is stored in response to the first “Frequent Flyer” packet of a session. Selected characteristics from subsequent received packets of that session are correlated with the predefined characteristics and the stored actions are applied to the received packets if the selected characteristics and the predefined characteristics match, thus reducing the processing required for subsequent packets. The packets selected for caching may be data packets. For mismatched characteristics, the full packet search of the classification system is used to determine the action to apply to the received packet.
    Type: Grant
    Filed: September 11, 2003
    Date of Patent: December 9, 2008
    Assignee: International Business Machines Corporation
    Inventors: Everett A. Corl, Jr., Gordon T. Davis, Clark D. Jeffries, Natarajan Vaidhyanathan, Colin B. Verrilli
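    The fast path above can be sketched as a cache keyed on per-session packet characteristics; the key fields and the `full_search` callback are illustrative assumptions.

    ```python
    def classify(packet, cache, full_search):
        """Apply the cached action when a packet's characteristics match a
        prior session entry; otherwise run the full classification search
        and cache the resulting action for subsequent packets."""
        key = (packet["src"], packet["dst"], packet["proto"])  # assumed fields
        if key in cache:
            return cache[key]          # fast path: subsequent packets
        action = full_search(packet)   # full packet search on a mismatch
        cache[key] = action
        return action
    ```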
  • Patent number: 7461229
    Abstract: A machine-readable medium is provided having stored thereon a set of instructions that cause a controller of a solid-state disk having a first portion of solid-state memory of a volatile nature and a second portion of solid-state memory of a non-volatile nature to perform a method including (a) receiving at the controller, write data for writing to an assigned address in non-volatile memory, (b) determining at the controller if there is existing data associated with a write address in volatile memory, the write address referencing the assigned address, and (c) whether or not data held for the assigned address is found at act (b), writing the data into the volatile memory at a predesignated write address.
    Type: Grant
    Filed: May 23, 2006
    Date of Patent: December 2, 2008
    Assignee: Dataram, Inc.
    Inventors: Jason Caulkins, Michael Richard Beyer
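    Steps (a)-(c) above amount to a volatile write-cache path, sketched here under an assumed controller layout (a slot map plus a volatile buffer); the structure is illustrative only.

    ```python
    def controller_write(controller, addr, data):
        """Check whether the volatile memory already holds data for the
        assigned address; either way, write the data into volatile memory
        at its designated slot (non-volatile commit happens later)."""
        slot = controller["map"].get(addr)       # existing data for this address?
        if slot is None:
            slot = len(controller["volatile"])   # designate a new write slot
            controller["volatile"].append(None)
            controller["map"][addr] = slot
        controller["volatile"][slot] = data
    ```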
  • Publication number: 20080282019
    Abstract: According to one aspect, a portable communication device records a program being presented by a media presenting apparatus as media data, generates a query regarding a media channel and a program on that channel, which query includes said media data and sends said query to a system for determining a program on a media channel operated by a program determination service provider. The system receives the query, compares the query media data with data of a number of sets of reference media data related to at least one reception environment, where each set corresponds to a broadcast media channel, identifies the media channel, identifies a program in the media channel through using an electronic program guide, and sends data identifying the channel and the program to the portable communication device.
    Type: Application
    Filed: May 11, 2007
    Publication date: November 13, 2008
    Applicant: SONY ERICSSON MOBILE COMMUNICATIONS AB
    Inventor: Staffan LINCOLN