Addressing Cache Memories Patents (Class 711/3)
  • Patent number: 7120152
    Abstract: A method of routing a packet in a routing device having a main processor that includes a main cache table and an instant cache table is disclosed. The instant cache table stores a recent address and a recent interface associated with the most recent packet transmission made by the routing device. The method includes the steps of receiving a packet that includes a destination address, checking whether the destination address belongs to the routing device, checking whether the destination address is identical to the recent address if the destination address does not belong to the routing device, and transmitting the packet to the recent interface if the destination address is identical to the recent address. As a result, the core information related to routing path determination is stored not only in the routing table of the protocol layer but also in the main and instant cache tables included in the main processor. (See the code sketch following this entry.)
    Type: Grant
    Filed: December 27, 2001
    Date of Patent: October 10, 2006
    Assignee: LG Electronics Inc.
    Inventor: Sung Uk Park
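    A minimal C sketch of the instant-cache lookup described above; the single-entry structure, the placeholder helpers, and the 32-bit addresses are illustrative assumptions rather than details from the patent.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Single-entry "instant cache": address and interface of the last transmission. */
    struct instant_cache {
        uint32_t recent_addr;
        int      recent_iface;
        bool     valid;
    };

    /* Placeholder fallback path (main cache table / routing table not modeled). */
    static bool is_local_address(uint32_t dest) { return dest == 0x0A000001; }
    static int  lookup_main_cache(uint32_t dest) { (void)dest; return 2; }
    static void transmit(uint32_t dest, int iface) { printf("send %#x via if%d\n", dest, iface); }

    void route_packet(struct instant_cache *ic, uint32_t dest)
    {
        if (is_local_address(dest))              /* destination belongs to this device */
            return;
        if (ic->valid && dest == ic->recent_addr) {
            transmit(dest, ic->recent_iface);    /* instant-cache hit: reuse recent interface */
            return;
        }
        int iface = lookup_main_cache(dest);     /* miss: fall back to the main cache table */
        transmit(dest, iface);
        ic->recent_addr  = dest;                 /* remember the most recent decision */
        ic->recent_iface = iface;
        ic->valid        = true;
    }
    ```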
  • Patent number: 7117290
    Abstract: A processor comprises a cache, a first TLB, and a tag circuit. The cache comprises a data memory storing a plurality of cache lines and a tag memory storing a plurality of tags. Each of the tags corresponds to a respective one of the cache lines. The first TLB stores a plurality of page portions of virtual addresses identifying a plurality of virtual pages for which physical address translations are stored in the first TLB. The tag circuit is configured to identify one or more of the plurality of cache lines that are stored in the cache and are within the plurality of virtual pages. In response to a hit by a first virtual address in the first TLB and a hit by the first virtual address in the tag circuit, the tag circuit is configured to prevent a read of the tag memory in the cache.
    Type: Grant
    Filed: September 3, 2003
    Date of Patent: October 3, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gene W. Shen, S. Craig Nelson
  • Patent number: 7089397
    Abstract: A method for caching attribute data for matching attributes with physical addresses. The method includes storing a plurality of attribute entries in a memory, wherein the memory is configured to provide at least one attribute entry when accessed with a physical address, and wherein the attribute entry provided describes characteristics of the physical address.
    Type: Grant
    Filed: July 3, 2003
    Date of Patent: August 8, 2006
    Assignee: Transmeta Corporation
    Inventors: H. Peter Anvin, Guillermo J. Rozas, Alexander Klaiber, John P. Banning
  • Patent number: 7089376
    Abstract: In a system having a plurality of snooping masters coupled to a Bus Macro, a snoop filtering device and method are provided in at least one of the plurality of snooping masters. The snoop filtering device and method parse a snoop request issued by one of the plurality of snooping masters and return an Immediate Response if parsing indicates the requested data cannot possibly be contained in the responding snooping master. If parsing indicates otherwise, the responding snooping master searches its resources and returns the requested data if it is marked as updated. (See the code sketch following this entry.)
    Type: Grant
    Filed: May 21, 2003
    Date of Patent: August 8, 2006
    Assignee: International Business Machines Corporation
    Inventors: James N. Dieffenderfer, Bernard C. Drerup, Jaya P. Ganasan, Richard G. Hofmann, Thomas A. Sartorius, Thomas P. Speier, Barry J. Wolford
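    A hedged C sketch of one way the "cannot possibly be contained" check could work, using a coarse presence summary per address region; the summary structure, sizes, and hash are assumptions, since the abstract does not specify how the snoop request is parsed.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define REGION_BITS   12          /* track presence per 4 KiB region (assumed) */
    #define SUMMARY_WORDS 64

    struct snoop_filter {
        uint64_t present[SUMMARY_WORDS];  /* 1 bit per region hash: "might be cached here" */
    };

    static unsigned region_hash(uint64_t paddr)
    {
        return (unsigned)((paddr >> REGION_BITS) % (SUMMARY_WORDS * 64));
    }

    /* Returns true if the line CANNOT be in this master: send the Immediate Response. */
    bool snoop_filter_rejects(const struct snoop_filter *f, uint64_t paddr)
    {
        unsigned h = region_hash(paddr);
        return (f->present[h / 64] & (1ULL << (h % 64))) == 0;
    }

    /* Called when this master allocates a line, keeping the summary conservative. */
    void snoop_filter_note_fill(struct snoop_filter *f, uint64_t paddr)
    {
        unsigned h = region_hash(paddr);
        f->present[h / 64] |= 1ULL << (h % 64);
    }
    ```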
  • Patent number: 7086053
    Abstract: Methods and apparatus for enabling inconsistent or unsafe threads to efficiently reach a consistent or safe state when a requesting thread requests a consistent state are disclosed. According to one aspect of the present invention, a method for requesting a consistent state in a multi-threaded computing environment using a first thread includes acquiring a consistent state lock using the first thread, and identifying substantially all threads in the environment that are inconsistent. The state of the inconsistent threads is altered to a consistent state, and the first thread is notified when the states of the previously inconsistent threads have been altered to be consistent. Once the first thread is notified, the first thread releases the consistent state lock. In one embodiment, the method also includes performing a garbage collection after releasing the consistent state lock using the first thread.
    Type: Grant
    Filed: April 17, 2001
    Date of Patent: August 1, 2006
    Assignee: Sun Microsystems, Inc.
    Inventors: Dean R. E. Long, Nedim Fresko
  • Patent number: 7085885
    Abstract: A cache memory is disclosed that notifies other functional blocks in the microprocessor that a miss has occurred potentially N clocks sooner than the conventional method, where N is the number of stages in the cache pipeline. The multiple-pass cache receives a plurality of busy indicators from resources needed to complete various transaction types. The cache distinguishes between a first set of resources needed to complete a transaction when its cache line address hits in the cache and a second set of resources needed to complete the transaction when the address misses in the cache. If none of the second set of resources for the transaction type is busy on a miss, then the cache immediately signals a miss rather than retrying the transaction by sending it back through the cache pipeline and incurring N additional clock cycles before signaling the miss. (See the code sketch following this entry.)
    Type: Grant
    Filed: October 7, 2002
    Date of Patent: August 1, 2006
    Inventor: James N. Hardage, Jr.
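    A small C sketch of the early-miss decision, assuming each transaction type carries two resource bitmasks (hit path and miss path); the enum, mask layout, and names are illustrative.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    enum txn_type { TXN_LOAD, TXN_STORE, TXN_PREFETCH, TXN_TYPES };

    struct miss_policy {
        uint32_t hit_resources[TXN_TYPES];   /* resources needed to complete a hit  */
        uint32_t miss_resources[TXN_TYPES];  /* resources needed to complete a miss */
    };

    /* true  -> signal the miss now, saving the N extra pipeline clocks
     * false -> a needed miss-path resource is busy; retry through the pipeline */
    bool signal_miss_early(const struct miss_policy *p, enum txn_type t,
                           uint32_t busy, bool tag_hit)
    {
        if (tag_hit)
            return false;                        /* not a miss at all */
        return (busy & p->miss_resources[t]) == 0;
    }
    ```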
  • Patent number: 7080213
    Abstract: A system and method for reducing shared memory write overhead in a multiprocessor system. In one embodiment, a multiprocessing system implements a method comprising storing, in a store buffer, an indication of obtained store permission corresponding to a particular address. The indication may be, for example, the address of a cache line for which write permission has been obtained. Obtaining the write permission may include locking and modifying an MTAG or other coherence state entry. The method further comprises determining whether the indication of obtained store permission corresponds to the address of a write operation to be performed. If it does, the write operation is performed without invoking the corresponding global coherence operations. (See the code sketch following this entry.)
    Type: Grant
    Filed: December 16, 2002
    Date of Patent: July 18, 2006
    Assignee: Sun Microsystems, Inc.
    Inventors: Oskar Grenholm, Zoran Radovic, Erik E. Hagersten
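    A brief C sketch of checking the store buffer's record of obtained store permission before invoking coherence; the single-entry buffer, line size, and helper stubs are assumptions, and the MTAG locking itself is not modeled.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define LINE_SHIFT 6   /* 64-byte cache lines, assumed */

    struct store_buffer {
        uint64_t permitted_line;   /* line address for which store permission is held */
        bool     valid;
    };

    /* Placeholder: in the patent this locks and modifies an MTAG/coherence entry. */
    static void acquire_write_permission(uint64_t line) { (void)line; }
    static void do_store(uint64_t addr, uint64_t value) { (void)addr; (void)value; }

    void coherent_store(struct store_buffer *sb, uint64_t addr, uint64_t value)
    {
        uint64_t line = addr >> LINE_SHIFT;
        if (!(sb->valid && sb->permitted_line == line)) {
            acquire_write_permission(line);   /* global coherence action only when needed */
            sb->permitted_line = line;
            sb->valid = true;
        }
        do_store(addr, value);                /* permission already recorded: no global traffic */
    }
    ```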
  • Patent number: 7076609
    Abstract: Cache sharing for a chip multiprocessor. In one embodiment, a disclosed apparatus includes multiple processor cores, each having an associated cache. A control mechanism is provided to allow sharing between caches that are associated with individual processor cores.
    Type: Grant
    Filed: September 20, 2002
    Date of Patent: July 11, 2006
    Assignee: Intel Corporation
    Inventors: Vivek Garg, Jagannath Keshava
  • Patent number: 7073026
    Abstract: A microprocessor including a level two cache memory which supports multiple accesses per cycle. The microprocessor includes an execution unit coupled to a cache memory subsystem which includes a cache memory coupled to a plurality of buses. The cache memory includes a plurality of independently accessible storage blocks. The buses may be coupled to convey a plurality of cache access requests to each of the storage blocks. In response to the plurality of cache access requests being conveyed on the plurality of cache buses, different ones of the storage blocks are concurrently accessible.
    Type: Grant
    Filed: November 26, 2002
    Date of Patent: July 4, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Mitchell Alsup
  • Patent number: 7072986
    Abstract: A management display method suited to each type of interface and device is provided in an environment where host computers are interconnected with storage apparatuses through plural types of interfaces. The management host computer includes a display apparatus and allows a user to select either a physical view, which displays the physical topology between each host and the storage subsystems, or a logical view, which displays the connecting relation between the devices of the storage subsystem and each host computer. The management host computer collects information on the Fibre Channel and Ethernet interfaces and on the access limitations of each device included in each host computer and storage subsystem, and then displays the connecting relation according to the view selected by the user, based on the collected information.
    Type: Grant
    Filed: February 8, 2002
    Date of Patent: July 4, 2006
    Assignee: Hitachi, Ltd.
    Inventors: Manabu Kitamura, Kenichi Takamoto
  • Patent number: 7069387
    Abstract: A method for optimizing a cache memory used for multitexturing in a graphics system is implemented. The graphics system comprises a texture memory, which stores texture data comprised in texture maps, coupled to a texture cache memory. Active texture maps for an individual primitive, for example a triangle, are identified, and the texture cache memory is divided into partitions. In one embodiment, the number of texture cache memory partitions equals the number of active texture maps. Each texture cache memory partition corresponds to a respective single active texture map, and is operated as a direct mapped cache for its corresponding respective single active texture map. In one embodiment, each texture cache memory partition is further operated as an associative cache for the texture data comprised in the partition's corresponding respective single active texture map. The cache memory is dynamically re-configured for each primitive.
    Type: Grant
    Filed: March 31, 2003
    Date of Patent: June 27, 2006
    Assignee: Sun Microsystems, Inc.
    Inventor: Brian D. Emberling
  • Patent number: 7069380
    Abstract: In order to manage the various types of attribute information within the storage-device system, the storage-device system includes the following within a file-access controlling memory: a database for managing index information on the contents of the files together with an index retrieval program, a database for managing the attribute information on the files, and a database for managing the storage positions of the blocks that make up a file. When the storage-device system receives an access request for a file, it uses these databases to access the target file.
    Type: Grant
    Filed: September 4, 2003
    Date of Patent: June 27, 2006
    Assignee: Hitachi, Ltd.
    Inventors: Junji Ogawa, Naoto Matsunami, Masaaki Iwasaki, Koji Sonoda, Kenichi Tsukiji
  • Patent number: 7058852
    Abstract: The present invention discloses a method and system for providing defect management of a bulk data storage media wherein logical addresses of media data blocks are continuously slipped to omit all media data blocks determined to be defective at the time of an initial media format. Thereafter, selectable parameters are utilized to define a logical zone including both a user data area and corresponding replacement data area on the media such that proper selection of the parameters provides defect management optimized for a particular use of the media. (See the code sketch following this entry.)
    Type: Grant
    Filed: January 2, 2001
    Date of Patent: June 6, 2006
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: J. Robert Sims, III, Kyle Way
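    A short C sketch of address slipping against a sorted list of format-time defects; the list representation is an assumption, but shifting each logical block past every earlier defective block is one common way to realize the slipping described.
    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* 'defects' is a sorted (ascending) list of defective physical block numbers
     * recorded at format time. Each defect at or before the candidate physical
     * block pushes the mapping one block further. */
    uint64_t slip_logical_to_physical(uint64_t lba,
                                      const uint64_t *defects, size_t ndefects)
    {
        uint64_t pba = lba;
        for (size_t i = 0; i < ndefects && defects[i] <= pba; i++)
            pba++;
        return pba;
    }
    ```
    With defects {2, 3}, logical blocks 0, 1, 2, 3 map to physical blocks 0, 1, 4, 5.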
  • Patent number: 7058784
    Abstract: A method is proposed for managing the access procedure for large-block flash memory by employing a page cache block, so as to reduce the occurrence of swap operations. At least one block of the nonvolatile memory is used as a page cache block. When a host requests to write data to the storage device, the controller writes the last page of the data into one available page of the page cache block. A block structure is defined in the controller comprising a data block for storing original data, a writing block for temporary data storage during the access operation, and a page cache block for storing the last page of data to be written.
    Type: Grant
    Filed: July 4, 2003
    Date of Patent: June 6, 2006
    Assignee: Solid State System Co., Ltd.
    Inventor: Chih-Hung Wang
  • Patent number: 7047364
    Abstract: Management of accessing data in a main memory and a cache memory includes, for each unit of data transferred from a first processor to a second processor, filling a cache set of the cache memory with data associated with addresses in the main memory that correspond to the cache set after the first processor writes a unit of data to addresses that correspond to the cache set. For each unit of data transferred from the second processor to the first processor, filling the cache set with data associated with addresses in the main memory that correspond to the cache set before the first processor reads a unit of data written by the second processor to addresses that correspond to the cache set. The data used to fill the cache set are associated with addresses that are different from the addresses associated with the unit of data.
    Type: Grant
    Filed: December 29, 2003
    Date of Patent: May 16, 2006
    Assignee: Intel Corporation
    Inventor: Mehdi M. Hassane
  • Patent number: 7039761
    Abstract: A system for performing caching procedures in an electronic network may include a user device that communicates with a server device in the electronic network for transmitting message information to a selected buddy device in the electronic network. A messaging application of the user device may temporarily store the message information into a cache device as a cached message until a successful transmission of the cached message to the buddy device becomes possible through the server device and over the electronic network.
    Type: Grant
    Filed: August 11, 2003
    Date of Patent: May 2, 2006
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Annie Wang, Steven Kennedy, Sho San Kou
  • Patent number: 7039751
    Abstract: A plurality of cache addressing functions are stored in main memory. A processor which executes a program selects one of the stored cache addressing functions for use in a caching operation during execution of the program by the processor. (See the code sketch following this entry.)
    Type: Grant
    Filed: June 4, 2004
    Date of Patent: May 2, 2006
    Assignee: Micron Technology, Inc.
    Inventor: Ole Bentz
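    A compact C sketch of keeping several cache addressing (set-index) functions and letting the running program select one; the particular index functions and table layout are illustrative.
    ```c
    #include <stdint.h>

    #define SET_BITS 7
    #define SET_MASK ((1u << SET_BITS) - 1)

    typedef uint32_t (*addr_fn)(uint64_t paddr);

    /* Three alternative set-index functions (illustrative). */
    static uint32_t index_low_bits(uint64_t a) { return (uint32_t)(a >> 6) & SET_MASK; }
    static uint32_t index_xor_fold(uint64_t a) { return (uint32_t)((a >> 6) ^ (a >> 13)) & SET_MASK; }
    static uint32_t index_mid_bits(uint64_t a) { return (uint32_t)(a >> 10) & SET_MASK; }

    static const addr_fn cache_addr_fns[] = { index_low_bits, index_xor_fold, index_mid_bits };
    static addr_fn current_fn = index_low_bits;

    /* Chosen by the executing program for subsequent caching operations. */
    void select_cache_addressing(unsigned which)
    {
        current_fn = cache_addr_fns[which % (sizeof cache_addr_fns / sizeof cache_addr_fns[0])];
    }

    /* Used on every cache lookup. */
    uint32_t cache_set_for(uint64_t paddr)
    {
        return current_fn(paddr);
    }
    ```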
  • Patent number: 7032123
    Abstract: The present invention provides a method and apparatus for error recovery in a system. The apparatus comprises a directory cache adapted to store at least one entry and a control unit. The control unit is adapted to determine if at least one uncorrectable error exists in the directory cache and to place the directory cache offline in response to determining that the error is uncorrectable. The method comprises detecting an error in data stored in a storage device in the system, and determining if the detected error is correctable. The method further comprises making at least a portion of the storage device unavailable to one or more resources in the system in response to determining that the error is uncorrectable.
    Type: Grant
    Filed: October 19, 2001
    Date of Patent: April 18, 2006
    Assignee: Sun Microsystems, Inc.
    Inventors: Donald Kane, Daniel P. Drogichen
  • Patent number: 7032067
    Abstract: This invention provides a system and method for implementing a middleware caching arrangement to minimize the device contention, network performance, and synchronization issues associated with enterprise security token usage. The invention comprises a token API mapped to a cache API. Logic associated with the token API preferentially retrieves information from a memory cache managed by the cache API. Mechanisms are included to periodically purge the memory cache of unused information.
    Type: Grant
    Filed: December 17, 2002
    Date of Patent: April 18, 2006
    Assignee: Activcard
    Inventor: Yves Massard
  • Patent number: 7027063
    Abstract: A method of storing a texel in a texel cache is discussed, comprising reading a t coordinate of the texel, the t coordinate comprising a plurality of bits; reading an s coordinate of the texel, the s coordinate comprising a plurality of bits; forming an offset by concatenating bits of the t coordinate with bits of the s coordinate; and forming an index by concatenating bits of the t coordinate with bits of the s coordinate and at least one bit of a level of detail. (See the code sketch following this entry.)
    Type: Grant
    Filed: May 6, 2005
    Date of Patent: April 11, 2006
    Assignee: NVIDIA Corporation
    Inventor: Alexander L. Minkin
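    A C sketch of forming the offset and index by concatenating s, t, and level-of-detail bits; the field widths and bit positions are assumptions, since the abstract does not fix them.
    ```c
    #include <stdint.h>

    #define S_OFF_BITS 2   /* assumed: 2 low s bits in the offset */
    #define T_OFF_BITS 2   /* assumed: 2 low t bits in the offset */
    #define S_IDX_BITS 3   /* assumed: next 3 s bits in the index  */
    #define T_IDX_BITS 3   /* assumed: next 3 t bits in the index  */

    /* offset = {t_lo, s_lo}: position of the texel inside a cache block */
    uint32_t texel_offset(uint32_t s, uint32_t t)
    {
        uint32_t s_lo = s & ((1u << S_OFF_BITS) - 1);
        uint32_t t_lo = t & ((1u << T_OFF_BITS) - 1);
        return (t_lo << S_OFF_BITS) | s_lo;
    }

    /* index = {lod bit, t_mid, s_mid}: selects the cache line/set */
    uint32_t texel_index(uint32_t s, uint32_t t, uint32_t lod)
    {
        uint32_t s_mid   = (s >> S_OFF_BITS) & ((1u << S_IDX_BITS) - 1);
        uint32_t t_mid   = (t >> T_OFF_BITS) & ((1u << T_IDX_BITS) - 1);
        uint32_t lod_bit = lod & 1u;          /* at least one level-of-detail bit */
        return (lod_bit << (S_IDX_BITS + T_IDX_BITS)) | (t_mid << S_IDX_BITS) | s_mid;
    }
    ```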
  • Patent number: 7023741
    Abstract: In a semiconductor integrated circuit, an internal circuit is capable of executing a first operation and a second operation concurrently, and an output circuit outputs to the outside of the semiconductor integrated circuit information indicating whether or not the first operation is being executed and information indicating whether or not the second operation is executable.
    Type: Grant
    Filed: December 13, 2002
    Date of Patent: April 4, 2006
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hiroshi Nakamura, Kenichi Imamiya, Toshio Yamamura, Koji Hosono, Koichi Kawai
  • Patent number: 7024452
    Abstract: A method and system for file-system based caching can be used to improve efficiency and security at network sites. In one set of embodiments, the delivery of content and the storing of content component(s) formed during generation of the content may be performed by different software components. Content that changes at a relatively high frequency or is likely to be regenerated between requests may not have some or all of its corresponding files cached. Additionally, extra white space may be removed before storing to reduce the file size. File mapping may be performed to ensure that a directory within the cache will have an optimal number of files. Security at the network site may be increased by using an internally generated filename that is not used or seen by the client computer. Many variations may be used in achieving any one or more of the advantages described herein.
    Type: Grant
    Filed: July 15, 2002
    Date of Patent: April 4, 2006
    Assignee: Vignette Corporation
    Inventors: Conleth S. O'Connell, Jr., Maxwell J. Berenson, N. Issac Rajkumar
  • Patent number: 7007031
    Abstract: System and method of data unit management in a decoding system employing a decoding pipeline. Each incoming data unit is assigned a memory element and is stored in the assigned memory element. Each decoding module gets the data to be operated on, as well as the control data, for a given data unit from the assigned memory element. Each decoding module, after performing its decoding operations on the data unit, deposits the newly processed data back into the same memory element. In one embodiment, the assigned memory locations comprise a header portion for holding the control data corresponding to the data unit and a data portion for holding the substantive data of the data unit. The header information is written to the header portion of the assigned memory element once and accessed by the various decoding modules throughout the decoding pipeline as needed. The data portion of memory is used/shared by multiple decoding modules.
    Type: Grant
    Filed: April 1, 2002
    Date of Patent: February 28, 2006
    Assignee: Broadcom Corporation
    Inventors: Alexander G. MacInnis, José R. Alvarez, Sheng Zhong, Xiaodong Xie, Vivian Hsiun
  • Patent number: 6990551
    Abstract: A system and method for reducing linear address aliasing is described. In one embodiment, a portion of a linear address is combined with a process identifier, e.g., a page directory base pointer, to form an adjusted-linear address. The page directory base pointer is unique to a process, and combining it with a portion of the linear address produces an adjusted-linear address that provides a high probability of no aliasing. A portion of the adjusted-linear address is used to search an adjusted-linear-addressed cache memory for a data block specified by the linear address. If the data block does not reside in the adjusted-linear-addressed cache memory, then a replacement policy selects one of the cache lines in the adjusted-linear-addressed cache memory and replaces the data block of the selected cache line with a data block located at a physical address produced from translating the linear address. (See the code sketch following this entry.)
    Type: Grant
    Filed: August 13, 2004
    Date of Patent: January 24, 2006
    Assignee: Intel Corporation
    Inventors: Herbert H. J. Hum, Stephan J. Jourdan, Per H. Hammarlund
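    A C sketch of one way to form an adjusted-linear address from a linear address and the page directory base pointer and then derive a cache set; the XOR combination and bit widths are assumptions, not the patent's combining function.
    ```c
    #include <stdint.h>

    #define SET_BITS   9
    #define LINE_SHIFT 6

    /* Combine the page-number portion of the linear address with the
     * process-unique page directory base pointer (XOR chosen for illustration). */
    uint64_t adjusted_linear(uint64_t linear, uint64_t pdb_pointer)
    {
        uint64_t page_part = linear >> 12;
        return ((page_part ^ (pdb_pointer >> 12)) << 12) | (linear & 0xFFF);
    }

    /* A portion of the adjusted-linear address indexes the cache. */
    uint32_t adjusted_cache_set(uint64_t linear, uint64_t pdb_pointer)
    {
        uint64_t adj = adjusted_linear(linear, pdb_pointer);
        return (uint32_t)(adj >> LINE_SHIFT) & ((1u << SET_BITS) - 1);
    }
    ```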
  • Patent number: 6977657
    Abstract: A data processing system has main memory and one or more caches. Data from main memory is cached while mitigating the effects of address pattern dependency. Main memory physical addresses are translated into main memory virtual addresses under the control of an operating system. The translation occurs on a page-by-page basis such that some of the virtual address bits are the same as some of the physical address bits. A portion of the address bits that are the same is selected, and cache offset values are generated from the selected portion. Data is written to the cache at offset positions derived from the cache offset values. (See the code sketch following this entry.)
    Type: Grant
    Filed: August 8, 2002
    Date of Patent: December 20, 2005
    Assignee: Autodesk Canada Co.
    Inventor: Benoit Belley
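    A C sketch built on the observation that, with 4 KiB pages, the low address bits are identical in the virtual and physical forms; the selected bit slice and offset width are assumptions.
    ```c
    #include <stdint.h>

    #define PAGE_BITS      12   /* 4 KiB pages: bits [11:0] are shared by VA and PA */
    #define SELECT_LO       6   /* assumed: use bits [11:6] of the page offset */
    #define CACHE_OFF_BITS  6

    /* Works identically whether 'addr' is the virtual or the physical address,
     * because only translation-invariant bits are used. */
    uint32_t cache_offset_from_shared_bits(uint64_t addr)
    {
        uint64_t shared = addr & ((1ull << PAGE_BITS) - 1);
        return (uint32_t)(shared >> SELECT_LO) & ((1u << CACHE_OFF_BITS) - 1);
    }
    ```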
  • Patent number: 6976117
    Abstract: A processor system having a cache array for storing virtual tag information and physical tag information, and corresponding comparators associated with the array to determine cache hits. Information from the virtual tag array and the physical tag array may be accessed together.
    Type: Grant
    Filed: August 13, 2002
    Date of Patent: December 13, 2005
    Assignee: Intel Corporation
    Inventors: Lawrence T. Clark, Dan W. Patterson, Stephen J. Strazdus
  • Patent number: 6965962
    Abstract: A computer implemented method of managing processor requests to load data items provides for the classification of the requests based on the type of data being loaded. In one approach, a pointer cache is used, where the pointer cache is dedicated to data items that contain pointers. In other approaches, the cache system replacement scheme is modified to age pointer data items more slowly than non-pointer data items. By classifying load requests, cache misses on pointer loads can be overlapped regardless of whether the pointer loads are part of a linked list of data structures.
    Type: Grant
    Filed: December 17, 2002
    Date of Patent: November 15, 2005
    Assignee: Intel Corporation
    Inventor: Gad S. Sheaffer
  • Patent number: 6963823
    Abstract: Design spaces for systems, including hierarchical systems, are programmatically validity filtered and quality filtered to produce validity sets and quality sets, reducing the number of designs to be evaluated in selecting a system design for a particular application. Validity filters and quality filters are applied to both system designs and component designs. Component validity sets are combined as Cartesian products to form system validity sets that can be further validity filtered. Validity filters are defined by validity predicates that are functions of discrete system parameters and that evaluate as TRUE for potentially valid systems. For some hierarchical systems, the system validity predicate is a product of component validity predicates. Quality filters use an evaluation metric produced by an evaluation function that permits comparing designs and preparing a quality set of selected designs. In some cases, the quality set is a Pareto set or an approximation thereof.
    Type: Grant
    Filed: February 10, 2000
    Date of Patent: November 8, 2005
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Santosh G. Abraham, Robert S. Schreiber, B. Ramakrishna Rau
  • Patent number: 6961280
    Abstract: Techniques are provided for recycling addresses in memory blocks. Address signals in memory blocks are stored temporarily in a set of parallel coupled address registers. The address registers transfer the address signals to an address decoder block, which decodes the address signals. The address decoder block transfers the decoded addresses to a memory array. A stall state occurs when the cache memory block needs a new set of data to replace the old set of data. Address signals are stored in the address registers during the stall state by coupling each register's output to its data input using a series of multiplexers. The multiplexers are controlled by an address stall signal that indicates the onset and the end of a stall state. After the end of a stall state, the address registers store the next address signal received at the memory block.
    Type: Grant
    Filed: December 8, 2003
    Date of Patent: November 1, 2005
    Assignee: Altera Corporation
    Inventors: Philip Pan, Chiakang Sung, Joseph Huang, Yan Chong, Johnson Tan
  • Patent number: 6961804
    Abstract: Caches are associated with processors, such that multiple caches may be associated with multiple processors. This association may be different for different main memory address ranges. The techniques of the invention are flexible, as a system designer can choose how the caches are associated with processors and main memory banks, and the association between caches, processors, and main memory banks may be changed while the multiprocessor system is operating. Cache coherence may or may not be maintained. An effective address in an illustrative embodiment comprises an interest group and an associated address. The interest group is an index into a cache vector table; an entry in the cache vector table, together with the associated address, is used to select one of the caches. This selection can be pseudo-random. Alternatively, in some applications, the cache vector table may be eliminated, with the interest group directly encoding the subset of caches to use. (See the code sketch following this entry.)
    Type: Grant
    Filed: June 28, 2002
    Date of Patent: November 1, 2005
    Assignee: International Business Machines Corporation
    Inventors: Monty Montague Denneau, Peter Heiner Hochschild, Henry Stanley Warren, Jr.
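    A C sketch of selecting a cache through the cache vector table: the interest group indexes the table, and the associated address picks one of the eligible caches pseudo-randomly; the table sizes, bitmask encoding, and hash are assumptions.
    ```c
    #include <stdint.h>

    #define NUM_CACHES        16
    #define VECTOR_TABLE_SIZE 256

    /* Bit i set in an entry means cache i is eligible for that interest group. */
    static uint16_t cache_vector_table[VECTOR_TABLE_SIZE];

    int select_cache(uint32_t interest_group, uint64_t assoc_addr)
    {
        uint16_t vec = cache_vector_table[interest_group % VECTOR_TABLE_SIZE];
        if (vec == 0)
            return -1;                                   /* no cache mapped */

        int eligible[NUM_CACHES], n = 0;                 /* collect eligible caches */
        for (int i = 0; i < NUM_CACHES; i++)
            if (vec & (1u << i))
                eligible[n++] = i;

        uint64_t h = assoc_addr * 0x9E3779B97F4A7C15ull; /* pseudo-random spread */
        return eligible[h % (uint64_t)n];
    }
    ```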
  • Patent number: 6961822
    Abstract: Free memory can be managed by creating a free list whose entries contain the addresses of free memory locations. A portion of this free list can then be cached in a cache that has an upper threshold and a lower threshold. Additionally, a plurality of free lists are created for a plurality of memory banks in a plurality of memory channels, one free list for each memory bank in each memory channel. Entries from these free lists are written to a global cache, and the entries written to the global cache are distributed between the memory channels and memory banks. (See the code sketch following this entry.)
    Type: Grant
    Filed: August 27, 2003
    Date of Patent: November 1, 2005
    Assignee: Redback Networks Inc.
    Inventors: Ranjit J. Rozario, Ravikrishna Cherukuri
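    A C sketch of a global free-list cache kept between a low and a high water mark, refilled from and spilled to per-channel, per-bank free lists in round-robin order; the sizes, thresholds, and placeholder list accessors are assumptions.
    ```c
    #include <stddef.h>
    #include <stdint.h>

    #define CHANNELS   2
    #define BANKS      4
    #define CACHE_CAP  64
    #define LOW_WATER  16
    #define HIGH_WATER 48

    struct free_cache {
        uint32_t entry[CACHE_CAP];
        size_t   count;
        unsigned next_list;      /* round-robin cursor over CHANNELS*BANKS lists */
    };

    /* Placeholder accessors for the per-channel, per-bank free lists in memory. */
    static uint32_t freelist_pop(unsigned ch, unsigned bank) { return (ch << 28) | (bank << 24); }
    static void     freelist_push(unsigned ch, unsigned bank, uint32_t a) { (void)ch; (void)bank; (void)a; }

    void rebalance(struct free_cache *fc)
    {
        while (fc->count < LOW_WATER) {                  /* refill from memory free lists */
            unsigned l = fc->next_list++ % (CHANNELS * BANKS);
            fc->entry[fc->count++] = freelist_pop(l / BANKS, l % BANKS);
        }
        while (fc->count > HIGH_WATER) {                 /* spill excess back to memory */
            unsigned l = fc->next_list++ % (CHANNELS * BANKS);
            freelist_push(l / BANKS, l % BANKS, fc->entry[--fc->count]);
        }
    }
    ```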
  • Patent number: 6958757
    Abstract: The method of one embodiment of the invention is for the CPU to read a subset of consecutive pixels from RAM and cache each such pixel in the WC Cache (and load corresponding blocks into the L2 Cache). These reads and loads continue until the capacity of the L2 Cache is reached, and then these blocks (a “band”) are iteratively processed until the entire band in the L2 Cache has been written to the frame buffer via the WC Cache. Once this is complete, the process “dumps” the L2 Cache (that is, it ignores the existing blocks and allows them to be naturally pushed out by subsequent loads) and reads the next band of consecutive pixels (and loads their blocks). This process continues until the portrait-oriented graphic is entirely loaded.
    Type: Grant
    Filed: July 18, 2003
    Date of Patent: October 25, 2005
    Assignee: Microsoft Corporation
    Inventor: Donald David Karlov
  • Patent number: 6957360
    Abstract: The present invention discloses a method and system for providing defect management of a bulk data storage media wherein logical addresses of media data blocks are continuously slipped to omit all media data blocks determined to be defective at the time of an initial media format. Thereafter, selectable parameters are utilized to define a logical zone including both a user data area and corresponding replacement data area on the media such that proper selection of the parameters provides defect management optimized for a particular use of the media.
    Type: Grant
    Filed: March 14, 2003
    Date of Patent: October 18, 2005
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: J. Robert Sims, III, Kyle Way
  • Patent number: 6954827
    Abstract: A multi-way set-associative cache memory is configured to operate only with those ways of the tag and data memories that operate normally, and excludes those ways of the tag and data memories that are determined to be incurably defective. By reducing the size of the cache memory to exclude the defective cells, the present invention is capable of preventing scrapping or discarding of the entire high-priced processor chip in which CPU and cache memory are integrated into a single chip.
    Type: Grant
    Filed: December 2, 2002
    Date of Patent: October 11, 2005
    Assignee: Samsung Electronics, Co., Ltd.
    Inventors: Jae-Hong Park, Jong-Taek Kwak, Jin-Ho Kwack
  • Patent number: 6954822
    Abstract: Methods and apparatuses for mapping cache contents to memory arrays. In one embodiment, an apparatus includes a processor portion and a cache controller that maps the cache ways to memory banks. In one embodiment, each bank includes data from one cache way. In another embodiment, each bank includes data from each way. In another embodiment, memory array banks contain data corresponding to sequential cache lines.
    Type: Grant
    Filed: August 2, 2002
    Date of Patent: October 11, 2005
    Assignee: Intel Corporation
    Inventors: Kuljit S. Bains, Herbert Hum, John Halbert
  • Patent number: 6947052
    Abstract: In general, and in a form of the present invention, a method is provided for reducing execution time of a program executed on a digital system by improving hit rate in a cache of the digital system. This is done by determining cache performance during execution of the program over a period of time as a function of address locality, and then identifying occurrences of cache conflict between two program modules. One of the conflicting program modules is then relocated so that cache conflict is eliminated or at least reduced. In one embodiment of the invention, a 2D plot of cache operation is provided as a function of address versus time for the period of time. A set of cache misses having temporal locality and spatial locality is identified as a horizontally arranged grouping of pixels at a certain address locality having a selected color indicative of a cache miss.
    Type: Grant
    Filed: July 9, 2002
    Date of Patent: September 20, 2005
    Assignee: Texas Instruments Incorporated
    Inventor: Tor E. Jeremiassen
  • Patent number: 6938129
    Abstract: One embodiment of a distributed memory module cache includes tag memory and associated logic implemented at the memory controller end of a memory channel. The memory controller is coupled to at least one memory module by way of a point-to-point interface. The data cache and associated logic are located in one or more buffer components on each of the memory modules.
    Type: Grant
    Filed: December 31, 2001
    Date of Patent: August 30, 2005
    Assignee: Intel Corporation
    Inventor: Howard S. David
  • Patent number: 6924811
    Abstract: A method of storing a texel in a texel cache is discussed, comprising reading a t coordinate of the texel, the t coordinate comprising a plurality of bits; reading an s coordinate of the texel, the s coordinate comprising a plurality of bits; forming an offset by concatenating bits of the t coordinate with bits of the s coordinate; and forming an index by concatenating bits of the t coordinate with bits of the s coordinate.
    Type: Grant
    Filed: November 13, 2000
    Date of Patent: August 2, 2005
    Assignee: NVIDIA Corporation
    Inventor: Alexander L. Minkin
  • Patent number: 6918005
    Abstract: A method and apparatus are provided for caching free cell pointers pointing to memory buffers configured to store data traffic of network connections. In one example, the method stores free cell pointers into a pointer random access memory (RAM). At least one free cell pointer is temporarily stored into internal cache configured to assist in lowering a frequency of reads from and writes to the pointer RAM. A request is received from an external integrated circuit for free cell pointers. Free cell pointers are sent to queues of the external integrated circuit, wherein each free cell pointer in a queue is configured to become a write cell pointer. At least one write cell pointer and a corresponding cell descriptor is received from the external integrated circuit. Free cell pointer counter values are then calculated in order to keep track of the free cell pointers.
    Type: Grant
    Filed: October 18, 2001
    Date of Patent: July 12, 2005
    Assignee: Network Equipment Technologies, Inc.
    Inventors: Nils Marchant, Philip D. Cole
  • Patent number: 6915385
    Abstract: An apparatus and method for unaligned cache reads is implemented. Data signals on a system bus are remapped into a cache line wherein a plurality of data values to be read from the cache are output in a group-wise fashion. The remapping defines a grouping of the data values in the cache line. A multiplexer is coupled to each group of storage units containing the data values, wherein a multiplexer input is coupled to each storage unit in the corresponding group. A logic array coupled to each MUX generates a control signal for selecting the data value output from each MUX. The control signal is generated in response to the read address which is decoded by each logic array.
    Type: Grant
    Filed: July 30, 1999
    Date of Patent: July 5, 2005
    Assignee: International Business Machines Corporation
    Inventors: Terry Lee Leasure, George Mcneil Lattimore, Robert Anthony Ross, Jr., Gus Wai Yan Yeung
  • Patent number: 6915373
    Abstract: In a cache system, a steering array indirectly maps queries to the cache cells, and a cyclic replacement mechanism allocates the cache cells for replacement in the cache. The cache system has a hash mechanism, a steering array, and a cyclic replacement counter. The hash mechanism computes a hash value from arguments in the query. The cache has a plurality of cache cells, and each cell has an answer and a usage bit indicating whether the cell is in use. The steering array stores a cache index based on the hash value, and the cache index points to a cache cell that may contain the answer to the query. The cyclic replacement counter addresses each cell in the cache to determine if the cell is still in use or may store a new answer. (See the code sketch following this entry.)
    Type: Grant
    Filed: April 30, 2002
    Date of Patent: July 5, 2005
    Assignee: Microsoft Corporation
    Inventor: John G. Bennett
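    A C sketch of the steering-array lookup and cyclic (clock-style) replacement; the sizes, hash, and answer type are assumptions, and a full implementation would also verify that the steered cell actually answers the query.
    ```c
    #include <stdbool.h>
    #include <string.h>

    #define STEER_SIZE 1024
    #define NUM_CELLS  256

    struct cell { char answer[64]; bool in_use; bool valid; };

    static unsigned steer[STEER_SIZE];   /* hash value -> candidate cell index */
    static struct cell cells[NUM_CELLS];
    static unsigned cyclic;              /* cyclic replacement counter */

    static unsigned hash_query(const char *q)
    {
        unsigned h = 2166136261u;                                /* FNV-1a */
        while (*q) h = (h ^ (unsigned char)*q++) * 16777619u;
        return h;
    }

    /* The steered cell MAY hold the answer; a real cache would confirm the match. */
    const char *cache_lookup(const char *query)
    {
        struct cell *c = &cells[steer[hash_query(query) % STEER_SIZE] % NUM_CELLS];
        if (c->valid) { c->in_use = true; return c->answer; }
        return NULL;
    }

    void cache_insert(const char *query, const char *answer)
    {
        /* Walk cells cyclically, clearing usage bits, until an unused cell is found. */
        while (cells[cyclic].in_use) { cells[cyclic].in_use = false; cyclic = (cyclic + 1) % NUM_CELLS; }
        unsigned victim = cyclic;
        cyclic = (cyclic + 1) % NUM_CELLS;
        strncpy(cells[victim].answer, answer, sizeof cells[victim].answer - 1);
        cells[victim].answer[sizeof cells[victim].answer - 1] = '\0';
        cells[victim].valid = true;
        cells[victim].in_use = true;
        steer[hash_query(query) % STEER_SIZE] = victim;          /* re-steer the hash */
    }
    ```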
  • Patent number: 6912637
    Abstract: The present invention is related to a method and apparatus for managing memory in a network switch. The method includes the steps of providing a memory comprising a plurality of memory locations configured to store data, and providing a memory address pool having a plurality of available memory addresses arranged therein, wherein each of the memory addresses corresponds to a specific memory location. The method further includes the steps of providing a memory address pointer, wherein the memory address pointer indicates the next available memory address in the memory address pool, and reading available memory addresses from the memory address pool using a last-in, first-out operation. The method also includes writing released memory addresses into the memory address pool and adjusting the position of the memory address pointer upon a read or write operation from the memory address pool. (See the code sketch following this entry.)
    Type: Grant
    Filed: June 23, 2000
    Date of Patent: June 28, 2005
    Assignee: Broadcom Corporation
    Inventor: Joseph Herbst
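    A short C sketch of the LIFO address pool and its next-available pointer; the pool size and error value are illustrative choices.
    ```c
    #include <stdint.h>

    #define POOL_SIZE 4096
    #define ADDR_NONE 0xFFFFFFFFu

    struct addr_pool {
        uint32_t addr[POOL_SIZE];
        uint32_t top;              /* "memory address pointer": next available slot */
    };

    /* Last in, first out: hand out the most recently released address. */
    uint32_t pool_get(struct addr_pool *p)
    {
        if (p->top == 0)
            return ADDR_NONE;      /* pool exhausted */
        return p->addr[--p->top];
    }

    /* Released addresses are written back and the pointer adjusted. */
    void pool_release(struct addr_pool *p, uint32_t released)
    {
        if (p->top < POOL_SIZE)
            p->addr[p->top++] = released;
    }
    ```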
  • Patent number: 6889292
    Abstract: Mechanisms and techniques disclose a system that provides access to data using a two part cache. The system receives a data access request containing a first data reference, such as an open systems request to access data. The system then obtains a history cache entry from a history cache based on the first data reference and obtains a partition cache entry from a partition cache based on the first data reference. Cache entries contain mappings between open systems reference locations and non-open systems references to locations in the data to be accessed. The system then performs a data access operation as specified by the data access request using a second data reference based upon either the history cache entry or the partition cache entry. Upon performance of the data access operation, the system then updates the history and partition caches with new cache entries and can resize the partition and history caches as needed.
    Type: Grant
    Filed: June 24, 2004
    Date of Patent: May 3, 2005
    Assignee: EMC Corporation
    Inventors: Jeffrey L. Alexander, Paul M. Bober, Rui Liang
  • Patent number: 6886171
    Abstract: A method and apparatus for input/output virtual address translation and validation assigns a range of memory to a device driver for its exclusive use. The device driver invokes system functionality for receiving a logical address and outputting a physical address having a length greater than the logical address. Another feature of the invention is a computer system providing input/output virtual address translation and validation for at least one peripheral device. In one embodiment, the computer system includes a scatter-gather table, an input/output virtual address cache memory associated with at least one peripheral device, and at least one device driver. In a further embodiment, the input/output virtual address cache memory includes an address validation cache and an address translation cache.
    Type: Grant
    Filed: February 20, 2001
    Date of Patent: April 26, 2005
    Assignee: Stratus Technologies Bermuda Ltd.
    Inventor: John MacLeod
  • Patent number: 6879981
    Abstract: The log file maintained by a DBMS is used, possibly in conjunction with hardware that listens to the communication between a computer and the storage controller to create cache buffers and a locking mechanism that enable applications running on one computer system to consistently access the data maintained and updated by a different computer.
    Type: Grant
    Filed: January 9, 2002
    Date of Patent: April 12, 2005
    Assignee: Corigin Ltd.
    Inventors: Michael Rothschild, Tsvi Misinai
  • Patent number: 6877069
    Abstract: An address translation logic and method for generating an instruction's operand address. The address generation logic includes an address generation circuit having adders that perform partial-sum additions of the instruction operand's base register value with a displacement value in the instruction. The address generation logic also includes a carry prediction history block associated with the instruction that provides predicted carry-in values to the adders during the partial-sum addition operation. In a related embodiment, the carry prediction history block, which in an advantageous embodiment is appended to the instruction, includes a predicted row access select (RAS) carry-in value, a predicted column access select (CAS) carry-in value, and a confirmation flag that indicates whether the previous carry-in predictions for the RAS and CAS values for the instruction were correct.
    Type: Grant
    Filed: March 28, 2002
    Date of Patent: April 5, 2005
    Assignee: International Business Machines Corporation
    Inventor: David Arnold Luick
  • Patent number: 6874056
    Abstract: A method and apparatus are disclosed for adaptively decreasing cache thrashing in a cache memory device. Cache performance is improved by automatically detecting thrashing of a set and then providing one or more augmentation frames as additional cache space. In one embodiment, the augmentation frames are obtained by mapping the blocks that map to a thrashed set to one or more additional, less utilized sets. The disclosed cache thrashing reduction system initially identifies a set that is likely to be experiencing thrashing, referred to herein as a thrashed set. Once thrashing is detected, the cache thrashing reduction system selects one or more additional sets to augment the thrashed set, referred to herein as the augmentation sets. In this manner, blocks of main memory that are mapped to a thrashed set are now mapped to an expanded group of sets (the thrashed set and the augmentation sets). (See the code sketch following this entry.)
    Type: Grant
    Filed: October 9, 2001
    Date of Patent: March 29, 2005
    Assignee: Agere Systems Inc.
    Inventors: Harry Dwyer, John Susantha Fernando
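    A C sketch of one plausible policy for detecting a thrashed set and granting it an augmentation set; the miss threshold, the choice of augmentation set, and the remap rule are assumptions left open by the abstract.
    ```c
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SETS         256
    #define THRASH_THRESHOLD  32   /* misses per sampling window, assumed */

    struct set_stats {
        uint32_t recent_misses;    /* miss count in the current window (reset logic omitted) */
        bool     augmented;
        unsigned augment_with;     /* augmentation set granted to a thrashed set */
    };

    static struct set_stats stats[NUM_SETS];

    static unsigned least_utilized_set(void)
    {
        unsigned best = 0;
        for (unsigned s = 1; s < NUM_SETS; s++)
            if (stats[s].recent_misses < stats[best].recent_misses) best = s;
        return best;
    }

    /* Called on every cache miss: a set that keeps missing is declared thrashed
     * and paired with a lightly used augmentation set. */
    void note_miss(unsigned set)
    {
        if (++stats[set].recent_misses >= THRASH_THRESHOLD && !stats[set].augmented) {
            stats[set].augment_with = least_utilized_set();
            stats[set].augmented = true;
        }
    }

    /* Blocks mapped to a thrashed set may use either the home set or its
     * augmentation set; here one address bit chooses between the two. */
    unsigned effective_set(uint64_t block_addr, unsigned home_set)
    {
        if (!stats[home_set].augmented)
            return home_set;
        return (block_addr >> 20) & 1 ? stats[home_set].augment_with : home_set;
    }
    ```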
  • Patent number: 6874057
    Abstract: A method and apparatus are disclosed for allocating a section of a cache memory to one or more tasks. A set index value that identifies a corresponding set in the cache memory is transformed to a mapped set index value that constrains a given task to the corresponding allocated section of the cache. The allocated section of the cache can be varied by selecting an appropriate map function. When the map function is embodied as a logical AND function, for example, individual sets can be included in an allocated section by setting a corresponding bit value to a binary value of one. A cache addressing scheme is also disclosed that permits a desired portion of a cache to be selectively allocated to one or more tasks. A desired location and size of the allocated section of sets of the cache memory may be specified. (See the code sketch following this entry.)
    Type: Grant
    Filed: October 9, 2001
    Date of Patent: March 29, 2005
    Assignee: Agere Systems Inc.
    Inventors: Harry Dwyer, John Susantha Fernando
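    A C sketch of one plausible reading of the AND-function map: set-index bits masked to zero are pinned, so the task is confined to the allocated subset of sets; the widths and mask semantics are assumptions.
    ```c
    #include <stdint.h>

    #define SET_BITS 8
    #define SET_MASK ((1u << SET_BITS) - 1)

    /* Each mask bit set to one lets the corresponding set-index bit vary; bits
     * masked to zero restrict the task to the allocated section of sets. */
    uint32_t mapped_set_index(uint32_t task_and_mask, uint64_t addr)
    {
        uint32_t set = (uint32_t)(addr >> 6) & SET_MASK;   /* conventional set index */
        return set & task_and_mask;                        /* constrained to the allocation */
    }
    ```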
  • Patent number: 6868472
    Abstract: In a cache memory control method and computer of the present invention, a cache memory is connected to a main memory and divided into a plurality of cache blocks, and a lock/unlock signal is supplied to the cache memory to either set a replace-inhibition state of at least one of the cache blocks, in which replacing that cache block to the main memory is inhibited, or reset the replace-inhibition state of at least one of the cache blocks such that replacing that cache block to the main memory is allowed. Either reading or writing of the main memory is performed by using the remaining cache blocks of the cache memory, other than the at least one of the cache blocks, such that, when the replace-inhibition state is set by the lock/unlock signal, replacing the at least one of the cache blocks to the main memory is inhibited during the reading or writing of the main memory.
    Type: Grant
    Filed: September 28, 2000
    Date of Patent: March 15, 2005
    Assignee: Fujitsu Limited
    Inventors: Hideo Miyake, Atsuhiro Suga, Yasuki Nakamura, Teruhiko Kamigata, Hitoshi Yoda, Hiroshi Okano, Yoshio Hirose
  • Patent number: 6862675
    Abstract: A main memory and a higher-speed local memory are externally connected to a microprocessor. The entire load module is developed in the main memory. Part or all of the instruction codes in the load module developed in the main memory are stored in the local memory. A memory management unit for data converts a logical address of the entire load module into a physical address of the main memory. A memory management unit for instructions converts a logical address of the instruction code stored in the local memory into a physical address of the local memory. A CPU core fetches the instruction code from the local memory at the time of execution of the instruction.
    Type: Grant
    Filed: August 31, 2000
    Date of Patent: March 1, 2005
    Assignee: Fujitsu Limited
    Inventor: Yasuhiro Wakimoto