Cache Patents (Class 345/557)
  • Patent number: 6928516
    Abstract: An image data processing system and method are disclosed in which image data is organized for fast and efficient transfer of image data to and from an image memory using a tile cache. Image data is stored in a memory having data words of a predetermined data width. Each data word includes a plurality of adjacently disposed image pixels of a single scan line. A set of consecutive data words corresponds to a two dimensional tile of the image whereby adjacent data words store image pixels of adjacent scan lines. The image data is transferred to a tile cache in these tiles. Following image processing on a tile of image data stored in the tile cache, the tile of image data is transferred back to the memory. The technique repeats for each tile of image data. Separate tiles of image data may be operated on by different data processors simultaneously.
    Type: Grant
    Filed: December 3, 2001
    Date of Patent: August 9, 2005
    Assignee: Texas Instruments Incorporated
    Inventor: Fred J. Reuter
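    Illustrative sketch (C; not part of the patent): the abstract above describes data words that each hold adjacent pixels of one scan line, with consecutive words covering adjacent scan lines of the same tile. Assuming, purely for illustration, a 4x4 tile (4 pixels per word, 4 words per tile), the data word holding a given pixel could be located roughly as below.
      #include <stdint.h>
      #include <stdio.h>

      /* Assumed tile geometry: one data word holds 4 horizontally adjacent
       * pixels, and 4 consecutive words cover 4 adjacent scan lines, so one
       * tile is a 4x4 block of pixels. */
      #define PIXELS_PER_WORD 4
      #define TILE_HEIGHT     4
      #define WORDS_PER_TILE  TILE_HEIGHT   /* one word per scan line of a tile */

      /* Index of the data word holding pixel (x, y) in an image that is
       * image_width pixels wide; whole tiles are laid out in raster order. */
      static uint32_t tile_word_index(uint32_t x, uint32_t y, uint32_t image_width)
      {
          uint32_t tiles_per_row = image_width / PIXELS_PER_WORD;
          uint32_t tile_index    = (y / TILE_HEIGHT) * tiles_per_row
                                 + (x / PIXELS_PER_WORD);
          return tile_index * WORDS_PER_TILE + (y % TILE_HEIGHT);
      }

      int main(void)
      {
          /* Pixels on four adjacent scan lines of one tile land in four
           * consecutive data words. */
          for (unsigned y = 0; y < 4; y++)
              printf("pixel (5, %u) -> word %u\n", y,
                     (unsigned)tile_word_index(5, y, 640));
          return 0;
      }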
  • Patent number: 6924812
    Abstract: A texture data reading apparatus includes a cache memory including a plurality of read ports and a plurality of regions to store pixel texture data. An address comparator includes a plurality of input ports to receive incoming pixels, wherein the address comparator compares the memory addresses associated with the incoming pixels to determine which regions of cache memory are accessed. A cache lookup device accesses new texture data from the cache memory for the incoming pixels in the same clock cycle in response to the number of memory regions accessed being less than or equal to the number of cache memory read ports.
    Type: Grant
    Filed: December 24, 2002
    Date of Patent: August 2, 2005
    Assignee: Intel Corporation
    Inventors: Satyaki Koneru, Steven J. Spangler, Val G. Cook
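    Illustrative sketch (C; not the patented circuit): the address comparator above effectively counts how many distinct cache regions a batch of incoming pixels touches and allows a single-cycle lookup when that count does not exceed the number of read ports. The region size, port count, and function names below are assumptions made for the example.
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_READ_PORTS 2          /* assumed number of cache read ports */

      /* Map a texel address to a cache region (64-byte regions, assumed). */
      static uint32_t region_of(uint32_t address) { return address >> 6; }

      /* True if the incoming pixels touch no more distinct regions than there
       * are read ports, i.e. the batch can be serviced in one clock cycle. */
      static bool fits_in_one_cycle(const uint32_t *pixel_addr, size_t n)
      {
          uint32_t regions[NUM_READ_PORTS];
          size_t   distinct = 0;

          for (size_t i = 0; i < n; i++) {
              uint32_t r = region_of(pixel_addr[i]);
              bool seen = false;
              for (size_t j = 0; j < distinct; j++)
                  if (regions[j] == r) { seen = true; break; }
              if (!seen) {
                  if (distinct == NUM_READ_PORTS)
                      return false;     /* more regions than ports */
                  regions[distinct++] = r;
              }
          }
          return true;
      }

      int main(void)
      {
          const uint32_t quad[4] = { 0x1000, 0x1004, 0x1040, 0x1044 };
          printf("single-cycle lookup: %s\n",
                 fits_in_one_cycle(quad, 4) ? "yes" : "no");
          return 0;
      }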
  • Patent number: 6924810
    Abstract: A dynamically configurable portion of a cache shared between central processing and graphics units in a highly integrated multimedia processor is engaged as a secondary level in a hierarchical texture cache architecture. The graphics unit includes a small multi-ported L1 texture cache local to its 2D/3D pipeline that is backed by the relatively large, single ported portion of the shared cache. Leveraging the shared cache as a secondary level texture cache reduces system memory bandwidth and die size without significant sacrifice in performance.
    Type: Grant
    Filed: November 18, 2002
    Date of Patent: August 2, 2005
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Brett A. Tischler
  • Patent number: 6924811
    Abstract: A method of storing a texel in a texel cache is discussed, comprising: reading a t coordinate of the texel, the t coordinate comprising a plurality of bits; reading an s coordinate of the texel, the s coordinate comprising a plurality of bits; forming an offset by concatenating bits of the t coordinate with bits of the s coordinate; and forming an index by concatenating bits of the t coordinate with bits of the s coordinate.
    Type: Grant
    Filed: November 13, 2000
    Date of Patent: August 2, 2005
    Assignee: NVIDIA Corporation
    Inventor: Alexander L. Minkin
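    Illustrative sketch (C; the bit widths are assumptions): forming a cache offset and a cache index by concatenating bits of the t coordinate with bits of the s coordinate, as the abstract above describes, keeps a small 2D neighbourhood of texels together in one cache line. A rough version of that bit manipulation:
      #include <stdint.h>
      #include <stdio.h>

      /* Assumed bit split: the low 2 bits of each coordinate form the offset
       * within a cache line, the next 4 bits of each form the cache index.
       * The actual widths used by the patent may differ. */
      #define OFFSET_BITS 2
      #define INDEX_BITS  4

      static uint32_t texel_offset(uint32_t s, uint32_t t)
      {
          uint32_t s_lo = s & ((1u << OFFSET_BITS) - 1);
          uint32_t t_lo = t & ((1u << OFFSET_BITS) - 1);
          return (t_lo << OFFSET_BITS) | s_lo;   /* concatenate t bits with s bits */
      }

      static uint32_t texel_index(uint32_t s, uint32_t t)
      {
          uint32_t s_mid = (s >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
          uint32_t t_mid = (t >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
          return (t_mid << INDEX_BITS) | s_mid;  /* concatenate t bits with s bits */
      }

      int main(void)
      {
          uint32_t s = 37, t = 11;
          printf("offset = %u, index = %u\n",
                 (unsigned)texel_offset(s, t), (unsigned)texel_index(s, t));
          return 0;
      }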
  • Patent number: 6919895
    Abstract: A method and apparatus which includes a graphics accelerator, circuitry responsive to pixel texture coordinates to select texels and generate therefrom a texture value for any pixel the color of which is to be modified by a texture, a cache to hold texels for use by the circuitry to generate a texture value for any pixel, a stage for buffering the acquisition of texel data, and control circuitry for controlling the acquisition of texture data, storing the texture data in the cache, and furnishing the texture data for blending with pixel data.
    Type: Grant
    Filed: March 22, 1999
    Date of Patent: July 19, 2005
    Assignee: NVIDIA Corporation
    Inventors: Gopal Solanki, Kioumars Kevin Dawallu
  • Patent number: 6914609
    Abstract: A system and method for generating pixels for a display device. The system may include a sample buffer for storing a plurality of samples in a memory, a sample cache for caching recently accessed samples, and a sample filter unit for filtering one or more samples to generate a pixel. The generated pixels may then be stored in a frame buffer or provided to a display device. The method operates to take advantage of the common samples shared by neighboring pixels in both the x and y directions for reduced sample buffer accesses and improved performance. The method involves reading samples from the memory that correspond to pixels in a plurality of neighboring scan lines, and possibly also to multiple pixels in each of these scan lines. The samples may be stored in a cache memory and then accessed from the cache memory for filtering. The method maximizes use of the common samples shared by neighboring pixels in both the x and y directions.
    Type: Grant
    Filed: February 28, 2002
    Date of Patent: July 5, 2005
    Assignee: Sun Microsystems, Inc.
    Inventors: Yan Yan Tang, Wayne Eric Burk, Philip C. Leung
  • Patent number: 6911987
    Abstract: A method and system for compressing bitmap data in a system for sharing an application running on a host computer with a remote computer, wherein the shared application's screen output is simultaneously displayed on both computers. Simultaneous display of screen output is achieved by efficiently transmitting display data between the host computer and the remote computer. When a font used by the host computer for displaying text is not available on the remote computer, the host computer sends a bitmap representation of the text for display, rather than the text itself. Bitmap representations are cached by the remote computer, so that the same bitmap representation need not be repeatedly transmitted from the host computer to the remote computer. Bitmap representations are compressed by the host computer prior to transmission, transmitted, then decompressed by the remote computer.
    Type: Grant
    Filed: May 8, 2000
    Date of Patent: June 28, 2005
    Assignees: Microsoft Corporation, PictureTel Corporation
    Inventors: Christopher J. Mairs, Anthony M. Downes, Roderick F. MacFarquhar, Kenneth P. Hughes, Alex J. Pollitt, John P. Batty, Mark E. Berry
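    Illustrative sketch (C; not the protocol used by the patent): the remote-display scheme above avoids resending a bitmap the remote computer has already cached. A toy host-side version keyed by a hash might look like the following; the hash, slot count, and return convention are all invented for the example.
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical host-side bitmap cache: remembers which glyph bitmaps
       * have already been sent to the remote computer, so a short cache
       * reference can be transmitted instead of the bitmap itself. */
      #define CACHE_SLOTS 64

      static uint32_t sent_hash[CACHE_SLOTS];   /* 0 = slot empty (assumed) */

      /* Simple FNV-1a hash standing in for whatever key the real system uses. */
      static uint32_t hash_bitmap(const uint8_t *bits, size_t len)
      {
          uint32_t h = 2166136261u;
          for (size_t i = 0; i < len; i++) { h ^= bits[i]; h *= 16777619u; }
          return h ? h : 1u;
      }

      /* Returns the slot to reference if the bitmap was sent before, or -1 if
       * the (compressed) bitmap must be transmitted and the cache updated. */
      static int lookup_or_insert(const uint8_t *bits, size_t len)
      {
          uint32_t h = hash_bitmap(bits, len);
          int slot = (int)(h % CACHE_SLOTS);
          if (sent_hash[slot] == h)
              return slot;               /* hit: send only the slot number */
          sent_hash[slot] = h;           /* miss: send bitmap, remember it */
          return -1;
      }

      int main(void)
      {
          const uint8_t glyph[] = { 0x3c, 0x42, 0x42, 0x7e, 0x42, 0x42 };
          printf("first use : %d\n", lookup_or_insert(glyph, sizeof glyph));  /* -1 */
          printf("second use: %d\n", lookup_or_insert(glyph, sizeof glyph));  /* slot */
          return 0;
      }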
  • Patent number: 6910103
    Abstract: A method of caching data is provided, which includes a plurality of processes 1602 to 1605, a cache manager 813 and a data type register 805 including at least one data type 1901 and a corresponding data type bit 1903. Said data type bit 1903 is set (1904) within the register 805 on being accessed by each of said processes and subsequently reset (1905) within the register. The cache manager 813 restores (1501) each of said set data type bits and identifies its corresponding data type 1901. The cache manager writes the output data 1609, 1610, 1611 of each of said processes 1603, 1604, 1605 within a memory cache 2001 and said cache manager resets (1505) said memory cache 2001 when the data type bit set by the last of said processes 1602 is reset.
    Type: Grant
    Filed: August 29, 2002
    Date of Patent: June 21, 2005
    Assignee: Autodesk Canada Inc.
    Inventor: Itai Danan
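    Illustrative sketch (C; names and structure are assumptions): the abstract above describes a register with one bit per data type, set while a process uses that type and reset afterwards, with the memory cache being reset once the last set bit is cleared. A minimal software analogue:
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical data type register: one bit per data type. A process sets
       * the bit for a data type when it starts using that type and clears it
       * when finished. When the last set bit is cleared, the cache manager
       * resets (empties) the memory cache. */
      static uint32_t type_bits;          /* the register                    */
      static int      cache_entries = 7;  /* stand-in for cached output data */

      static void type_acquire(int type) { type_bits |= (1u << type); }

      static void type_release(int type)
      {
          type_bits &= ~(1u << type);
          if (type_bits == 0) {           /* last bit reset -> reset the cache */
              cache_entries = 0;
              printf("cache reset\n");
          }
      }

      int main(void)
      {
          type_acquire(0);                /* e.g. a colour-processing pass    */
          type_acquire(3);                /* e.g. a blur pass                 */
          type_release(0);                /* cache kept: type 3 still in use  */
          type_release(3);                /* last bit cleared: cache reset    */
          printf("entries left: %d\n", cache_entries);
          return 0;
      }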
  • Patent number: 6891543
    Abstract: A method and system according to the present invention provide for sharing memory between applications running on one or more CPUs, and acceleration co-processors, such as graphics processors, of a computer system in which the memory may retain its optimal caching and access attributes favorable to the maximum performance of both CPU and graphics processor. The method involves a division of ownership within which the shared memory is made coherent with respect to the previous owner, prior to placing the shared memory in the view of the next owner. This arbitration may involve interfaces within which ownership is transitioned from one client to another. Within such transition of ownership the memory may be changed from one view to another by actively altering the processor caching attributes of the shared memory as well as via the use of processor low-level cache control instructions, and/or graphics processor render flush algorithms which serve to enforce data coherency.
    Type: Grant
    Filed: May 8, 2002
    Date of Patent: May 10, 2005
    Assignee: Intel Corporation
    Inventor: David A. Wyatt
  • Patent number: 6891546
    Abstract: A cache memory for a texture mapping process which is applicable to a high performance three-dimensional graphics card for a personal computer, three-dimensional game machines and other fields requiring small and high performance three-dimensional graphics. In particular, in order to accelerate a texture mapping process based upon a hardware mipmapping process using trilinear interpolation in a three-dimensional graphics system, there is provided a cache memory in which only textures of a moderate working-set size are stored and in which all eight texels needed to perform a trilinear interpolation are accessed in only one clock cycle to obtain a final texel value, and a method enabling a reduction in the cache-miss penalty by prefetching, with hardware-based prediction, textures that will be needed in the future.
    Type: Grant
    Filed: June 19, 2000
    Date of Patent: May 10, 2005
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Se Jeong Park, Hoi Jun Yoo, Kyu Ho Park
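    Illustrative sketch (C): standard trilinear filtering, shown here only to make concrete why eight texels (two 2x2 neighbourhoods on adjacent mipmap levels) must be delivered per result; it does not depict the patented cache organisation or prefetch predictor.
      #include <stdio.h>

      /* Trilinear filtering: bilinear blends on two adjacent mipmap levels,
       * then a linear blend between the levels. Eight texels feed one result,
       * which is why the cache above must deliver all eight in one cycle. */
      static float lerp(float a, float b, float t) { return a + (b - a) * t; }

      static float bilinear(const float q[4], float fs, float ft)
      {
          return lerp(lerp(q[0], q[1], fs), lerp(q[2], q[3], fs), ft);
      }

      static float trilinear(const float level_n[4], const float level_n1[4],
                             float fs, float ft, float flevel)
      {
          return lerp(bilinear(level_n, fs, ft), bilinear(level_n1, fs, ft), flevel);
      }

      int main(void)
      {
          const float lo[4] = { 0.0f, 1.0f, 0.0f, 1.0f };   /* 2x2 from level n   */
          const float hi[4] = { 0.5f, 0.5f, 0.5f, 0.5f };   /* 2x2 from level n+1 */
          printf("filtered texel = %.3f\n", trilinear(lo, hi, 0.25f, 0.5f, 0.5f));
          return 0;
      }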
  • Patent number: 6889289
    Abstract: A system and method for distributed cache. Cache tag storage and cache data storage are maintained in separate pipeline stages. Cache tag storage is operated by a data producer. Cache data storage is operated by a data consumer. Cache hits and misses are determined by the data producer prior to any operations being performed by the processor. In the event of a cache miss, produced data is sent to the processor to be processed. In the event of a cache hit, the cache address of the corresponding previously processed data is sent to the data consumer so that the corresponding processed data unit can be retrieved from cache data storage.
    Type: Grant
    Filed: June 7, 2004
    Date of Patent: May 3, 2005
    Assignee: Micron Technology, Inc.
    Inventors: Neal A. Crook, Alan Wootton
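    Illustrative sketch (C; the data types and toy tag are assumptions): the split described above keeps tags with the data producer and processed data with the consumer, so the producer decides hit or miss before the processor does any work. A minimal model:
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical split cache: the producer owns only the tags, the consumer
       * owns only the processed data. On a hit the producer sends a cache
       * address downstream; on a miss it sends the raw data to the processor
       * and records the tag. */
      #define SETS 16

      static uint32_t tag_store[SETS];
      static bool     tag_valid[SETS];

      typedef struct {
          bool     hit;
          uint32_t cache_addr;   /* meaningful only on a hit             */
          uint32_t raw_data;     /* sent to the processor only on a miss */
      } producer_msg;

      static producer_msg produce(uint32_t raw_data)
      {
          uint32_t tag = raw_data;                 /* toy tag: the datum itself */
          uint32_t set = raw_data % SETS;
          producer_msg m = { false, set, raw_data };

          if (tag_valid[set] && tag_store[set] == tag) {
              m.hit = true;                        /* consumer reuses stored result */
          } else {
              tag_store[set] = tag;                /* processor will fill this slot */
              tag_valid[set] = true;
          }
          return m;
      }

      int main(void)
      {
          unsigned stream[] = { 5, 9, 5 };         /* third item repeats the first */
          for (int i = 0; i < 3; i++) {
              producer_msg m = produce(stream[i]);
              printf("item %u: %s\n", stream[i], m.hit ? "hit (send address)"
                                                       : "miss (send data)");
          }
          return 0;
      }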
  • Patent number: 6885378
    Abstract: According to one embodiment, a computer system is disclosed. The computer system includes a graphics accelerator and a graphics cache coupled to the graphics accelerator. The graphics cache stores texture data, color data and depth data.
    Type: Grant
    Filed: September 28, 2000
    Date of Patent: April 26, 2005
    Assignee: Intel Corporation
    Inventors: Hsin-Chu Tsai, Subramaniam Maiyuran, Chung-Chi Wang
  • Patent number: 6862028
    Abstract: A computer graphics system is provided that includes a memory to store image data, a bin pointer list to store information regarding a plurality of image subscenes, and a pointer cache system to maintain data regarding the plurality of image subscenes. The pointer cache system may include a tag array section, a data array section and a decoupling section.
    Type: Grant
    Filed: February 14, 2002
    Date of Patent: March 1, 2005
    Assignee: Intel Corporation
    Inventors: Jonathan B. Sadowski, Aditya Navale
  • Patent number: 6862027
    Abstract: A CPU module includes a host element configured to perform a high-level host-related task, and one or more data-generating processing elements configured to perform a data-generating task associated with the high-level host-related task. Each data-generating processing element includes logic configured to receive input data, and logic configured to process the input data to produce output data. The amount of output data is greater than an amount of input data, and the ratio of the amount of input data to the amount of output data defines a decompression ratio. In one implementation, the high-level host-related task performed by the host element pertains to a high-level graphics processing task, and the data-generating task pertains to the generation of geometry data (such as triangle vertices) for use within the high-level graphics processing task. The CPU module can transfer the output data to a GPU module via at least one locked set of a cache memory.
    Type: Grant
    Filed: June 30, 2003
    Date of Patent: March 1, 2005
    Assignee: Microsoft Corp.
    Inventors: Jeffrey A. Andrews, Nicholas R. Baker, J. Andrew Goossen, Michael Abrash
  • Patent number: 6859208
    Abstract: A memory controller hub includes a graphics subsystem adapted to perform graphics operations on data and a cache adapted to store locations in physical memory available to the graphics subsystem for storing graphics data and available to a graphics controller coupled to the memory controller hub to store graphics data.
    Type: Grant
    Filed: September 29, 2000
    Date of Patent: February 22, 2005
    Assignee: Intel Corporation
    Inventor: Bryan R. White
  • Publication number: 20040263519
    Abstract: A CPU module includes a host element configured to perform a high-level host-related task, and one or more data-generating processing elements configured to perform a data-generating task associated with the high-level host-related task. Each data-generating processing element includes logic configured to receive input data, and logic configured to process the input data to produce output data. The amount of output data is greater than an amount of input data, and the ratio of the amount of input data to the amount of output data defines a decompression ratio. In one implementation, the high-level host-related task performed by the host element pertains to a high-level graphics processing task, and the data-generating task pertains to the generation of geometry data (such as triangle vertices) for use within the high-level graphics processing task. The CPU module can transfer the output data to a GPU module via at least one locked set of a cache memory.
    Type: Application
    Filed: June 30, 2003
    Publication date: December 30, 2004
    Applicant: Microsoft Corporation
    Inventors: Jeffrey A. Andrews, Nicholas R. Baker, J. Andrew Goossen, Michael Abrash
  • Publication number: 20040246260
    Abstract: An effective structure of a pixel cache for use in a three-dimensional (3D) graphics accelerator is provided. The pixel cache includes a z-data storage unit that reads z-data from a frame memory and provides the read z-data to a pixel rasterization pipeline; and a color data storage unit that reads and stores color data from the frame memory in advance, at the same time as the z-data storage unit reads the z-data from the frame memory, and provides the color data to the pixel rasterization pipeline only when the result of a predetermined z-test in the pixel rasterization pipeline is a success. Accordingly, the pixel cache structure enables only the required color data to be read and stored in advance before the color data is processed, thereby preventing access latency, increasing the efficiency of a color cache, and reducing power consumption.
    Type: Application
    Filed: December 10, 2003
    Publication date: December 9, 2004
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae-hyun Kim, Yong-je Kim, Tack-don Han, Woo-chan Park, Gil-hwan Lee, Il-san Kim
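    Illustrative sketch (C; the depth compare and field names are assumptions): colour data is fetched alongside the z-data but only handed to the pipeline when the z-test passes, as the abstract above describes.
      #include <stdbool.h>
      #include <stdio.h>

      typedef struct { float z; unsigned rgba; } framebuffer_texel;

      static bool z_test(float incoming_z, float stored_z)
      {
          return incoming_z < stored_z;            /* assumed depth compare */
      }

      /* The colour has already been prefetched together with the z value;
       * it is only used if the depth test succeeds. */
      static void shade_pixel(int x, int y, float incoming_z,
                              framebuffer_texel fetched)
      {
          if (!z_test(incoming_z, fetched.z)) {
              /* Prefetched colour is simply dropped; no extra memory traffic. */
              printf("pixel (%d,%d): rejected by z-test\n", x, y);
              return;
          }
          printf("pixel (%d,%d): blending with colour 0x%08x\n", x, y, fetched.rgba);
      }

      int main(void)
      {
          framebuffer_texel t = { 0.5f, 0x11223344u };
          shade_pixel(3, 4, 0.25f, t);   /* passes: colour is used  */
          shade_pixel(3, 4, 0.75f, t);   /* fails: colour discarded */
          return 0;
      }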
  • Publication number: 20040239680
    Abstract: A cache memory with improved cache-miss performance is implemented by providing cache-miss data from system memory directly to its requester. One embodiment of the invention operates as a texture cache in a graphics system. The graphics system comprises a system memory that stores texture data, coupled to a texture cache memory, which is coupled to at least one requester. The texture cache memory is divided into a cache tags unit and a data cache unit. The data cache unit is configured to receive at least two cache address inputs, and has at least two data output ports each coupled to a respective first input of a respective multiplexer. A respective second input of each multiplexer is configured to receive cache-miss data from the system memory. The select input of each multiplexer is configured to receive a respective hit/miss indicator signal associated with the respective cache address input. In case of a cache-miss, cache-miss data from system memory bypasses the data cache unit and is output directly.
    Type: Application
    Filed: March 31, 2003
    Publication date: December 2, 2004
    Inventor: Brian D. Emberling
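    Illustrative sketch (C): in software terms, the per-port multiplexer described above reduces to selecting between the data cache output and the system-memory return based on the hit/miss indicator; this is an assumption-level model, not the hardware design.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* On a hit the data cache output is forwarded; on a miss the data
       * arriving from system memory bypasses the data cache entirely. */
      static uint32_t output_mux(bool hit, uint32_t cache_data, uint32_t memory_data)
      {
          return hit ? cache_data : memory_data;
      }

      int main(void)
      {
          printf("hit : 0x%08x\n", (unsigned)output_mux(true,  0xAAAA0001u, 0xBBBB0001u));
          printf("miss: 0x%08x\n", (unsigned)output_mux(false, 0xAAAA0002u, 0xBBBB0002u));
          return 0;
      }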
  • Patent number: 6825848
    Abstract: A synchronized two-level cache including a Level 1 cache and a Level 2 cache is implemented in a graphics processing system. The Level 2 cache is further partitioned into a number of slots which are dynamically allocated to texture maps as needed. The reference counter of each of the cache lines in each cache level is tracked so that a cache line is not overwritten with new data prior to transferring old data out to the recipient device. The age status of each cache line is tracked so that the oldest cache line is overwritten first. The use of synchronized two-level cache system conserves system memory bandwidth and reduces memory latency, thereby improving the graphics processing system's performance.
    Type: Grant
    Filed: September 17, 1999
    Date of Patent: November 30, 2004
    Assignee: S3 Graphics Co., Ltd.
    Inventors: Chih-Hong Fu, I-Chung Ling, Huai-Shih Hsu
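    Illustrative sketch (C; fields and sizes are assumptions): the replacement rule above combines a reference counter (the old data must have been transferred out) with an age check (the oldest eligible line is overwritten first).
      #include <stdint.h>
      #include <stdio.h>

      #define LINES 4

      typedef struct {
          int      ref_count;   /* outstanding consumers of the old data */
          uint32_t age;         /* higher = older                        */
      } cache_line;

      /* Pick the oldest line whose old data has already been consumed. */
      static int pick_victim(const cache_line *lines, int n)
      {
          int best = -1;
          for (int i = 0; i < n; i++) {
              if (lines[i].ref_count != 0)
                  continue;                          /* still in use: skip */
              if (best < 0 || lines[i].age > lines[best].age)
                  best = i;
          }
          return best;                               /* -1: nothing evictable yet */
      }

      int main(void)
      {
          cache_line lines[LINES] = { {1, 9}, {0, 4}, {0, 7}, {2, 12} };
          printf("victim line: %d\n", pick_victim(lines, LINES));  /* line 2 */
          return 0;
      }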
  • Publication number: 20040233208
    Abstract: An efficient graphics pipeline with a pixel cache and data pre-fetching. By combining the use of a pixel cache in the graphics pipeline and the pre-fetching of data into the pixel cache, the graphics pipeline of the present invention is able to take best advantage of the high bandwidth of the memory system while effectively masking the latency of the memory system. More particularly, advantageous reuse of pixel data is enabled by caching, which when combined with pre-fetching masks the memory latency and delivers high throughput. As such, the present invention provides a novel and superior graphics pipeline over the prior art in terms of more efficient data access and much greater throughput. In one embodiment, the present invention is practiced within a computer system having a processor for issuing commands; a memory sub-system for storing information including graphics data; and a graphics sub-system for processing the graphics data according to the commands from the processor.
    Type: Application
    Filed: June 29, 2004
    Publication date: November 25, 2004
    Applicant: Microsoft Corporation
    Inventor: Zahid Hussain
  • Patent number: 6822655
    Abstract: A method and apparatus in a data processing system for processing a request to display a pattern. A plurality of partitions is created in a memory in a graphics adapter in the data processing system, wherein each partition within the plurality of partitions has a size equal to each of the other partitions within the plurality of partitions. A determination is made as to whether the pattern is present within the plurality of partitions. The pattern is displayed using the plurality of partitions if the pattern is present within the plurality of partitions. The pattern is retrieved from another location if the pattern is absent from the plurality of partitions. Responsive to retrieving the pattern from another location, the pattern is stored if it fits within the partition size.
    Type: Grant
    Filed: July 20, 2000
    Date of Patent: November 23, 2004
    Assignee: International Business Machines Corporation
    Inventors: Neal Richard Marion, George F. Ramsay, III
  • Patent number: 6822654
    Abstract: At least one chip of a chipset in a computer system having at least one host processor and a host memory is described herein. In one aspect of the invention, an exemplary chip includes an interconnect, a memory interface coupled to the interconnect, the memory interface providing access to the host memory and controlling memory refresh and memory access, a host interface coupled to the interconnect, the host interface providing access to the host processor, and a programmable media processor coupled to the interconnect, the media processor accessing the host through the host interface and the media processor accessing the host memory through the memory interface, wherein the media processor processes time based media.
    Type: Grant
    Filed: December 31, 2001
    Date of Patent: November 23, 2004
    Assignee: Apple Computer, Inc.
    Inventors: Sushma Shrikant Trivedi, Joseph P. Bratt, Jack Benkual, Vaughn Todd Arnold, Yutaka Takahashi, Steven Todd Weybrew, Derek Fujio Iwamoto, David Ligon
  • Publication number: 20040222997
    Abstract: A system and method for distributed cache. Cache tag storage and cache data storage are maintained in separate pipeline stages. Cache tag storage is operated by a data producer. Cache data storage is operated by a data consumer. Cache hits and misses are determined by the data producer prior to any operations being performed by the processor. In the event of a cache miss, produced data is sent to the processor to be processed. In the event of a cache hit, the cache address of the corresponding previously processed data is sent to the data consumer so that the corresponding processed data unit can be retrieved from cache data storage.
    Type: Application
    Filed: June 7, 2004
    Publication date: November 11, 2004
    Inventors: Neal A. Crook, Alan Wootton
  • Patent number: 6812929
    Abstract: A graphics system may include a frame buffer that includes several sets of one or more memory banks and a cache. The frame buffer may load data from one of the memory banks into the cache in response to receiving a cache fill request. Each set of memory banks is accessible independently of each other set of memory banks. A frame buffer interface coupled to the frame buffer includes a plurality of cache fill request queues. Each cache fill request queue is configured to store one or more cache fill requests targeting a corresponding one of the sets of memory banks. The frame buffer interface is configured to select a cache fill request from one of the cache fill request queues that stores cache fill requests targeting a set of memory banks that is not currently being accessed and to provide the selected cache fill request to the frame buffer.
    Type: Grant
    Filed: March 11, 2002
    Date of Patent: November 2, 2004
    Assignee: Sun Microsystems, Inc.
    Inventors: Michael G. Lavelle, Ewa M. Kubalska, Yan Yan Tang
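    Illustrative sketch (C; queue and bank-set counts are assumptions): the frame buffer interface above keeps one fill-request queue per set of banks and issues only from a queue whose bank set is not currently being accessed.
      #include <stdbool.h>
      #include <stdio.h>

      #define BANK_SETS 4

      typedef struct {
          int  pending;                 /* number of queued cache fill requests */
          bool bank_busy;               /* is the target bank set mid-access?   */
      } fill_queue;

      /* Return a queue whose head request can be issued this cycle. */
      static int select_queue(const fill_queue *q, int n)
      {
          for (int i = 0; i < n; i++)
              if (q[i].pending > 0 && !q[i].bank_busy)
                  return i;
          return -1;                    /* nothing can be issued this cycle */
      }

      int main(void)
      {
          fill_queue queues[BANK_SETS] = {
              { 2, true  },             /* has work, but its banks are busy */
              { 0, false },             /* idle                             */
              { 3, false },             /* has work and its banks are free  */
              { 1, true  },
          };
          printf("issue from queue %d\n", select_queue(queues, BANK_SETS));  /* 2 */
          return 0;
      }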
  • Patent number: 6801208
    Abstract: A system and method for cache sharing. The system is a microprocessor comprising a processor core and a graphics engine, each coupled to a cache memory. The microprocessor also includes a driver to direct how the cache memory is shared by the processor core and the graphics engine. The method comprises receiving a memory request from a graphics application program and determining whether a cache memory that may be shared between a processor core and a graphics engine is available to be shared. If the cache memory is available to be shared, a first portion of the cache memory is allocated to the processor core and a second portion of the cache memory is allocated to the graphics engine. The method and microprocessor may be included in a computing device.
    Type: Grant
    Filed: December 27, 2000
    Date of Patent: October 5, 2004
    Assignee: Intel Corporation
    Inventors: Jagganath Keshava, Vladimir Pentkovski, Subramaniam Maiyuran, Salvador Palanca, Hsin-Chu Tsai
  • Patent number: 6801209
    Abstract: A method and apparatus for storing image/video data in a memory device. The method includes receiving an image consisting of a plurality of pixels. In addition, the method includes generating addresses in the memory device for pixels from the image, wherein the memory addresses are generated within memory blocks consisting of multiple rows, wherein each row of the memory block is shorter in length than a full line of the memory device, wherein each memory block is aligned within a boundary of the memory device.
    Type: Grant
    Filed: December 30, 1999
    Date of Patent: October 5, 2004
    Assignee: Intel Corporation
    Inventors: Yen-Kuang Chen, Boon-Lock Yeo
  • Patent number: 6801207
    Abstract: A highly integrated multimedia processor employs a shared cache between tightly coupled central processing and graphics units to provide the graphics unit access to data retrieved from system memory or data processed by the central processing unit before the data is written-back or written-through to system memory, thus reducing system memory bandwidth requirements. Regions in the shared cache can also be selectively locked down thereby disabling eviction or invalidation of a selected region, to provide the graphics unit with a local scratchpad area for applications such as, but not limited to, temporary video line buffering storage for filter applications and composite buffering for blending texture maps in multi-pass rendering.
    Type: Grant
    Filed: October 9, 1998
    Date of Patent: October 5, 2004
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Brett A. Tischler, Carl D. Dietz, David F. Bremner, David T. Harper
  • Publication number: 20040189653
    Abstract: A method, apparatus, and system for rendering are disclosed. A rendering request is defined, where the rendering request describes an object to be rendered. A progressive cache is queried to determine a cached element most representing a display image satisfying the rendering request. The cached element is sent to a starting stage of a rendering pipeline for the object, where the starting stage is associated with the cached element. An output of the starting stage is sent to an input of a next stage of the rendering pipeline. A final stage of the rendering pipeline determines the display image satisfying the rendering request.
    Type: Application
    Filed: March 16, 2004
    Publication date: September 30, 2004
    Inventors: Ronald N. Perry, Sarah F. Frisken
  • Publication number: 20040189652
    Abstract: A method for optimizing a cache memory used for multitexturing in a graphics system is implemented. The graphics system comprises a texture memory, which stores texture data comprised in texture maps, coupled to a texture cache memory. Active texture maps for an individual primitive, for example a triangle, are identified, and the texture cache memory is divided into partitions. In one embodiment, the number of texture cache memory partitions equals the number of active texture maps. Each texture cache memory partition corresponds to a respective single active texture map, and is operated as a direct mapped cache for its corresponding respective single active texture map. In one embodiment, each texture cache memory partition is further operated as an associative cache for the texture data comprised in the partition's corresponding respective single active texture map. The cache memory is dynamically re-configured for each primitive.
    Type: Application
    Filed: March 31, 2003
    Publication date: September 30, 2004
    Inventor: Brian D. Emberling
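    Illustrative sketch (C; line counts are assumptions): dividing the texture cache into one direct-mapped partition per active texture map, as described above, can be modelled as a simple address calculation.
      #include <stdint.h>
      #include <stdio.h>

      #define TOTAL_LINES 64

      /* Given the number of active maps, the map a texel belongs to, and the
       * texel's block address, return the cache line to use: each partition
       * behaves as a direct-mapped cache for its own texture map. */
      static uint32_t cache_line_for(uint32_t active_maps, uint32_t map,
                                     uint32_t block_addr)
      {
          uint32_t lines_per_partition = TOTAL_LINES / active_maps;
          uint32_t base  = map * lines_per_partition;          /* partition start */
          uint32_t index = block_addr % lines_per_partition;   /* direct-mapped   */
          return base + index;
      }

      int main(void)
      {
          /* Two active texture maps for the current triangle. */
          printf("map 0, block 5  -> line %u\n", (unsigned)cache_line_for(2, 0, 5));
          printf("map 1, block 5  -> line %u\n", (unsigned)cache_line_for(2, 1, 5));
          printf("map 1, block 37 -> line %u\n", (unsigned)cache_line_for(2, 1, 37));
          return 0;
      }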
  • Patent number: 6798421
    Abstract: A tile-oriented graphics processing system in which an additional level of caching is provided locally, at the output of a patch-processing graphics computation block. This additional local storage buffers the current tile, so that repeated accesses to the same tile can avoid pipelining delays connected with access to the main cache. (Even an on-chip cache, in a large chip, can impose access delays which are significant in relation to the computation speeds involved.)
    Type: Grant
    Filed: February 28, 2002
    Date of Patent: September 28, 2004
    Assignee: 3D Labs, Inc. Ltd.
    Inventor: David Robert Baldwin
  • Patent number: 6795081
    Abstract: A system and method capable of super-sampling and performing super-sample convolution are disclosed. In one embodiment, the system may comprise a graphics processor, a frame buffer, a sample cache, and a sample-to-pixel calculation unit. The graphics processor may be configured to generate a plurality of samples. The frame buffer, which is coupled to the graphics processor, may be configured to store the samples in a sample buffer. The samples may be positioned according to a regular grid, a perturbed regular grid, or a stochastic grid. The sample-to-pixel calculation unit is programmable to select a variable number of stored samples from the frame buffer, copy the selected samples to a sample cache, and filter a set of the selected samples into an output pixel. The sample-to-pixel calculation unit retains those samples in the sample cache that will be reused in a subsequent pixel calculation and replaces those samples no longer required with new samples for another filter calculation.
    Type: Grant
    Filed: May 18, 2001
    Date of Patent: September 21, 2004
    Assignee: Sun Microsystems, Inc.
    Inventors: Michael G. Lavelle, Philip C. Leung, Yan Y. Tang
  • Patent number: 6795078
    Abstract: A memory interface controls read and write accesses to a memory device. The memory device includes a level-one cache, level-two cache and storage cell array. The memory interface includes a data request processor (DRP), a memory control processor (MCP) and a block cleansing unit (BCU). The MCP controls transfers between the storage cell array, the level-two cache and the level-one cache. In response to a read request with associated read clear indication, the DRP controls a read from a level-one cache block, updates bits in a corresponding dirty tag, and sets a mode indicator of the dirty tag to the read clear mode. The modified dirty tag bits and mode indicator are signals to the BCU that the level-one cache block requires a source clear operation. The BCU commands the transfer of data from a color fill block in the level-one cache to the level-two cache.
    Type: Grant
    Filed: January 31, 2002
    Date of Patent: September 21, 2004
    Assignee: Sun Microsystems, Inc.
    Inventors: Michael G. Lavelle, Ewa M. Kubalska, Yan Y. Tang
  • Patent number: 6791560
    Abstract: A vertex data access apparatus and method. The apparatus receives a vertex index, compares the vertex index with the indices of any vertices used before, issues a request if necessary for fetching vertex data from system memory, stores the returned vertex data in a vertex data queue, and gets the corresponding vertex data from the vertex data queue for further processing. In particular, if the vertex index is the same as one of those vertices' indices, the corresponding vertex data can be fetched directly from the vertex data queue. The vertex data queue performs the vertex cache function.
    Type: Grant
    Filed: May 10, 2002
    Date of Patent: September 14, 2004
    Assignee: Silicon Integrated Systems Corp.
    Inventor: Chung-Yen Lu
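    Illustrative sketch (C; the queue depth and the fake fetch are assumptions): the vertex data queue above acts as a small cache keyed by vertex index, so indices shared between adjacent triangles avoid repeated system-memory requests.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define QUEUE_DEPTH 8

      static uint32_t q_index[QUEUE_DEPTH];
      static float    q_data[QUEUE_DEPTH];    /* stand-in for full vertex data */
      static bool     q_valid[QUEUE_DEPTH];
      static int      q_next;                 /* round-robin replacement       */

      static float fetch_from_system_memory(uint32_t index)
      {
          printf("  memory request for vertex %u\n", (unsigned)index);
          return (float)index * 0.5f;         /* pretend vertex payload        */
      }

      static float get_vertex(uint32_t index)
      {
          for (int i = 0; i < QUEUE_DEPTH; i++)
              if (q_valid[i] && q_index[i] == index)
                  return q_data[i];           /* hit: no memory traffic        */

          float data = fetch_from_system_memory(index);
          q_index[q_next] = index;
          q_data[q_next]  = data;
          q_valid[q_next] = true;
          q_next = (q_next + 1) % QUEUE_DEPTH;
          return data;
      }

      int main(void)
      {
          unsigned triangle_strip[] = { 0, 1, 2, 1, 2, 3 };   /* shared vertices */
          for (int i = 0; i < 6; i++)
              printf("vertex %u -> %.1f\n", triangle_strip[i],
                     get_vertex(triangle_strip[i]));
          return 0;
      }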
  • Patent number: 6791552
    Abstract: Digital video processing apparatus comprising: a plurality of render processors arranged in an operational sequence, each operable to render an output result relating to an image of a video signal from input data relating to that and/or other images received from a preceding render processor in the operational sequence; and a render controller for controlling rendering operation of the render processors; each render processor being operable to communicate dependency data to the render controller, indicating which images must be rendered by a preceding render processor in order for that render processor to render output data relating to a required image; and the render controller being operable to control operation of the render processors so that images required by each render processor are rendered by preceding render processors in the operational sequence.
    Type: Grant
    Filed: July 28, 1999
    Date of Patent: September 14, 2004
    Assignee: Sony United Kingdom Limited
    Inventors: Antony James Gould, Jonathan James Stone
  • Patent number: 6791559
    Abstract: A 3D graphics accelerator in which vertex data is locally cached, at individual rendering subsystems, in circular buffers which are NOT large enough to hold the maximum number of data fields for the maximum number of vertices which can be parallel-processed. Instead, the circular buffers are preferably made large enough to hold the maximum number of data fields for a minimum useful number of vertices; the same buffers can also be used to hold a smaller number of data fields for the maximum number of vertices.
    Type: Grant
    Filed: February 28, 2002
    Date of Patent: September 14, 2004
    Assignee: 3DLabs Inc., LTD
    Inventor: David Robert Baldwin
  • Patent number: 6784892
    Abstract: A graphics processing system including a cache memory circuit coupled to the graphics processor and the address and data busses for storing graphics data according to a respective address. The cache memory includes first and second memories coupled together by a plurality of activation lines. The first memory has a corresponding plurality of address detection units to store addresses and provide activation signals in response to receiving a matching address. The second memory includes a corresponding plurality of data storage locations. Each data storage location is coupled to a respective one of the plurality of address storage locations by a respective activation line to provide graphics data in response to receiving an activation signal from the respective address storage location.
    Type: Grant
    Filed: October 5, 2000
    Date of Patent: August 31, 2004
    Assignee: Micron Technology, Inc.
    Inventor: Aaftab Munshi
  • Patent number: 6778179
    Abstract: An external cache management unit for use with a 3D-RAM frame buffer and suitable for use in a computer graphics system is described. The unit may reduce power consumption within the 3D-RAM by performing partial block write-back according to status information stored in an array of dirty tag bits. Periodic level one cache block cleansing is provided for during empty memory cycles.
    Type: Grant
    Filed: October 3, 2001
    Date of Patent: August 17, 2004
    Assignee: Sun Microsystems, Inc.
    Inventors: Michael G. Lavelle, Ewa M. Kubalska, Yan Yan Tang
  • Publication number: 20040155885
    Abstract: A cache for a graphics system storing both an address tag and an identification number for each block of data stored in the data cache. An address and identification number of a requested block of data is provided to the cache, and is checked against all of the address and identification number entries present. A block of data is provided if both the address and the identification number of the requested data matches an entry in the cache. However, if the address of the requested data is not present, or if the address matches an entry but the associated identification number does not match, a cache miss occurs, and the requested graphics data must be retrieved from a system memory. The address and identification number are updated, and the requested data replaces the former graphics data in the data cache. As a result, a block of data stored in the cache having the same address as the requested data, but having data that is invalid, can be invalidated without invalidating the entire cache.
    Type: Application
    Filed: February 9, 2004
    Publication date: August 12, 2004
    Inventors: Aaftab Munshi, James R. Peterson
  • Publication number: 20040135784
    Abstract: A system and method for caching and rendering an image database enables predictive loading of unrequested portions of the image. A raw image is preprocessed and subdivided into tiles. As a portion of a raw image is displayed on a screen and the user zooms and pans the image, a predicting algorithm determines which additional tiles should be loaded into cache so that the user suffers no lag time as additional tiles not in cache are loaded. The present system and method is adaptable to both raster and vector images.
    Type: Application
    Filed: July 3, 2003
    Publication date: July 15, 2004
    Inventors: Andrew Cohen, Scott Crouch
  • Patent number: 6763420
    Abstract: A plurality of cache addressing functions are stored in main memory. A processor which executes a program selects one of the stored cache addressing functions for use in a caching operation during execution of a program by the processor.
    Type: Grant
    Filed: July 13, 2001
    Date of Patent: July 13, 2004
    Assignee: Micron Technology, Inc.
    Inventor: Ole Bentz
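    Illustrative sketch (C; the particular mappings are invented): storing several cache addressing functions and letting the running program select one can be modelled with a table of function pointers.
      #include <stdint.h>
      #include <stdio.h>

      #define CACHE_SETS 256

      typedef uint32_t (*addr_fn)(uint32_t address);

      static uint32_t map_low_bits(uint32_t a)  { return a % CACHE_SETS; }
      static uint32_t map_mid_bits(uint32_t a)  { return (a >> 6) % CACHE_SETS; }
      static uint32_t map_xor_fold(uint32_t a)  { return (a ^ (a >> 8)) % CACHE_SETS; }

      /* The "stored" addressing functions; the program picks one to use. */
      static const addr_fn addressing_functions[] = {
          map_low_bits, map_mid_bits, map_xor_fold,
      };

      int main(void)
      {
          int selected = 2;                        /* chosen by the program */
          addr_fn map = addressing_functions[selected];
          printf("address 0x1234 -> set %u\n", (unsigned)map(0x1234u));
          return 0;
      }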
  • Publication number: 20040119719
    Abstract: A texture data reading apparatus includes a cache memory including a plurality of read ports and a plurality of regions to store pixel texture data. An address comparator includes a plurality of input ports to receive incoming pixels, wherein the address comparator compares the memory addresses associated with the incoming pixels to determine which regions of cache memory are accessed. A cache lookup device accesses new texture data from the cache memory for the incoming pixels in the same clock cycle in response to the number of memory regions accessed being less than or equal to the number of cache memory read ports.
    Type: Application
    Filed: December 24, 2002
    Publication date: June 24, 2004
    Inventors: Satyaki Koneru, Steven J. Spangler, Val G. Cook
  • Patent number: 6750872
    Abstract: A graphics processing system has a cache which is partitionable into two or more slots. Once partitioned, the slots are dynamically allocatable to one or more texture maps. First, the number of texture maps needed to render a given scene is determined. Then, available slots of the cache are allocated to the texture maps. Sometimes, more slots are allocated to the largest texture map. At other times, more slots are allocated to the texture map which is likely to be used most often. The slots can also be allocated equally to all of the texture maps needed.
    Type: Grant
    Filed: September 17, 1999
    Date of Patent: June 15, 2004
    Assignee: S3 Graphics, Co., Ltd.
    Inventors: Zhou Hong, Chih-Hong Fu
  • Patent number: 6747657
    Abstract: A depth write disable apparatus and method for controlling evictions, such as depth values, from a depth cache to a corresponding depth buffer in a zone rendering system. When the depth write disable circuitry is enabled, evictions from the depth cache (which typically occur during the rendering of the next zone) to the depth buffer are prevented. In particular, once the depth buffer is initialized (i.e. cleared) to a constant value at the beginning of a scene, the depth buffer does not need to be read. The depth cache handles intermediate depth reads and writes within each zone. Since the memory resident depth buffer is not required after a scene is rendered, it never needs to be written. The final depth values for a zone can thus be discarded (i.e., rather than written to the depth buffer) after each zone is rendered.
    Type: Grant
    Filed: December 31, 2001
    Date of Patent: June 8, 2004
    Assignee: Intel Corporation
    Inventors: Peter L. Doyle, Aditya Sreenivas
  • Patent number: 6744438
    Abstract: A graphics processing unit which both pre-fetches and preloads texture data. Preferably a cache line is preassigned to the texture data approximately as soon as a miss occurs.
    Type: Grant
    Filed: June 9, 2000
    Date of Patent: June 1, 2004
    Assignee: 3Dlabs Inc., Ltd.
    Inventor: David Robert Baldwin
  • Patent number: 6741256
    Abstract: A predictive optimizing unit for use with an interleaved memory and suitable for use in a computer graphics system is described. The unit maintains a queue of pending requests for data from the memory, and prioritizes precharging and activating interleaves with pending requests. Interleaves which are in a ready state may be accessed independently of the precharging and activation of non-ready interleaves.
    Type: Grant
    Filed: August 27, 2001
    Date of Patent: May 25, 2004
    Assignee: Sun Microsystems, Inc.
    Inventor: Brian D. Emberling
  • Patent number: 6734867
    Abstract: A cache for a graphics system storing both an address tag and an identification number for each block of data stored in the data cache. An address and identification number of a requested block of data is provided to the cache, and is checked against all of the address and identification number entries present. A block of data is provided if both the address and the identification number of the requested data matches an entry in the cache. However, if the address of the requested data is not present, or if the address matches an entry but the associated identification number does not match, a cache miss occurs, and the requested graphics data must be retrieved from a system memory. The address and identification number are updated, and the requested data replaces the former graphics data in the data cache. As a result, a block of data stored in the cache having the same address as the requested data, but having data that is invalid, can be invalidated without invalidating the entire cache.
    Type: Grant
    Filed: June 28, 2000
    Date of Patent: May 11, 2004
    Assignee: Micron Technology, Inc.
    Inventors: Aaftab Munshi, James R. Peterson
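    Illustrative sketch (C; field names are assumptions): a lookup that requires both the address tag and the identification number to match, and that invalidates only the one stale block when the tag matches but the identification number does not, as the abstract above describes.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define BLOCKS 8

      typedef struct {
          bool     valid;
          uint32_t tag;
          uint32_t id;      /* e.g. which generation of the surface this is */
          uint32_t data;
      } cache_block;

      static cache_block cache[BLOCKS];

      static bool lookup(uint32_t address, uint32_t id, uint32_t *out)
      {
          cache_block *b = &cache[address % BLOCKS];
          uint32_t tag = address / BLOCKS;

          if (b->valid && b->tag == tag) {
              if (b->id == id) { *out = b->data; return true; }   /* real hit */
              b->valid = false;        /* same address, stale id: invalidate  */
          }
          return false;                /* caller fetches from system memory   */
      }

      int main(void)
      {
          cache[3] = (cache_block){ true, 0x10, 7, 0xCAFEu };     /* address 0x83 */
          uint32_t v;
          printf("id 7: %s\n", lookup(0x83, 7, &v) ? "hit" : "miss");
          printf("id 8: %s\n", lookup(0x83, 8, &v) ? "hit" : "miss");
          return 0;
      }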
  • Patent number: 6734872
    Abstract: A system, method, and program for optimally caching in a memory system an overlay instance. The system includes a local memory and a rasterizing processor coupled to the local memory. Responsive to receipt of a presentation requirement specifying an overlay stored in a memory device, the rasterizing processor determines whether an overlay instance for the overlay is cached in a memory system. Responsive to the overlay instance not being cached in the memory system, the rasterizing processor generates a new overlay instance for the overlay and caches the new overlay instance in the memory system. Responsive to the overlay instance being cached in the memory system, the rasterizing processor produces another overlay instance tailored to the presentation requirements, compares that overlay instance to the cached overlay instance, and then caches into the memory system only the one of the two overlay instances that best presents the overlay.
    Type: Grant
    Filed: May 15, 2000
    Date of Patent: May 11, 2004
    Assignee: International Business Machines Corporation
    Inventors: John Thomas Varga, Rose Ellen Visoski
  • Patent number: 6720969
    Abstract: An external cache management unit for use with a 3D-RAM frame buffer and suitable for use in a computer graphics system is described. The unit may reduce power consumption within the 3D-RAM by performing partial block write-back according to status information stored in an array of dirty tag bits. Periodic level one cache block cleansing is provided for during empty memory cycles.
    Type: Grant
    Filed: May 18, 2001
    Date of Patent: April 13, 2004
    Assignee: Sun Microsystems, Inc.
    Inventors: Michael G. Lavelle, Ewa M. Kubalska, Yan Yan Tang
  • Patent number: 6721846
    Abstract: The present invention is directed to a system and method of providing a data image cache. A system suitable for transferring data images includes an image cache suitable for being accessed locally, the image cache suitable for storing at least one data image. Image storage is connected to the image cache over a network, the image storage including a plurality of data images. When a query is received by the image cache for a data image not included in the image cache, a data image corresponding to the queried image is transferred from the image storage to the image cache over the network.
    Type: Grant
    Filed: December 28, 2000
    Date of Patent: April 13, 2004
    Assignee: Gateway, Inc.
    Inventors: Keith L. Mund, Jonathan Johansen
  • Patent number: 6717583
    Abstract: In order to reduce degradation of the data processor's processing performance due to the use of a part of the main memory as a display frame buffer, when an access request to the memory 200 is generated from the CPU bus 310, the memory controller 400 holds it, requests the display controller 560 to stop its access to the memory 200 which is in execution, holds any data already transferred from the memory 200 for the access that was being executed, and then transfers the held access request from the CPU bus 310 to the memory 200. When the access from the CPU bus 310 ends, the memory controller 400 restarts the access stopped in the display controller 560 and passes the held data to the display controller 560.
    Type: Grant
    Filed: November 26, 2001
    Date of Patent: April 6, 2004
    Assignee: Hitachi, Ltd.
    Inventors: Tetsuya Shimomura, Shigeru Matsuo, Koyo Katsura, Tatsuki Inuzuka, Yasuhiro Nakatsuka