Cache Patents (Class 345/557)
-
Patent number: 8095885
Abstract: Methods and apparatus provide for a Cache Manager to display a scaled image in an active view. The scaled image comprises the same content as an image from a stored collection of images, which is accessible by a plurality of active views in a user interface. The Cache Manager refreshes the active view to replace the displayed scaled image with a larger-sized image, which comprises the same content as the displayed scaled image. A reference counter associated with the larger-sized image is maintained to keep track of instances of display of the larger-sized image by any of the plurality of active views. The Cache Manager allows any other active view to use the larger-sized image while a current value of the reference counter is greater than zero.
Type: Grant
Filed: March 24, 2008
Date of Patent: January 10, 2012
Assignee: Adobe Systems Incorporated
Inventor: Claire Elise Kahan Schendel
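The reference-counting scheme in this abstract can be sketched as follows. This is a minimal illustration of the idea, not the patented implementation; the class and method names are invented for the example. An image stays pinned in the cache while any active view displays it, and becomes evictable only when its count returns to zero.

```python
# Sketch: a cached larger-sized image keeps a reference count of the
# active views currently displaying it. While the count is greater than
# zero, other views may reuse the image; at zero it becomes evictable.

class ImageCacheEntry:
    def __init__(self, image_data):
        self.image_data = image_data
        self.ref_count = 0  # number of active views displaying this image

    def acquire(self):
        """A view starts displaying the image."""
        self.ref_count += 1
        return self.image_data

    def release(self):
        """A view stops displaying the image."""
        assert self.ref_count > 0, "release without matching acquire"
        self.ref_count -= 1

    def is_evictable(self):
        # The image may only be discarded once no view references it.
        return self.ref_count == 0
```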
-
Patent number: 8094160
Abstract: A moving-picture processing apparatus has a pre-fetch memory pre-fetching a portion of a decoded picture stored in an external memory, and a miss/hit determination unit determining a manner in which a miss occurs in response to a read request to the pre-fetch memory.
Type: Grant
Filed: September 4, 2007
Date of Patent: January 10, 2012
Assignee: Fujitsu Limited
Inventors: Yasuhiro Watanabe, Mitsuharu Wakayoshi, Naoyuki Takeshita
-
Patent number: 8044960
Abstract: A character display apparatus searches through a cache means for vector font data which match character attributes of a character to be displayed to acquire the vector font data, and, when no vector font data which match the character attributes of the character to be displayed exist in the cache means, acquires the vector font data from a large-volume storage means. If there exists no corresponding luminance image data in the cache means, the character display apparatus acquires luminance image data generated from the vector font data. If there exists no corresponding display image in the cache means, the character display apparatus acquires display character image data generated from the luminance image data. The character display apparatus displays, on a display device, the display image which is thus acquired and which matches the character attributes of the character to be displayed.
Type: Grant
Filed: November 26, 2008
Date of Patent: October 25, 2011
Assignee: Mitsubishi Electric Corporation
Inventors: Mitsumasa Sakurai, Yuusuke Yokosuka, Shoji Tanaka
-
Publication number: 20110255791
Abstract: Systems, methods and computer-readable storage media are disclosed for accelerating bitmap remoting by extracting patterns from source bitmaps. A server takes a source image, and performs an edge-detection operation on it. From this edge-detected image, connected segments of the image are determined by executing multiple iterations of a small operation upon the image in parallel; for instance, by assigning each non-white pixel a unique value, then assigning each pixel the minimum value among itself and its neighbors until no pixel is assigned a new value in an iteration. Executing these operations in parallel greatly reduces the time required to identify the connected segments. When the segments are identified, they may be cached by the client so that they do not need to be re-sent to the client when re-encountered by the server.
Type: Application
Filed: April 15, 2010
Publication date: October 20, 2011
Applicant: Microsoft Corporation
Inventors: Nadim Y. Abdo, Voicu Anton Albu, Charles Lawrence Zitnick, III, Max Alan McMullen
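The label-propagation step this abstract describes can be shown concretely. The sketch below is a serial rendition of what the publication runs in parallel: each non-background pixel starts with a unique label, then every pixel repeatedly takes the minimum label among itself and its 4-neighbors until nothing changes, leaving one label per connected segment. The function name and 0/1 image encoding are assumptions for the example.

```python
# Serial sketch of the parallel min-propagation labeling: unique value per
# foreground pixel, then iterate "take the minimum of self and neighbors"
# until a full pass makes no change. Equal labels = same connected segment.

def connected_segments(image):
    """image: 2-D list, 0 = white/background, 1 = foreground."""
    h, w = len(image), len(image[0])
    # Unique starting label per foreground pixel; None for background.
    labels = [[y * w + x if image[y][x] else None for x in range(w)]
              for y in range(h)]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if labels[y][x] is None:
                    continue
                best = labels[y][x]
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] is not None:
                        best = min(best, labels[ny][nx])
                if best != labels[y][x]:
                    labels[y][x] = best
                    changed = True
    return labels
```

In the parallel version, every pixel performs the min step simultaneously per iteration, which is why the publication reports a large speedup over a serial scan.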
-
Patent number: 8041903
Abstract: A processor and a memory controlling method. The processor enables a Scratch-Pad Memory (SPM) to prepare data that a processor core intends to process, using a data management unit including a data cache, thereby increasing a data processing rate.
Type: Grant
Filed: February 17, 2009
Date of Patent: October 18, 2011
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kyoung June Min, Chan Min Park, Won Jong Lee, Kwon Taek Kwon
-
Patent number: 8035650
Abstract: Caching techniques for storing instructions, constant values, and other types of data for multiple software programs are described. A cache provides storage for multiple programs and is partitioned into multiple tiles. Each tile is assignable to one program. Each program may be assigned any number of tiles based on the program's cache usage, the available tiles, and/or other factors. A cache controller identifies the tiles assigned to the programs and generates cache addresses for accessing the cache. The cache may be partitioned into physical tiles. The cache controller may assign logical tiles to the programs and may map the logical tiles to the physical tiles within the cache. The use of logical and physical tiles may simplify assignment and management of the tiles.
Type: Grant
Filed: July 25, 2006
Date of Patent: October 11, 2011
Assignee: QUALCOMM Incorporated
Inventors: Yun Du, Guofang Jiao, Chun Yu, De Dzwo Hsu
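The logical-to-physical tile indirection described above can be illustrated with a small sketch. This is an assumed design for illustration only (class and method names are invented, and the real controller is hardware, not Python): each program sees logical tiles 0..k-1, and a per-program map translates them to whichever physical tiles were free, so tiles can be handed out and reclaimed without renumbering anything the program sees.

```python
# Sketch: a cache split into physical tiles, with a per-program map from
# logical tile index to physical tile. Assignment draws from a free list;
# release returns the program's tiles to the free list.

class TiledCache:
    def __init__(self, num_physical_tiles):
        self.free_tiles = list(range(num_physical_tiles))
        self.tile_map = {}  # program_id -> list of physical tile indices

    def assign(self, program_id, num_tiles):
        """Back num_tiles logical tiles with free physical tiles."""
        if num_tiles > len(self.free_tiles):
            raise ValueError("not enough free tiles")
        tiles = [self.free_tiles.pop() for _ in range(num_tiles)]
        self.tile_map[program_id] = tiles
        return len(tiles)

    def physical_tile(self, program_id, logical_tile):
        # Translate a program-relative (logical) tile to a physical tile.
        return self.tile_map[program_id][logical_tile]

    def release(self, program_id):
        self.free_tiles.extend(self.tile_map.pop(program_id))
```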
-
Patent number: 8032715
Abstract: The data processor enhances the bus throughput or data throughput of an external memory when there are frequent continuous reads with a smaller data size than the data bus width of the external memory. The data processor includes a memory control unit capable of controlling, in response to a clock, an external memory having plural banks that are individually and independently controllable, plural buses connected to the memory control unit, and circuit modules capable of commanding memory accesses, which are provided in correspondence with each of the buses. The memory control unit contains bank caches each corresponding to the banks of the external memory.
Type: Grant
Filed: August 2, 2010
Date of Patent: October 4, 2011
Assignee: Renesas Electronics Corporation
Inventors: Fumie Katsuki, Takanobu Naruse, Chiaki Fujii
-
Patent number: 8022958
Abstract: This disclosure describes techniques of loading batch commands into a graphics processing unit (GPU). As described herein, a GPU driver for the GPU identifies one or more graphics processing objects to be used by the GPU in order to render a batch of graphics primitives. The GPU driver may insert indexes associated with the identified graphics processing objects into a batch command. The GPU driver may then issue the batch command to the GPU. The GPU may use the indexes in the batch command to retrieve the graphics processing objects from memory. After retrieving the graphics processing objects from memory, the GPU may use the graphics processing objects to render the batch of graphics primitives.
Type: Grant
Filed: April 4, 2007
Date of Patent: September 20, 2011
Assignee: QUALCOMM Incorporated
Inventors: Guofang Jiao, Lingjun Chen, Yun Du
-
Patent number: 8022960
Abstract: Techniques for dynamically configuring a texture cache are disclosed. During a texture mapping process of a three-dimensional (3D) graphics pipeline, if the batch is for single texture mapping, the texture cache is configured as an n-way set-associative texture cache. However, if the batch is for multi-texture mapping, the n-way set-associative texture cache is divided into M sub-caches, each n/M-way set-associative, where n and M are integers greater than 1 and n is divisible by M.
Type: Grant
Filed: February 22, 2007
Date of Patent: September 20, 2011
Assignee: QUALCOMM Incorporated
Inventor: Chun Yu
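The n-way to M × (n/M)-way reconfiguration can be sketched as lookup arithmetic. The addressing scheme below is an assumption about one way such a split could be realized, not the patented circuit: in single-texture mode a lookup may hit in any of the n ways of a set; in multi-texture mode each texture is confined to its own contiguous band of n/M ways.

```python
# Sketch: which ways of a set a texture lookup may hit in, for a cache
# that is either one n-way set-associative cache (single-texture batch)
# or M sub-caches of n/M ways each (multi-texture batch).

def lookup_ways(n, num_sets, addr, multi=False, M=2, texture_id=0):
    """Return (set_index, list_of_candidate_ways) for a lookup."""
    set_index = addr % num_sets
    if not multi:
        return set_index, list(range(n))          # full n-way cache
    assert n % M == 0, "n must be divisible by M"
    ways_per_sub = n // M
    base = texture_id * ways_per_sub              # sub-cache for this texture
    return set_index, list(range(base, base + ways_per_sub))
```

For example, an 8-way cache in multi-texture mode with M = 4 gives each of four textures a private 2-way sub-cache, trading associativity per texture for isolation between textures.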
-
Patent number: 8018467
Abstract: A method and apparatus which includes a graphics accelerator, circuitry responsive to pixel texture coordinates to select texels and generate therefrom a texture value for any pixel whose color is to be modified by a texture, a cache to hold texels for use by the circuitry to generate a texture value for any pixel, a stage for buffering the acquisition of texel data, and control circuitry for controlling the acquisition of texture data, storing the texture data in the cache, and furnishing the texture data for blending with pixel data.
Type: Grant
Filed: June 20, 2005
Date of Patent: September 13, 2011
Assignee: NVIDIA Corporation
Inventors: Gopal Solanki, Kioumars Kevin Dawallu
-
Patent number: 8018465
Abstract: Methods for analyzing a list of routine identifiers to optimize processing of routines identified in the list. Some embodiments execute a set of routines in multiple passes where each pass comprises each routine in the set processing a single band of its source. The band size of the sources of the set is related to the size of a cache used during execution of the set. A band size of sources of the set is determined so that all data processed by and produced by any routine in the set can be stored to the cache while the routine processes one band of its source. Some embodiments use the list to combine two or more routines into a single routine where the list is modified accordingly. Some embodiments use the list for grouping and re-ordering routines identified in the list to send particular routines to an alternative processor for processing.
Type: Grant
Filed: March 31, 2009
Date of Patent: September 13, 2011
Assignee: Apple Inc.
Inventors: Kenneth M. Carson, Randy Ubillos, Eric Graves
-
Patent number: 7999821
Abstract: Circuits, methods, and apparatus that provide texture caches and related circuits that store and retrieve texels in an efficient manner. One such texture circuit can provide a configurable number of texel quads for a configurable number of pixels. For bilinear filtering, texels for a comparatively greater number of pixels can be retrieved. For trilinear filtering, texels in a first LOD are retrieved for a number of pixels during a first clock cycle; during a second clock cycle, texels in a second LOD are retrieved. When anisotropic filtering is needed, a greater number of texels can be retrieved for a comparatively lower number of pixels.
Type: Grant
Filed: December 19, 2007
Date of Patent: August 16, 2011
Assignee: NVIDIA Corporation
Inventor: Alexander L. Minkin
-
Patent number: 7996621
Abstract: According to embodiments of the invention, a step value and a step-interval cache coherency protocol may be used to update and invalidate data stored within cache memory. A step value may be an integer value and may be stored within a cache directory entry associated with data in the memory cache. Upon reception of a cache read request, along with the normal address comparison to determine if the data is located within the cache, a current step value may be compared with the stored step value to determine if the data is current. If the step values match, the data may be current and a cache hit may occur. However, if the step values do not match, the requested data may be provided from another source. Furthermore, an application may update the current step value to invalidate old data stored within the cache and associated with a different step value.
Type: Grant
Filed: July 12, 2007
Date of Patent: August 9, 2011
Assignee: International Business Machines Corporation
Inventors: Jeffrey Douglas Brown, Russell Dean Hoover, Eric Oliver Mejdrich, Kenneth Michael Valk
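The step-value check described above can be sketched in a few lines. This is an assumed software rendition of the hardware protocol (class and method names are invented): each directory entry stores the step value that was current when the data was cached, a read hits only if the tag matches and the stored step equals the current step, and advancing the current step invalidates every older entry in one operation.

```python
# Sketch: step-interval coherency. A mismatched step value turns a
# tag match into a miss, so incrementing current_step bulk-invalidates
# all data cached under earlier steps without touching each entry.

class SteppedCache:
    def __init__(self):
        self.current_step = 0
        self.entries = {}  # address -> (step_when_cached, data)

    def write(self, addr, data):
        self.entries[addr] = (self.current_step, data)

    def read(self, addr):
        """Return data on a hit, or None (fetch from another source)."""
        entry = self.entries.get(addr)
        if entry is None:
            return None
        step, data = entry
        if step != self.current_step:
            return None  # stale: step values do not match
        return data

    def advance_step(self):
        # Application-driven invalidation of all older data.
        self.current_step += 1
```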
-
Publication number: 20110148895
Abstract: A cache image including only cache entries with valid durations of at least a configured deployment date for a virtual machine image is prepared via an application server for the virtual machine image. The virtual machine image is deployed to at least one other application server as a virtual machine with the cache image including only the cache entries with the valid durations of at least the configured deployment date for the virtual machine image.
Type: Application
Filed: December 18, 2009
Publication date: June 23, 2011
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Erik J. Burckart, Andrew J. Ivory, Todd E. Kaplinger, Aaron K. Shook
-
Publication number: 20110142334
Abstract: Systems, methods and computer-readable storage media are disclosed for accelerating bitmap remoting by extracting non-grid tiles from source bitmaps. A server takes a source image, identifies possibly repetitive features, and tiles the image. For each tile that contains part of a possibly repetitive feature, the server replaces that part with the dominant color of the tile. The system then sends to a client a combination of new tiles and features, and indications to tiles and features that the client has previously received and stored, along with an indication of how to recreate the image based on the tiles and features.
Type: Application
Filed: December 11, 2009
Publication date: June 16, 2011
Applicant: Microsoft Corporation
Inventors: Nadim Y. Abdo, Voicu Anton Albu, Charles Lawrence Zitnick, III
-
Patent number: 7952588
Abstract: Techniques are described for processing computerized images with a graphics processing unit (GPU) using an extended vertex cache. The techniques include creating an extended vertex cache coupled to a GPU pipeline to reduce an amount of data passing through the GPU pipeline. The GPU pipeline receives an image geometry for an image, and stores attributes for vertices within the image geometry in the extended vertex cache. The GPU pipeline only passes vertex coordinates that identify the vertices and vertex cache index values that indicate storage locations of the attributes for each of the vertices in the extended vertex cache to other processing stages along the GPU pipeline. The techniques described herein defer the setup of attribute gradients to just before attribute interpolation in the GPU pipeline. The vertex attributes may be retrieved from the extended vertex cache for attribute gradient setup just before attribute interpolation in the GPU pipeline.
Type: Grant
Filed: August 3, 2006
Date of Patent: May 31, 2011
Assignee: Qualcomm Incorporated
Inventors: Guofang Jiao, Brian Evan Ruttenberg, Chun Yu, Yun Du
-
Patent number: 7952589
Abstract: A data processing apparatus generates a memory address corresponding to a first memory, and interpolates data read out from the first memory. The data processing apparatus selects a part of the memory address, checks if the first memory stores data corresponding to the selected part of the memory address, and transfers the data, for which it is determined that the first memory does not store the data, and which corresponds to the part of the memory address, from a second memory to the first memory. The data processing apparatus determines to change a part to be selected of the memory address based on the checking result indicating that the first memory does not store the data corresponding to the selected part of the memory address, and changes the part of the memory address corresponding to the characteristics of the memory address.
Type: Grant
Filed: December 1, 2006
Date of Patent: May 31, 2011
Assignee: Canon Kabushiki Kaisha
Inventor: Takayuki Tsutsumi
-
Patent number: 7948498
Abstract: Circuits, methods, and apparatus that store a large number of texture states in an efficient manner. A level-one texture cache includes cache lines that are distributed throughout a texture pipeline, where each cache line stores a texture state. The cache lines can be updated by retrieving data from a second-level texture state cache, which in turn is updated from a frame buffer or graphics memory. The second-level texture state cache can prefetch texture states using a list of textures that are needed for a shader program or program portion.
Type: Grant
Filed: October 13, 2006
Date of Patent: May 24, 2011
Assignee: NVIDIA Corporation
Inventor: Alexander L. Minkin
-
Publication number: 20110096082
Abstract: A memory access control device and method have a cache memory having a plurality of cache areas, each for storing image data of one macroblock, and a cache table having a plurality of table areas, corresponding to the plurality of cache areas, each for storing a validity flag indicating validity or an invalidity flag indicating invalidity of image data in a corresponding cache area and an in-frame address of image data of one macroblock stored in the corresponding cache area. A data request processor receives a data request including specification of an in-frame occupation region of requested image data from the image processor, determines target image data of at least one macroblock required to process the requested image data according to the in-frame occupation region of the requested image data, acquires the target image data from the cache memory, processes the requested image data using the acquired target image data, and outputs the processed image data to the image processor.
Type: Application
Filed: October 27, 2010
Publication date: April 28, 2011
Applicant: YAMAHA CORPORATION
Inventor: Noriyuki FUNAKUBO
-
Patent number: 7934054
Abstract: A re-fetching cache memory improves efficiency of a system, for example by advantageously sharing the cache memory and/or by increasing performance. When some or all of the cache memory is temporarily used for another purpose, some or all of a data portion of the cache memory is flushed, and some or all of a tag portion is saved in an archive. In some embodiments, some or all of the tag portion operates "in-place" as the archive, and in further embodiments, is placed in a reduced-power mode. When the temporary use completes, optionally and/or selectively, at least some of the tag portion is repopulated from the archive, and the data portion is re-fetched according to the repopulated tag portion. According to various embodiments, processor access to the cache is enabled during one or more of: the saving; the repopulating; and the re-fetching.
Type: Grant
Filed: May 22, 2007
Date of Patent: April 26, 2011
Assignee: Oracle America, Inc.
Inventors: Laurent R. Moll, Peter N. Glaskowsky, Joseph B. Rowlands
-
Patent number: 7928990
Abstract: Techniques are described for processing computerized images with a graphics processing unit (GPU) using a unified vertex cache and shader register file. The techniques include creating a shared shader coupled to the GPU pipeline and a unified vertex cache and shader register file coupled to the shared shader to substantially eliminate data movement within the GPU pipeline. The GPU pipeline sends image geometry information based on an image geometry for an image to the shared shader. The shared shader performs vertex shading to generate vertex coordinates and attributes of vertices in the image. The shared shader then stores the vertex attributes in the unified vertex cache and shader register file, and sends only the vertex coordinates of the vertices back to the GPU pipeline. The GPU pipeline processes the image based on the vertex coordinates, and the shared shader processes the image based on the vertex attributes.
Type: Grant
Filed: September 27, 2006
Date of Patent: April 19, 2011
Assignee: Qualcomm Incorporated
Inventors: Guofang Jiao, Chun Yu, Yun Du
-
Patent number: 7911474
Abstract: A memory manager interfaces between a rendering application and the driver controlling one or more memories. A multi-level brick cache system caches bricks in a memory hierarchy to accelerate the rendering. One example memory hierarchy may include system memory, AGP memory, and graphics memory. The memory manager allows control of brick overwriting based on current or past rendering. Since different memories are typically available, one or more memory managers may control storage of bricks into different memories to optimize rendering. Management of different memory levels, overwriting based on current or previous rendering, and an interfacing memory manager may each be used alone or in any possible combination.
Type: Grant
Filed: February 28, 2007
Date of Patent: March 22, 2011
Assignee: Siemens Medical Solutions USA, Inc.
Inventors: Wei Li, Gianluca Paladini
-
Patent number: 7908345
Abstract: The access method comprises the following steps: selecting a first data item in a digital document designated by a predetermined identifier, said digital document comprising at least first and second data items linked to each other in a chosen hierarchical relationship; verifying the presence of at least one address of a location containing said second data item of the digital document in storage means of the client device; in the absence of said address in said storage means, seeking said address in the network; in the event of a positive search, storing said address in the storage means of the client device; and subsequently accessing said second data item of the document from the address thus stored by anticipation and thus immediately available locally.
Type: Grant
Filed: April 1, 2004
Date of Patent: March 15, 2011
Assignee: Canon Kabushiki Kaisha
Inventors: Pascal Viger, Frédéric Mazé
-
Patent number: 7898551
Abstract: Systems and methods for graphics data management are described. One embodiment includes a method for reducing bank collisions within a level 2 (L2) cache comprising the following: reading texture data from external memory configured to store texture data used for texture filtering within the graphics processing unit, partitioning the texture data into banks, performing a bank swizzle operation on the banks, and writing the banks of data to the L2 cache.
Type: Grant
Filed: June 19, 2007
Date of Patent: March 1, 2011
Assignee: Via Technologies, Inc.
Inventors: Jim Xu, Wen Chen, Li Liang
-
Publication number: 20110043528
Abstract: This is directed to managing a cache size for glyphs used to display text or other information in an electronic device. In particular, this is directed to defining a variable hit rate for retrieving glyphs loaded in cache to limit the number of times the device is required to read glyphs from storage. The hit rate can vary based on any suitable number or type of factors, including for example the characters previously displayed or to be displayed in the future, the system requirements for system memory, or any other suitable factor. In some embodiments, the hit rate can vary when characters in a second alphabet are displayed among or after characters in a first alphabet (e.g., Japanese characters in a listing of Latin characters).
Type: Application
Filed: August 24, 2009
Publication date: February 24, 2011
Applicant: Apple Inc.
Inventors: Dmitriy Solomonov, Michael Ingrassia, James Eric Mason
-
Patent number: 7889386
Abstract: An image processing apparatus processes vector image data in units of blocks. When vector image data associated with a first block satisfies a predetermined condition, the image processing apparatus stores the result of processing the vector image data associated with the first block. When vector image data associated with a second block matches the vector image data associated with the first block, the image processing apparatus outputs the result of processing the vector image data associated with the first block, which is stored therein.
Type: Grant
Filed: January 23, 2007
Date of Patent: February 15, 2011
Assignee: Canon Kabushiki Kaisha
Inventor: Waki Murakami
-
Patent number: 7877565
Abstract: Systems and methods for using multiple versions of programmable constants within a multi-threaded processor allow a programmable constant to be changed before a program using the constants has completed execution. Processing performance may be improved since programs using different values for a programmable constant may execute simultaneously. The programmable constants are stored in a constant buffer and an entry of a constant buffer table is bound to the constant buffer. When a programmable constant is changed it is copied to an entry in a page pool and address translation for the page pool is updated to correspond to the old version (copy) of the programmable constant. An advantage is that the constant buffer stores the newest version of the programmable constant.
Type: Grant
Filed: January 31, 2006
Date of Patent: January 25, 2011
Assignee: NVIDIA Corporation
Inventors: Roger L. Allen, Cass W. Everitt, Henry Packard Moreton, Thomas H. Kong, Simon S. Moy
-
Patent number: 7868902
Abstract: A system and method for a row forwarding of pixel data in a 3-D graphics pipeline. Specifically, in one embodiment a data write unit capable of row forwarding in a graphics pipeline includes a first memory and logic. The first memory stores a plurality of rows of pixel information associated with a pixel. The plurality of rows of pixel information includes data related to surface characteristics of the pixel and includes a first row, e.g., a front row, and a second row, e.g., a rear row. A data write unit includes first logic for accessing a portion of the second row and for storing data accessed therein into a portion of the first row. The data write unit also comprises logic for recirculating the plurality of rows of pixel information to an upstream pipeline module for further processing thereof.
Type: Grant
Filed: May 14, 2004
Date of Patent: January 11, 2011
Assignee: Nvidia Corporation
Inventors: Edward A. Hutchins, Paul Kim
-
Patent number: 7852341
Abstract: A method and system for patching instructions in a 3-D graphics pipeline. Specifically, in one embodiment, instructions to be executed within a scheduling process for a shader pipeline of the 3-D graphics pipeline are patchable. A scheduler includes a decode table, an expansion table, and a resource table that are each patchable. The decode table translates high level instructions to an appropriate microcode sequence. The patchable expansion table expands a high level instruction to a program of microcode if the high level instruction is complex. The resource table assigns the units for executing the microcode. Addresses within each of the tables can be patched to modify existing instructions and create new instructions. That is, contents in each address in the tables that are tagged can be replaced with a patch value of a corresponding register.
Type: Grant
Filed: October 5, 2004
Date of Patent: December 14, 2010
Assignee: Nvidia Corporation
Inventors: Christian Rouet, Rui Bastos, Lordson Yue
-
Publication number: 20100302283
Abstract: In a conventional display device, when a user carries out a specific operation at high speed, the information stored in advance in memory is not sufficient for display, and the information required must be acquired each time from a server on a network, an external electronic apparatus such as an HD, or an internal long-term storage device of the display device. As a result, the user is kept waiting until the desired information is displayed, making it difficult for the graphic display to smoothly reflect a high-speed operation. To solve this problem, the present invention proposes a display device characterized by caching graphic information that is likely to be used for display in the future, based on the user's operation history.
Type: Application
Filed: April 18, 2008
Publication date: December 2, 2010
Applicant: Sharp Kabushiki Kaisha
Inventors: Jun Sasaki, Hiroyuki Nakamura, Kenji Sakamoto, Satoshi Matsuyama, Ryusuke Watanabe, Akio Uemichi
-
Patent number: 7836258
Abstract: According to embodiments of the invention, a distributed time base signal may be coupled to a memory directory which provides address translation for data located within a memory cache. The memory directory may have attribute bits which indicate whether or not the memory entries have been accessed by the distributed time base signal. Furthermore, the memory directory may have attribute bits which indicate whether or not a memory directory entry should be considered invalid after an access to the memory entry by the distributed time base signal. If the memory directory entry has been accessed by the distributed time base signal and the memory directory entry should be considered invalid after the access by the time base signal, any attempted address translation using the memory directory entry may cause a cache miss. The cache miss may initiate the retrieval of valid data from memory.
Type: Grant
Filed: November 13, 2006
Date of Patent: November 16, 2010
Assignee: International Business Machines Corporation
Inventors: Jeffrey Douglas Brown, Russell Dean Hoover, Eric Oliver Mejdrich
-
Patent number: 7834881
Abstract: An apparatus and method for simulating a multi-ported memory using lower port count memories as banks. Collector units gather source operands from the banks as needed to process program instructions. The collector units also gather constants that are used as operands. When all of the source operands needed to process a program instruction have been gathered, a collector unit outputs the source operands to an execution unit while avoiding writeback conflicts to registers specified by the program instruction that may be accessed by other execution units.
Type: Grant
Filed: November 1, 2006
Date of Patent: November 16, 2010
Assignee: NVIDIA Corporation
Inventors: Samuel Liu, John Erik Lindholm, Ming Y Siu, Brett W. Coon, Stuart F. Oberman
-
Publication number: 20100274974
Abstract: A system and method for replacing data in a cache utilizes cache block validity information, which contains information that indicates that data in a cache block is no longer needed for processing, to maintain least recently used information of cache blocks in a cache set of the cache, identifies the least recently used cache block of the cache set using the least recently used information of the cache blocks in the cache set, and replaces data in the least recently used cache block of the cache set with data from main memory.
Type: Application
Filed: April 24, 2009
Publication date: October 28, 2010
Applicant: NXP B.V.
Inventors: JAN-WILLEM VAN DE WAERDT, JOHAN GERARD WILLEM MARIA JANSSEN, MAURICE PENNERS
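The validity-assisted LRU replacement above can be sketched briefly. This is one interpretation of the abstract for illustration, not NXP's circuit (the class name and "demote to LRU position" policy are assumptions): a block flagged as no longer needed is moved to the least-recently-used position, so it is evicted before blocks that are still live.

```python
# Sketch: an LRU set where validity information demotes a block to the
# LRU position, making it the preferred eviction victim.

from collections import OrderedDict

class ValidityLRUSet:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.blocks = OrderedDict()  # tag -> data; first item is the LRU

    def access(self, tag, data):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)      # now most recently used
            return "hit"
        if len(self.blocks) >= self.num_ways:
            self.blocks.popitem(last=False)   # evict the current LRU block
        self.blocks[tag] = data
        return "miss"

    def mark_no_longer_needed(self, tag):
        # Validity information: make this block the next eviction victim.
        if tag in self.blocks:
            self.blocks.move_to_end(tag, last=False)
```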
-
Patent number: 7817154
Abstract: A graphics system has output states corresponding to a transformation of a user state of a software application to a graphics hardware state. The graphics system utilizes a technique, such as a conventional output state cache, to recognize that the overall state vector has taken on a previously-seen value. Additionally, a transition cache maps transitions in changing input state to changing output state. The transition cache is used to provide an alternative technique to determine output states based on transitions of input state.
Type: Grant
Filed: December 12, 2006
Date of Patent: October 19, 2010
Assignee: NVIDIA Corporation
Inventors: Rudy Jason Sams, Nicholas B. Carter
-
Publication number: 20100253694
Abstract: An image processing apparatus that enables to reduce needless consumption of memory band and control duplicated access to a main memory. A reading unit reads image data stored in a first storage unit and divides the image data into a plurality of rectangular areas of a predetermined size. A second storage unit stores image data in reference areas surrounding the rectangular areas, the reference areas having overlapped areas each of which includes a boundary between adjacent two rectangular areas. An image processing unit performs an image process based on the image data in the rectangular areas read by the reading unit and the image data in the reference areas stored in the second storage unit. A cache control unit controls to transfer the image data in the reference areas from the second storage unit to the image processing unit in response to a request from the image processing unit.
Type: Application
Filed: April 1, 2010
Publication date: October 7, 2010
Applicant: CANON KABUSHIKI KAISHA
Inventor: Minoru Kambegawa
-
Patent number: 7808506
Abstract: An intelligent caching data structure and mechanisms for storing visual information via objects and data representing graphics information. The data structure is generally associated with mechanisms that intelligently control how the visual information therein is populated and used. The cache data structure can be traversed for direct rendering, or traversed for pre-processing the visual information into an instruction stream for another entity. Much of the data typically has no external reference to it, thereby enabling more of the information stored in the data structure to be processed to conserve resources. A transaction/batching-like model for updating the data structure enables external modifications to the data structure without interrupting reading from the data structure, and such that changes received are atomically implemented. A method and mechanism are provided to call back to an application program in order to create or re-create portions of the data structure as needed, to conserve resources.
Type: Grant
Filed: August 27, 2009
Date of Patent: October 5, 2010
Assignee: Microsoft Corporation
Inventors: Joseph S. Beda, Adam M. Smith, Gerhard A. Schneider, Kevin T. Gallo, Ashraf A. Michail
-
Patent number: 7802056
Abstract: Techniques for management of drawing resources are described. In an implementation, a reference count numeral may be associated with a drawing resource stored in cache memory. One may be added to the reference count numeral each time a new drawing resource is added to memory. In addition, one may be removed from the reference count each time an existing drawing resource is removed from the memory. Also, the drawing resource may be maintained in the cache memory when the reference count numeral is greater than zero.
Type: Grant
Filed: July 2, 2007
Date of Patent: September 21, 2010
Assignee: Microsoft Corporation
Inventors: Seth M. Demsey, Tuan Huynh, Christopher W. Lorton
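The reference-counting discipline this abstract describes (increment on add, decrement on remove, keep the resource while the count is above zero) can be sketched as follows. The class and method names are illustrative, not taken from the patent.

```python
class RefCountedCache:
    """Sketch of a reference-counted drawing-resource cache: a resource stays
    cached exactly as long as its reference count is greater than zero."""

    def __init__(self):
        self._store = {}   # key -> [resource, refcount]

    def acquire(self, key, loader):
        entry = self._store.get(key)
        if entry is None:
            entry = [loader(key), 0]
            self._store[key] = entry
        entry[1] += 1      # one is added for each holder of the resource
        return entry[0]

    def release(self, key):
        entry = self._store[key]
        entry[1] -= 1      # one is removed when a holder lets go
        if entry[1] == 0:  # resource is evicted only when no holders remain
            del self._store[key]

    def cached(self, key):
        return key in self._store
```

Two views acquiring the same brush share one cached copy; the brush is evicted only after both release it.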
-
Patent number: 7796137
Abstract: Disclosed are an apparatus, a system, a method, a graphics processing unit ("GPU"), a computer device, and a computer medium to implement a pool of independent enhanced tags to, among other things, decouple a dependency between tags and cachelines. In one embodiment, an enhanced tag-based cache structure includes a tag repository configured to maintain a pool of enhanced tags. Each enhanced tag can have a match portion configured to form an association between the enhanced tag and an incoming address. Also, an enhanced tag can have a data locator portion configured to locate a cacheline in the cache in response to the formation of the association. The data locator portion enables the enhanced tag to locate multiple cachelines. Advantageously, the enhanced tag-based cache structure can be formed to adjust the degree of reusability of the enhanced tags independent from the degree of latency tolerance for the cacheline repository.
Type: Grant
Filed: October 24, 2006
Date of Patent: September 14, 2010
Assignee: NVIDIA Corporation
Inventors: Dane T. Mrazek, Sameer M. Gauria, James C. Bowman
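The tag/cacheline decoupling described here — a match portion that associates a tag with an address, and a locator portion that can point at several cachelines — can be sketched in a few lines. This is an assumed toy model of the idea, not the patented hardware structure.

```python
class EnhancedTag:
    """Sketch of an 'enhanced tag': the match portion associates the tag with
    an address; the data-locator portion can reference multiple cachelines."""

    def __init__(self, match, line_ids):
        self.match = match            # address-matching portion
        self.lines = list(line_ids)   # locator portion: several cachelines


class TagPool:
    """A pool of tags maintained independently of the cacheline storage."""

    def __init__(self):
        self.tags = {}                # address -> EnhancedTag

    def insert(self, address, line_ids):
        tag = EnhancedTag(address, line_ids)
        self.tags[address] = tag
        return tag

    def lookup(self, address):
        # A hit returns the tag, whose locator names every line it covers.
        return self.tags.get(address)
```

Because one tag can name several cachelines, the number of tags and the number of lines can be sized independently.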
-
Patent number: 7788656
Abstract: Disclosed is a system for reducing the memory and computational requirements of graphics operations. The system provides techniques for combining otherwise individual operations to apply filters to images. The combined filter emerging from the combination spares the processor both time and the creation of an entire intermediary image. The system further provides for application of these techniques in many contexts, including where the operations are fragment programs for a programmable GPU.
Type: Grant
Filed: December 15, 2005
Date of Patent: August 31, 2010
Assignee: Apple Inc.
Inventor: John Harper
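The benefit of combining per-pixel filters — one fused pass instead of materializing an intermediary image between stages — can be sketched as function composition. The filter functions below are hypothetical examples, not filters from the patent.

```python
def compose(*filters):
    """Sketch: fuse several per-pixel filters into one pass, so no
    intermediary image is created between the individual operations."""
    def fused(pixel):
        for f in filters:
            pixel = f(pixel)
        return pixel
    return fused


def apply_filter(image, f):
    """Apply a per-pixel filter across a flat list of 8-bit pixel values."""
    return [f(p) for p in image]


# Illustrative filters (assumed, 8-bit grayscale):
brighten = lambda p: min(p + 40, 255)
invert = lambda p: 255 - p

image = [0, 100, 250]
one_pass = apply_filter(image, compose(brighten, invert))
```

The fused pass produces the same result as applying `brighten` then `invert` separately, but touches the image only once.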
-
Patent number: 7786999
Abstract: A method of manipulating a time based stream of information to create a presentation is provided in which a processing system is employed. The method includes the rendering of a requested modification, such as the adding of an edit feature, to the information in forming the presentation. A simulation of the modification is displayed for the user to observe during the rendering process. A proxy of the information having the changes is generated and shown on a display screen. Other aspects of the present invention relating to the processing system displaying edit information for a time based stream of information for use in authoring a presentation are also described.
Type: Grant
Filed: October 4, 2000
Date of Patent: August 31, 2010
Assignee: Apple Inc.
Inventor: Glenn Reid
-
Patent number: 7783827
Abstract: The data processor enhances the bus throughput or data throughput of an external memory when there are frequent continuous reads with a smaller data size than the data bus width of the external memory. The data processor includes a memory control unit capable of controlling, in response to a clock, an external memory having plural banks that are independently controllable; plural buses connected to the memory control unit; and circuit modules capable of commanding memory accesses, which are provided in correspondence with each of the buses. The memory control unit contains bank caches each corresponding to the banks of the external memory.
Type: Grant
Filed: March 24, 2009
Date of Patent: August 24, 2010
Assignee: Renesas Technology Corp.
Inventors: Fumie Katsuki, Takanobu Naruse, Chiaki Fujii
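The bank-cache idea — narrow reads that are smaller than the bus width hit a per-bank buffer instead of generating repeated full-width bus transfers — can be sketched as follows. The controller below is an assumed toy model; names and sizes are illustrative.

```python
class BankedMemoryController:
    """Sketch: one cached row per bank, so frequent continuous narrow reads
    within a bank cost a single full-width bus transfer."""

    def __init__(self, banks):
        self.banks = banks        # bank id -> row data in external memory
        self.bank_cache = {}      # bank id -> cached row (the bank cache)
        self.bus_transfers = 0    # full-width external-memory transfers

    def read(self, bank, offset, size):
        if bank not in self.bank_cache:
            # One full-width transfer fills the bank cache for this bank.
            self.bus_transfers += 1
            self.bank_cache[bank] = self.banks[bank]
        row = self.bank_cache[bank]
        return row[offset:offset + size]


mc = BankedMemoryController({0: b"abcdefgh"})
mc.read(0, 0, 2)   # fills the bank cache: one bus transfer
mc.read(0, 2, 2)   # served from the bank cache: no new transfer
```

Two 2-byte reads from the same bank cost one external transfer rather than two.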
-
Patent number: 7782331
Abstract: An exemplary method for performing a bit block transfer (bitblt) includes receiving one or more graphics parameters specifying the bitblt and generating a specialized bitblt function to perform the bitblt. The specialized bitblt function includes one or more code blocks selected from a superset of code blocks based on the graphics parameters. A system includes a specialized bit block transfer (bitblt) function generator generating a specialized bitblt function to perform a specified bitblt. The specialized bitblt function includes intermediate language code corresponding to one or more graphics parameters specifying the bitblt. A translator translates the specialized bitblt function into machine-specific language code.
Type: Grant
Filed: June 24, 2004
Date of Patent: August 24, 2010
Assignee: Microsoft Corporation
Inventors: Jeffrey R Sirois, Joshua W Buckman, Kent D. Lottis
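The generator pattern described here — assemble a specialized blit function from code blocks chosen by the graphics parameters, rather than branching on every pixel — can be sketched with closures. The raster operations and parameter names below are assumptions for illustration, not those of the patent.

```python
def make_bitblt(rop="copy", alpha=None):
    """Sketch of a specialized-bitblt generator: parameter checks happen
    once, at generation time, and the returned function runs per pixel
    with no branching on the parameters."""
    # Select a code block for the raster operation (illustrative superset).
    if rop == "copy":
        combine = lambda s, d: s
    elif rop == "xor":
        combine = lambda s, d: s ^ d
    else:
        raise ValueError(f"unsupported rop: {rop}")

    # Optionally stack an alpha-blend code block on top.
    if alpha is None:
        pixel_op = combine
    else:
        a = alpha
        pixel_op = lambda s, d: (combine(s, d) * a + d * (255 - a)) // 255

    def blit(src, dst):
        return [pixel_op(s, d) for s, d in zip(src, dst)]

    return blit


copy_blit = make_bitblt("copy")
xor_blit = make_bitblt("xor")
```

Each distinct parameter combination yields its own straight-line blit function, mirroring the patent's specialization of intermediate code per parameter set.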
-
Patent number: 7768521
Abstract: Disclosed herein is an image processing apparatus, including: first storage means for storing data in a unit of a word; second storage means for storing data in a unit of a word, address information for managing writing and reading out of the data, and a correction flag which indicates, in a unit of a word, whether or not it is necessary to correct the data, in an associated relationship with each other; and supplying means for reading out and supplying the data of a unit of a word, the corresponding address information, and the corresponding correction flag stored in the second storage means to the first storage means. The first storage means refers to the address information to correct, in a unit of a word, the data indicated by the correction flag.
Type: Grant
Filed: March 13, 2007
Date of Patent: August 3, 2010
Assignee: Sony Corporation
Inventor: Takaaki Fuchie
-
Publication number: 20100188412
Abstract: Providing a content-based cache for graphics resource management is disclosed herein. In some aspects, a portion of a shadow copy of graphics resources is updated from an original copy of the graphics resources when a requested resource is not current. The shadow copy may be dedicated to a graphics processing unit (GPU), while the original copy may be maintained by a central processing unit (CPU). In further aspects, the requested graphics resource in the shadow copy may be compared to the corresponding graphics resource in the original copy when the GPU requests the graphics resource. The comparison may be performed by comparing hashes of each graphics resource and/or by comparing at least a portion of the graphics resources.
Type: Application
Filed: January 28, 2009
Publication date: July 29, 2010
Applicant: Microsoft Corporation
Inventors: Chen Li, Jinyu Li, Xin Tong, Barry C. Bond, Gang Chen
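The hash-comparison validation described here — refresh the shadow copy only when its content hash no longer matches the original — can be sketched with a standard hash function. The class name, return convention, and use of SHA-256 are assumptions for illustration.

```python
import hashlib


def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


class ShadowResourceCache:
    """Sketch: a GPU-side shadow copy of resources, validated against the
    CPU-maintained original by comparing content hashes on each request."""

    def __init__(self, original):
        self.original = original   # CPU-side copy: name -> bytes
        self.shadow = {}           # GPU-side shadow copy
        self.hashes = {}           # content hash per shadow entry

    def request(self, name):
        data = self.original[name]
        h = _digest(data)
        # The shadow is refreshed only when the hash shows it is not current.
        if self.hashes.get(name) != h:
            self.shadow[name] = data
            self.hashes[name] = h
            return self.shadow[name], True    # (resource, was_updated)
        return self.shadow[name], False
```

Unchanged resources cost only a hash comparison; the copy into the shadow happens exactly when the original has actually changed.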
-
Patent number: 7760804
Abstract: Image data is processed into first and second component pixel blocks, where each of the first blocks is associated with a respective one of the second blocks to define a combination pixel block. The first and second blocks are written to memory through a cache that is used as a write buffer. The cache is logically partitioned into a contiguous portion to store the first blocks and not any second blocks, and another contiguous portion to store the second blocks and not any first blocks. Other embodiments are also described and claimed.
Type: Grant
Filed: June 21, 2004
Date of Patent: July 20, 2010
Assignee: Intel Corporation
Inventors: Brian E. Ruttenberg, Prasoonkumar Surti
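The logical partitioning described here — one contiguous cache region reserved for first-component blocks and another for second-component blocks — can be sketched as follows. Treating the components as luma/chroma planes is an assumption; the patent speaks only of first and second component blocks.

```python
class PartitionedWriteBuffer:
    """Sketch: a cache used as a write buffer, logically split into one
    contiguous region for first-component blocks and another contiguous
    region for second-component blocks; neither region holds the other's
    block type."""

    def __init__(self, total_slots, first_slots):
        self.first = [None] * first_slots                 # first blocks only
        self.second = [None] * (total_slots - first_slots)  # second blocks only

    def write_first(self, slot, block):
        self.first[slot] = block

    def write_second(self, slot, block):
        self.second[slot] = block

    def flush(self):
        """Drain both partitions to memory in contiguous order."""
        out = [b for b in self.first + self.second if b is not None]
        self.first = [None] * len(self.first)
        self.second = [None] * len(self.second)
        return out


buf = PartitionedWriteBuffer(total_slots=8, first_slots=6)
buf.write_first(0, "Y-block-0")     # e.g. a luma block (assumed mapping)
buf.write_second(0, "UV-block-0")   # e.g. its associated chroma block
```

Because each partition is contiguous, a flush writes each component's blocks to memory as one run.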
-
Patent number: 7746344
Abstract: A renderer for performing stroke-based rendering determines whether two given overlapping strokes depict an occlusion in a three-dimensional scene. The renderer may then use this information to determine whether to apply an occlusion constraint between the strokes when rendering an image or a frame from an animation. In one implementation, the renderer determines whether the two strokes together depict a single view patch of surface in the scene (i.e., a single portion of three-dimensional surface in the scene as seen from the rendering viewpoint). The renderer builds an image-space patch of surface defined from the union of the two overlapping strokes and then determines whether there exists a single three-dimensional view patch of surface that projects onto the image-space patch and that contains both strokes' three-dimensional anchor points. Which stroke occludes the other can be determined by the relative three-dimensional depth of the strokes' anchor points from the rendering viewpoint.
Type: Grant
Filed: January 29, 2007
Date of Patent: June 29, 2010
Assignee: Auryn Inc.
Inventors: Stephane Grabli, Robert Kalnins, Amitabh Agrawal, Nathan LeZotte
-
Publication number: 20100149202
Abstract: A cache memory device includes a memory section configured to store image data of a frame with a predetermined size as one cache block, and an address conversion section configured to convert a memory address of the image data such that a plurality of different indices are assigned in units of the predetermined size in the horizontal direction in the frame so as to generate address data, wherein the image data is output from the memory section as output data by specifying a tag, an index, and a block address based on the address data generated by the address conversion section through conversion.
Type: Application
Filed: November 23, 2009
Publication date: June 17, 2010
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventor: Kentaro Yoshikawa
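The address conversion described here — assigning different cache indices to horizontally adjacent blocks of the frame — can be sketched as a coordinate-to-(tag, index, offset) mapping. The block size, index count, and tag encoding below are assumptions chosen for illustration.

```python
def tile_address(x, y, block_w=16, block_h=16, num_indices=8):
    """Sketch: map a pixel coordinate to (tag, index, offset) so that
    horizontally adjacent blocks receive distinct cache indices, and a
    scan across a row of the frame does not thrash a single cache set."""
    bx, by = x // block_w, y // block_h

    # Different indices are assigned in units of the block size in the
    # horizontal direction (assumed round-robin over num_indices sets).
    index = bx % num_indices

    # The remaining block coordinates identify the block within its set.
    tag = (by, bx // num_indices)

    # Position of the pixel inside its cache block.
    offset = (y % block_h) * block_w + (x % block_w)
    return tag, index, offset
```

With this mapping, eight consecutive 16-pixel-wide blocks along a row land in eight different cache sets.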
-
Patent number: 7737985
Abstract: Apparatus are provided including device memory, hardware entities, a sub-image cell value cache, and a cache write operator. At least some of the hardware entities perform actions involving access to and use of the device memory. The hardware entities include 3D graphics circuitry to process, for ready display, 3D images from primitive objects. The cache is separate from the device memory, and is provided to hold data, including buffered sub-image cell values. The cache is connected to the 3D graphics circuitry so that pixel processing portions of the 3D graphics circuitry access the buffered sub-image cell values in the cache, in lieu of the pixel processing portions directly accessing the sub-image cell values in the device memory. The write operator writes the buffered sub-image cell values to the device memory under direction of a priority scheme. The priority scheme preserves in the cache the border cell values bordering one or more primitive objects.
Type: Grant
Filed: January 8, 2007
Date of Patent: June 15, 2010
Assignee: QUALCOMM Incorporated
Inventors: William Torzewski, Chun Yu, Alexei V. Bourd
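The priority scheme's effect — border cell values are the last to be written back out of the cache — can be sketched as a victim-selection rule. This is an assumed simplification: the patent describes hardware direction of a write operator, while the sketch just picks which buffered entry to flush first.

```python
def flush_candidate(cache, border_keys):
    """Sketch of a priority scheme that preserves border cells: prefer to
    flush a non-border entry; fall back to a border entry only when
    nothing else remains in the cache."""
    for key in cache:               # dict preserves insertion order
        if key not in border_keys:
            return key              # non-border cells are flushed first
    return next(iter(cache))        # only border cells remain


cache = {"cell_a": 1, "cell_b": 2, "cell_c": 3}
border = {"cell_a", "cell_c"}       # cells bordering a primitive (assumed)
victim = flush_candidate(cache, border)
```

Here `cell_b` is chosen for write-back, so the border cells stay cached for the pixel-processing stage that still needs them.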
-
Patent number: 7727071
Abstract: A centralized gaming system comprises a central server system and a plurality of display terminals remote from and linked to the central server system. The central server system includes a master game server, a game execution server, and a database server. The master game server stores a plurality of games of chance. Each game includes respective game play software and respective audiovisual software. In response to one of the games being selected for play at one of the display terminals, the game play software for the selected game is loaded from the master game server into the game execution server and is executed by the game execution server to randomly select an outcome. The audiovisual software for the selected game is selectively executed at the display terminal to visually represent the outcome on a display of the display terminal. The database server collects game activity data based on the outcome and maintains such data for report generation and player tracking purposes.
Type: Grant
Filed: June 19, 2007
Date of Patent: June 1, 2010
Assignee: WMS Gaming Inc.
Inventor: John J. Giobbi
-
Patent number: 7724263
Abstract: A system and method for a data write unit in a 3-D graphics pipeline including generic cache memories. Specifically, in one embodiment a data write unit includes a first memory, a plurality of cache memories, and a data write circuit. The first memory receives a pixel packet associated with a pixel. The pixel packet includes data related to surface characteristics of the pixel. The plurality of cache memories is coupled to the first memory for storing pixel information associated with a plurality of surface characteristics of a plurality of pixels. Each of the plurality of cache memories is programmably associated with a designated surface characteristic. The data write circuit is coupled to the first memory and the plurality of cache memories. The data write circuit is operable under program control to obtain designated portions of the pixel packet for storage into the plurality of cache memories.
Type: Grant
Filed: May 14, 2004
Date of Patent: May 25, 2010
Assignee: Nvidia Corporation
Inventors: Edward A. Hutchins, Paul Kim, Brian K. Angell