Patents Assigned to NVIDIA
  • Patent number: 8595437
    Abstract: One embodiment of the present invention sets forth a compression status bit cache with deterministic latency for isochronous memory clients of compressed memory. The compression status bit cache improves overall memory system performance by providing on-chip availability of compression status bits that are used to size and interpret a memory access request to compressed memory. To avoid non-deterministic latency when an isochronous memory client accesses the compression status bit cache, two design features are employed. The first design feature involves bypassing any intermediate cache when the compression status bit cache reads a new cache line in response to a cache read miss, thereby eliminating additional, potentially non-deterministic latencies outside the scope of the compression status bit cache.
    Type: Grant
    Filed: November 21, 2008
    Date of Patent: November 26, 2013
    Assignee: Nvidia Corporation
    Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, Karan Mehra, James Roberts
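    Sketch: a minimal C++ illustration of the deterministic-miss idea in the abstract above — a status-bit cache whose miss fill goes straight to a backing store rather than through any intermediate cache. Class and constant names (CompStatusBitCache, kBitsPerTile, etc.) are invented for illustration; this is not the patented implementation.
```cpp
// Minimal sketch of a compression-status-bit cache: on a miss the fill goes
// straight to the backing status-bit store (modeling the bypass of any
// intermediate cache), so the worst-case fill latency stays bounded.
// All names and sizes are illustrative assumptions.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct StatusLine { uint64_t bits; };   // compression bits for a group of tiles

class CompStatusBitCache {
public:
    explicit CompStatusBitCache(const std::vector<uint64_t>& backing)
        : backing_(backing) {}

    // Returns the compression status bits covering memory tile `tile`.
    uint64_t Lookup(std::size_t tile) {
        const std::size_t line = tile / kTilesPerLine;
        auto it = lines_.find(line);
        if (it == lines_.end()) {
            // Miss: fill directly from the backing store; no intermediate
            // cache is consulted, so the fill latency is deterministic.
            it = lines_.emplace(line, StatusLine{backing_[line]}).first;
        }
        const unsigned shift = (tile % kTilesPerLine) * kBitsPerTile;
        return (it->second.bits >> shift) & ((1u << kBitsPerTile) - 1);
    }

private:
    static constexpr std::size_t kBitsPerTile  = 2;   // e.g. uncompressed/2:1/4:1/8:1
    static constexpr std::size_t kTilesPerLine = 64 / kBitsPerTile;
    std::vector<uint64_t> backing_;                    // off-chip status-bit storage
    std::unordered_map<std::size_t, StatusLine> lines_;
};

int main() {
    std::vector<uint64_t> backing(16, 0);
    backing[0] = 0b1001;                 // tiles 0 and 1 carry different status
    CompStatusBitCache cache(backing);
    std::cout << "tile 0 status: " << cache.Lookup(0) << "\n";  // 1
    std::cout << "tile 1 status: " << cache.Lookup(1) << "\n";  // 2
}
```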
  • Patent number: 8595394
    Abstract: A method for dynamic buffering of disk I/O command chains for a computer system. The method includes receiving a plurality of disk I/O command chains from at least one thread executing on a processor of the computer system. A respective plurality of pointers for the disk I/O command chains are stored in a buffer of a disk controller. The disk I/O command chains are accessed for execution by the disk controller by serially accessing the pointers in the buffer.
    Type: Grant
    Filed: December 1, 2003
    Date of Patent: November 26, 2013
    Assignee: Nvidia Corporation
    Inventors: Radoslav Danilak, Krishnaraj S. Rao
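    Sketch: a toy C++ model of the pointer buffer in the abstract above — producer threads store pointers to their command chains, and the controller drains the pointers serially. All names (ControllerPointerBuffer, DiskCommand) are illustrative, not from the patent.
```cpp
// Toy model of the pointer buffer: threads push pointers to their disk I/O
// command chains into a fixed-size buffer; the "controller" drains the
// pointers serially and walks each chain. Names are illustrative only.
#include <array>
#include <cstddef>
#include <iostream>
#include <vector>

struct DiskCommand { int lba; int sectors; };
using CommandChain = std::vector<DiskCommand>;

class ControllerPointerBuffer {
public:
    bool Push(const CommandChain* chain) {           // called by producer threads
        if (count_ == slots_.size()) return false;   // buffer full
        slots_[(head_ + count_) % slots_.size()] = chain;
        ++count_;
        return true;
    }
    // The controller accesses chains by serially walking the stored pointers.
    void DrainAll() {
        while (count_ > 0) {
            const CommandChain* chain = slots_[head_];
            head_ = (head_ + 1) % slots_.size();
            --count_;
            for (const DiskCommand& cmd : *chain)
                std::cout << "  I/O: lba=" << cmd.lba << " sectors=" << cmd.sectors << "\n";
        }
    }
private:
    std::array<const CommandChain*, 8> slots_{};
    std::size_t head_ = 0, count_ = 0;
};

int main() {
    CommandChain a{{100, 8}, {108, 8}};
    CommandChain b{{5000, 16}};
    ControllerPointerBuffer buf;
    buf.Push(&a);
    buf.Push(&b);
    buf.DrainAll();
}
```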
  • Patent number: 8595425
    Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A “direct mapped” storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A “local and global cache” storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
    Type: Grant
    Filed: September 25, 2009
    Date of Patent: November 26, 2013
    Assignee: NVIDIA Corporation
    Inventors: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
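    Sketch: a hypothetical C++ illustration of carving one storage array into a direct-mapped scratch region plus a small tagged cache region, the reconfiguration idea described above. Sizes, the replacement policy, and every identifier are invented; no real hardware detail is reflected here.
```cpp
// Sketch of one storage array carved into two regions: a "direct mapped"
// scratch region addressed by plain offset, and a small tagged cache region
// that backs load/store requests. Sizes and policies are invented.
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <optional>

class UnifiedL1 {
public:
    // Reconfigure how much of the array is direct-mapped scratch storage.
    void Configure(std::size_t scratch_words) { scratch_words_ = scratch_words; }

    // Direct-mapped region: clients address it by offset, no tags involved.
    void ScratchWrite(std::size_t word, uint32_t v) { storage_.at(word) = v; }
    uint32_t ScratchRead(std::size_t word) const   { return storage_.at(word); }

    // Cache region: a trivial direct-mapped tag check over the remaining words.
    std::optional<uint32_t> CachedLoad(uint32_t addr) const {
        const std::size_t slot = scratch_words_ + addr % CacheWords();
        return tags_[slot] == addr ? std::optional<uint32_t>(storage_[slot]) : std::nullopt;
    }
    void CacheFill(uint32_t addr, uint32_t v) {
        const std::size_t slot = scratch_words_ + addr % CacheWords();
        tags_[slot] = addr;
        storage_[slot] = v;
    }

private:
    std::size_t CacheWords() const { return storage_.size() - scratch_words_; }
    std::size_t scratch_words_ = 0;
    std::array<uint32_t, 256> storage_{};   // one physical array serves both regions
    std::array<uint32_t, 256> tags_{};
};

int main() {
    UnifiedL1 l1;
    l1.Configure(128);                      // half scratch, half cache
    l1.ScratchWrite(3, 42);                 // attribute/primitive-exchange style use
    l1.CacheFill(0x1000, 7);                // global load path
    std::cout << l1.ScratchRead(3) << " " << l1.CachedLoad(0x1000).value() << "\n";
}
```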
  • Patent number: 8593472
    Abstract: One embodiment of the invention sets forth a mechanism for retrieving and storing data from/to a frame buffer via a storage driver included in a GPU driver. The storage driver includes three separate routines, the registration engine, the page-fault routine and the write-back routine, that facilitate the transfer of data between the frame buffer and the system memory. The registration engine registers a file system, corresponding to the frame buffer, the page-fault routine and the write-back routine with the VMM. The page-fault routine causes a portion of data stored in a specific memory location in the frame buffer to be transmitted to a corresponding memory location in the application memory. The write-back routine causes data stored in a particular memory location in the application memory to be transmitted to a corresponding memory location in the frame buffer.
    Type: Grant
    Filed: July 31, 2009
    Date of Patent: November 26, 2013
    Assignee: Nvidia Corporation
    Inventor: Franck Diard
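    Sketch: an illustrative C++ mock of the three-routine structure (registration, page fault, write-back) using std::function callbacks and plain memcpy; MockVmm and FbStorageDriver are invented stand-ins, not a real driver or VMM interface.
```cpp
// Sketch of the three-routine structure described above: a registration step
// hands a page-fault handler and a write-back handler to a (mocked) VMM; the
// handlers copy pages between a simulated frame buffer and application memory.
// All types and names here are illustrative, not an actual driver interface.
#include <cstddef>
#include <cstring>
#include <functional>
#include <iostream>
#include <vector>

constexpr std::size_t kPage = 4096;

struct MockVmm {
    std::function<void(std::size_t)> on_page_fault;   // page index -> fill app memory
    std::function<void(std::size_t)> on_write_back;   // page index -> flush to FB
};

class FbStorageDriver {
public:
    explicit FbStorageDriver(std::size_t pages)
        : frame_buffer_(pages * kPage, 0xAB), app_memory_(pages * kPage, 0) {}

    void Register(MockVmm& vmm) {                      // "registration engine"
        vmm.on_page_fault = [this](std::size_t p) {    // frame buffer -> application memory
            std::memcpy(&app_memory_[p * kPage], &frame_buffer_[p * kPage], kPage);
        };
        vmm.on_write_back = [this](std::size_t p) {    // application memory -> frame buffer
            std::memcpy(&frame_buffer_[p * kPage], &app_memory_[p * kPage], kPage);
        };
    }

    std::vector<unsigned char> frame_buffer_;
    std::vector<unsigned char> app_memory_;
};

int main() {
    MockVmm vmm;
    FbStorageDriver driver(4);
    driver.Register(vmm);
    vmm.on_page_fault(2);                              // fault on page 2: pull from FB
    driver.app_memory_[2 * kPage] = 0x11;              // application modifies the page
    vmm.on_write_back(2);                              // flush the change back
    std::cout << std::hex << int(driver.frame_buffer_[2 * kPage]) << "\n";  // 11
}
```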
  • Patent number: 8593469
    Abstract: In some embodiments, a video processing system includes a video processor, an external memory, and an integrated circuit that implements both a memory controller (having embedded intelligence) and an internal memory coupled to the memory controller. The memory controller is configured to pre-cache in the internal memory partial frames of reference video data in the external memory (e.g., N-line slices of M-line reference frames, where M>N), and to respond to requests (e.g., from the video processor) for blocks of reference video data including by determining whether each requested block (or each of at least two portions thereof) has been pre-cached in the internal memory, causing each requested cached block (or portion thereof) to be read from the internal memory, and causing each requested non-cached block (or portion thereof) to be read from the external memory.
    Type: Grant
    Filed: March 29, 2006
    Date of Patent: November 26, 2013
    Assignee: Nvidia Corporation
    Inventors: Parthasarathy Sriram, Han Chou
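    Sketch: a small C++ example of the slice pre-cache decision — rows of a requested block that fall inside the cached N-line window are counted as internal-memory reads, the rest as external-memory reads. The SliceCache structure and the numbers are assumptions for illustration.
```cpp
// Toy version of the slice pre-cache: the controller keeps an N-line window of
// each M-line reference frame in "internal" memory; a block request is split by
// row into the part that hits the window and the part that must go "external".
#include <iostream>

struct SliceCache {
    int first_cached_row;   // first reference row currently held on chip
    int cached_rows;        // N: height of the pre-cached slice

    bool RowIsCached(int row) const {
        return row >= first_cached_row && row < first_cached_row + cached_rows;
    }
};

// Counts how many rows of a requested block come from internal vs external memory.
void ServeBlock(const SliceCache& c, int block_top, int block_height) {
    int internal = 0, external = 0;
    for (int r = block_top; r < block_top + block_height; ++r)
        (c.RowIsCached(r) ? internal : external)++;
    std::cout << "block rows " << block_top << ".." << block_top + block_height - 1
              << ": " << internal << " from internal memory, "
              << external << " from external memory\n";
}

int main() {
    SliceCache cache{/*first_cached_row=*/64, /*cached_rows=*/32};  // N=32 cached lines
    ServeBlock(cache, 70, 16);   // fully inside the cached slice
    ServeBlock(cache, 90, 16);   // straddles the slice boundary: split request
}
```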
  • Publication number: 20130310036
    Abstract: A mobile terminal comprising: transceiver apparatus for accessing a wireless network using an earlier and a later generation radio access technology, to establish a voice channel and packet data channel; and an inter radio access technology selector configured to monitor a condition for disabling the earlier generation access, being a condition other than coverage under the earlier generation technology falling below an acceptable lower level. The selector makes inter radio access technology decisions dynamically from the mobile terminal by updating registration with the network to indicate that the earlier generation technology is no longer supported. The selector thereby prevents the mobile terminal being subject to decisions from the network that would otherwise impose transfer to the earlier generation. At least some of the decisions made from the mobile terminal thus disable the earlier generation access whilst in presence of at least the lower level of coverage under the earlier generation.
    Type: Application
    Filed: December 20, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventors: Steve Molloy, Stephen A. Allpress, Mathieu Imbault
  • Publication number: 20130311797
    Abstract: A system and method for power management by providing advance notice of events. The method includes snooping a register of an operating system timer to determine a timer period associated with a scheduled event. A unit of a computer system is identified that is in a low power state. A wake up latency of the unit is determined that is based on the low power state. An advance period is determined based on the wake up latency. An advance notice of the operating system timer is triggered based on the timer period and the advance period to wake up the unit.
    Type: Application
    Filed: May 16, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventors: Sagheer Ahmad, Jay Kishora Gupta, Laurent Rene Moll
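    Sketch: the advance-notice arithmetic as a short C++ function — the notice time is the timer deadline minus the wake-up latency of the unit's current low-power state, clamped to the present. The latency table and state names are invented examples.
```cpp
// Sketch of the advance-notice arithmetic: given the next OS timer deadline and
// a unit's wake-up latency for its current low-power state, the notice fires at
// deadline - wake_latency (clamped to "now"), so the unit is awake by the event.
#include <algorithm>
#include <cstdint>
#include <iostream>

enum class PowerState { Active, LightSleep, DeepSleep };

// Hypothetical per-state wake-up latencies, in microseconds.
uint64_t WakeLatencyUs(PowerState s) {
    switch (s) {
        case PowerState::LightSleep: return 50;
        case PowerState::DeepSleep:  return 400;
        default:                     return 0;
    }
}

// Returns the absolute time (us) at which to trigger the advance notice.
uint64_t AdvanceNoticeTime(uint64_t now_us, uint64_t timer_deadline_us, PowerState s) {
    const uint64_t advance = WakeLatencyUs(s);
    return std::max(now_us, timer_deadline_us > advance ? timer_deadline_us - advance
                                                        : now_us);
}

int main() {
    uint64_t now = 10'000;
    uint64_t deadline = 11'000;                       // scheduled event in 1 ms
    std::cout << "wake deep-sleep unit at t="
              << AdvanceNoticeTime(now, deadline, PowerState::DeepSleep) << " us\n"; // 10600
}
```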
  • Publication number: 20130311303
    Abstract: Embodiments of the invention may include receiving a plurality of bids, wherein each bid corresponds to an advertisement placement opportunity in a plurality of video frames generated by a graphics processing system in response to instructions from a software application. In addition, a winning bid may be determined from the plurality of bids by evaluating the plurality of bids. Further, an advertisement corresponding to the winning bid may be provided to the graphics processing system, wherein the graphics processing system is operable to include the advertisement in the plurality of video frames for display.
    Type: Application
    Filed: December 21, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA Corporation
    Inventor: Jen-Hsun Huang
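    Sketch: a minimal C++ rendering of the bid-evaluation step — the highest bid among those received for a placement opportunity wins and its asset is handed to the renderer. The Bid fields and the price-only scoring rule are assumptions, not the claimed evaluation method.
```cpp
// Minimal sketch of the bid-evaluation step: each bid targets an ad placement
// opportunity in upcoming frames; the highest evaluated bid wins and its ad is
// handed to the renderer. Bid fields and scoring are invented for illustration.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Bid {
    std::string advertiser;
    std::string ad_asset;     // texture/image to composite into the frames
    double amount;            // price offered for this placement opportunity
};

const Bid* SelectWinningBid(const std::vector<Bid>& bids) {
    if (bids.empty()) return nullptr;
    return &*std::max_element(bids.begin(), bids.end(),
                              [](const Bid& a, const Bid& b) { return a.amount < b.amount; });
}

int main() {
    std::vector<Bid> bids{{"acme", "acme_billboard.png", 0.40},
                          {"globex", "globex_banner.png", 0.55}};
    if (const Bid* win = SelectWinningBid(bids))
        std::cout << "render " << win->ad_asset << " for " << win->advertiser << "\n";
}
```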
  • Publication number: 20130311308
    Abstract: Embodiments of the invention may include, in response to instructions from a software application, generating a plurality of video frames that are operable to represent a three dimensional virtual space, wherein the three dimensional virtual space comprises a shape having a surface. In addition, a supplemental texture may be applied to the surface of the shape, wherein the supplemental texture is different from an original texture instructed to be applied to the surface by the software application. Further, the plurality of video frames may be displayed, wherein the supplemental texture is rendered visible.
    Type: Application
    Filed: December 21, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventor: Jen-Hsun Huang
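    Sketch: one plausible way to express the substitution in C++ — a lookup table redirects an application's original texture ID to a supplemental one at bind time. The table, IDs, and function names are invented for illustration.
```cpp
// Sketch of the substitution step: a lookup table maps an application's original
// texture to a supplemental texture, and the bind call is redirected before the
// frame is rendered. The table contents and IDs are purely illustrative.
#include <cstdint>
#include <iostream>
#include <unordered_map>

using TextureId = uint32_t;

// original texture -> supplemental texture to apply to the same surface
std::unordered_map<TextureId, TextureId> g_substitutions = {{101, 901}};

TextureId ResolveTexture(TextureId requested_by_app) {
    auto it = g_substitutions.find(requested_by_app);
    return it != g_substitutions.end() ? it->second : requested_by_app;
}

void BindTextureForSurface(const char* surface, TextureId requested) {
    std::cout << surface << ": app asked for texture " << requested
              << ", binding " << ResolveTexture(requested) << "\n";
}

int main() {
    BindTextureForSurface("billboard_quad", 101);   // substituted -> 901
    BindTextureForSurface("ground_plane", 55);      // untouched   -> 55
}
```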
  • Publication number: 20130311752
    Abstract: A processing system comprising a microprocessor core and a translator. Within the microprocessor core are arranged a hardware decoder configured to selectively decode instructions for execution in the microprocessor core, and a logic structure configured to track usage of the hardware decoder. The translator is operatively coupled to the logic structure and configured to selectively translate the instructions for execution in the microprocessor core, based on the usage of the hardware decoder as determined by the logic structure.
    Type: Application
    Filed: May 18, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventors: Rupert Brauch, Madhu Swarna, Ross Segelken, David Dunn, Ben Hertzberg
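    Sketch: a toy C++ model of the decode-vs-translate decision driven by the usage-tracking structure — after a region has been hardware-decoded a threshold number of times, a translation is produced and reused. The threshold, the region granularity, and all names are assumptions.
```cpp
// Toy model of the decode-vs-translate decision: a counter structure tracks how
// often the hardware decoder has handled a region of code; once usage crosses a
// threshold the translator produces (and thereafter reuses) a translation.
// The threshold and the notion of a "region" are invented for illustration.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <unordered_set>

class UsageTracker {
public:
    // Returns true if the region should now be translated instead of decoded.
    bool NoteHardwareDecode(uint64_t region_addr) {
        return ++decode_count_[region_addr] >= kHotThreshold;
    }
private:
    static constexpr int kHotThreshold = 3;
    std::unordered_map<uint64_t, int> decode_count_;
};

int main() {
    UsageTracker tracker;
    std::unordered_set<uint64_t> translated;
    for (int run = 0; run < 5; ++run) {
        const uint64_t region = 0x4000;               // same code region re-executed
        if (translated.count(region)) {
            std::cout << "run " << run << ": execute translation\n";
        } else if (tracker.NoteHardwareDecode(region)) {
            translated.insert(region);                // region is hot: translate it
            std::cout << "run " << run << ": translate region\n";
        } else {
            std::cout << "run " << run << ": hardware decode\n";
        }
    }
}
```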
  • Publication number: 20130311307
    Abstract: Embodiments of the invention may include, in response to instructions from a software application, generating a plurality of video frames that are operable to represent a three dimensional virtual space. In addition, a supplemental image may be included in at least one of the plurality of video frames instead of an original image instructed to be included in the plurality of video frames by the software application. Further, the plurality of video frames may be displayed, wherein the supplemental image is rendered visible.
    Type: Application
    Filed: December 21, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventor: Jen-Hsun Huang
  • Publication number: 20130311548
    Abstract: User inputs are received from end user devices. The user inputs are associated with applications executing in parallel on a computer system. Responsive to the user inputs, data is generated using a graphics processing unit (GPU) configured as multiple virtual GPUs that are concurrently utilized by the applications. The data is then directed to the proper end user devices for display.
    Type: Application
    Filed: December 26, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventors: Jen-Hsun Huang, Franck R. Diard, Andrew Currid
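    Sketch: a minimal C++ illustration of the routing idea — each application is mapped to a virtual GPU and to the end-user device that should receive its frames. The routing table and identifiers are invented; no virtualization mechanics are modeled.
```cpp
// Sketch of the routing idea: each application is pinned to one virtual GPU, and
// every frame the virtual GPU produces is tagged with the application so it can
// be sent back to the device that issued the input. All structures are invented.
#include <iostream>
#include <map>
#include <string>
#include <utility>

struct Frame { std::string app; int number; };

// app -> (virtual GPU index, destination end-user device)
std::map<std::string, std::pair<int, std::string>> g_routing = {
    {"racer",   {0, "tablet-17"}},
    {"shooter", {1, "phone-42"}},
};

void HandleUserInput(const std::string& app, const std::string& input) {
    auto [vgpu, device] = g_routing.at(app);
    std::cout << "input '" << input << "' -> app " << app << " on vGPU " << vgpu << "\n";
    Frame f{app, 1};                                  // frame rendered by that vGPU
    std::cout << "frame " << f.number << " of " << f.app << " -> " << device << "\n";
}

int main() {
    HandleUserInput("racer", "steer-left");
    HandleUserInput("shooter", "fire");
}
```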
  • Publication number: 20130307999
    Abstract: The disclosure provides a digital camera. The digital camera includes an image sensor configured to produce image sensor data. The digital camera further includes (i) an image signal processor configured to receive and perform a plurality of on-camera processing operations on the image sensor data, where such processing operations yield a plurality of intermediate processed versions of the image sensor data, and (ii) a communication module configured to wirelessly transmit, to an off-camera image signal processor, image source data which includes at least one of: (a) the image sensor data and (b) one of the intermediate processed versions of the image sensor data, where such transmission is performed automatically in response to producing the image sensor data.
    Type: Application
    Filed: May 1, 2013
    Publication date: November 21, 2013
    Applicant: NVIDIA Corporation
    Inventor: Ricardo J. Motta
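    Sketch: an illustrative C++ pipeline with a configurable tap point — each stage yields an intermediate version of the sensor data, and either the raw data or the version after a chosen stage is handed to a stubbed wireless transmit call. Stage names and the tap index are assumptions.
```cpp
// Sketch of the tap-point idea: the on-camera pipeline is a list of stages, each
// producing an intermediate version of the sensor data; a configurable tap index
// picks which version is handed to the wireless link for off-camera processing.
// Stage names and the transmit stub are illustrative only.
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

using Image = std::string;   // stand-in for pixel data

struct Stage { std::string name; std::function<Image(const Image&)> run; };

void TransmitOffCamera(const Image& img, const std::string& label) {
    std::cout << "wirelessly sending '" << label << "' (" << img << ")\n";
}

int main() {
    std::vector<Stage> pipeline = {
        {"demosaic",     [](const Image& i) { return i + "+demosaic"; }},
        {"denoise",      [](const Image& i) { return i + "+denoise"; }},
        {"tone_mapping", [](const Image& i) { return i + "+tone"; }},
    };
    const std::size_t tap_after = 1;      // send the version after "denoise"

    Image img = "raw_sensor_data";
    TransmitOffCamera(img, "sensor data");             // option (a): raw sensor data
    for (std::size_t s = 0; s < pipeline.size(); ++s) {
        img = pipeline[s].run(img);
        if (s == tap_after)
            TransmitOffCamera(img, "after " + pipeline[s].name);  // option (b)
    }
}
```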
  • Publication number: 20130311268
    Abstract: Embodiments of the invention may include generating a plurality of video frames that are operable to represent a world space, wherein the world space includes an advertisement. In addition, a visibility characteristic of the advertisement in the plurality of video frames may be determined. Further, the visibility characteristic may be communicated to an advertisement engine.
    Type: Application
    Filed: December 21, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventor: Jen-Hsun Huang
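    Sketch: one possible visibility characteristic, expressed in C++ — the fraction of frame pixels covered by the advertisement, averaged over frames, reported to an assumed ad engine. The metric choice is mine; the publication does not specify it.
```cpp
// Sketch of one possible visibility characteristic: the fraction of a frame's
// pixels covered by the advertisement, averaged over the frames in which it
// appears, reported to an (assumed) ad engine. The metric is illustrative.
#include <iostream>
#include <vector>

struct FrameStats {
    int ad_pixels;      // pixels of this frame covered by the advertisement
    int total_pixels;   // pixels in the frame
};

double AverageCoverage(const std::vector<FrameStats>& frames) {
    double sum = 0.0;
    for (const FrameStats& f : frames)
        sum += static_cast<double>(f.ad_pixels) / f.total_pixels;
    return frames.empty() ? 0.0 : sum / frames.size();
}

void ReportToAdEngine(double coverage) {
    std::cout << "ad visibility characteristic: " << coverage * 100 << "% of frame\n";
}

int main() {
    std::vector<FrameStats> frames = {{9216, 2073600}, {20736, 2073600}, {0, 2073600}};
    ReportToAdEngine(AverageCoverage(frames));
}
```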
  • Publication number: 20130308871
    Abstract: A method for encoding at least one extra bit in an image compression and decompression system. The method includes accessing an input image, and compressing the input image into a compressed image using an encoding system, wherein said encoding system implements an algorithm for encoding at least one extra bit. The method further includes communicatively transferring the compressed image to a decoding system, and decompressing the compressed image into a resulting uncompressed image that is unaltered from said input image, wherein the algorithm for encoding enables the recovery of the at least one extra bit.
    Type: Application
    Filed: December 27, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA CORPORATION
    Inventor: Walter E. Donovan
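    Sketch: a heavily simplified C++ stand-in for the general idea — a lossless run-length coder that can split a run into two equivalent runs, a choice the decoder can detect and read back as one extra bit while still reconstructing the image exactly. This toy scheme is an assumption, not the algorithm claimed in the publication.
```cpp
// Toy illustration of carrying an extra bit through a lossless codec without
// altering the reconstructed image: a run-length encoder may optionally split
// its first run in two. Splitting never changes the decoded pixels, but "two
// adjacent runs with the same symbol" is otherwise impossible, so the decoder
// can read that choice back as one hidden bit. Scheme invented for illustration.
#include <iostream>
#include <string>
#include <vector>

struct Run { char symbol; int length; };

// Encode `data`, embedding `extra_bit` by optionally splitting the first run
// (the first run must have length >= 2 for the bit to be representable here).
std::vector<Run> Encode(const std::string& data, bool extra_bit) {
    std::vector<Run> runs;
    for (char c : data) {
        if (!runs.empty() && runs.back().symbol == c) ++runs.back().length;
        else runs.push_back({c, 1});
    }
    if (extra_bit && runs[0].length >= 2) {
        Run tail{runs[0].symbol, runs[0].length - 1};
        runs[0].length = 1;
        runs.insert(runs.begin() + 1, tail);          // the split marks a hidden 1
    }
    return runs;
}

std::string Decode(const std::vector<Run>& runs, bool* extra_bit) {
    *extra_bit = runs.size() >= 2 && runs[0].symbol == runs[1].symbol;
    std::string out;
    for (const Run& r : runs) out.append(r.length, r.symbol);
    return out;
}

int main() {
    bool bit = false;
    std::string image = "aaaabbbcc";
    std::string back = Decode(Encode(image, /*extra_bit=*/true), &bit);
    std::cout << "decoded '" << back << "' identical: " << (back == image)
              << ", hidden bit: " << bit << "\n";
}
```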
  • Patent number: 8588305
    Abstract: The present invention provides an apparatus for interpolation which is able to process input data with multiple video standards without sacrificing chip area. The interpolation unit comprises: a first interpolation unit for interpolating input data; a second interpolation unit for interpolating input data; a filter indicator for providing information to the first interpolation unit and the second interpolation unit; and an output unit for multiplexing and averaging output from the first interpolation unit and the second interpolation unit. The present invention also provides a motion compensation unit and a decoder for processing multiple video standards.
    Type: Grant
    Filed: June 20, 2008
    Date of Patent: November 19, 2013
    Assignee: Nvidia Corporation
    Inventors: Yong Peng, Zheng Wei Jiang, Frans Sijstermans, Stefan Eckart
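    Sketch: a toy C++ arrangement of two interpolation paths and an output stage that selects one result or averages both, mirroring the first/second interpolation units and the mux/average output unit named above. The filter coefficients are generic examples, not any standard's taps.
```cpp
// Toy arrangement of the two interpolation paths: a filter indicator chooses the
// tap set each unit applies (a short bilinear filter vs. a longer 6-tap filter),
// and the output stage either selects one result or averages the two.
// Coefficients here are generic examples, not a particular standard's filters.
#include <array>
#include <iostream>
#include <vector>

int InterpolateA(const std::vector<int>& s, int i) {           // short 2-tap filter
    return (s[i] + s[i + 1] + 1) / 2;
}
int InterpolateB(const std::vector<int>& s, int i) {           // 6-tap style filter
    static const std::array<int, 6> k = {1, -5, 20, 20, -5, 1};
    int acc = 0;
    for (int t = 0; t < 6; ++t) acc += k[t] * s[i + t];
    return (acc + 16) >> 5;                                     // /32 with rounding
}

enum class OutputMode { UseA, UseB, Average };                  // the "mux/average" stage

int OutputStage(int a, int b, OutputMode m) {
    if (m == OutputMode::UseA) return a;
    if (m == OutputMode::UseB) return b;
    return (a + b + 1) / 2;
}

int main() {
    std::vector<int> samples = {10, 12, 14, 16, 18, 20, 22, 24};
    int a = InterpolateA(samples, 2);       // half-sample between indices 2 and 3
    int b = InterpolateB(samples, 0);       // same half-sample position, 6-tap filter
    std::cout << "A=" << a << " B=" << b
              << " averaged=" << OutputStage(a, b, OutputMode::Average) << "\n";
}
```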
  • Patent number: 8588542
    Abstract: An image processing apparatus for processing pixels is disclosed. The image processing apparatus comprises one or more functional blocks adapted to perform a corresponding functional task on the pixels. Further, the image processing apparatus includes one or more line-delay elements for delaying a horizontal scan line of the pixels. A desired processing task, which includes at least one functional task, is performed by configuring each functional block based on an actual number of the line-delay elements used for performing the desired processing task. Each functional block used for performing the desired processing task receives a group of pixels for processing from one or more horizontal scan lines such that the group overlaps another group of pixels for processing from one or more horizontal scan lines by another functional block.
    Type: Grant
    Filed: December 13, 2005
    Date of Patent: November 19, 2013
    Assignee: NVIDIA Corporation
    Inventors: Sohei Takemoto, Shang-Hung Lin
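    Sketch: a short C++ line-buffer example — a ring of three line-delay elements lets a functional block read an overlapping vertical window of scan lines while pixels arrive one line at a time. Buffer depth and identifiers are assumptions.
```cpp
// Sketch of the line-delay idea: a small ring of line buffers holds the most
// recent horizontal scan lines, so a functional block can read an overlapping
// window of lines (here 3) while pixels stream in one line at a time.
#include <array>
#include <iostream>
#include <vector>

constexpr int kLineWidth = 8;
constexpr int kDelayLines = 3;                          // number of line-delay elements

class LineDelay {
public:
    void PushLine(const std::vector<int>& line) {       // a new horizontal scan line
        lines_[next_ % kDelayLines] = line;
        ++next_;
    }
    // A functional block reads a vertical window of the last kDelayLines lines.
    int WindowSum(int x) const {
        int sum = 0;
        for (const auto& line : lines_) sum += line[x];
        return sum;                                      // e.g. input to a vertical filter
    }
private:
    std::array<std::vector<int>, kDelayLines> lines_{};
    int next_ = 0;
};

int main() {
    LineDelay delay;
    for (int y = 0; y < kDelayLines; ++y)
        delay.PushLine(std::vector<int>(kLineWidth, y + 1));   // lines of 1s, 2s, 3s
    std::cout << "3-line column sum at x=4: " << delay.WindowSum(4) << "\n";  // 6
}
```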
  • Patent number: 8589468
    Abstract: The present invention enables efficient matrix multiplication operations on parallel processing devices. One embodiment is a method for mapping CTAs to result matrix tiles for matrix multiplication operations. Another embodiment is a second method for mapping CTAs to result tiles. Yet other embodiments are methods for mapping the individual threads of a CTA to the elements of a tile for result tile computations, source tile copy operations, and source tile copy and transpose operations. The present invention advantageously enables result matrix elements to be computed on a tile-by-tile basis using multiple CTAs executing concurrently on different streaming multiprocessors, enables source tiles to be copied to local memory to reduce the number of accesses from the global memory when computing a result tile, and enables coalesced read operations from the global memory as well as write operations to the local memory without bank conflicts.
    Type: Grant
    Filed: September 3, 2010
    Date of Patent: November 19, 2013
    Assignee: NVIDIA Corporation
    Inventors: Norbert Juffa, Radoslav Danilak
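    Sketch: a host-side C++ analogue of the tile mapping — each result tile is computed by one worker (standing in for a CTA) that first stages the needed source tiles in a small local buffer, echoing the copy-to-local-memory step described above. Tile size and layout are illustrative; no GPU-specific behavior is modeled.
```cpp
// Host-side sketch of the tile mapping: each result tile is assigned to one
// worker (standing in for a CTA); the worker copies the needed source tiles into
// a small local buffer before multiplying, mirroring the copy-to-local-memory
// step the abstract describes. Matrices are row-major, sizes multiples of TILE.
#include <iostream>
#include <vector>

constexpr int TILE = 4;

void ComputeResultTile(const std::vector<float>& A, const std::vector<float>& B,
                       std::vector<float>& C, int n, int tile_row, int tile_col) {
    std::vector<float> a_tile(TILE * TILE), b_tile(TILE * TILE);   // "local memory"
    for (int k0 = 0; k0 < n; k0 += TILE) {
        for (int i = 0; i < TILE; ++i)                 // stage the two source tiles
            for (int j = 0; j < TILE; ++j) {
                a_tile[i * TILE + j] = A[(tile_row * TILE + i) * n + k0 + j];
                b_tile[i * TILE + j] = B[(k0 + i) * n + tile_col * TILE + j];
            }
        for (int i = 0; i < TILE; ++i)                 // accumulate into the result tile
            for (int j = 0; j < TILE; ++j)
                for (int k = 0; k < TILE; ++k)
                    C[(tile_row * TILE + i) * n + tile_col * TILE + j] +=
                        a_tile[i * TILE + k] * b_tile[k * TILE + j];
    }
}

int main() {
    const int n = 8;
    std::vector<float> A(n * n, 1.0f), B(n * n, 2.0f), C(n * n, 0.0f);
    for (int tr = 0; tr < n / TILE; ++tr)              // one "CTA" per result tile
        for (int tc = 0; tc < n / TILE; ++tc)
            ComputeResultTile(A, B, C, n, tr, tc);
    std::cout << "C[0][0] = " << C[0] << " (expected " << 2.0f * n << ")\n";
}
```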
  • Patent number: 8587581
    Abstract: One embodiment of the present invention sets forth a technique for rendering graphics primitives in parallel while maintaining the API primitive ordering. Multiple, independent geometry units perform geometry processing concurrently on different graphics primitives. A primitive distribution scheme delivers primitives concurrently to multiple rasterizers at rates of multiple primitives per clock while maintaining the primitive ordering for each pixel. The multiple, independent rasterizer units perform rasterization concurrently on one or more graphics primitives, enabling the rendering of multiple primitives per system clock.
    Type: Grant
    Filed: October 15, 2009
    Date of Patent: November 19, 2013
    Assignee: Nvidia Corporation
    Inventors: Steven E. Molnar, Emmett M. Kilgariff, Johnny S. Rhoades, Timothy John Purcell, Sean J. Treichler, Ziyad S. Hakura, Franklin C. Crow, James C. Bowman
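    Sketch: a C++ illustration of the ordering requirement only — primitives carry API sequence numbers, fragments from multiple rasterizers may arrive out of order, and applying them per pixel in sequence order reproduces single-rasterizer results. The distribution scheme itself is not modeled.
```cpp
// Sketch of the ordering idea only: primitives get API sequence numbers before
// being distributed to several rasterizer queues; fragments produced in any
// order are applied to each pixel in sequence-number order, so the last API
// primitive covering a pixel wins just as it would with a single rasterizer.
#include <algorithm>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

struct Fragment { int x, y, seq, color; };

int main() {
    // Two "rasterizers" produced fragments out of order with respect to the API.
    std::vector<Fragment> raster0 = {{1, 1, /*seq=*/2, /*color=*/0xFF0000}};
    std::vector<Fragment> raster1 = {{1, 1, /*seq=*/1, /*color=*/0x00FF00},
                                     {3, 2, /*seq=*/3, /*color=*/0x0000FF}};

    std::vector<Fragment> all;
    all.insert(all.end(), raster0.begin(), raster0.end());
    all.insert(all.end(), raster1.begin(), raster1.end());

    // Restore per-pixel API order before blending/writing.
    std::sort(all.begin(), all.end(),
              [](const Fragment& a, const Fragment& b) { return a.seq < b.seq; });

    std::map<std::pair<int, int>, int> framebuffer;
    for (const Fragment& f : all) framebuffer[{f.x, f.y}] = f.color;   // later seq overwrites

    std::cout << std::hex << "pixel (1,1) = 0x" << framebuffer[{1, 1}] << "\n";  // ff0000
}
```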
  • Patent number: 8587682
    Abstract: A display system, method, and computer program product are provided for capturing images using multiple integrated image sensors. The display system includes a front panel for displaying an image. The display system further includes a matrix of image sensors situated behind the front panel.
    Type: Grant
    Filed: February 12, 2011
    Date of Patent: November 19, 2013
    Assignee: NVIDIA Corporation
    Inventor: Ricardo J. Motta