Patents Assigned to NVIDIA
-
Patent number: 10049646
Abstract: A method for switching, including initializing an instantiation of an application and performing graphics rendering to generate a plurality of rendered frames through execution of the application in order to generate a first video stream comprising the plurality of rendered frames. The method includes sequentially loading the plurality of rendered frames into one or more frame buffers, and determining when a first bitmap of a frame that is loaded into a corresponding frame buffer matches an application signature comprising a derivative of a master bitmap associated with a keyframe of the first video stream.
Type: Grant
Filed: December 20, 2013
Date of Patent: August 14, 2018
Assignee: Nvidia Corporation
Inventors: Franck Diard, Matt Lavoie
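A minimal sketch of the signature check described in this abstract, assuming the "derivative of a master bitmap" is an 8x8 downsampled luma thumbnail; the structure names, thumbnail size, and tolerance are illustrative assumptions rather than details from the patent:

    #include <array>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // A rendered frame as a simple luma bitmap (width * height bytes).
    struct Bitmap { int width, height; std::vector<uint8_t> luma; };

    // The "application signature" is assumed here to be an 8x8 thumbnail derived
    // from the master bitmap of the keyframe; the real derivative is unspecified.
    using Signature = std::array<uint8_t, 64>;

    Signature makeSignature(const Bitmap& bm) {
        Signature sig{};
        for (int ty = 0; ty < 8; ++ty)
            for (int tx = 0; tx < 8; ++tx) {
                int x = tx * bm.width / 8, y = ty * bm.height / 8;
                sig[ty * 8 + tx] = bm.luma[y * bm.width + x];
            }
        return sig;
    }

    // True when the frame currently loaded in the frame buffer matches the
    // application signature within a small per-sample tolerance.
    bool matchesSignature(const Bitmap& frame, const Signature& appSig, int tolerance = 8) {
        Signature s = makeSignature(frame);
        for (int i = 0; i < 64; ++i)
            if (std::abs(int(s[i]) - int(appSig[i])) > tolerance) return false;
        return true;
    }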
-
Patent number: 10049006
Abstract: A method for updating a DRAM memory array is disclosed. The method comprises: a) transitioning the DRAM memory array from an idle state to a refresh state in accordance with a command from a memory controller; b) initiating a refresh on the DRAM memory array using DRAM internal control circuitry by activating a row of data into an associated sense amplifier buffer; and c) during the refresh, performing an Error Correction Code (ECC) scrub operation of selected bits in the activated row of the DRAM memory array.
Type: Grant
Filed: December 8, 2015
Date of Patent: August 14, 2018
Assignee: Nvidia Corporation
Inventors: David Reed, Alok Gupta
-
Patent number: 10043230
Abstract: Computer and graphics processing elements, connected generally in series, form a pipeline. Circuit elements known as di/dt throttles are inserted within the pipeline at strategic locations where the potential exists for data flow to transition from an idle state to a maximum data processing rate. The di/dt throttles gently ramp the rate of data flow from idle to a typical level. Disproportionate current draw and the consequent voltage droop are thus avoided, allowing an increased frequency of operation to be realized.
Type: Grant
Filed: September 20, 2013
Date of Patent: August 7, 2018
Assignee: NVIDIA CORPORATION
Inventors: Philip Payman Shirvani, Peter Sommers, Eric T. Anderson
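A rough software model of the ramping behavior described above, in which the throttle allows the granted data rate to rise only by a fixed step each cycle; the step size and type names are assumptions:

    #include <algorithm>
    #include <cstdio>

    // A di/dt throttle modeled in software: the granted rate may rise by at most
    // 'rampStep' per cycle, so a jump from idle to full demand becomes a gradual
    // ramp. Rates are fractions of the maximum data rate.
    struct DiDtThrottle {
        float granted = 0.0f;
        float rampStep = 0.05f;
        float update(float requested) {
            granted = std::min(requested, granted + rampStep);
            return granted;
        }
    };

    int main() {
        DiDtThrottle throttle;
        for (int cycle = 0; cycle < 8; ++cycle)      // demand jumps from idle to 100%
            std::printf("cycle %d: granted rate %.2f\n", cycle, throttle.update(1.0f));
    }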
-
Patent number: 10042469
Abstract: A method for reducing line display latency on a touchpad device is disclosed. The method comprises storing information regarding a plurality of prior touch events on a touch screen of the touchpad device into an event buffer. It further comprises determining an average speed and a predicted direction of motion of a user interaction with the touch screen using the plurality of prior touch events. Next, it comprises calculating a first prediction point using the average speed, the predicted direction, and a last known touch event on the touch screen. Subsequently, it comprises applying weighted filtering on the first prediction point using a measured line curvature to determine a second prediction point. Finally, it comprises rendering a prediction line between the last known touch event on the touch screen and the second prediction point.
Type: Grant
Filed: November 30, 2016
Date of Patent: August 7, 2018
Assignee: Nvidia Corporation
Inventors: Bojan Skaljak, Arman Toorians, Michael Chu
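A compact sketch of the prediction pipeline as described, assuming simple linear extrapolation for the first prediction point and a curvature-based weight for the second; the names, the one-frame look-ahead, and the weighting formula are illustrative assumptions:

    #include <cmath>
    #include <cstdio>
    #include <deque>

    struct TouchEvent { float x, y, t; };   // pixels and seconds
    struct Point { float x, y; };

    // Expects at least three buffered touch events (the "event buffer").
    Point predictTouch(const std::deque<TouchEvent>& events, float curvatureWeight)
    {
        const TouchEvent& last  = events[events.size() - 1];
        const TouchEvent& prev  = events[events.size() - 2];
        const TouchEvent& prev2 = events[events.size() - 3];

        // Average speed and direction of motion from the most recent segment.
        float dx = last.x - prev.x, dy = last.y - prev.y;
        float dist = std::sqrt(dx * dx + dy * dy) + 1e-6f;
        float speed = dist / (last.t - prev.t);

        // First prediction point: extrapolate one frame ahead of the last touch.
        const float lookAhead = 0.016f;                  // ~60 Hz frame, assumed
        Point p1 { last.x + (dx / dist) * speed * lookAhead,
                   last.y + (dy / dist) * speed * lookAhead };

        // Measured line curvature from the previous segment (turn-angle proxy).
        float pdx = prev.x - prev2.x, pdy = prev.y - prev2.y;
        float pdist = std::sqrt(pdx * pdx + pdy * pdy) + 1e-6f;
        float curvature = (pdx * dy - pdy * dx) / (pdist * dist);

        // Second prediction point: weighted filtering that shortens the
        // prediction when the stroke is curving.
        float w = 1.0f / (1.0f + curvatureWeight * std::fabs(curvature));
        return { last.x + (p1.x - last.x) * w, last.y + (p1.y - last.y) * w };
    }

    int main() {
        std::deque<TouchEvent> buffer { {0, 0, 0.00f}, {4, 1, 0.01f}, {8, 2, 0.02f} };
        Point p = predictTouch(buffer, 4.0f);
        std::printf("render prediction line from (8, 2) to (%.1f, %.1f)\n", p.x, p.y);
    }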
-
Patent number: 10043234
Abstract: A system and method for decompressing compressed data (e.g., in a frame buffer) and optionally recompressing the data. The method includes determining a portion of an image to be accessed from a memory and sending a conditional read corresponding to the portion of the image. In response to the conditional read, an indicator operable to indicate that the portion of the image is uncompressed may be received. If the portion of the image is compressed, in response to the conditional read, compressed data corresponding to the portion of the image is received. In response to receiving the compressed data, the compressed data is uncompressed into uncompressed data. The uncompressed data may then be written to the memory corresponding to the portion of the image. The uncompressed data may then be in-place compressed for or during subsequent processing.
Type: Grant
Filed: December 31, 2012
Date of Patent: August 7, 2018
Assignee: NVIDIA Corporation
Inventors: Jonathan Dunaisky, Steven E. Molnar, Christian Amsinck, Rui Bastos, Eric B. Lum, Justin Cobb, Emmett Kilgariff
-
Patent number: 10043457
Abstract: A display for a computer system, such as an LCD, is configured to consume less power when compared to conventional designs. The display includes a screen and at least one backlight configured to illuminate the screen. An input to the at least one backlight is adjustable to produce a desired level of brightness. The input may be computed based on a generated source image and a defined constraint. An input to the display is computed based on the input to the at least one backlight and the source image. The input to the display modifies the level of brightness provided by the at least one backlight to produce a viewable image.
Type: Grant
Filed: February 18, 2009
Date of Patent: August 7, 2018
Assignee: NVIDIA CORPORATION
Inventor: David B. Kirk
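A small sketch of the brightness-compensation idea implied by the abstract, assuming the "defined constraint" is that the dimmed backlight must still cover the brightest source pixel; the names and the minimum backlight floor are assumptions:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Pick a backlight level just bright enough for the source image, then scale
    // the display input (pixel values) up so perceived brightness is preserved.
    struct BacklitFrame { float backlight; std::vector<uint8_t> pixels; };

    BacklitFrame compensate(const std::vector<uint8_t>& source) {
        uint8_t peak = *std::max_element(source.begin(), source.end());
        float backlight = std::max(peak / 255.0f, 0.05f);   // never fully off
        BacklitFrame out{ backlight, {} };
        out.pixels.reserve(source.size());
        for (uint8_t p : source) {
            float boosted = (p / 255.0f) / backlight;        // compensate for dimming
            out.pixels.push_back(static_cast<uint8_t>(std::min(boosted, 1.0f) * 255.0f + 0.5f));
        }
        return out;
    }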
-
Patent number: 10037228
Abstract: A technique for simultaneously executing multiple tasks, each having an independent virtual address space, involves assigning an address space identifier (ASID) to each task and constructing each virtual memory access request to include both a virtual address and the ASID. During virtual to physical address translation, the ASID selects a corresponding page table, which includes virtual to physical address mappings for the ASID and associated task. Entries for a translation look-aside buffer (TLB) include both the virtual address and ASID to complete each mapping to a physical address. Deep scheduling of tasks sharing a virtual address space may be implemented to improve cache affinity for both TLB and data caches.
Type: Grant
Filed: October 25, 2012
Date of Patent: July 31, 2018
Assignee: NVIDIA CORPORATION
Inventors: Nick Barrow-Williams, Brian Fahs, Jerome F. Duluk, Jr., James Leroy Deming, Timothy John Purcell, Lucien Dunning, Mark Hairgrove
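A sketch of the ASID-tagged translation path, assuming 4 KiB pages and software containers standing in for the TLB and the per-ASID page tables; all names are illustrative:

    #include <cstdint>
    #include <map>
    #include <optional>
    #include <unordered_map>
    #include <utility>

    using Asid = uint16_t;
    using VirtAddr = uint64_t;
    using PhysAddr = uint64_t;
    constexpr uint64_t kPageBits = 12;   // 4 KiB pages, assumed

    struct AddressTranslator {
        // TLB entries are tagged with (ASID, virtual page) so tasks with
        // independent address spaces can share the TLB without flushes.
        std::map<std::pair<Asid, uint64_t>, uint64_t> tlb;
        // One page table per ASID: virtual page -> physical page.
        std::unordered_map<Asid, std::unordered_map<uint64_t, uint64_t>> pageTables;

        std::optional<PhysAddr> translate(Asid asid, VirtAddr va) {
            uint64_t vpn = va >> kPageBits, offset = va & ((1u << kPageBits) - 1);
            auto hit = tlb.find({asid, vpn});
            if (hit == tlb.end()) {                      // TLB miss: walk the ASID's table
                auto& table = pageTables[asid];
                auto entry = table.find(vpn);
                if (entry == table.end()) return std::nullopt;   // page fault
                hit = tlb.emplace(std::make_pair(asid, vpn), entry->second).first;
            }
            return (hit->second << kPageBits) | offset;
        }
    };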
-
Patent number: 10038430
Abstract: A system for controlling gatings of a multi-core processor includes a pulse width modulation generator for generating a control square wave, and a phase shifter for shifting a phase of the control square wave to generate control square waves with different phases, and respectively inputting the control square waves with the different phases to a gating of each of multiple processing engines in the multi-core processor. A multi-core processor is provided that includes multiple processing engines. Each processing engine includes a gating, and a system for controlling the gating. Accordingly, in the multi-core processor, the load to be processed in a certain period of a working cycle can be averaged to be processed in a longer period of the working cycle. Consequently, current noise and voltage noise and temperature growth due to the load change can be reduced.
Type: Grant
Filed: January 22, 2013
Date of Patent: July 31, 2018
Assignee: NVIDIA CORPORATION
Inventor: Shuang Xu
-
Patent number: 10037620
Abstract: One embodiment of the present invention includes a method for rendering a geometry object in a computer-generated scene. A screen space associated with a display screen is divided into a set of regions. For each region, a first sampling factor in a horizontal dimension is computed that represents a horizontal sampling factor for pixels located in the region, a second sampling factor in a vertical dimension is computed that represents a vertical sampling factor for the pixels located in the region, a first offset in the horizontal dimension is computed that represents a horizontal position associated with the region, and a second offset in the vertical dimension is computed that represents a vertical position associated with the region. When the geometry object is determined to intersect more than one region, an instance of the geometry object is generated for each region that the geometry object intersects.
Type: Grant
Filed: May 29, 2015
Date of Patent: July 31, 2018
Assignee: NVIDIA CORPORATION
Inventors: Eric B. Lum, Justin Cobb
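A sketch of the per-region setup and the instancing decision, assuming a uniform grid of regions and a radial falloff in sampling factors (as in multi-resolution rendering); the falloff curve and the structure names are assumptions:

    #include <cmath>
    #include <vector>

    struct Region {
        float scaleX, scaleY;    // horizontal / vertical sampling factors
        float offsetX, offsetY;  // position of the region in screen space
    };

    struct Box { float minX, minY, maxX, maxY; };

    // Divide the screen into a grid; each region gets its own sampling factors,
    // reduced toward the screen edges (illustrative falloff), plus its offsets.
    std::vector<Region> buildRegions(int cols, int rows, float screenW, float screenH) {
        std::vector<Region> regions;
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c) {
                float cx = (c + 0.5f) / cols - 0.5f, cy = (r + 0.5f) / rows - 0.5f;
                float falloff = 1.0f - 0.5f * std::sqrt(cx * cx + cy * cy);
                regions.push_back({ falloff, falloff,
                                    c * screenW / cols, r * screenH / rows });
            }
        return regions;
    }

    // One instance of the geometry object is emitted for each region its
    // screen-space bounding box intersects.
    std::vector<int> instancesFor(const Box& bbox, const std::vector<Region>& regions,
                                  float regionW, float regionH) {
        std::vector<int> hit;
        for (int i = 0; i < int(regions.size()); ++i) {
            const Region& rg = regions[i];
            if (bbox.maxX >= rg.offsetX && bbox.minX <= rg.offsetX + regionW &&
                bbox.maxY >= rg.offsetY && bbox.minY <= rg.offsetY + regionH)
                hit.push_back(i);
        }
        return hit;
    }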
-
Patent number: 10032692
Abstract: Various embodiments relating to semiconductor package structures having reduced thickness while maintaining rigidity are provided. In one embodiment, a semiconductor package structure includes a substrate including a surface, a semiconductor die including a first interface surface connected to the surface of the substrate and a second interface surface opposing the first interface surface, and a mold compound applied to the substrate surrounding the semiconductor die. The second interface surface of the semiconductor die is exposed from the mold compound. The semiconductor package structure includes a heat dissipation cover attached to the second interface surface of the semiconductor die and the mold compound.
Type: Grant
Filed: March 12, 2013
Date of Patent: July 24, 2018
Assignee: Nvidia Corporation
Inventors: Shantanu Kalchuri, Brian Schieck, Abraham Yee
-
Patent number: 10032246
Abstract: A texture processing pipeline is configured to store decoded texture data within a cache unit in order to expedite the processing of texture requests. When a texture request is processed, the texture processing pipeline queries the cache unit to determine whether the requested data is resident in the cache. If the data is not resident in the cache unit, a cache miss occurs. The texture processing pipeline then reads encoded texture data from global memory, decodes that data, and writes different portions of the decoded memory into the cache unit at specific locations according to a caching map. If the data is, in fact, resident in the cache unit, a cache hit occurs, and the texture processing pipeline then reads decoded portions of the requested texture data from the cache unit and combines those portions according to the caching map.
Type: Grant
Filed: October 9, 2013
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Eric T. Anderson, Poornachandra Rao
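A simplified sketch of the hit/miss flow, assuming decoded texture data is split into fixed portions and the caching map is a (block, portion) to cache-slot mapping; the key packing and all names are assumptions:

    #include <cstdint>
    #include <functional>
    #include <unordered_map>
    #include <vector>

    // Decoded texture data for one texture block, split into fixed portions.
    using Portion = std::vector<uint8_t>;

    struct TextureCache {
        // Caching map realized as a keyed slot store: (blockId, portion) -> slot.
        std::unordered_map<uint64_t, Portion> slots;

        static uint64_t slotKey(uint32_t blockId, uint32_t portion) {
            return (uint64_t(blockId) << 32) | portion;
        }

        // Returns the combined decoded block. On a miss, encoded data is decoded
        // (via decodeFromMemory, which must yield 'portions' portions) and the
        // portions are scattered into their slots; on a hit, they are gathered
        // back and combined.
        std::vector<uint8_t> fetch(uint32_t blockId, uint32_t portions,
                                   const std::function<std::vector<Portion>(uint32_t)>& decodeFromMemory) {
            if (!slots.count(slotKey(blockId, 0))) {                 // cache miss
                std::vector<Portion> decoded = decodeFromMemory(blockId);
                for (uint32_t p = 0; p < portions; ++p)              // scatter per caching map
                    slots[slotKey(blockId, p)] = decoded[p];
            }
            std::vector<uint8_t> combined;
            for (uint32_t p = 0; p < portions; ++p) {                // cache hit path
                const Portion& part = slots[slotKey(blockId, p)];
                combined.insert(combined.end(), part.begin(), part.end());
            }
            return combined;
        }
    };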
-
Patent number: 10032242
Abstract: A method for managing bind-render-target commands in a tile-based architecture. The method includes receiving a requested set of bound render targets and a draw command. The method also includes, upon receiving the draw command, determining whether a current set of bound render targets includes each of the render targets identified in the requested set. The method further includes, if the current set does not include each render target identified in the requested set, then issuing a flush-tiling-unit-command to a parallel processing subsystem, modifying the current set to include each render target identified in the requested set, and issuing bind-render-target commands identifying the requested set to the tile-based architecture for processing. The method further includes, if the current set of render targets includes each render target identified in the requested set, then not issuing the flush-tiling-unit-command.
Type: Grant
Filed: October 1, 2013
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Ziyad S. Hakura, Jeffrey A. Bolz, Amanpreet Grewal, Matthew Johnson, Andrei Khodakovsky
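A small sketch of the binding check, assuming render targets are plain handles and that the current set fully covering the requested set means no flush is needed; the names, and the console output standing in for the actual commands, are assumptions:

    #include <algorithm>
    #include <cstdio>
    #include <set>

    using RenderTarget = int;   // handle to a render target, illustrative

    struct TilingDriver {
        std::set<RenderTarget> currentSet;

        void flushTilingUnit() { std::puts("flush-tiling-unit-command"); }
        void bindRenderTargets(const std::set<RenderTarget>& rts) {
            std::puts("bind-render-target commands");
            currentSet = rts;                         // modify current set to cover the request
        }

        void draw(const std::set<RenderTarget>& requestedSet) {
            bool covered = std::includes(currentSet.begin(), currentSet.end(),
                                         requestedSet.begin(), requestedSet.end());
            if (!covered) {                           // only flush when bindings must change
                flushTilingUnit();
                bindRenderTargets(requestedSet);
            }
            std::puts("draw command");
        }
    };

    int main() {
        TilingDriver driver;
        driver.draw({1, 2});   // first draw: flush + bind
        driver.draw({1, 2});   // same targets: no flush, no rebind
    }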
-
Patent number: 10032245
Abstract: A tile coalescer within a graphics processing pipeline coalesces coverage data into tiles. The coverage data indicates, for a set of XY positions, whether a graphics primitive covers those XY positions. The tile indicates, for a larger set of XY positions, whether one or more graphics primitives cover those XY positions. The tile coalescer includes coverage data in the tile only once for each XY position, thereby allowing the API ordering of the graphics primitives covering each XY position to be preserved. The tile is then distributed to a set of streaming multiprocessors for shading and blending operations. The different streaming multiprocessors execute thread groups to process the tile. In doing so, those thread groups may perform read-modify-write operations with data stored in memory. Each such thread group is scheduled to execute via atomic operations, and according to the API order of the associated graphics primitives.
Type: Grant
Filed: October 27, 2015
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Ziyad Hakura, Eric Lum, Dale Kirkland, Jack Choquette, Patrick R. Brown, Yury Y. Uralsky, Jeffrey Bolz
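A sketch of the coalescing rule, assuming the tile is emitted whenever a later primitive would cover an XY position already present, which is one way to keep each position in a tile only once while preserving API order; the flush policy and all names are assumptions:

    #include <cstdint>
    #include <functional>
    #include <unordered_map>
    #include <vector>

    struct Coverage { uint16_t x, y; uint32_t primitiveId; };

    // Coalesces per-primitive coverage into tiles. Each XY position appears in a
    // tile at most once; if a later primitive covers an already-covered position,
    // the current tile is emitted first so API order per position is preserved.
    struct TileCoalescer {
        std::unordered_map<uint32_t, Coverage> tile;    // key packs (x, y)
        std::function<void(const std::vector<Coverage>&)> emitTile;

        static uint32_t key(uint16_t x, uint16_t y) { return (uint32_t(x) << 16) | y; }

        void add(const Coverage& c) {
            if (tile.count(key(c.x, c.y))) flush();      // position already covered
            tile.emplace(key(c.x, c.y), c);
        }

        void flush() {
            if (tile.empty()) return;
            std::vector<Coverage> out;
            out.reserve(tile.size());
            for (auto& kv : tile) out.push_back(kv.second);
            if (emitTile) emitTile(out);                 // distribute to the SMs
            tile.clear();
        }
    };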
-
Patent number: 10032710
Abstract: An integrated circuit (IC) system includes an IC coupled to a package. The package, in turn, is coupled to a ball grid array. The integrated circuit is electrically coupled to the ball grid array by a plurality of package through-hole (PTH) vias that penetrate through the package. Each PTH via includes a conductive element associated with a differential signaling pair. The conductive elements within a given differential signaling pair are disposed in the package at specific locations, relative to other conductive elements in other differential signaling pairs, to reduce crosstalk between those differential signaling pairs. At least one advantage of the technique described herein is that the conductive elements within the package can be densely packed together without inducing excessive crosstalk. Therefore, the package can support a large number of differential signaling pairs, allowing high-throughput data communication.
Type: Grant
Filed: July 23, 2015
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Seunghyun Hwang, Vishnu Balan, Sunil Rao Sudhakaran
-
Patent number: 10034007
Abstract: Techniques for non-subsampled video encoding of R′G′B′ data using Y′, Cb and Cr data to generate compressed data wherein the Y′-plane comprises three separate color frames that are not interleaved, and recovering the data therefrom.
Type: Grant
Filed: May 20, 2011
Date of Patent: July 24, 2018
Assignee: NVIDIA Corporation
Inventors: Olivier Lapicque, Franck Diard
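A sketch of the non-interleaved Y′-plane layout described above, assuming the three color frames are simply stacked vertically; the layout details and names are assumptions:

    #include <cstdint>
    #include <vector>

    // Full-resolution R', G', B' planes for one frame.
    struct Frame { int width, height; std::vector<uint8_t> r, g, b; };

    // Pack the three planes into a single Y'-plane that is three frames tall
    // (planes stacked, not interleaved), so a standard Y'CbCr encoder can carry
    // non-subsampled RGB; the Cb/Cr planes would be left neutral.
    std::vector<uint8_t> buildTripleYPlane(const Frame& f) {
        std::vector<uint8_t> yPlane;
        yPlane.reserve(f.r.size() + f.g.size() + f.b.size());
        yPlane.insert(yPlane.end(), f.r.begin(), f.r.end());   // first color frame
        yPlane.insert(yPlane.end(), f.g.begin(), f.g.end());   // second color frame
        yPlane.insert(yPlane.end(), f.b.begin(), f.b.end());   // third color frame
        return yPlane;
    }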
-
Patent number: 10032696
Abstract: A microelectronic package includes an interposer with through-silicon vias that is formed from a semiconductor substrate and one or more semiconductor dies coupled to the interposer. A first signal redistribution layer formed on the first side of the interposer electrically couples the one or more semiconductor dies to the through-silicon vias. A second redistribution layer is formed on a second side of the interposer, and is electrically coupled to the through-silicon vias. In some embodiments, a mold compound is connected to an edge surface of the interposer and is configured to stiffen the microelectronic package.
Type: Grant
Filed: December 21, 2012
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventor: Teckgyu Kang
-
Patent number: 10032243
Abstract: One embodiment of the present invention sets forth a graphics subsystem configured to implement distributed cache tiling. The graphics subsystem includes one or more world-space pipelines, one or more screen-space pipelines, one or more tiling units, and a crossbar unit. Each world-space pipeline is implemented in a different processing entity and is coupled to a different tiling unit. Each screen-space pipeline is implemented in a different processing entity and is coupled to the crossbar unit. The tiling units are configured to receive primitives from the world-space pipelines, generate cache tile batches based on the primitives, and transmit the primitives to the screen-space pipelines. One advantage of the disclosed approach is that primitives are processed in application-programming-interface order in a highly parallel tiling architecture. Another advantage is that primitives are processed in cache tile order, which reduces memory bandwidth consumption and improves cache memory utilization.
Type: Grant
Filed: October 18, 2013
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Ziyad S. Hakura, Cynthia Ann Edgeworth Allison, Dale L. Kirkland, Walter R. Steiner
-
Patent number: 10031856
Abstract: A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
Type: Grant
Filed: October 16, 2013
Date of Patent: July 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Jerome F. Duluk, Jr., Chenghuan Jia, John Mashey, Cameron Buschardt, Sherry Cheung, James Leroy Deming, Samuel H. Duncan, Lucien Dunning, Robert George, Arvind Gopalakrishnan, Mark Hairgrove
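A sketch of the fault-resolution flow, assuming software maps stand in for the local page table and the page state directory, and a queue stands in for the command queue read by the copy engine; all names are illustrative:

    #include <cstdint>
    #include <deque>
    #include <optional>
    #include <unordered_map>

    using VirtPage = uint64_t;
    using PhysPage = uint64_t;

    struct FaultCommand { VirtPage page; };

    struct VirtualMemory {
        std::unordered_map<VirtPage, PhysPage> localPageTable;      // per processing unit
        std::unordered_map<VirtPage, PhysPage> pageStateDirectory;  // shared mapping source
        std::deque<FaultCommand> commandQueue;

        // MMU path: translate if mapped, otherwise raise a page fault by queuing
        // a command for the copy engine.
        std::optional<PhysPage> access(VirtPage page) {
            auto it = localPageTable.find(page);
            if (it != localPageTable.end()) return it->second;
            commandQueue.push_back({page});
            return std::nullopt;
        }

        // Copy engine path: read the command queue, look up the mapping in the
        // page state directory, and update the local page table.
        void copyEngineService() {
            while (!commandQueue.empty()) {
                FaultCommand cmd = commandQueue.front();
                commandQueue.pop_front();
                auto dir = pageStateDirectory.find(cmd.page);
                if (dir != pageStateDirectory.end())
                    localPageTable[cmd.page] = dir->second;
            }
        }
    };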
-
Patent number: 10032289
Abstract: A system, method, and computer program product for implementing a tree traversal operation for a tree data structure is disclosed. The method includes the steps of receiving at least a portion of a tree data structure that represents a tree having a plurality of nodes and processing, via a tree traversal operation algorithm executed by a processor, one or more nodes of the tree data structure by intersecting the one or more nodes of the tree data structure with a query data structure. A first node of the tree data structure is associated with a first local coordinate system and a second node of the tree data structure is associated with a second local coordinate system, the first node being an ancestor of the second node, and the first local coordinate system and the second local coordinate system are both specified relative to a global coordinate system.
Type: Grant
Filed: December 13, 2016
Date of Patent: July 24, 2018
Assignee: NVIDIA Corporation
Inventors: Samuli Matias Laine, Timo Oskari Aila, Tero Tapani Karras
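A sketch of a traversal in which the query is re-expressed in each node's local coordinate system before the intersection test, assuming translation-only local frames and axis-aligned bounds; the patent's local systems may be more general, and all names here are illustrative:

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Aabb { Vec3 lo, hi; };

    // Each node stores bounds in a local coordinate system whose origin is given
    // relative to the global coordinate system.
    struct Node {
        Vec3 origin;                   // local system origin in global coordinates
        Aabb localBounds;              // bounds expressed in the local system
        std::vector<int> children;
        std::vector<int> primitives;   // leaf payload
    };

    static bool overlaps(const Aabb& a, const Aabb& b) {
        return a.lo.x <= b.hi.x && a.hi.x >= b.lo.x &&
               a.lo.y <= b.hi.y && a.hi.y >= b.lo.y &&
               a.lo.z <= b.hi.z && a.hi.z >= b.lo.z;
    }

    void traverse(const std::vector<Node>& tree, int nodeIndex,
                  const Aabb& queryGlobal, std::vector<int>& hits) {
        const Node& node = tree[nodeIndex];
        // Re-express the query in this node's local coordinate system before testing.
        Aabb queryLocal {
            { queryGlobal.lo.x - node.origin.x, queryGlobal.lo.y - node.origin.y, queryGlobal.lo.z - node.origin.z },
            { queryGlobal.hi.x - node.origin.x, queryGlobal.hi.y - node.origin.y, queryGlobal.hi.z - node.origin.z } };
        if (!overlaps(queryLocal, node.localBounds)) return;
        hits.insert(hits.end(), node.primitives.begin(), node.primitives.end());
        for (int child : node.children) traverse(tree, child, queryGlobal, hits);
    }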
-
Patent number: 10025643
Abstract: A system and method for compiling source code (e.g., with a compiler). The method includes accessing a portion of device source code and determining whether the portion of the device source code comprises a piece of work to be launched on a device from the device. The method further includes determining a plurality of application programming interface (API) calls based on the piece of work to be launched on the device and generating compiled code based on the plurality of API calls. The compiled code comprises a first portion operable to execute on a central processing unit (CPU) and a second portion operable to execute on the device (e.g., GPU).
Type: Grant
Filed: January 7, 2013
Date of Patent: July 17, 2018
Assignee: Nvidia Corporation
Inventors: Vinod Grover, Jaydeep Marathe, Sean Lee