Abstract: A computer system and a method of operating a service-processor-centric computer system. In one embodiment, the computer system includes: (1) a CPU configured to issue control signals and (2) a service processor configured for intercepting and handling the control signals, the handling including delaying, modifying or ignoring the control signals, the service processor further configured for issuing highest-priority control signals.
Abstract: One embodiment of the present invention sets forth a mechanism for transmitting and receiving differential signals. A transmitter combines a direct current (DC) to DC converter including a capacitor with a 2:1 multiplexer to drive a pair of differential signaling lines. The transmitter drives a pair of voltages that are symmetric about the ground power supply level. Signaling currents are returned to the ground plane to minimize the generation of noise that is a source of crosstalk between different differential signaling pairs. Noise introduced through the power supply is correlated with the switching rate of the data and may be reduced using an equalizer circuit.
Type:
Grant
Filed:
January 30, 2012
Date of Patent:
May 10, 2016
Assignee:
NVIDIA Corporation
Inventors:
John W. Poulton, Thomas Hastings Greer, III, William J. Dally
Abstract: One embodiment of the present invention includes a method for performing a multi-pass tiling test. The method includes combining a plurality of bounding boxes to generate a coarse bounding box. The method further includes identifying a first cache tile associated with a render surface and determining that the coarse bounding box intersects the first cache tile. The method further includes comparing each bounding box included in the plurality of bounding boxes against the first cache tile to determine that a first set of one or more bounding boxes included in the plurality of bounding boxes intersects the first cache tile. Finally, the method includes, for each bounding box included in the first set of one or more bounding boxes, processing one or more graphics primitives associated with the bounding box. One advantage of the disclosed technique is that the number of intersection calculations performed for each cache tile is reduced.
Type:
Grant
Filed:
August 14, 2013
Date of Patent:
May 10, 2016
Assignee:
NVIDIA Corporation
Inventors:
Ziyad S. Hakura, Pierre Souillot, Cynthia Allison, Dale L. Kirkland
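The multi-pass tiling test summarized in the abstract above first tests a single coarse bounding box against a cache tile and only falls back to per-bounding-box tests when that coarse test passes. The C++ sketch below illustrates the idea under simplifying assumptions; the Rect, intersects, and tilePass names are invented for this example and are not taken from the patent.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Axis-aligned bounding box in screen coordinates (illustrative).
struct Rect { int x0, y0, x1, y1; };

// True if two rectangles overlap.
bool intersects(const Rect& a, const Rect& b) {
    return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
}

// Union of a set of bounding boxes: the "coarse" bounding box.
Rect combine(const std::vector<Rect>& boxes) {
    Rect c = boxes.front();
    for (const Rect& b : boxes) {
        c.x0 = std::min(c.x0, b.x0); c.y0 = std::min(c.y0, b.y0);
        c.x1 = std::max(c.x1, b.x1); c.y1 = std::max(c.y1, b.y1);
    }
    return c;
}

// Process only the primitives whose bounding boxes touch the cache tile.
// If the coarse box misses the tile, all per-box tests are skipped.
void tilePass(const Rect& cacheTile, const std::vector<Rect>& boxes) {
    if (boxes.empty() || !intersects(combine(boxes), cacheTile))
        return;                       // cheap early-out for the whole batch
    for (size_t i = 0; i < boxes.size(); ++i)
        if (intersects(boxes[i], cacheTile))
            std::printf("process primitives of bounding box %zu\n", i);
}

int main() {
    std::vector<Rect> boxes = {{0, 0, 16, 16}, {40, 40, 64, 64}};
    tilePass({0, 0, 32, 32}, boxes);  // only box 0 intersects this tile
}
```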
Abstract: A graphics server for remotely rendering a composite image and a method of use thereof. One embodiment of the graphics server includes: (1) a graphics renderer configured to render updates for a plurality of graphics windows within the composite image and (2) a display processing unit (DPU) configured to identify changed portions of the composite image and provide the changed portions to an encoder for encoding and subsequent transmission.
Type:
Grant
Filed:
July 18, 2013
Date of Patent:
May 10, 2016
Assignee:
Nvidia Corporation
Inventors:
Sarika Bhimkaran Khatod, David Stears, Rudi Bloks, Murralidharan Chilukoor
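The display processing unit described in the abstract above forwards only the changed portions of the composite image to the encoder. The following C++ sketch shows one simple way such change detection could work, comparing the previous and current frames tile by tile; the one-byte-per-pixel format, the tile size, and all names are assumptions made for illustration only.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Split the composite image into fixed-size tiles and report the tiles whose
// pixels changed since the previous frame; only those regions would be handed
// to the encoder. One byte per pixel and 64-pixel tiles are simplifications.
std::vector<int> changedTiles(const std::vector<uint8_t>& prev,
                              const std::vector<uint8_t>& curr,
                              int width, int height, int tile = 64) {
    std::vector<int> dirty;
    int tilesX = (width + tile - 1) / tile;
    for (int ty = 0; ty * tile < height; ++ty)
        for (int tx = 0; tx * tile < width; ++tx) {
            int y0 = ty * tile, y1 = std::min(y0 + tile, height);
            int x0 = tx * tile, x1 = std::min(x0 + tile, width);
            bool changed = false;
            for (int y = y0; y < y1 && !changed; ++y)
                changed = std::memcmp(&prev[y * width + x0],
                                      &curr[y * width + x0], x1 - x0) != 0;
            if (changed) dirty.push_back(ty * tilesX + tx);
        }
    return dirty;
}

int main() {
    std::vector<uint8_t> a(256 * 256, 0), b = a;
    b[100 * 256 + 5] = 255;                      // one pixel changed
    for (int t : changedTiles(a, b, 256, 256))
        std::printf("tile %d changed\n", t);     // prints: tile 4 changed
}
```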
Abstract: A modem is disclosed. An embodiment thereof includes: a first interface arranged to connect to a network, a second interface arranged to connect to a host processor on a terminal, an audio interface arranged to connect to an audio processing means, and a processing unit arranged to receive a plurality of parameters from the terminal via the second interface. The plurality of parameters are associated with a call established by the host processor to at least one further terminal connected to the network. The processing unit is further arranged to receive input voice data from the audio processing means, process the input voice data in dependence on at least one of said parameters, and transmit the processed input voice data via the first interface to the at least one further terminal over said network during the call in dependence on a further at least one of said parameters.
Type:
Grant
Filed:
January 21, 2015
Date of Patent:
May 3, 2016
Assignee:
NVIDIA CORPORATION
Inventors:
Farouk Belghoul, Pete Cumming, Flavien Delorme, Fabien Besson, Bruno De Smet, Callum Cormack
Abstract: An apparatus for determining an electrical reliability of a ball grid array (BGA) assembly of an integrated circuit is presented. The assembly comprises a testing printed circuit board (PCB) having an integrated circuit (IC) test region located thereon. Vias extend through the testing PCB from a surface to an underside thereof within the IC test region. Each via has an IO pad or ground pad electrically connectable thereto. An IC package having an IC die connected thereto by solder bumps is connected to the IC test region by solder balls, such that each of the IO pads is electrically connectable to a respective pair of the solder balls and solder bumps by the vias. A method of testing interconnection reliability of the BGA using the apparatus is also presented.
Abstract: A computer system, method and computer program product for scheduling IPC activities are disclosed. In one embodiment, the computer system includes a first processor and a second processor that communicate with each other via IPC activities. The second processor may operate in a first mode in which the second processor is able to process IPC activities, or a second mode in which the second processor does not process IPC activities. A processing apparatus associated with the first processor identifies which of the pending IPC activities for communicating from the first processor to the second processor are not real-time sensitive, and schedules the identified IPC activities for communicating from the first processor to the second processor by delaying some of the identified IPC activities to thereby group them together. The grouped IPC activities are scheduled for communicating to the second processor during a period in which the second processor is continuously in the first mode.
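The scheduling idea in the preceding abstract is roughly modeled below in C++: real-time-sensitive IPC activities are sent immediately, while the remaining activities are delayed, grouped, and flushed in one burst while the second processor is in its IPC-capable mode. The IpcActivity and IpcScheduler names and the printf-based "transmission" are placeholders, not part of the patent.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Real-time-sensitive activities go out right away; everything else is
// deferred and grouped so it can be delivered together while the second
// processor is continuously in its active (IPC-capable) mode.
struct IpcActivity { std::string name; bool realTimeSensitive; };

class IpcScheduler {
public:
    void submit(const IpcActivity& a) {
        if (a.realTimeSensitive)
            send(a);                 // must be communicated immediately
        else
            deferred_.push_back(a);  // hold back and group with later items
    }
    // Called when the second processor is known to be in the active mode.
    void flushDeferred() {
        for (const IpcActivity& a : deferred_) send(a);
        deferred_.clear();
    }
private:
    void send(const IpcActivity& a) {
        std::printf("IPC -> second processor: %s\n", a.name.c_str());
    }
    std::vector<IpcActivity> deferred_;
};

int main() {
    IpcScheduler s;
    s.submit({"audio frame", true});     // sent immediately
    s.submit({"status update", false});  // deferred
    s.submit({"log record", false});     // deferred
    s.flushDeferred();                   // both deferred items sent together
}
```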
Abstract: One embodiment of the present invention sets forth a technique for dynamically allocating memory using a nested hierarchical heap. A lock-free mechanism is used to access a hierarchical heap data structure for allocating and deallocating memory from the heap. The heap is organized as a series of levels of fixed-size blocks, where all blocks at a given level are the same size. At each lower level of the hierarchy, the combined size of N blocks in the lower level equals the size of a single block at the level above. When a thread requests an allocation, one or more blocks at only one level are allocated to the thread. When threads are finished using an allocation, each thread deallocates the respective allocated blocks. When all of the blocks for a level have been deallocated, defragmentation is performed at that level.
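A minimal model of the nested hierarchical heap described above is sketched below: each level holds fixed-size blocks, a block at one level subdivides into a fixed number of blocks at the level beneath it, and an allocation is served from exactly one level. The lock-free access mechanism and defragmentation step are omitted, this sketch is single-threaded, and all names are illustrative.

```cpp
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

// Level 0 holds the largest blocks; each block at level L subdivides into
// `fanout` blocks at level L+1, so every block within a level is the same
// size. A request is served from the level with the smallest block that fits.
class HierarchicalHeap {
public:
    HierarchicalHeap(size_t topBlockSize, int numLevels, int fanout) {
        size_t size = topBlockSize;
        size_t count = 1;
        for (int l = 0; l < numLevels; ++l) {
            levels_.push_back({size, std::vector<bool>(count, true)});
            size /= fanout;
            count *= fanout;
        }
    }
    // Returns {level, blockIndex}, or {-1, 0} if no suitable block is free.
    std::pair<int, size_t> allocate(size_t bytes) {
        for (int l = (int)levels_.size() - 1; l >= 0; --l) {
            if (levels_[l].blockSize < bytes) continue;   // blocks too small
            std::vector<bool>& freeList = levels_[l].freeBlock;
            for (size_t i = 0; i < freeList.size(); ++i)
                if (freeList[i]) { freeList[i] = false; return {l, i}; }
        }
        return {-1, 0};
    }
    void deallocate(int level, size_t block) {
        levels_[level].freeBlock[block] = true;
    }
private:
    struct Level { size_t blockSize; std::vector<bool> freeBlock; };
    std::vector<Level> levels_;
};

int main() {
    HierarchicalHeap heap(1 << 20, 3, 4);   // 1 MiB, 256 KiB and 64 KiB blocks
    auto a = heap.allocate(10000);          // served from the 64 KiB level
    std::printf("allocated at level %d, block %zu\n", a.first, a.second);
    heap.deallocate(a.first, a.second);
}
```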
Abstract: The input/output request packet (IRP) handling technique includes determining if a received input/output request packet should receive a given handling. If the input/output request packet should receive the given handling, the input/output request packet is dispatched to a device specific dispatch input/output request packet handler. Otherwise, the input/output request packet is redirected to an operating system dispatch input/output request packet handler.
Type:
Grant
Filed:
March 4, 2010
Date of Patent:
May 3, 2016
Assignee:
NVIDIA CORPORATION
Inventors:
Timothy Zhu, David Dunn, Randy Spurlock, Thomas Spacie
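The IRP-handling technique in the abstract above is essentially a routing decision, sketched below in C++. The Irp struct, the selection predicate, and the handler bodies are placeholders; only IRP_MJ_POWER corresponds to a real Windows major function code, and its use here as the selection criterion is an assumption made for illustration.

```cpp
#include <cstdio>

// Packets that should receive the given handling go to the device-specific
// dispatch handler; all others are redirected to the OS dispatch handler.
struct Irp { int majorFunction; };

constexpr int IRP_MJ_POWER = 0x16;   // Windows major function code for power IRPs

bool needsDeviceSpecificHandling(const Irp& irp) {
    return irp.majorFunction == IRP_MJ_POWER;   // placeholder policy
}

void deviceSpecificDispatch(const Irp&) { std::puts("device-specific handler"); }
void osDispatch(const Irp&)             { std::puts("OS dispatch handler"); }

void handleIrp(const Irp& irp) {
    if (needsDeviceSpecificHandling(irp))
        deviceSpecificDispatch(irp);   // given the device-specific handling
    else
        osDispatch(irp);               // redirected to the operating system
}

int main() {
    handleIrp({IRP_MJ_POWER});   // device-specific handler
    handleIrp({3});              // OS dispatch handler
}
```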
Abstract: Methods and systems monitor and log software and hardware failures (i.e., errors) over a communication network. In one embodiment, the method includes detecting an event caused by an error, and generating a log of the event in response to the detection. The method further includes generating a first message asking whether a user consents to allowing a third-party provider to track the error, and transmitting the log to the third-party provider over the communication network if the user consents. The method yet further includes generating a second message asking whether the user wants to provide additional information relating to the error. The method still further includes providing a user interface including an error-reporting portal to the user if the user wants to provide additional information, and transmitting the portal to the third-party provider.
Abstract: A system and method for calibration of serial links using serial-to-parallel loopback. Embodiments of the present invention are operable for calibrating serial links using parallel links thereby reducing the number of links that need calibration. The method includes sending serialized data over a serial interface and receiving parallel data via a parallel interface. The serialized data is looped back via the parallel interface. The method further includes comparing the parallel data and the serialized data for a match thereof and calibrating the serial interface by adjusting the sending of the serialized data until the comparing detects the match. The adjusting of the sending is operable to calibrate the sending of the serialized data over the serial interface.
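The calibration loop described in the preceding abstract can be pictured as stepping a transmit timing setting until the data looped back through the parallel interface matches the transmitted pattern. The C++ sketch below models the hardware with a caller-supplied function; every name in it is an illustrative assumption.

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>

// Send a test pattern over the serial interface, loop it back through the
// parallel interface, and adjust the transmit setting until the received
// parallel word matches the transmitted data.
int calibrateSerialLink(
        const std::function<uint32_t(uint32_t pattern, int txSetting)>& sendAndLoopback,
        uint32_t pattern, int maxSetting) {
    for (int setting = 0; setting <= maxSetting; ++setting) {
        uint32_t received = sendAndLoopback(pattern, setting);
        if (received == pattern) {
            std::printf("calibrated at setting %d\n", setting);
            return setting;                  // match found: link calibrated
        }
    }
    return -1;                               // no setting produced a match
}

int main() {
    // Fake link: only timing settings >= 3 return the data intact.
    auto fakeLink = [](uint32_t p, int s) { return s >= 3 ? p : ~p; };
    calibrateSerialLink(fakeLink, 0xA5A5F00Du, 7);
}
```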
Abstract: A wireless communications device is disclosed herein. In one embodiment, the wireless communication device includes: a transceiver configured to facilitate communications with a radio access network; and a processing unit configured to: determine that a cell update message is to be transmitted to the network; determine if a security mode configuration procedure is in progress at the device; and if a security mode configuration procedure is not in progress, transmit a second type of cell update message to the network entity, the second type of cell update message not including an indicator indicating that the device has not had to abort an on-going security procedure and in place of said indicator including information not pertaining to a security procedure.
Abstract: A method and device for encoding and decoding video image data. An MPEG decoding and encoding process using a data flow pipeline architecture implemented entirely in dedicated logic is provided. A plurality of fixed-function data processors are interconnected with at least one pipelined data transmission line. At least one of the fixed-function processors performs a predefined encoding/decoding function upon receiving a set of predefined data from said transmission line. Stages of the pipeline are synchronized on data without requiring a central traffic controller. This architecture provides better performance in a smaller size, with lower power consumption and better usage of memory bandwidth.
Abstract: A system, method, and computer program product are provided for altering a line of code. In use, a line of code is identified, where the line of code is written utilizing both a programming language and one or more syntax extensions to the programming language. Additionally, the line of code is altered so that the altered line of code is written using only the programming language. Further, the altered line of code is returned.
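As a toy illustration of the preceding abstract, the sketch below rewrites a source line that uses a hypothetical "forall" syntax extension so that only the base language remains, and returns the altered line; the extension and its translation are invented for this example and do not come from the patent.

```cpp
#include <cstdio>
#include <string>

// Replace every occurrence of the hypothetical extension with its plain
// base-language equivalent, then return the altered line.
std::string lowerExtensions(std::string line) {
    const std::string ext = "forall (";
    const std::string base = "for (";          // base-language equivalent
    for (size_t pos = line.find(ext); pos != std::string::npos;
         pos = line.find(ext, pos + base.size()))
        line.replace(pos, ext.size(), base);
    return line;                               // now uses only the base language
}

int main() {
    std::string in = "forall (int i = 0; i < n; ++i) sum += a[i];";
    std::printf("%s\n", lowerExtensions(in).c_str());
    // prints: for (int i = 0; i < n; ++i) sum += a[i];
}
```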
Abstract: A system and method for power management by performing clock-gating at a clock source. In the method a critical stall condition is detected within a clocked component of a core of a processing unit. The core includes one or more clocked components synchronized in operation by a clock signal distributed by a clock grid. The clock grid is clock-gated to suspend distribution of the clock signal to the core during the critical stall condition.
Abstract: A system and method are provided for representing pointers. An encoding type for a pointer structure referenced by a first cell of a data structure is determined. A first field of the pointer structure is encoded to indicate the encoding type. Further, a second field of the pointer structure is encoded according to the encoding type to indicate a location in memory where a cell structure corresponding to a second cell of the data structure is stored.
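The two-field pointer representation described above might look like the following C++ sketch, where the first field records the encoding type and the second field locates the next cell. The particular encodings (an absolute offset versus a relative offset) are assumptions chosen only to make the example concrete.

```cpp
#include <cstdint>
#include <cstdio>

// First field: which encoding is used. Second field: interpreted according
// to that encoding to find the cell structure of the next cell in memory.
enum class PtrEncoding : uint8_t { Absolute = 0, Relative = 1 };

struct CellPointer {
    uint8_t  encoding;   // encoding type
    uint64_t value;      // location, encoded per `encoding`
};

CellPointer encodeAbsolute(uint64_t offsetInMemory) {
    return {static_cast<uint8_t>(PtrEncoding::Absolute), offsetInMemory};
}

CellPointer encodeRelative(uint64_t from, uint64_t to) {
    return {static_cast<uint8_t>(PtrEncoding::Relative), to - from};
}

uint64_t decode(const CellPointer& p, uint64_t from) {
    return p.encoding == static_cast<uint8_t>(PtrEncoding::Absolute)
               ? p.value            // value is the location itself
               : from + p.value;    // value is an offset from this cell
}

int main() {
    CellPointer a = encodeAbsolute(0x1000);
    CellPointer r = encodeRelative(0x1000, 0x1040);
    std::printf("%llx %llx\n",
                (unsigned long long)decode(a, 0),
                (unsigned long long)decode(r, 0x1000));  // prints: 1000 1040
}
```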
Abstract: Circuits, methods, and apparatus are provided for multiple graphics processor systems in which specific graphics processors can be instructed not to perform certain rendering operations while continuing to receive state updates, where the state updates are included in the rendering commands for those rendering operations. One embodiment provides commands instructing a graphics processor to start or stop rendering geometries. These commands can be directed to one or more specific processors by use of a set-subsystem device mask.
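A rough model of that behavior follows: a set-subsystem device mask directs subsequent commands to specific processors, and a stop-render command makes a processor skip draw work while it continues to apply state updates carried in the command stream. The command names and stream encoding below are invented for this sketch.

```cpp
#include <cstdio>
#include <vector>

enum class Op { SetDeviceMask, StopRender, StartRender, State, Draw };
struct Cmd { Op op; unsigned arg; };

// Replay the shared command stream from the point of view of one GPU,
// identified by a single bit in the device mask.
void replay(const std::vector<Cmd>& stream, unsigned gpuBit) {
    unsigned mask = ~0u;        // all GPUs selected by default
    bool rendering = true;
    for (const Cmd& c : stream) {
        if (c.op == Op::SetDeviceMask) { mask = c.arg; continue; }
        if (!(mask & gpuBit)) continue;            // command not for this GPU
        switch (c.op) {
        case Op::StopRender:  rendering = false; break;
        case Op::StartRender: rendering = true;  break;
        case Op::State: std::printf("GPU%u: state %u\n", gpuBit, c.arg); break;
        case Op::Draw:
            if (rendering) std::printf("GPU%u: draw %u\n", gpuBit, c.arg);
            break;                                 // state kept, draw skipped
        default: break;
        }
    }
}

int main() {
    std::vector<Cmd> stream = {
        {Op::SetDeviceMask, 0x2}, {Op::StopRender, 0},  // GPU 2 stops rendering
        {Op::SetDeviceMask, 0x3},                       // back to both GPUs
        {Op::State, 7}, {Op::Draw, 1},
    };
    replay(stream, 0x1);   // GPU 1: applies state 7 and draws
    replay(stream, 0x2);   // GPU 2: applies state 7 but skips the draw
}
```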
Abstract: A system, method, and computer program product are provided for managing miss requests. In use, a miss request is received at a unified miss handler from one of a plurality of distributed local caches. Additionally, the miss request is managed, utilizing the unified miss handler.
Type:
Grant
Filed:
August 14, 2012
Date of Patent:
April 26, 2016
Assignee:
NVIDIA Corporation
Inventors:
Brucek Kurdo Khailany, Ronny Meir Krashinsky, James David Balfour
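The unified miss handler in the abstract above centralizes miss requests from several distributed local caches; one useful consequence is that outstanding requests for the same line can be merged. The C++ sketch below models that merging; the class shape, the printf-based memory interface, and the cache/line identifiers are assumptions made for illustration.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

// Several local caches forward their misses here; requests for the same
// line are merged so backing memory is read only once per line.
class UnifiedMissHandler {
public:
    // A local cache reports a miss for `line`; returns true if a new memory
    // request had to be issued, false if it was merged with a pending one.
    bool onMiss(int cacheId, long line) {
        std::vector<int>& waiters = pending_[line];
        bool newRequest = waiters.empty();
        waiters.push_back(cacheId);
        if (newRequest) std::printf("fetch line %ld from memory\n", line);
        return newRequest;
    }
    // Memory returned the line: notify every cache that was waiting on it.
    void onFill(long line) {
        for (int cacheId : pending_[line])
            std::printf("fill line %ld -> cache %d\n", line, cacheId);
        pending_.erase(line);
    }
private:
    std::unordered_map<long, std::vector<int>> pending_;
};

int main() {
    UnifiedMissHandler umh;
    umh.onMiss(0, 0x40);   // issues the memory fetch
    umh.onMiss(1, 0x40);   // merged with the outstanding request
    umh.onFill(0x40);      // both caches receive the data
}
```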
Abstract: The present invention sets forth an apparatus for supporting multiple digital display interface standards. In one embodiment, the apparatus includes a graphics processing unit (GPU) configured to determine a display device type of a display device that is in connection with a digital display interconnect, receive display device information associated with the display device, and output a first data signal to the display device. The display device is of the DisplayPort (DP) digital display interface standard and the digital display interconnect is of the digital visual interface (DVI) digital display interface standard. The apparatus further includes removable adaptor circuitry between the display device and the digital display interconnect.
Abstract: One embodiment of the present invention sets forth a technique for performing a computer-implemented method that controls memory access operations. A stream of graphics commands includes at least one memory barrier command. Each memory barrier command in the stream of graphics commands delays memory access operations scheduled for any command specified after the memory barrier command until all memory access operations scheduled for commands specified prior to the memory barrier command have completely executed.
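The barrier semantics described above can be modeled as an in-order walk of the command stream in which a memory barrier command forces all previously issued memory accesses to drain before any later access is issued. The C++ sketch below uses a simple counter to stand in for tracking outstanding operations; the command names are illustrative.

```cpp
#include <cstdio>
#include <vector>

enum class CmdType { MemoryAccess, MemoryBarrier };
struct Command { CmdType type; int id; };

// Walk the command stream in order; at a barrier, everything issued earlier
// must complete before anything after the barrier may begin.
void execute(const std::vector<Command>& stream) {
    int outstanding = 0;   // accesses issued since the last barrier
    for (const Command& c : stream) {
        if (c.type == CmdType::MemoryBarrier) {
            std::printf("barrier: draining %d outstanding accesses\n",
                        outstanding);
            outstanding = 0;   // all earlier accesses have now completed
        } else {
            std::printf("issue memory access %d\n", c.id);
            ++outstanding;
        }
    }
}

int main() {
    execute({{CmdType::MemoryAccess, 1}, {CmdType::MemoryAccess, 2},
             {CmdType::MemoryBarrier, 0}, {CmdType::MemoryAccess, 3}});
}
```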