Abstract: One aspect provides a method of operating a modem at a terminal. The modem is arranged to store one or more message identifiers, each of which identifies a type of message that the modem is arranged to act upon when received on a broadcast channel from a communications network. The method comprises detecting a country that the terminal is located in. The method further comprises determining if the detected country is a country in which a public warning system is implemented. The method further comprises determining if the one or more message identifiers include only public warning message identifiers. The method further comprises disabling monitoring of the broadcast channel if the detected country is not a country in which a public warning system is implemented and the one or more message identifiers include only public warning message identifiers.
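The disabling decision described above reduces to a small predicate: monitoring is switched off only when the detected country has no public warning system and every stored identifier is a warning-message identifier. A minimal C++ sketch of that logic follows; the `MessageId` values, the `pwsCountries` list, and the function names are illustrative assumptions, not the claimed implementation.

```cpp
#include <set>
#include <string>

// Hypothetical warning-message identifiers; the real identifier values are assumptions.
enum class MessageId { EtwsEarthquake, EtwsTsunami, CmasPresidential, CellBroadcastNews };

static bool isPublicWarningId(MessageId id) {
    return id == MessageId::EtwsEarthquake ||
           id == MessageId::EtwsTsunami ||
           id == MessageId::CmasPresidential;
}

// Disable broadcast-channel monitoring only if the detected country has no public
// warning system AND every stored identifier is a public warning identifier.
bool shouldDisableBroadcastMonitoring(const std::string& detectedCountry,
                                      const std::set<std::string>& pwsCountries,
                                      const std::set<MessageId>& storedIds) {
    if (pwsCountries.count(detectedCountry) > 0 || storedIds.empty())
        return false;
    for (MessageId id : storedIds)
        if (!isPublicWarningId(id))
            return false;  // a non-warning identifier still needs the broadcast channel
    return true;
}
```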
Abstract: One embodiment of the present invention sets forth a technique for distributing graphics commands and atomic commands to a color processing unit (CROP) in an efficient manner. The interleaving mechanism determines, at each clock cycle, which graphics command(s) or atomic command(s) are transmitted to the CROP based on several factors. First, the interleaving mechanism ensures that atomic commands or graphics commands associated with a multi-transaction command stream are processed together. Second, the interleaving mechanism selects consecutive graphics commands for transmission to the CROP that optimize the use of different memory caches. Third, the interleaving mechanism prioritizes atomic commands over graphics commands. At each clock cycle, the graphics command(s) or the atomic command(s) selected by the interleaving mechanism are transmitted to the CROP for processing.
Type:
Grant
Filed:
December 17, 2009
Date of Patent:
May 30, 2017
Assignee:
NVIDIA Corporation
Inventors:
Chad D. Walker, Rui M. Bastos, Narayan Kulshrestha
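The arbitration this abstract describes can be modeled as a per-cycle selection between a graphics queue and an atomics queue that (1) never splits a multi-transaction stream, (2) keeps issuing graphics commands back to back for cache locality, and (3) otherwise favors atomics. The C++ sketch below is an assumed software model, not the hardware design; the `Command` record and class names are invented for illustration.

```cpp
#include <deque>
#include <optional>

// Illustrative command record; the fields are assumptions, not the hardware format.
struct Command {
    bool lastInStream;  // true for the final transaction of a multi-transaction command
    // ...command payload (opcode, address, data) omitted in this sketch
};

// Picks the command to forward to the CROP on a given clock cycle.
class Interleaver {
public:
    std::optional<Command> select(std::deque<Command>& graphics, std::deque<Command>& atomics) {
        // Rule 1: finish a multi-transaction stream before switching queues.
        if (lockedQueue_) {
            std::deque<Command>& q = (*lockedQueue_ == Queue::Atomics) ? atomics : graphics;
            if (!q.empty()) return pop(q);
            return std::nullopt;  // wait for the rest of the locked stream
        }
        // Rule 3: atomic commands take priority over graphics commands.
        if (!atomics.empty())  { lockedQueue_ = Queue::Atomics;  return pop(atomics); }
        // Rule 2 (simplified): otherwise keep streaming graphics commands, which
        // tend to reuse the same caches when issued back to back.
        if (!graphics.empty()) { lockedQueue_ = Queue::Graphics; return pop(graphics); }
        return std::nullopt;  // nothing ready this cycle
    }

private:
    enum class Queue { Graphics, Atomics };

    Command pop(std::deque<Command>& q) {
        Command c = q.front();
        q.pop_front();
        if (c.lastInStream) lockedQueue_.reset();  // stream complete: allow switching
        return c;
    }

    std::optional<Queue> lockedQueue_;
};
```

In this toy model the cache-locality rule is only hinted at by keeping the graphics queue streaming once selected; the real selection among candidate graphics commands is not specified by the abstract.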
Abstract: One aspect provides a method of handling a proactive indication received from a subscriber identity module at a modem, the modem being connected to a terminal equipment via a command interface. The method comprises receiving, at a modem processor, the proactive indication from the subscriber identity module. The method further comprises determining that the indication is to be handled by the modem processor. The method further comprises the modem processor transmitting a display command via the command interface to the terminal equipment, the modem processor awaiting a user response command, and continuing or aborting an action indicated in the proactive indication depending on the user response in the user response command received from the terminal equipment.
Abstract: Decoder techniques in accordance with embodiments of the present technology include partially decoding a compressed file on a serial based processing unit to find offsets of each of a plurality of entropy encoded data blocks. The compressed file and the offset for each of the plurality of entropy encoded data blocks are transferred to a parallel based processing unit. Thereafter, the compressed file is at least partially decoded on the parallel based processing unit using the offset for each of the plurality of entropy encoded data blocks.
Type:
Application
Filed:
November 20, 2015
Publication date:
May 25, 2017
Applicant:
NVIDIA Corporation
Inventors:
Michal Krasnoborski, Michael Clair Houston, Michael Denis O'Connor, Steven Gregory Parker
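The two-phase decode described above maps naturally onto a scan-then-fan-out structure. The C++ sketch below models the serial processing unit as a single loop that records block offsets, and the parallel processing unit as a pool of CPU threads; `isBlockBoundary` and `decodeBlock` are placeholder stubs for the format-specific logic, and none of the names come from the patent.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Placeholder for format-specific boundary detection (e.g. a restart marker).
static bool isBlockBoundary(const std::vector<uint8_t>& file, size_t pos) {
    return pos == 0 || file[pos] == 0xFF;  // purely illustrative
}

// Placeholder for the entropy decoder applied to one independent block.
static void decodeBlock(const std::vector<uint8_t>& file, size_t offset,
                        std::vector<uint8_t>& out) {
    out.assign(1, file[offset]);  // purely illustrative
}

// Phase 1 (serial processing unit): walk the stream once, recording where each
// entropy-encoded block starts, without fully decoding the blocks.
static std::vector<size_t> findBlockOffsets(const std::vector<uint8_t>& file) {
    std::vector<size_t> offsets;
    for (size_t pos = 0; pos < file.size(); ++pos)
        if (isBlockBoundary(file, pos)) offsets.push_back(pos);
    return offsets;
}

// Phase 2 (parallel processing unit, modeled with CPU threads): each block is
// independent once its offset is known, so all blocks decode concurrently.
static void decodeAllBlocks(const std::vector<uint8_t>& file,
                            const std::vector<size_t>& offsets,
                            std::vector<std::vector<uint8_t>>& outputs) {
    outputs.resize(offsets.size());
    std::vector<std::thread> workers;
    for (size_t i = 0; i < offsets.size(); ++i)
        workers.emplace_back(decodeBlock, std::cref(file), offsets[i], std::ref(outputs[i]));
    for (auto& w : workers) w.join();
}

int main() {
    std::vector<uint8_t> file = {0x01, 0x02, 0xFF, 0x03, 0xFF, 0x04};
    std::vector<std::vector<uint8_t>> outputs;
    decodeAllBlocks(file, findBlockOffsets(file), outputs);
    return outputs.empty() ? 1 : 0;
}
```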
Abstract: A system, method, and computer program product are provided for producing a cavity bottom package of a package-on-package structure. The method includes the steps of receiving a bottom package comprising a substrate material having a top layer including a first set of pads configured to be electrically coupled to a second set of pads of an integrated circuit die. A layer of non-conductive material is applied to the top layer of the bottom package and a cavity is formed in the layer of non-conductive material to expose the first set of pads, where the cavity is configured to contain the integrated circuit die oriented such that the second set of pads face the first set of pads.
Type:
Grant
Filed:
January 23, 2014
Date of Patent:
May 23, 2017
Assignee:
NVIDIA Corporation
Inventors:
Ronilo V. Boja, Teckgyu Kang, Abraham Fong Yee
Abstract: A system and method are provided for controlling a radio frequency (RF) power amplifier. A magnitude input and a phase input are received for transmission of an RF signal by the RF power amplifier. A digital pulse, having a center position relative to an edge of a reference clock based on the phase input and having a width based on the magnitude input, is generated. The digital pulse is filtered with a resonant matching network to produce the RF signal corresponding to the magnitude input and the phase input.
Abstract: A processing unit includes multiple execution pipelines, each of which is coupled to a first input section for receiving input data for pixel processing and a second input section for receiving input data for vertex processing and to a first output section for storing processed pixel data and a second output section for storing processed vertex data. The processed vertex data is rasterized and scan converted into pixel data that is used as the input data for pixel processing. The processed pixel data is output to a raster analyzer.
Type:
Grant
Filed:
March 25, 2013
Date of Patent:
May 23, 2017
Assignee:
NVIDIA Corporation
Inventors:
John Erik Lindholm, Brett W. Coon, Stuart F. Oberman, Ming Y. Siu, Matthew P. Gerlach
Abstract: One embodiment of the present invention includes a method for updating timing parameters after a circuit design change. The method includes, prior to the circuit design change, deriving a value for a first timing parameter based on a signoff timing analysis of a timing arc and a value for a second timing parameter based on a quick timing analysis of the timing arc, and obtaining a first transition time based on the quick timing analysis. The method further includes, after the circuit design change, deriving a value for a third timing parameter based on the quick timing analysis, obtaining a second transition time based on the quick timing analysis, and deriving a value for a fourth timing parameter based on the quick timing analysis, wherein the fourth timing parameter is based on the first, second, and third timing parameters and on the first and second transition times.
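The abstract leaves the exact relationship among the four timing parameters unspecified, so the C++ sketch below shows only one plausible delta-style combination: the quick-analysis delay delta across the design change is applied to the pre-change signoff value and scaled by the change in transition time. The structure, field names, and formula are assumptions for illustration only.

```cpp
// Hypothetical combination of the timing parameters; the patent abstract does not
// give the exact formula, so this delta-update form is an assumption.
struct TimingArc {
    double signoffDelayBefore;  // first parameter: signoff analysis, before the change
    double quickDelayBefore;    // second parameter: quick analysis, before the change
    double quickDelayAfter;     // third parameter: quick analysis, after the change
    double transitionBefore;    // first transition time (quick analysis, before)
    double transitionAfter;     // second transition time (quick analysis, after)
};

// Estimate a signoff-accurate delay after the design change without rerunning the
// full signoff analysis: apply the quick-analysis delta, scaled by how much the
// transition time moved.
double updatedDelay(const TimingArc& arc) {
    double quickDelta = arc.quickDelayAfter - arc.quickDelayBefore;
    double slewScale  = (arc.transitionBefore > 0.0)
                            ? arc.transitionAfter / arc.transitionBefore
                            : 1.0;
    return arc.signoffDelayBefore + quickDelta * slewScale;  // fourth parameter
}
```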
Abstract: A system, method, and computer program product are provided for passing attribute structures between shader stages of a processing pipeline. The method includes the steps of receiving data represented at a first level by a processing pipeline including an upstream shader unit, a downstream shader unit, and a processing unit. The upstream shader unit processes the data to generate a first set of attributes corresponding to the data represented at a second level. The upstream shader unit also stores the first set of attributes in a first portion of a memory system that can be read by the downstream shader unit and any shader units that are downstream in the processing pipeline relative to the upstream shader unit. In one embodiment, the processing unit is coupled between the upstream shader unit and the downstream shader unit.
Type:
Grant
Filed:
August 23, 2013
Date of Patent:
May 23, 2017
Assignee:
NVIDIA Corporation
Inventors:
Ziyad Sami Hakura, Henry Packard Moreton, Emmett M. Kilgariff
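A software analogue of this attribute hand-off is an upstream stage that writes attribute structures into a region readable by every downstream stage, which then consumes them in place. The C++ sketch below uses invented types (`Attributes`, `StageMemory`) and a trivial expansion rule; it is an assumed model, not the pipeline's actual interface.

```cpp
#include <array>
#include <vector>

// Illustrative attribute record produced per primitive; the layout is assumed.
struct Attributes {
    std::array<float, 4> position;
    std::array<float, 4> color;
};

// Stand-in for the memory region that downstream shader stages can read.
struct StageMemory {
    std::vector<Attributes> slots;
};

// Upstream stage: expand coarse input data into per-primitive attributes and
// publish them into memory visible to every downstream stage.
void upstreamShader(const std::vector<float>& coarseData, StageMemory& shared) {
    shared.slots.clear();
    for (float v : coarseData) {
        Attributes a;
        a.position = {v, v, 0.0f, 1.0f};
        a.color    = {v, 1.0f - v, 0.0f, 1.0f};
        shared.slots.push_back(a);
    }
}

// Downstream stage: consume the attribute structures directly from the shared
// region instead of having them re-sent through the pipeline.
float downstreamShader(const StageMemory& shared) {
    float sum = 0.0f;
    for (const Attributes& a : shared.slots) sum += a.color[0];
    return sum;
}
```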
Abstract: Techniques for degrading rendering performance to extend the operating time of a computing platform include determining a source and a level of power for the computing platform during receipt and rendering of the graphics data. Graphics data is rendered using settings received from the application if the computing platform is not operating from a limited power supply. The graphics data is rendered using one or more sets of graphics processing power conservation rendering settings if the computing platform is operating from a limited power supply and the remaining energy capacity of the limited power supply is less than one or more predetermined levels.
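The selection logic can be summarized as: honor the application's settings on wall power, and fall back to progressively stronger power-conservation settings as the remaining battery capacity crosses predetermined thresholds. The thresholds, field names, and concrete settings in the C++ sketch below are assumptions, not values from the patent.

```cpp
// Illustrative selection of rendering settings based on power source and remaining
// battery capacity; the thresholds and setting names are assumptions.
struct RenderSettings {
    int  resolutionScalePercent;
    int  maxFramesPerSecond;
    bool highQualityEffects;
};

RenderSettings chooseSettings(bool onBattery,
                              double remainingCapacity,  // 0.0 (empty) to 1.0 (full)
                              const RenderSettings& appRequested) {
    if (!onBattery) return appRequested;   // unlimited power: honor the application
    if (remainingCapacity < 0.15)          // critically low: degrade aggressively
        return {50, 30, false};
    if (remainingCapacity < 0.50)          // moderately low: degrade mildly
        return {75, 45, false};
    return appRequested;                   // plenty of charge left
}
```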
Abstract: One embodiment of the present invention sets forth a technique for instruction level execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline. No new instructions are issued and the context state is unloaded from the processing pipeline. Any in-flight instructions that follow the preemption command in the processing pipeline are captured and stored in a processing task buffer to be reissued when the preempted program is resumed. The processing task buffer is designated as a high priority task to ensure the preempted instructions are reissued before any new instructions for the preempted context when execution of the preempted context is restored.
Type:
Grant
Filed:
November 8, 2011
Date of Patent:
May 16, 2017
Assignee:
NVIDIA Corporation
Inventors:
Philip Alexander Cuadra, Christopher Lamb, Lacky V. Shah
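In software terms, the capture step amounts to draining nothing: the instructions still in the pipeline are simply copied into a task buffer that is scheduled at the highest priority, so they are reissued first when the preempted context resumes. The C++ sketch below is an assumed model; the `Instruction` and `TaskBuffer` structures and the priority value are invented for illustration.

```cpp
#include <deque>
#include <vector>

// Illustrative model of instruction-level preemption; the structures are assumptions.
struct Instruction { int opcode; int operands[3]; };

struct TaskBuffer {
    std::vector<Instruction> instructions;
    int priority = 0;  // higher value = scheduled earlier
};

// On preemption: stop issuing, capture every in-flight instruction that follows the
// preemption point, and park it in a high-priority task buffer so it is reissued
// before any new work when the preempted context is restored.
TaskBuffer preempt(std::deque<Instruction>& pipeline) {
    TaskBuffer resumeTask;
    resumeTask.priority = 1000;  // highest priority (assumed value)
    while (!pipeline.empty()) {
        resumeTask.instructions.push_back(pipeline.front());
        pipeline.pop_front();    // the pipeline itself needs no draining
    }
    return resumeTask;
}
```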
Abstract: A graphics processing subsystem and a method of shading are provided. In one embodiment, the subsystem includes: (1) a memory configured to contain a texel data structure according to which multiple primitive texels corresponding to a particular composite texel are contained in a single page of the memory and (2) a graphics processing unit configured to communicate with the memory via a data bus and execute a shader to fetch the multiple primitive texels contained in the single page to create the particular composite texel.
Abstract: A power-gating array configured to power gate a logic block includes multiple zones of sleep field-effect transistors (FETs). A zone controller coupled to the power-gating array selectively enables a certain number of zones within the array depending on the voltage drawn by the logic block. When the logic block draws a lower voltage, the zone controller enables a lower number of zones. When the logic block draws a higher voltage, the zone controller enables a greater number of zones. One advantage of the disclosed technique is that sleep FET usage is reduced, thereby countering the effects of FET deterioration due to BTI and TDDB. Accordingly, the lifetime of sleep FETs configured to perform power gating for logic blocks may be extended.
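A possible software model of the zone controller is a function that enables just enough zones to carry the logic block's present load and leaves the rest gated off. In the C++ sketch below, the load metric, the per-zone capacity, and the class interface are assumptions made for illustration; the abstract does not specify the actual control policy.

```cpp
#include <vector>

// Illustrative zone controller for a power-gating array; the load metric, the
// per-zone capacity, and this interface are assumptions, not the patent's policy.
class ZoneController {
public:
    explicit ZoneController(int zoneCount) : zoneEnabled_(zoneCount, false) {}

    // Enable only as many sleep-FET zones as the logic block's present demand
    // requires; unused zones stay gated off, so their FETs age more slowly.
    void update(double load, double capacityPerZone) {
        int needed = static_cast<int>(load / capacityPerZone) + 1;
        int total  = static_cast<int>(zoneEnabled_.size());
        if (needed > total) needed = total;
        for (int i = 0; i < total; ++i)
            zoneEnabled_[i] = (i < needed);
    }

    int enabledZones() const {
        int n = 0;
        for (bool z : zoneEnabled_) n += z ? 1 : 0;
        return n;
    }

private:
    std::vector<bool> zoneEnabled_;
};
```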
Abstract: A device compiler and linker is configured to group instructions into different strands for execution by different threads based on the dependence of those instructions on other, long-latency instructions. A thread may execute a strand that includes long-latency instructions, and then hardware resources previously allocated for the execution of that thread may be de-allocated from the thread and re-allocated to another thread. The other thread may then execute another strand while the long-latency instructions are in flight. With this approach, the other thread is not required to wait for the long-latency instructions to complete before acquiring hardware resources and initiating execution of the other strand, thereby eliminating at least a portion of the time that the other thread would otherwise spend waiting.
Type:
Grant
Filed:
August 7, 2013
Date of Patent:
May 9, 2017
Assignee:
NVIDIA Corporation
Inventors:
Mojtaba Mehrara, Michael Garland, Gregory Diamos
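The grouping pass is straightforward to sketch: walk a thread's instruction list and cut a new strand immediately after each long-latency instruction, so that a strand ends exactly where its issuing thread can safely give up its hardware resources. The C++ below uses an assumed toy IR, not the device compiler's internal representation.

```cpp
#include <string>
#include <vector>

// Illustrative compiler pass that splits a thread's instruction list into strands
// at long-latency instructions; the IR and the latency test are assumptions.
struct Instr {
    std::string text;
    bool longLatency;  // e.g. a global memory load
};

using Strand = std::vector<Instr>;

// Each strand ends with (at most) one long-latency instruction.  While that
// instruction is in flight, the thread's hardware resources can be handed to
// another thread, which runs its own next strand instead of waiting.
std::vector<Strand> buildStrands(const std::vector<Instr>& body) {
    std::vector<Strand> strands(1);
    for (const Instr& ins : body) {
        strands.back().push_back(ins);
        if (ins.longLatency) strands.emplace_back();  // start a new strand after it
    }
    if (strands.back().empty()) strands.pop_back();
    return strands;
}
```

The design point this illustrates is that resource de-allocation happens at strand boundaries rather than at arbitrary instructions, which is what lets another thread start executing without waiting for the long-latency operations to complete.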
Abstract: In a processor, a method for speculative permission acquisition for access to a shared memory. The method includes receiving a store from a processor core to modify a shared cache line, and in response to receiving the store, marking the cache line as speculative. The cache line is then modified in accordance with the store. Upon receiving a modification permission, the modified cache line is subsequently committed.
Type:
Grant
Filed:
September 14, 2012
Date of Patent:
May 9, 2017
Assignee:
NVIDIA Corporation
Inventors:
James Van Zoeren, Alexander Klaiber, Guillermo J. Rozas, Paul Serris
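The sequence in the abstract (a store arrives, the line is marked speculative, the line is modified, and the line is committed once permission arrives) can be modeled as a small state machine per cache line. The C++ sketch below is an assumed model; the state names, line size, and interface are invented for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative model of speculative store-permission acquisition; names are assumptions.
enum class LineState { Shared, SpeculativelyModified, Committed };

struct CacheLine {
    LineState state = LineState::Shared;
    std::vector<uint8_t> data = std::vector<uint8_t>(64, 0);
};

class SpeculativeCache {
public:
    // A store arrives before write permission does: mark the line speculative and
    // apply the modification immediately instead of stalling the core.
    void store(uint64_t addr, size_t offset, uint8_t value) {
        CacheLine& line = lines_[addr];
        line.state = LineState::SpeculativelyModified;
        line.data[offset] = value;
    }

    // Write permission finally granted by the coherence protocol: the speculatively
    // modified line can now be committed.
    void grantPermission(uint64_t addr) {
        auto it = lines_.find(addr);
        if (it != lines_.end() && it->second.state == LineState::SpeculativelyModified)
            it->second.state = LineState::Committed;
    }

private:
    std::unordered_map<uint64_t, CacheLine> lines_;
};
```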
Abstract: One embodiment sets forth a technique for time-multiplexed communication for transmitting command and address information between a controller and a multi-port memory device over a single connection. Command and address information for each port of the multi-port memory device is time-multiplexed within the controller to produce a single stream of commands and addresses for different memory requests. The single stream of commands and addresses is transmitted by the controller to the multi-port memory device where the single stream is demultiplexed to generate separate streams of commands and addresses for each port of the multi-port memory device.
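A simple software analogue of this scheme is a controller-side multiplexer that round-robins the per-port command/address queues into one serial stream, and a device-side demultiplexer that routes each entry back to its port. The record layout and the round-robin policy in the C++ sketch below are assumptions for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

// Illustrative command/address record; field widths and port count are assumptions.
struct CmdAddr {
    uint8_t  port;     // which port of the multi-port memory this targets
    uint8_t  command;  // e.g. a read or write opcode
    uint32_t address;
};

// Controller side: time-multiplex the per-port queues into one serial stream.
std::vector<CmdAddr> multiplex(std::vector<std::queue<CmdAddr>>& perPort) {
    std::vector<CmdAddr> stream;
    bool pending = true;
    while (pending) {
        pending = false;
        for (auto& q : perPort) {          // round-robin across the ports
            if (q.empty()) continue;
            stream.push_back(q.front());
            q.pop();
            pending = true;
        }
    }
    return stream;
}

// Memory-device side: demultiplex the single stream back into per-port queues.
std::vector<std::queue<CmdAddr>> demultiplex(const std::vector<CmdAddr>& stream, size_t ports) {
    std::vector<std::queue<CmdAddr>> perPort(ports);
    for (const CmdAddr& c : stream) perPort[c.port].push(c);
    return perPort;
}
```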
Abstract: A circuit for multiplying a digital signal by a variable gain, controlled in dependence on a digital gain control value. The circuit comprises: a multiplier input for receiving the digital signal; a multiplier output for outputting the digital signal multiplied by the gain; a plurality of multiplier stages each arranged to multiply by a respective predetermined multiplication factor; and switching circuitry arranged so as to apply selected ones of the multiplier stages in a multiplication path between the input and output, in dependence on the digital gain control value. The multiplication factors are arranged such that binary steps in the digital gain control value result in logarithmic steps in said gain.
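Choosing the stage factors so that stage i multiplies by base^(2^i) is what turns binary steps of the control word into equal steps in decibels: selecting the stages named by the set bits multiplies their factors together, so the exponents add. The C++ model below demonstrates the idea in software; the step size and control-word width are assumed values.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Software model of the gain multiplier: stage i is switched into the multiplication
// path by bit i of the control word.  With stage factors of (base)^(2^i), every +1
// step of the binary control word multiplies the overall gain by the same ratio,
// i.e. a constant step in dB.
int main() {
    const double stepDb = 0.5;   // assumed dB per control-word step
    const int    bits   = 6;     // assumed control-word width
    std::vector<double> stageFactor(bits);
    for (int i = 0; i < bits; ++i)
        stageFactor[i] = std::pow(10.0, stepDb * (1 << i) / 20.0);

    auto gain = [&](uint32_t control) {
        double g = 1.0;
        for (int i = 0; i < bits; ++i)
            if (control & (1u << i)) g *= stageFactor[i];  // apply the selected stages
        return g;
    };

    for (uint32_t c = 0; c <= 4; ++c)   // binary control steps -> logarithmic gain steps
        std::printf("control=%u  gain=%.4f  (%.2f dB)\n",
                    static_cast<unsigned>(c), gain(c), 20.0 * std::log10(gain(c)));
    return 0;
}
```

In the circuit the same effect is obtained by routing the signal through the selected multiplier stages via the switching circuitry rather than by computing the product numerically.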
Abstract: Attributes of access requests can be used to distinguish one set of access requests from another set of access requests. The prefetcher can determine a pattern for each set of access requests and then prefetch cache lines accordingly. In an embodiment in which there are multiple caches, a prefetcher can determine a destination for prefetched cache lines associated with a respective set of access requests. For example, the prefetcher can prefetch one set of cache lines into one cache, and another set of cache lines into another cache. Also, the prefetcher can determine a prefetch distance for each set of access requests. For example, the prefetch distances for the sets of access requests can be different.
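One way to read the abstract is that each distinguishing attribute keys its own prefetch state: the detected access pattern, the prefetch distance, and the cache the prefetches are directed to. The C++ sketch below is an assumed model with a simple constant-stride detector; the attribute encoding and the policies are invented for illustration.

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative per-stream prefetcher state; how requests are attributed to a stream
// and how the destination cache is chosen are assumptions.
struct StreamState {
    uint64_t lastAddress      = 0;
    int64_t  stride           = 0;  // detected pattern: constant stride
    int      prefetchDistance = 4;  // how far ahead to fetch, set per stream
    int      targetCache      = 1;  // destination cache (e.g. 1 = L1, 2 = L2), per stream
};

class Prefetcher {
public:
    // On each demand access, update that stream's pattern and return the address to
    // prefetch into the stream's target cache, or 0 if no pattern is known yet.
    uint64_t onAccess(uint32_t streamAttribute, uint64_t address) {
        StreamState& s = streams_[streamAttribute];
        uint64_t prefetchAddr = 0;
        if (s.lastAddress != 0) {
            s.stride = static_cast<int64_t>(address) - static_cast<int64_t>(s.lastAddress);
            if (s.stride != 0)
                prefetchAddr = address + static_cast<uint64_t>(s.stride) * s.prefetchDistance;
        }
        s.lastAddress = address;
        return prefetchAddr;
    }

    // Each stream can be given its own lookahead.
    void setDistance(uint32_t streamAttribute, int distance) {
        streams_[streamAttribute].prefetchDistance = distance;
    }

private:
    std::unordered_map<uint32_t, StreamState> streams_;
};
```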
Abstract: A digital phase-and-frequency controller. In one embodiment, the controller includes: (1) a first segment accumulator operable to accumulate errors while an accumulation-selection signal has a first value, (2) a second segment accumulator operable to accumulate errors while said accumulation-selection signal has a second value, and (3) circuitry operable to produce the control signal using the errors accumulated in the first segment accumulator while a use-selection signal has a first value and the errors accumulated in the second segment accumulator while the use-selection signal has a second value.
Abstract: One embodiment of the present invention sets forth a technique for managing buffer table entries in a tile-based architecture. The technique includes binding a plurality of shader registers to a buffer table entry. The technique further includes processing at least one tile by reading a buffer table index stored in the shader register to access the buffer table entry, reading a buffer address stored in the buffer table entry, accessing data associated with the buffer address, and unbinding the shader register from the buffer table entry. The technique further includes determining that none of the shader registers is still bound to the buffer table entry and, in response, causing a release packet to be inserted into an instruction stream. The technique further includes determining that a last tile has been processed and, in response, transmitting the release packet to cause the buffer table entry to be released.
Type:
Grant
Filed:
October 3, 2013
Date of Patent:
May 2, 2017
Assignee:
NVIDIA Corporation
Inventors:
Karim M. Abdalla, Ziyad S. Hakura, Cynthia Ann Edgeworth Allison, Dale L. Kirkland
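The lifecycle this abstract walks through (bind shader registers to an entry, read through the entry while processing tiles, queue a release packet once the last register unbinds, and transmit that packet after the last tile is done) can be modeled with a small bookkeeping class. The C++ below is an assumed model; the structures and the way the release packet is represented are invented for illustration.

```cpp
#include <cstdint>
#include <vector>

// Illustrative bookkeeping for buffer-table entries in a tiled renderer; the
// structures and the release-packet mechanism here are assumptions.
struct BufferTableEntry {
    uint64_t bufferAddress  = 0;
    int      boundRegisters = 0;  // how many shader registers still point at this entry
};

struct ReleasePacket { int entryIndex; };

class BufferTable {
public:
    explicit BufferTable(std::size_t entries) : table_(entries) {}

    void bindRegister(int entry, uint64_t address) {
        table_[entry].bufferAddress = address;
        table_[entry].boundRegisters++;
    }

    // Unbind after a tile has read through the entry; once no register is bound any
    // more, queue a release packet into the instruction stream.
    void unbindRegister(int entry, std::vector<ReleasePacket>& instructionStream) {
        if (--table_[entry].boundRegisters == 0)
            instructionStream.push_back({entry});
    }

    // After the last tile has been processed, transmit the queued release packets so
    // the entries can actually be recycled.
    void onLastTile(std::vector<ReleasePacket>& instructionStream) {
        for (const ReleasePacket& p : instructionStream)
            table_[p.entryIndex].bufferAddress = 0;  // entry released
        instructionStream.clear();
    }

private:
    std::vector<BufferTableEntry> table_;
};
```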