Patents Assigned to NVIDIA
-
Publication number: 20140237189
Abstract: One embodiment of the present invention sets forth a technique for increasing available storage space within compressed blocks of memory attached to data processing chips, without requiring a proportional increase in on-chip compression status bits. A compression status bit cache provides on-chip availability of compression status bits used to determine how many bits are needed to access a potentially compressed block of memory. A backing store residing in a reserved region of attached memory provides storage for a complete set of compression status bits used to represent compression status of an arbitrarily large number of blocks residing in attached memory. Physical address remapping (“swizzling”) used to distribute memory access patterns over a plurality of physical memory devices is partially replicated by the compression status bit cache to efficiently integrate allocation and access of the backing store data with other user data.
Type: Application
Filed: January 16, 2014
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventors: David B. GLASCO, Peter B. HOLMQVIST, George R. LYNCH, Patrick R. MARCHAND, Karan MEHRA, James ROBERTS
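As an illustration only of the general idea in this abstract (not the patented design), the sketch below models a small on-chip cache of per-block compression status bits that falls back to a larger backing store on a miss. All names (CompStatusBitCache, backing_store, the line and block sizes) are invented for this example.

```python
# Illustrative sketch only -- not NVIDIA's implementation. It models keeping a
# small on-chip cache of per-block compression status bits, backed by a larger
# "backing store" held in ordinary attached memory.

class CompStatusBitCache:
    def __init__(self, backing_store, num_lines=256, blocks_per_line=32):
        self.backing = backing_store            # full status-bit array in attached memory
        self.blocks_per_line = blocks_per_line  # status entries cached per line
        self.num_lines = num_lines
        self.lines = {}                         # cache slot -> (line tag, status entries)

    def status_for_block(self, block_index):
        """Return the compression status for one memory block."""
        line = block_index // self.blocks_per_line
        slot = line % self.num_lines
        entry = self.lines.get(slot)
        if entry is None or entry[0] != line:
            # Miss: fetch the whole line of status entries from the backing store.
            start = line * self.blocks_per_line
            self.lines[slot] = (line, self.backing[start:start + self.blocks_per_line])
            entry = self.lines[slot]
        return entry[1][block_index % self.blocks_per_line]

# Usage: one status value per block (0 = uncompressed, 1 = 2:1, 2 = 4:1, ...).
backing_store = [0, 1, 2, 1] * 1024
cache = CompStatusBitCache(backing_store)
print(cache.status_for_block(5))   # miss fills the line, then neighbouring blocks hit
print(cache.status_for_block(6))
```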
-
Publication number: 20140233616
Abstract: A modem is disclosed. An embodiment thereof includes: a first interface arranged to connect to a network, a second interface arranged to connect to a host processor on the terminal, an audio interface arranged to connect to an audio processing means, and a processing unit arranged to receive a plurality of parameters from the terminal via the second interface. The plurality of parameters are associated with a call established by the host processor to at least one further terminal connected to the network. The processing unit is further arranged to receive input voice data from the audio processing means, process the input voice data in dependence on at least one of said parameters, and transmit the processed input voice data via the first interface to the at least one further terminal over said network during the call in dependence on a further at least one of said parameters.
Type: Application
Filed: February 21, 2013
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventors: Farouk Belghoul, Pete Cumming, Flavien Delorme, Fabien Besson, Bruno De Smet, Callum Cormack
-
Publication number: 20140232368
Abstract: The disclosure is directed to a multi-phase electric power conversion device coupled between a power source and a load. The device includes a first regulator phase and a second regulator phase arranged in parallel, so that a first phase current and a second phase current are controllably provided in parallel to satisfy the current demand requirements of the load. Each phase current is based on current generated in an energy storage device within the respective phase. The regulator phases are asymmetric in that the energy storage device of the second regulator phase is configured so that its current can be varied more rapidly than the current in the energy storage device of the first regulator phase.
Type: Application
Filed: February 19, 2013
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventor: William James Dally
-
Publication number: 20140237187
Abstract: A device driver calculates a tile size for a plurality of cache memories in a cache hierarchy. The device driver calculates a storage capacity of a first cache memory. The device driver calculates a first tile size based on the storage capacity of the first cache memory and one or more additional characteristics. The device driver calculates a storage capacity of a second cache memory. The device driver calculates a second tile size based on the storage capacity of the second cache memory and one or more additional characteristics, where the second tile size is different than the first tile size. The device driver transmits the second tile size to a second coalescing binning unit. One advantage of the disclosed techniques is that data locality and cache memory hit rates are improved where tile size is optimized for each cache level in the cache hierarchy.
Type: Application
Filed: February 20, 2013
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventors: Rouslan DIMITROV, Rui BASTOS, Ziyad S. HAKURA, Eric B. LUM
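As a toy illustration of deriving a per-level tile size from cache capacity (the abstract does not give the driver's actual heuristic), the sketch below picks the largest power-of-two square tile whose footprint fits within a fraction of a given cache. The sizing rule, occupancy factor, and bytes-per-pixel value are all assumptions.

```python
# Illustrative sketch only. Choose, for each cache level, the largest square tile
# (in pixels) whose footprint fits in that cache. The sizing rule here is an
# assumption for demonstration, not the driver's actual calculation.

def tile_size_for_cache(cache_bytes, bytes_per_pixel=4, occupancy=0.5):
    """Return a power-of-two tile edge length whose footprint fits in the cache."""
    budget = cache_bytes * occupancy            # leave room for other resident data
    edge = 1
    while (edge * 2) ** 2 * bytes_per_pixel <= budget:
        edge *= 2
    return edge

l2_tile = tile_size_for_cache(256 * 1024)       # e.g. a 256 KiB cache
llc_tile = tile_size_for_cache(2 * 1024 * 1024) # e.g. a 2 MiB last-level cache
print(l2_tile, llc_tile)                        # smaller tile for the smaller cache
```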
-
Publication number: 20140232664
Abstract: Embodiments are disclosed for a touch-based device and methods for operation thereof. One embodiment provides a touch-based device having a display with a plurality of pixels and a touch input sensor overlying the display. The touch input sensor has a plurality of touch regions, each of which overlies an associated set of the pixels. The touch-based device further comprises a display controller configured to update the pixels according to a schema during which pixels are updated during update periods. The touch-based device yet further comprises a touch controller configured to recognize selectively applied touch inputs at the plurality of touch regions. The touch controller and the display controller are synchronized such that, for a given touch region, touch input recognition is modified while the display controller is updating the set of pixels associated with that touch region.
Type: Application
Filed: February 20, 2013
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventors: William Henry, Thomas Dean Skelton
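The sketch below illustrates the synchronization idea in the simplest possible terms: touch recognition is deferred for the region whose underlying pixel rows are currently being refreshed. The region geometry and the "defer while updating" policy are assumptions for this example, not the patented controller design.

```python
# Illustrative sketch only: synchronising touch recognition with display refresh.

ROWS_PER_REGION = 120          # assume each touch region overlies this many display rows

def region_being_updated(current_row):
    """Touch region whose underlying pixel rows are currently being refreshed."""
    return current_row // ROWS_PER_REGION

def accept_touch(touch_region, current_row):
    """Modify (here: defer) recognition for a region while its pixels are updating."""
    return touch_region != region_being_updated(current_row)

# A touch in region 3 is deferred while rows 360-479 are being refreshed.
print(accept_touch(3, 400))    # False -> recognition modified during the update period
print(accept_touch(3, 900))    # True  -> normal recognition
```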
-
Publication number: 20140232360
Abstract: A system and method are provided for estimating current. A current source is configured to generate a current and a pulsed sense enable signal is generated. An estimate of the current is generated and the estimate of the current is updated based on a first signal that is configured to couple the current source to an electric power supply and a second signal that is configured to couple the current source to a load. A system includes the current source and a current prediction unit. The current source is configured to generate a current. The current prediction unit is coupled to the current source and is configured to generate the estimate of the current and update the estimate of the current based on the first signal and the second signal.
Type: Application
Filed: February 19, 2013
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventor: William J. Dally
-
Publication number: 20140232540
Abstract: A system, method, and computer program product for implementing a power saving technique using a proximity sensor to control a state of a display is described. The method includes the step of monitoring a proximity sensor to determine whether a target object is within range of a device. If the target object is within range of the device, then the steps include deactivating a display of the device, or, if the target object is out of range of the device, then the steps include activating the display of the device.
Type: Application
Filed: February 20, 2013
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventor: Shanker S
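A minimal sketch of the monitoring loop described above, assuming hypothetical sensor and display interfaces (poll_proximity_sensor and set_display_active are stand-ins, not a real device API):

```python
# Illustrative sketch only: deactivate the display while a target object (e.g. the
# user's head during a call) is within range of the proximity sensor.

import time

def poll_proximity_sensor():
    """Hypothetical sensor read: True when an object is within range."""
    return False  # stand-in value for this sketch

def set_display_active(active):
    """Hypothetical display control."""
    print("display on" if active else "display off")

def monitor(poll_interval_s=0.1, iterations=10):
    display_on = True
    for _ in range(iterations):
        in_range = poll_proximity_sensor()
        if in_range and display_on:
            set_display_active(False)   # object close: deactivate the display
            display_on = False
        elif not in_range and not display_on:
            set_display_active(True)    # object moved away: reactivate it
            display_on = True
        time.sleep(poll_interval_s)

monitor()
```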
-
Publication number: 20140237153
Abstract: A method for sending readiness notification messages to a root complex in a peripheral component interconnect express (PCIe) subsystem. The method includes receiving a device-ready-status (DRS) message in a downstream port that is coupled to an upstream port in a PCIe component. The method further includes setting a bit in the downstream port indicating that the DRS message has been received.
Type: Application
Filed: February 15, 2013
Publication date: August 21, 2014
Applicant: NVIDIA CORPORATION
Inventors: Stephen David GLASER, Christian Edward RUNHAAR
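The sketch below shows, in software terms only, the idea of latching a status bit when a DRS message arrives at a downstream port. The register layout and bit position are invented for this example and are not taken from the PCIe specification.

```python
# Illustrative sketch only: recording receipt of a Device-Ready-Status (DRS)
# message by setting a bit in a downstream-port status register.

DRS_RECEIVED_BIT = 1 << 7      # hypothetical bit position, chosen for this example

class DownstreamPort:
    def __init__(self):
        self.status_register = 0

    def on_message(self, message_code):
        if message_code == "DRS":
            # Latch the fact that the attached upstream component reported readiness.
            self.status_register |= DRS_RECEIVED_BIT

    def drs_received(self):
        return bool(self.status_register & DRS_RECEIVED_BIT)

port = DownstreamPort()
port.on_message("DRS")
print(port.drs_received())     # True -> the root complex can see the device is ready
```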
-
Patent number: 8812892
Abstract: One embodiment of the present invention sets forth a technique for performing high-performance clock training. One clock training sweep operation is performed to determine phase relationships for two write clocks with respect to a command clock. The phase relationships are generated to satisfy timing requirements for two different client devices, such as GDDR5 DRAM components. A second clock training sweep operation is performed to better align local clocks operating on the client devices. A voting tally is maintained during the second clock training sweep to record phase agreement at each step in the clock training sweep. The voting tally then determines whether one of the local clocks should be inverted to better align the two local clocks.
Type: Grant
Filed: December 30, 2009
Date of Patent: August 19, 2014
Assignee: NVIDIA Corporation
Inventors: Eric Lyell Hill, Russell R. Newcomb, Shu-Yi Yu
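A minimal sketch of the voting-tally decision described above, assuming the sweep has already produced one phase-agreement sample per step (the sampling itself is training hardware and is not modeled here):

```python
# Illustrative sketch only: a voting tally across clock-training sweep steps.
# If the two local clocks disagree in the majority of steps, one clock is inverted.

def should_invert(phase_agreement_samples):
    """phase_agreement_samples: one boolean per sweep step (True = clocks agree)."""
    agree_votes = sum(1 for sample in phase_agreement_samples if sample)
    disagree_votes = len(phase_agreement_samples) - agree_votes
    return disagree_votes > agree_votes   # majority disagreement -> invert a local clock

# A 16-step sweep in which the clocks mostly sampled out of phase.
samples = [False] * 11 + [True] * 5
print(should_invert(samples))             # True
```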
-
Patent number: 8813019
Abstract: A method includes reading, through a processor of a computing device communicatively coupled to a memory, a design of an electronic circuit as part of verification thereof. The method also includes extracting, through the processor, a set of optimized instructions of a test algorithm involved in the verification such that the set of optimized instructions covers a maximum portion of logic functionalities associated with the design of the electronic circuit. Further, the method includes executing, through the processor, the test algorithm solely relevant to the optimized set of instructions to reduce a verification time of the design of the electronic circuit.
Type: Grant
Filed: April 30, 2013
Date of Patent: August 19, 2014
Assignee: NVIDIA Corporation
Inventors: Avinash Rath, Sanjith Sleeba, Ashish Kumar
-
Patent number: 8810584
Abstract: A method includes automatically acquiring, through a resource manager module associated with a driver program executing on a node of a cluster computing system, information associated with utilization of a number of Graphics Processing Units (GPUs) associated with the node, and automatically calculating a window of time in which the node is predictably underutilized on a reoccurring and periodic basis. The method also includes automatically switching off, when one or more GPUs are in an idle state during the window of time, power to the one or more GPUs to transition the one or more GPUs into a quiescent state of zero power utilization thereof. Further, the method includes maintaining the one or more GPUs in the quiescent state until a processing requirement of the node necessitates utilization thereof at a rate higher than a predicted utilization rate of the node during the window of time.
Type: Grant
Filed: September 13, 2011
Date of Patent: August 19, 2014
Assignee: NVIDIA Corporation
Inventor: Bhavesh Narendra Kabawala
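As a rough illustration of the power-management policy described above (not the resource manager's actual logic), the sketch below powers off idle GPUs inside a fixed daily quiet window and powers them back on when demand exceeds the predicted rate. The window bounds, state layout, and thresholds are assumptions.

```python
# Illustrative sketch only: power off idle GPUs during a recurring window in
# which the node is predicted to be underutilized.

import datetime

QUIET_WINDOW = (datetime.time(1, 0), datetime.time(5, 0))   # assumed 01:00-05:00 daily

def in_quiet_window(now):
    start, end = QUIET_WINDOW
    return start <= now.time() < end

def manage_gpu_power(gpus, now, predicted_rate, current_demand):
    """gpus: dict of gpu_id -> {'idle': bool, 'powered': bool}."""
    for state in gpus.values():
        if in_quiet_window(now) and state['idle'] and current_demand <= predicted_rate:
            state['powered'] = False          # quiescent state: zero power utilization
        elif current_demand > predicted_rate:
            state['powered'] = True           # demand exceeds prediction: restore power
    return gpus

gpus = {0: {'idle': True, 'powered': True}, 1: {'idle': False, 'powered': True}}
now = datetime.datetime(2014, 8, 19, 2, 30)
print(manage_gpu_power(gpus, now, predicted_rate=0.2, current_demand=0.1))
```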
-
Patent number: 8810592
Abstract: One embodiment of the present invention sets forth a technique for providing primitives and vertex attributes to the graphics pipeline. A primitive distribution unit constructs the batches of primitives and writes inline attributes and constants to a vertex attribute buffer (VAB) rather than passing the inline attributes directly to the graphics pipeline. A batch includes indices to attributes, where the attributes for each vertex are stored in a different VAB. The same VAB may be referenced by all of the vertices in a batch or different VABs may be referenced by different vertices in one or more batches. The batches are routed to the different processing engines in the graphics pipeline and each of the processing engines reads the VABs as needed to process the primitives. The number of parallel processing engines may be changed without changing the width or speed of the interconnect used to write the VABs.
Type: Grant
Filed: September 30, 2010
Date of Patent: August 19, 2014
Assignee: NVIDIA Corporation
Inventors: Ziyad S. Hakura, James C. Bowman, Jimmy Earl Chambers, Philip Browning Johnson, Philip Payman Shirvani
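The sketch below illustrates the indirection the abstract describes: a batch carries indices into vertex attribute buffers rather than inline attribute values, so a processing engine reads the VABs only as needed. The data layout is invented for this example.

```python
# Illustrative sketch only: batches of primitives referencing vertex attribute
# buffers (VABs) by index instead of carrying inline attribute data.

# Two VABs, each holding per-vertex attributes (here just 2D positions).
vabs = {
    0: [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    1: [(2.0, 2.0), (3.0, 2.0), (2.0, 3.0)],
}

# A batch references attributes as (vab_id, vertex_index) pairs, so different
# vertices in the same batch may come from different VABs.
batch = {
    "primitive": "triangle",
    "vertices": [(0, 0), (0, 1), (1, 2)],
}

def fetch_vertex_attributes(batch, vabs):
    """What a processing engine would do: read the VABs as needed."""
    return [vabs[vab_id][index] for vab_id, index in batch["vertices"]]

print(fetch_vertex_attributes(batch, vabs))   # [(0.0, 0.0), (1.0, 0.0), (2.0, 3.0)]
```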
-
Publication number: 20140229754
Abstract: Embodiments disclosed herein generally relate to the collection and correlation of power consumption data for mobile devices. Power consumption data for a mobile device is collected and correlated with system activity by monitoring what processes are being run on the CPU and measuring the power being consumed within the mobile device. The power being consumed within the mobile device is measured via a plurality of power monitors, such as sensors, disposed within the mobile device and buffered using an auxiliary microcontroller that resides separately from the CPU. Further, in some embodiments, temperature data is also measured via a temperature sensor.
Type: Application
Filed: February 11, 2013
Publication date: August 14, 2014
Applicant: NVIDIA CORPORATION
Inventors: Mark A. OVERBY, Ratin KUMAR
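As a simple illustration of correlating buffered power samples with CPU activity (the sample format, field names, and attribution rule are assumptions for this example):

```python
# Illustrative sketch only: attribute each buffered power sample to the process
# that was most recently scheduled at or before the sample's timestamp.

def correlate(power_samples, cpu_activity):
    """
    power_samples: list of (timestamp, milliwatts) from the auxiliary microcontroller.
    cpu_activity:  list of (timestamp, process_name) scheduling records, time-ordered.
    Returns summed milliwatt readings attributed to each process.
    """
    attribution = {}
    for ts, mw in power_samples:
        running = None
        for activity_ts, name in cpu_activity:
            if activity_ts <= ts:
                running = name                  # most recent process at this instant
        if running is not None:
            attribution[running] = attribution.get(running, 0) + mw
    return attribution

samples = [(0.0, 500), (1.0, 900), (2.0, 450)]
activity = [(0.0, "browser"), (0.9, "game"), (1.8, "browser")]
print(correlate(samples, activity))   # {'browser': 950, 'game': 900}
```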
-
Publication number: 20140229935
Abstract: A method includes loading a driver component on a hypervisor of a computing system including a Graphics Processing Unit (GPU) without hardware support for virtual interrupt delivery, and loading an instance of the driver component on each of a number of virtual machines (VMs) consolidated on a computing platform of the computing system. The method also includes allocating a memory page associated with work completion by each of the number of VMs thereto through a driver stack executing on the hypervisor, and sharing the memory page with the driver component executing on the hypervisor. Further, the method includes delivering, through the hypervisor, an interrupt from the GPU to an appropriate VM based on inspecting the memory page associated with the work completion by each of the number of VMs.
Type: Application
Filed: February 11, 2013
Publication date: August 14, 2014
Applicant: NVIDIA Corporation
Inventors: Surath Raj Mitra, Neo Jia, Kirti Wankhede
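The sketch below shows the routing decision in the abstract at a purely conceptual level: the hypervisor-side component inspects each VM's completion page and forwards the GPU interrupt to the VM whose page indicates completed work. The page layout and injection callback are hypothetical.

```python
# Illustrative sketch only: route a GPU interrupt to the appropriate VM by
# inspecting per-VM completion pages shared with the hypervisor-side driver.

def route_gpu_interrupt(completion_pages, inject_interrupt):
    """
    completion_pages: dict of vm_id -> shared page (modeled as a dict with a
                      'work_completed' flag written by the in-VM driver stack).
    inject_interrupt: callable(vm_id) that delivers a virtual interrupt.
    """
    for vm_id, page in completion_pages.items():
        if page.get("work_completed"):
            page["work_completed"] = False     # acknowledge before delivery
            inject_interrupt(vm_id)            # forward to the appropriate VM

pages = {"vm0": {"work_completed": False}, "vm1": {"work_completed": True}}
route_gpu_interrupt(pages, lambda vm: print("interrupt ->", vm))   # interrupt -> vm1
```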
-
Publication number: 20140229783
Abstract: A printed circuit board, an in-circuit test structure and a method for producing the in-circuit test structure thereof are disclosed. The in-circuit test structure comprises a via and a test pad. The via passes through the printed circuit board for communicating with an electrical device to be tested on the printed circuit board. The test pad is formed on an upper surface of the printed circuit board and covers the via, wherein a center of the via deviates from a center of the test pad. In the in-circuit test, the accuracy of the test data can be improved by means of the in-circuit test structure provided by the present invention, and thus the reliability of the test result is ensured. Also, the test efficiency of the in-circuit test is improved.
Type: Application
Filed: February 7, 2014
Publication date: August 14, 2014
Applicant: NVIDIA Corporation
Inventors: Jinchai (Ivy) QIN, Bing AI
-
Publication number: 20140228077
Abstract: A mobile computing device comprising a display panel and a home button is disclosed, the display panel being disposed on the exterior front surface and the home button being disposed on the exterior back surface of the mobile computing device. The front surface may be free of any additional user input/output devices apart from the display panel. The mobile computing device may further comprise an accelerometer and a phone circuit. Upon detection that the display panel faces away from a user during a phone call, displayed information on the display panel may be concealed automatically. The accelerometer may also be operable to detect shaking motions on the mobile computing device as a user command to turn on or turn off the display panel.
Type: Application
Filed: February 8, 2013
Publication date: August 14, 2014
Applicant: NVIDIA Corporation
Inventor: Shuang Xu
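A minimal sketch of interpreting a shaking motion as a display toggle command, assuming accelerometer samples arrive as acceleration magnitudes; the threshold and peak count are invented for this example.

```python
# Illustrative sketch only: treat several strong acceleration peaks as a "shake"
# gesture that toggles the display panel on or off.

def is_shake(accel_magnitudes_g, threshold_g=2.5, min_peaks=3):
    """A shake is assumed to be at least min_peaks samples above threshold_g."""
    peaks = sum(1 for a in accel_magnitudes_g if a > threshold_g)
    return peaks >= min_peaks

display_on = False
samples = [1.0, 3.1, 0.9, 2.8, 3.4, 1.1]
if is_shake(samples):
    display_on = not display_on            # toggle the panel on a detected shake
print(display_on)                          # True
```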
-
Publication number: 20140229953
Abstract: A system, method, and computer program product for management of dynamic task-dependency graphs. The method includes the steps of generating a first task data structure in a memory for a first task, generating a second task data structure in the memory, storing a pointer to the second task data structure in a first output dependence field of the first task data structure, setting a reference counter field of the second task data structure to a threshold value that indicates a number of dependent events associated with the second task, and launching the second task when the reference counter field stores a particular value. The second task data structure is a placeholder for a second task that is dependent on the first task.
Type: Application
Filed: February 13, 2013
Publication date: August 14, 2014
Applicant: NVIDIA CORPORATION
Inventors: Igor Sevastiyanov, Brian Matthew Fahs, Nicholas Wang, Scott Ricketts, Luke David Durant, Brian Scott Pharris
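The sketch below mirrors the data structure described in the abstract in a loose, illustrative way: a task holds a pointer to its dependent in an output-dependence field, the dependent carries a reference counter, and the dependent launches when the counter reaches an assumed value of zero. The launch mechanics are invented for this example.

```python
# Illustrative sketch only: task data structures with an output-dependence pointer
# and a reference counter for dynamic task-dependency management.

class Task:
    def __init__(self, name, num_dependent_events=0):
        self.name = name
        self.output_dependence = None              # pointer to a dependent task
        self.ref_count = num_dependent_events      # events still outstanding

    def complete(self):
        """Called when this task finishes; may release its dependent for launch."""
        dep = self.output_dependence
        if dep is not None:
            dep.ref_count -= 1
            if dep.ref_count == 0:                 # assumed launch condition
                print("launching", dep.name)

first = Task("first")
second = Task("second", num_dependent_events=1)    # placeholder waiting on one event
first.output_dependence = second                   # pointer stored in first's output field
first.complete()                                   # prints: launching second
```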
-
Publication number: 20140225662
Abstract: An approach is provided for a low-voltage, high-accuracy current mirror circuit. In one example, a current mirror circuit includes an input circuit configured to receive an input reference current. The input circuit includes a feedback channel for comparing and substantially matching the input reference current with an output current. The feedback channel is not configured for matching an input voltage with an output voltage. The input circuit does not include a comparator having an operational amplifier to compare the input reference current with the output current. The current mirror circuit also includes an output circuit coupled to the input circuit. The output circuit is configured to send the output current to one or more components of a circuit block.
Type: Application
Filed: February 11, 2013
Publication date: August 14, 2014
Applicant: NVIDIA CORPORATION
Inventor: Yoshinori NISHI
-
Publication number: 20140225579
Abstract: A system and method are provided for regulating a voltage at a load. A current source is configured to provide a current to a voltage control mechanism and the voltage control mechanism is configured to provide a portion of the current to the load. The current is generated based on the portion of the current that is provided to the load. A system includes the current source, an upstream controller, and the voltage control mechanism that is coupled to the load. The upstream controller is coupled to the current source and is configured to control a current that is generated by the current source based on a portion of the current that is provided to the load.
Type: Application
Filed: February 8, 2013
Publication date: August 14, 2014
Applicant: NVIDIA CORPORATION
Inventor: William J. Dally
-
Publication number: 20140225902
Abstract: An image pyramid processor and a method of multi-resolution image processing. One embodiment of the image pyramid processor includes: (1) a level multiplexer configured to employ a single processing element to process multiple levels of an image pyramid in a single work unit, and (2) a buffer pyramid having memory allocable to store respective intermediate results of the single work unit.
Type: Application
Filed: February 11, 2013
Publication date: August 14, 2014
Applicant: NVIDIA CORPORATION
Inventors: Qiuling Zhu, Navjot Garg, Yun-Ta Tsai, Kari Pulli, Albert Meixner
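As a software-only illustration of the multi-level idea (the hardware level multiplexer and processing element are not modeled), the sketch below processes several levels of an image pyramid in one pass, keeping each level's intermediate result in a small buffer pyramid. The 2x2 box-filter downsample stands in for whatever per-level processing the hardware actually performs.

```python
# Illustrative sketch only: build several levels of an image pyramid in a single
# "work unit", storing each level's intermediate result in a buffer pyramid.

def downsample(image):
    """Average 2x2 blocks; image is a list of equal-length rows of numbers."""
    h, w = len(image), len(image[0])
    return [[(image[y][x] + image[y][x + 1] + image[y + 1][x] + image[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def process_pyramid(image, levels):
    buffer_pyramid = [image]                 # intermediate results for the work unit
    for _ in range(levels):
        buffer_pyramid.append(downsample(buffer_pyramid[-1]))
    return buffer_pyramid

base = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
pyramid = process_pyramid(base, levels=2)
print([len(level) for level in pyramid])     # [4, 2, 1] -> progressively coarser levels
```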