Patents Assigned to NVIDIA
-
Patent number: 8731694
Abstract: In a class of embodiments, a method and apparatus are provided for detecting freefall of a disk device (thereby predicting that the disk device will likely suffer an imminent physical impact) and typically also for preventing damage that a disk drive of the device would otherwise suffer if and when a predicted impact occurs. In some embodiments, a disk device includes a freefall detection processor and a CPU. The freefall detection processor is configured to monitor acceleration data to determine whether the disk device is in freefall and to perform at least one other operation (e.g., decoding of MP3-encoded audio data to generate decoded audio data) while the CPU performs at least one other task. Other embodiments pertain to a portable device including a digital audio processing subsystem and an accelerometer.
Type: Grant
Filed: June 8, 2010
Date of Patent: May 20, 2014
Assignee: Nvidia Corporation
Inventor: Ahmet Karakas
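The abstract leaves the detection criterion unstated; a common way to read "monitor acceleration data to determine whether the disk device is in freefall" is to flag freefall when the measured acceleration magnitude stays near zero for a sustained run of samples. A minimal C sketch of that idea, with thresholds chosen purely for illustration:

```c
#include <math.h>
#include <stdbool.h>

#define FREEFALL_THRESHOLD_MG 200   /* |a| below ~0.2 g suggests freefall   */
#define FREEFALL_MIN_SAMPLES  10    /* must persist for N consecutive reads */

/* Feed one accelerometer sample (milli-g per axis) per call; returns true
 * once near-zero acceleration has persisted long enough to justify parking
 * the disk heads before the predicted impact. */
bool freefall_detected(int ax, int ay, int az)
{
    static int low_g_count = 0;

    double mag = sqrt((double)ax * ax + (double)ay * ay + (double)az * az);
    if (mag < FREEFALL_THRESHOLD_MG)
        low_g_count++;
    else
        low_g_count = 0;

    return low_g_count >= FREEFALL_MIN_SAMPLES;
}
```

On a positive result, drive firmware would typically park the heads, which is the usual damage-prevention step the abstract alludes to.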
-
Patent number: 8732350
Abstract: A system for improving direct memory access (DMA) offload. The system includes a processor, a data DMA (DDMA) engine, and a plurality of memory components. The processor selects an executable command comprising subcommands. The DDMA engine executes DMA operations related to a subcommand to perform memory transfer operations. The memory components store the plurality of subcommands and status data resulting from DMA operations. Each of the memory components has a corresponding token associated therewith. Possession of a token allocates its associated memory component to the processor or the DDMA engine possessing the token, making it inaccessible to the other. A first memory component and a second memory component of the plurality of memory components are used by the processor and the DDMA engine respectively and simultaneously. Tokens, e.g., the first and/or the second, are exchanged between the DDMA engine and the processor when the DDMA engine and/or the processor complete accessing the associated memory components.
Type: Grant
Filed: December 19, 2008
Date of Patent: May 20, 2014
Assignee: NVIDIA Corporation
Inventors: Dmitry Vyshetski, Howard Tsai, Paul J. Gyugyi
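The token scheme described here is essentially a ping-pong ownership handoff: whichever side holds a component's token may touch it, and the token is passed to the other side when that work is done, so the processor can fill one memory component while the DDMA engine drains another. A hedged C sketch of that ownership model (the names and structure are illustrative, not the patented design):

```c
#include <stdbool.h>

typedef enum { OWNER_PROCESSOR, OWNER_DDMA } owner_t;

/* One memory component and the token that gates access to it. */
typedef struct {
    owner_t token;          /* who currently holds the token        */
    unsigned char buf[256]; /* subcommands or DMA status live here  */
} mem_component_t;

/* Only the current token holder may access the component. */
static bool may_access(const mem_component_t *c, owner_t who)
{
    return c->token == who;
}

/* Hand the token to the other side once the holder is done with it. */
static void release_token(mem_component_t *c, owner_t who)
{
    if (c->token == who)
        c->token = (who == OWNER_PROCESSOR) ? OWNER_DDMA : OWNER_PROCESSOR;
}
```

With two components and two tokens, each side can own one component at a time and swap when both finish, which is the simultaneous use the abstract describes.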
-
Patent number: 8731071
Abstract: A system for performing finite impulse response (FIR) filtering. The system includes an array of random access memories (RAMs) for storing at least one two-dimensional (2D) block of pixel data. The pixel data is stored such that one of each type of column or row from the 2D block of pixel data is stored per RAM. A control block provides address translation between the 2D block of pixel data and corresponding addresses in the array of RAMs. An input crossbar writes pixel data to the array of RAMs as directed by the control block. An output crossbar simultaneously reads pixel data from each of the array of RAMs and passes the data to an appropriate replicated data path, as directed by the control block. A single instruction multiple data path block includes a plurality of replicated data paths for simultaneously performing the FIR filtering, as directed by the control block.
Type: Grant
Filed: December 15, 2005
Date of Patent: May 20, 2014
Assignee: Nvidia Corporation
Inventor: Scott A. Kimura
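The abstract does not spell out the address translation, but a standard way to let an entire row or column of a 2D block be fetched in one parallel access is to skew the pixels across the RAM banks so that no row or column maps to the same bank twice. The modulo-skew mapping below is an assumption used only to illustrate what the control block's translation might look like:

```c
#define NUM_RAMS 8   /* one RAM per pixel of a row/column in an 8x8 block */

typedef struct {
    int ram;   /* which RAM holds the pixel     */
    int addr;  /* word address within that RAM  */
} ram_location_t;

/* Skewed mapping: pixel (x, y) of an 8x8 block goes to RAM (x + y) mod 8.
 * Every row and every column then spans all eight RAMs exactly once, so a
 * whole row or column can be fetched in a single parallel access. */
static ram_location_t translate(int x, int y)
{
    ram_location_t loc;
    loc.ram  = (x + y) % NUM_RAMS;
    loc.addr = y;           /* one word per block row in each RAM */
    return loc;
}
```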
-
Patent number: 8732496
Abstract: A method and apparatus for supporting a self-refreshing display device coupled to a graphics controller are disclosed. A self-refreshing display device has a capability to drive the display based on video signals generated from a local frame buffer. A graphics controller coupled to the display device may optimally be placed in one or more power saving states when the display device is operating in a panel self-refresh mode. Data objects stored in a memory associated with the graphics controller may be aliased in another memory subsystem accessible to the operating system, graphical user interface, or applications executing in the system while the graphics controller is in a deep sleep state. The disclosed technique utilizes a virtual memory pointer that may be updated in one or more virtual memory page tables to point to either the memory associated with the graphics controller or an alternate memory alias.
Type: Grant
Filed: March 24, 2011
Date of Patent: May 20, 2014
Assignee: NVIDIA Corporation
Inventor: David Wyatt
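The aliasing idea is that a data object normally resident in the graphics controller's memory gets a stand-in copy in system memory, and the mapping is retargeted so CPU-side clients keep working while the controller sleeps. The patent does this at the virtual-memory page-table level; the C sketch below only illustrates the concept with an ordinary pointer swap and explicit copies:

```c
#include <string.h>

/* Conceptual stand-ins: the real mechanism updates page-table entries;
 * here a plain pointer swap illustrates the aliasing idea. */
static unsigned char gpu_local_copy[4096];   /* frame-buffer-resident object */
static unsigned char system_alias[4096];     /* alias in system memory       */

static unsigned char *data_object = gpu_local_copy;  /* "virtual pointer" */

/* Before the graphics controller enters a deep sleep state, copy the
 * object out and retarget the pointer so CPU-side clients keep working. */
void enter_gpu_deep_sleep(void)
{
    memcpy(system_alias, gpu_local_copy, sizeof(system_alias));
    data_object = system_alias;
}

/* On wake, copy any CPU-side updates back and retarget to GPU memory. */
void exit_gpu_deep_sleep(void)
{
    memcpy(gpu_local_copy, system_alias, sizeof(gpu_local_copy));
    data_object = gpu_local_copy;
}
```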
-
Patent number: 8732713
Abstract: A parallel thread processor executes thread groups belonging to multiple cooperative thread arrays (CTAs). At each cycle of the parallel thread processor, an instruction scheduler selects a thread group to be issued for execution during a subsequent cycle. The instruction scheduler selects a thread group to issue for execution by (i) identifying a pool of available thread groups, (ii) identifying a CTA that has the greatest seniority value, and (iii) selecting the thread group that has the greatest credit value from within the CTA with the greatest seniority value.
Type: Grant
Filed: September 28, 2011
Date of Patent: May 20, 2014
Assignee: NVIDIA Corporation
Inventors: Brett W. Coon, John Erik Lindholm, Robert J. Stoll, Nicholas Wang, Jack Hilaire Choquette, Kathleen Elliott Nickolls
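The selection policy reduces to a two-key comparison: CTA seniority first, per-thread-group credit second. A small C sketch of that selection over a pool of candidate thread groups (the data layout and field names are illustrative only):

```c
#include <stddef.h>

typedef struct {
    int cta_id;     /* which CTA the thread group belongs to */
    int credit;     /* per-thread-group credit value         */
    int ready;      /* nonzero if eligible to issue          */
} thread_group_t;

/* seniority[c] holds the seniority value of CTA c. Pick the ready thread
 * group from the most-senior CTA, breaking ties within that CTA by the
 * greatest credit value. Returns the pool index, or -1 if none is ready. */
int select_thread_group(const thread_group_t *pool, size_t n,
                        const int *seniority)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (!pool[i].ready)
            continue;
        if (best < 0) { best = (int)i; continue; }
        int ds = seniority[pool[i].cta_id] - seniority[pool[best].cta_id];
        if (ds > 0 || (ds == 0 && pool[i].credit > pool[best].credit))
            best = (int)i;
    }
    return best;
}
```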
-
Patent number: 8732644
Abstract: The present invention provides systems and methods that enable configuration of functional components in integrated circuits. A system and method of the present invention utilize micro electro-mechanical switches included in pathways of an integrated circuit to flexibly change the operational characteristics of functional components in an integrated circuit die based upon a variety of factors, including power conservation, manufacturing defects, compatibility characteristics, performance requirements, and system health (e.g., the number of components operating properly). The micro electro-mechanical switches are selectively opened and closed to permit and prevent electrical current flow to and from functional components. Opening the micro electro-mechanical switches also enables power conservation by facilitating isolation of a component and minimization of impacts associated with leakage currents.
Type: Grant
Filed: September 15, 2004
Date of Patent: May 20, 2014
Assignee: Nvidia Corporation
Inventor: Michael B. Diamond
-
Patent number: 8731051
Abstract: A video processor is described which is useful for implementing a quantization process in compliance with the H.264 standard. The video processor includes an input for receiving a block of image data. The image data is loaded into an internal register. In response to receiving a SIMD instruction, a quantizer, which incorporates the quantization lookup tables associated with the H.264 standard in its associated hardware, makes the necessary high-level quantization decisions. In response to receiving another SIMD instruction, the quantizer uses those high-level quantization decisions to retrieve specific values from the quantization lookup tables.
Type: Grant
Filed: January 23, 2007
Date of Patent: May 20, 2014
Assignee: Nvidia Corporation
Inventors: Pankaj Chaurasia, Shankar Moni
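As a loose illustration of a table-driven quantizer of this kind, the sketch below scales a coefficient by a multiplier looked up from the quantization parameter and shifts the result down. The table values are placeholders, not the H.264 tables actually baked into the hardware, and the real quantizer operates on whole SIMD registers of coefficients at once:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative quantization lookup: one multiplier per QP class.
 * Placeholder values only, not the standard's tables. */
#define QP_LEVELS 6
static const int32_t quant_mult[QP_LEVELS] = { 26214, 23302, 20560,
                                               18396, 16384, 14564 };
static const int32_t quant_shift = 16;

/* Quantize one transform coefficient: scale by the table entry for this
 * QP, add a rounding offset, shift down, and restore the sign. */
int32_t quantize_coeff(int32_t coeff, int qp)
{
    int32_t mult  = quant_mult[qp % QP_LEVELS];
    int64_t level = ((int64_t)abs(coeff) * mult + (1 << (quant_shift - 1)))
                    >> quant_shift;
    return (coeff < 0) ? (int32_t)-level : (int32_t)level;
}
```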
-
Patent number: 8732711
Abstract: One embodiment of the present invention sets forth a technique for scheduling thread execution in a multi-threaded processing environment. A two-level scheduler maintains a small set of active threads, called strands, to hide function unit pipeline latency and local memory access latency. The strands are a subset of a larger set of pending threads that is also maintained by the two-level scheduler. Pending threads are promoted to strands and strands are demoted to pending threads based on latency characteristics. The two-level scheduler selects strands for execution based on strand state. The longer latency of the pending threads is hidden by selecting strands for execution. When the latency for a pending thread has expired, the pending thread may be promoted to a strand and begin (or resume) execution. When a strand encounters a latency event, the strand may be demoted to a pending thread while the latency is incurred.
Type: Grant
Filed: June 1, 2011
Date of Patent: May 20, 2014
Assignee: NVIDIA Corporation
Inventors: William James Dally, Stephen William Keckler, David Tarjan, John Erik Lindholm, Mark Alan Gebhart, Daniel Robert Johnson
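A rough C sketch of the promote/demote bookkeeping the abstract describes, with a strand issued per step from the small active set; the structure and limits are illustrative, not taken from the filing:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_STRANDS 8       /* small active set; hides short latencies */

typedef enum { PENDING, STRAND } level_t;

typedef struct {
    level_t level;          /* which scheduler level the thread sits in   */
    bool    ready;          /* strand state: eligible to issue this cycle */
    bool    latency_done;   /* pending state: long-latency wait resolved  */
} hw_thread_t;

/* A strand that hits a long-latency event is demoted back to pending. */
void demote(hw_thread_t *t) { t->level = PENDING; t->latency_done = false; }

/* One scheduling step: promote pending threads whose latency has resolved
 * while strand slots remain, then issue from among the ready strands.
 * Returns the index of the issued thread, or -1 if nothing is ready. */
int schedule_step(hw_thread_t *t, size_t n)
{
    size_t strands = 0;
    for (size_t i = 0; i < n; i++)
        if (t[i].level == STRAND)
            strands++;

    for (size_t i = 0; i < n && strands < MAX_STRANDS; i++)
        if (t[i].level == PENDING && t[i].latency_done) {
            t[i].level = STRAND;              /* promote */
            t[i].ready = true;
            strands++;
        }

    for (size_t i = 0; i < n; i++)
        if (t[i].level == STRAND && t[i].ready)
            return (int)i;
    return -1;
}
```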
-
Publication number: 20140131834
Abstract: Embodiments of the invention generally relate to interposers for packaging integrated circuits. The interposers include capacitive devices for reducing signal noise and leakage between adjacent integrated circuits coupled to the interposers. The capacitive devices are formed from doped semiconductor layers. In one embodiment, an interposer includes a substrate having doped regions of opposing conductivities. First and second oxide layers are disposed over the doped regions. A first interconnect disposed in the second oxide layer is electrically coupled to a doped region of a first conductivity, and a second interconnect disposed in the second oxide layer is electrically coupled to a doped region of a second conductivity. Additional capacitive devices utilizing doped semiconductor layers are also disclosed.
Type: Application
Filed: November 12, 2012
Publication date: May 15, 2014
Applicant: Nvidia Corporation
Inventor: Abraham F. Yee
-
Publication number: 20140136211
Abstract: A method for controlling a mobile information device based on verbal input from a user is presented. The method comprises waiting for a predetermined verbal input from a user. The method further comprises controlling a functional module of the mobile information device to determine a value within a predetermined range for a functional parameter in response to a first portion of the verbal input. Finally, the method comprises executing a functional operation by the functional module based on the determined value, in response to a second portion of the verbal input, wherein the second portion follows the first portion.
Type: Application
Filed: March 20, 2013
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventors: Li-Ling Chou, David Ho
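Interpreting the two-portion command concretely, the first portion names a functional parameter and thereby fixes its predetermined range, and the second portion supplies the value to apply. A small illustrative C sketch under that reading (the parameter names and ranges are invented for the example):

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* First verbal portion selects a parameter and its valid range. */
typedef struct {
    const char *name;   /* e.g. "volume" spoken by the user      */
    int         min;    /* predetermined range for the parameter */
    int         max;
} param_t;

static const param_t params[] = {
    { "volume",     0, 10 },
    { "brightness", 0, 10 },
};

/* Second portion (a recognized number) supplies the value to apply.
 * Returns 0 and performs the operation if both portions parse, else -1. */
int handle_command(const char *first_portion, int second_portion_value)
{
    for (size_t i = 0; i < sizeof(params) / sizeof(params[0]); i++) {
        if (strcmp(first_portion, params[i].name) == 0 &&
            second_portion_value >= params[i].min &&
            second_portion_value <= params[i].max) {
            printf("set %s to %d\n", first_portion, second_portion_value);
            return 0;
        }
    }
    return -1;  /* unrecognized parameter or value out of range */
}
```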
-
Publication number: 20140136778
Abstract: A system, method, and computer program product are provided for implementing a storage array. In use, a storage array is implemented utilizing static random-access memory (SRAM). Additionally, the storage array is utilized in a multithreaded architecture.
Type: Application
Filed: November 15, 2012
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventors: Brucek Kurdo Khailany, James David Balfour, Ronny Meir Krashinsky
-
Publication number: 20140133083
Abstract: The present invention discloses a graphics card, as well as a base plate and a core board for the graphics card. The graphics card includes a base plate and a core board. The base plate includes a base plate PCB, with a core board interface slot, a power module, and a graphics output interface located on the base plate PCB. The core board includes a core board PCB, with a base plate interface and a graphics processing module located on the core board PCB; the core board is accommodated in the core board interface slot of the base plate and electrically connected with the base plate via the base plate interface. The graphics processing module receives a power signal from the power module and outputs graphics data to be displayed via the base plate interface; the graphics output interface is used to output the graphics data to be displayed.
Type: Application
Filed: January 30, 2013
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventor: Xi Wu
-
Publication number: 20140133105
Abstract: Embodiments of the invention provide an IC system in which low-power chips can be positioned proximate high-power chips without suffering the effects of overheating. In one embodiment, the IC system may include a first substrate, a high-power chip embedded within the first substrate, a second substrate disposed on a first side of the first substrate, wherein the first substrate and the second substrate are in electrical communication with each other, and a low-power chip disposed on the second substrate. In various embodiments, a heat distribution layer is disposed adjacent to the high-power chip such that the heat generated by the high-power chip can be effectively dissipated into an underlying printed circuit board attached to the first substrate, thereby preventing heat transfer from the high-power chip to the low-power chip. Therefore, the lifetime of the low-power chip is extended.
Type: Application
Filed: November 9, 2012
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventors: Abraham F. Yee, Jayprakash Chipalkatti, Shantanu Kalchuri
-
Publication number: 20140133689
Abstract: The present invention provides a rear cover of a flat panel electronic device and a flat panel electronic device having the rear cover. An outer surface of the rear cover is provided with a recess and a supporting leg. The supporting leg is pivotably connected in the recess through a pivot means so as to enable the supporting leg to pivot between a retracted position and an unfolded position; the supporting leg is configured to be contained in the recess when in the retracted position, and to make an angle with the rear cover when in the unfolded position. A speaker is disposed in the supporting leg, and sound holes are disposed at a position on a side wall of the supporting leg corresponding to the speaker. A through-hole is formed at a position of the rear cover corresponding to the pivot means, and an electric wire of the speaker passes through the through-hole to extend into the rear cover.
Type: Application
Filed: January 22, 2013
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventor: Mingyi Yu
-
Publication number: 20140131847
Abstract: Embodiments of the invention provide an IC system in which low-power chips can be positioned vertically proximate high-power chips without suffering the effects of overheating. In one embodiment, the IC system includes a first substrate, a high-power chip disposed on a first side of the first substrate, a thermal conductive pad disposed on a second side of the first substrate, one or more thermal conductive features formed in the first substrate, wherein the thermal conductive features thermally connect the high-power chip and the thermal conductive pad, and a heat sink attached to a surface of the thermal conductive pad, wherein the heat sink is in thermal communication with the thermal conductive pad. By having thermal conductive features formed through the first substrate to thermally connect the high-power chip and the thermal conductive pad, heat generated by the high-power chip can be effectively dissipated into the heat sink.
Type: Application
Filed: November 9, 2012
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventors: Abraham F. Yee, Jayprakash Chipalkatti, Shantanu Kalchuri
-
Publication number: 20140135067
Abstract: A wireless base station device and a communication system including the wireless base station device are provided in the present invention. The wireless base station device comprises a baseband processing unit, wherein the baseband processing unit comprises a PCIE switch and a graphics processing unit for baseband processing, which comprises a PCIE interface to interconnect with the PCIE switch. The wireless base station device and the communication system provided by the present invention have lower cost, better performance, and shorter time-to-market, and are easy to upgrade.
Type: Application
Filed: January 29, 2013
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventor: Jiajia Liu
-
Publication number: 20140136793
Abstract: A system and method are described for dynamically changing the size of a computer memory, such as the level 2 cache used in a graphics processing unit. In an embodiment, a relatively large cache memory can be implemented in a computing system so as to meet the needs of memory-intensive applications. But where cache utilization is reduced, the capacity of the cache can be reduced as well. In this way, power consumption is lowered by powering down a portion of the cache.
Type: Application
Filed: November 13, 2012
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventors: James Patrick Robertson, Oren Rubinstein, Michael A. Woodmansee, Don Bittel, Stephen D. Lew, Edward Riegelsberger, Brad W. Simeral, Gregory Alan Muthler, John Matthew Burgess
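One plausible reading of the mechanism, sketched in C below, is a way-based resize: when measured utilization drops below a threshold, half of the enabled cache ways are powered down, and they are re-enabled when demand returns. The thresholds and way counts are assumptions for illustration, not figures from the filing:

```c
#define TOTAL_WAYS        16
#define LOW_UTIL_PERCENT  30   /* illustrative thresholds only */
#define HIGH_UTIL_PERCENT 80

/* Called periodically with a measured utilization percentage; halves the
 * enabled portion of the cache when it is underused and grows it back when
 * demand rises. Before shrinking, the hardware would flush any dirty lines
 * in the ways about to be power-gated. Returns the new enabled way count. */
int l2_adjust(int utilization_percent, int active_ways)
{
    if (utilization_percent < LOW_UTIL_PERCENT && active_ways > 2)
        return active_ways / 2;       /* power down half of the enabled ways */
    if (utilization_percent > HIGH_UTIL_PERCENT && active_ways * 2 <= TOTAL_WAYS)
        return active_ways * 2;       /* restore capacity */
    return active_ways;
}
```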
-
Publication number: 20140136862
Abstract: The present invention provides a processor and a circuit board including the processor. The processor includes a data processing unit and an external power supply component that is coupled to the data processing unit. The data processing unit includes a power management unit that is integrated into the data processing unit and performs power management for the data processing unit. The power management unit further includes a pulse signal output terminal for outputting a pulse-width modulation signal, and the pulse-width modulation signal controls the external power supply component to supply a stable operating voltage to the data processing unit. The present invention provides a processor with improved performance, improved stability, and a simplified structure.
Type: Application
Filed: March 8, 2013
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventor: Yu Zhao
-
Publication number: 20140132245
Abstract: A method and a system are provided for clock phase detection. A set of delayed versions of a first clock signal is generated. The set of delayed versions of the first clock is used to sample a second clock signal, producing a sequence of samples in a domain corresponding to the first clock signal. At least one edge indication is located within the sequence of samples.
Type: Application
Filed: November 13, 2012
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventors: William J. Dally, Stephen G. Tell
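The described method maps directly onto a small search: each delayed copy of the first clock captures one sample of the second clock, and the phase is indicated by where adjacent samples differ. A minimal C sketch of the edge-location step:

```c
#include <stddef.h>

/* samples[i] is the value (0 or 1) of the second clock captured by the
 * i-th delayed version of the first clock, i.e. a snapshot of the second
 * clock spread across one period of the first clock. Returns the index
 * of the first transition, or -1 if no edge falls in this window. */
int find_edge(const int *samples, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        if (samples[i] != samples[i + 1])
            return (int)i;   /* phase offset in units of one delay tap */
    return -1;
}
```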
-
Publication number: 20140132611
Abstract: The present invention discloses a system and a method for data transmission. The system includes: a plurality of graphics processing units; a global shared memory for storing data transmitted among the plurality of graphics processing units; and an arbitration circuit module, which is coupled to each of the plurality of graphics processing units and to the global shared memory and is configured to arbitrate access requests to the global shared memory from the respective graphics processing units to avoid access conflicts among the plurality of graphics processing units. The system and the method for data transmission provided by the present invention enable the respective GPUs in the system to transmit data through the global shared memory rather than a PCIE interface, thus saving data transmission bandwidth significantly and further improving computing speed.
Type: Application
Filed: January 30, 2013
Publication date: May 15, 2014
Applicant: NVIDIA Corporation
Inventors: Shifu Chen, Yanbing Shao, Jihua Yu, Wenzhi Liu, Wenbo Ji
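The abstract does not say which arbitration policy the circuit module uses; a round-robin grant is shown below purely as one plausible, starvation-free policy for resolving simultaneous access requests to the shared memory:

```c
#include <stdbool.h>

#define NUM_GPUS 4

/* request[i] is true when GPU i wants the global shared memory this cycle.
 * A simple round-robin arbiter grants exactly one requester, starting the
 * search just after the previous winner so that no GPU starves. Returns
 * the index of the granted GPU, or -1 if there are no requests. */
int arbitrate(const bool request[NUM_GPUS], int last_grant)
{
    for (int i = 1; i <= NUM_GPUS; i++) {
        int candidate = (last_grant + i) % NUM_GPUS;
        if (request[candidate])
            return candidate;   /* grant access to this GPU */
    }
    return -1;
}
```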