Patents Assigned to NVIDIA
-
Patent number: 8990280
Abstract: In some embodiments, a data processing system includes an operation unit including circuitry configurable to perform any selected one of a number of operations on data (e.g., audio data), and a configuration unit configured to assert configuration information to configure the operation unit to perform the selected operation. When the operation includes matrix multiplication of a data vector and a matrix whose coefficients exhibit symmetry, the configuration information preferably includes bits that determine the signs of all, but the magnitudes of only a subset, of the coefficients. When the operation includes successive addition and subtraction operations on operand pairs, the configuration information preferably includes bits that configure the operation unit to operate in an alternating addition/subtraction mode to perform successive addition and subtraction operations on each pair of data values of a sequence of data value pairs.
Type: Grant
Filed: November 14, 2006
Date of Patent: March 24, 2015
Assignee: Nvidia Corporation
Inventors: Partha Sriram, Robert Quan, Bhagawan Reddy Gnanapa, Ahmet Karakas
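A minimal sketch of the coefficient-symmetry idea in C, not the patented hardware: the 4x4 matrix size, magnitude table, and packed sign bits below are illustrative choices. Only the magnitudes of the left half of each row are stored, and per-coefficient sign bits stand in for the configuration information.

```c
/* Matrix-vector multiply for a matrix with mirrored magnitudes,
 * |c[i][j]| == |c[i][N-1-j]|: store half the magnitudes plus one
 * sign bit per coefficient (hypothetical values for illustration). */
#include <stdio.h>

#define N 4

/* Magnitudes for columns 0..N/2-1 of each row; the right half mirrors them. */
static const int mag[N][N / 2] = {
    {2, 1}, {3, 1}, {1, 2}, {2, 3}
};

/* One sign bit per coefficient, packed per row: bit set => negative. */
static const unsigned sign_bits[N] = { 0x0u, 0x6u, 0x9u, 0x3u };

static int coeff(int row, int col)
{
    int m = (col < N / 2) ? mag[row][col] : mag[row][N - 1 - col];
    return ((sign_bits[row] >> col) & 1u) ? -m : m;
}

int main(void)
{
    const int x[N] = { 1, 2, 3, 4 };
    for (int i = 0; i < N; ++i) {
        int y = 0;
        for (int j = 0; j < N; ++j)
            y += coeff(i, j) * x[j];          /* full multiply from half the magnitudes */
        printf("y[%d] = %d\n", i, y);
    }
    return 0;
}
```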
-
Publication number: 20150082001
Abstract: One embodiment of the present invention includes techniques to support demand paging across a processing unit. Before a host unit transmits a command to an engine that does not tolerate page faults, the host unit ensures that the virtual memory addresses associated with the command are appropriately mapped to physical memory addresses. In particular, if the virtual memory addresses are not appropriately mapped, then the processing unit performs actions to map the virtual memory address to appropriate locations in physical memory. Further, the processing unit ensures that the access permissions required for successful execution of the command are established. Because the virtual memory address mappings associated with the command are valid when the engine receives the command, the engine does not encounter page faults upon executing the command. Consequently, in contrast to prior-art techniques, the engine supports demand paging regardless of whether the engine is involved in remedying page faults.
Type: Application
Filed: September 13, 2013
Publication date: March 19, 2015
Applicant: NVIDIA CORPORATION
Inventors: Samuel H. DUNCAN, Jerome F. DULUK, JR., Jonathon Stuart Ramsay EVANS, James Leroy DEMING
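A toy sketch of the pre-validation step described above, assuming a simplified one-level page table; the ensure_mapped helper and the page-table arrays are hypothetical stand-ins for a real driver's page-table management.

```c
/* Before "submitting" a command to an engine that cannot tolerate page
 * faults, populate any unmapped pages it touches and grant the permissions
 * it needs, so the engine never faults while executing the command. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES 16

/* Toy page table: one entry per virtual page (hypothetical simplification). */
static bool mapped[NUM_PAGES];
static bool writable[NUM_PAGES];

static void ensure_mapped(int first_page, int count, bool need_write)
{
    for (int p = first_page; p < first_page + count; ++p) {
        if (!mapped[p]) {
            mapped[p] = true;            /* populate the page before submission */
            printf("populated page %d\n", p);
        }
        if (need_write && !writable[p]) {
            writable[p] = true;          /* establish the required permission */
            printf("made page %d writable\n", p);
        }
    }
}

int main(void)
{
    /* A command writes pages 3..6; pre-validate them, then "submit". */
    ensure_mapped(3, 4, true);
    printf("command submitted: engine will not fault on pages 3..6\n");
    return 0;
}
```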
-
Publication number: 20150077918
Abstract: Stiffening is provided for an electronic package assembly having a substrate. A first electronic package, having a first function, is electromechanically fastened to a first surface of the substrate with a first array of electrically conductive interconnects, which is disposed over a central area of the substrate first surface. A second electronic package, having a second function, is fastened to the first substrate surface with a second conductive interconnect array. At least a pair of the first array conductors is electrically coupled to at least a pair of the second array conductors for data/signal exchange, and at least a component of the first electronic package interacts with at least a component of the second package. A metallic stiffener ring is disposed about an outer periphery of at least the central area of the substrate.
Type: Application
Filed: September 19, 2013
Publication date: March 19, 2015
Applicant: Nvidia Corporation
Inventors: Leilei Zhang, Ron Boja, Abraham Yee, Zuhair Bokharey
-
Publication number: 20150082444
Abstract: A method of detecting an error in a security mode configuration procedure conducted at a radio access network is provided. A cell update message is transmitted which causes the radio access network to abort a security mode configuration procedure. After the transmission of an update message, a new security mode configuration is received and the original security mode configuration is replaced with the new security mode configuration. A security mode configuration check is performed on a received downlink message using the new security mode configuration. If the security mode configuration check fails, a further security mode configuration check is performed on the downlink message to detect an error in the security mode configuration procedure. If it is determined there has been an error in the security mode configuration procedure, security mode configuration checks are performed on further downlink messages received from the network using the original security mode configuration.
Type: Application
Filed: September 13, 2013
Publication date: March 19, 2015
Applicant: NVIDIA Corporation
Inventors: Tim Rogers, Olivier Jean
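A sketch of the fallback check, assuming a toy integrity check and hypothetical sec_config fields: a downlink message is verified with the new configuration first, and if that fails it is re-checked with the original configuration to detect a configuration-procedure error.

```c
/* If the check with the new security configuration fails but the original
 * configuration verifies the same message, treat it as an error in the
 * configuration procedure and keep using the original configuration. */
#include <stdbool.h>
#include <stdio.h>

struct sec_config { int integrity_key; };

/* Toy integrity check standing in for the real cryptographic verification. */
static bool integrity_check(const struct sec_config *cfg, int msg_tag)
{
    return msg_tag == cfg->integrity_key;
}

int main(void)
{
    struct sec_config original = { .integrity_key = 10 };
    struct sec_config new_cfg  = { .integrity_key = 20 };
    bool use_original = false;

    int downlink_msg_tag = 10;   /* network is still using the original config */

    if (!integrity_check(&new_cfg, downlink_msg_tag)) {
        /* New config failed: re-check with the original to detect the error. */
        if (integrity_check(&original, downlink_msg_tag)) {
            use_original = true;
            printf("configuration-procedure error detected; reverting\n");
        }
    }

    printf("checking further downlink messages with the %s configuration\n",
           use_original ? "original" : "new");
    return 0;
}
```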
-
Publication number: 20150077420
Abstract: A graphics processing system includes a central processing unit that processes a cubic Bezier curve corresponding to a filled cubic Bezier path. Additionally, the graphics processing system includes a cubic preprocessor coupled to the central processing unit that formats the cubic Bezier curve to provide a formatted cubic Bezier curve having quadrilateral control points corresponding to a mathematically simple cubic curve. The graphics processing system further includes a graphics processing unit coupled to the cubic preprocessor that employs the formatted cubic Bezier curve in rendering the filled cubic Bezier path. A rendering unit and a display cubic Bezier path filling method are also provided.
Type: Application
Filed: September 16, 2013
Publication date: March 19, 2015
Applicant: Nvidia Corporation
Inventor: Jeffrey A. Bolz
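For reference only (standard Bezier math, not the patent's preprocessing step): a cubic Bezier curve is defined by four control points P0..P3 and evaluated in Bernstein form as B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3.

```c
/* Evaluate a cubic Bezier curve at a handful of parameter values. */
#include <stdio.h>

struct pt { double x, y; };

static struct pt cubic_bezier(struct pt p0, struct pt p1, struct pt p2,
                              struct pt p3, double t)
{
    double u = 1.0 - t;
    double b0 = u * u * u, b1 = 3.0 * u * u * t;
    double b2 = 3.0 * u * t * t, b3 = t * t * t;
    struct pt r = { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
                    b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
    return r;
}

int main(void)
{
    struct pt p0 = { 0, 0 }, p1 = { 1, 2 }, p2 = { 3, 2 }, p3 = { 4, 0 };
    for (int i = 0; i <= 4; ++i) {
        struct pt q = cubic_bezier(p0, p1, p2, p3, i / 4.0);
        printf("t=%.2f -> (%.3f, %.3f)\n", i / 4.0, q.x, q.y);
    }
    return 0;
}
```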
-
Publication number: 20150081175
Abstract: A vehicle user preference system and a method of applying user preferences. One embodiment of the vehicle user preference system includes: (1) a memory configured to store a user preference data structure, according to which user preferences are stored, (2) a Bluetooth communication interface operable to gain access to a device ID profile (DIP) identifying a mobile device communicably coupled thereto and associated with the user preference data structure, and (3) a processor communicably coupled to the memory and the Bluetooth communication interface, and configured to employ the DIP in gaining access to the user preference data structure, and cause the user preferences to be applied to vehicle subsystems.
Type: Application
Filed: September 18, 2013
Publication date: March 19, 2015
Applicant: Nvidia Corporation
Inventor: Andrew Fear
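A minimal sketch of the lookup-and-apply flow, with hypothetical preference fields and hard-coded Bluetooth addresses standing in for the stored user preference data structure and the Device ID profile.

```c
/* Look up a stored preference record by the paired phone's Bluetooth
 * device ID and "apply" it to vehicle subsystems (printed here). */
#include <stdio.h>
#include <string.h>

struct user_prefs {
    char device_id[18];      /* address taken from the Device ID profile */
    int  seat_position;
    int  cabin_temp_c;
    int  radio_preset;
};

static const struct user_prefs stored[] = {
    { "AA:BB:CC:DD:EE:01", 3, 21, 5 },
    { "AA:BB:CC:DD:EE:02", 7, 19, 2 },
};

static void apply_prefs_for_device(const char *device_id)
{
    for (unsigned i = 0; i < sizeof stored / sizeof stored[0]; ++i) {
        if (strcmp(stored[i].device_id, device_id) == 0) {
            printf("seat=%d temp=%dC preset=%d\n",
                   stored[i].seat_position, stored[i].cabin_temp_c,
                   stored[i].radio_preset);
            return;
        }
    }
    printf("no stored preferences for %s\n", device_id);
}

int main(void)
{
    apply_prefs_for_device("AA:BB:CC:DD:EE:02");
    return 0;
}
```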
-
Publication number: 20150081866
Abstract: A special-purpose processing system, a method of sharing special-purpose processing resources and a graphics processing system. In one embodiment, the special-purpose processing system includes: (1) a special-purpose processing resource and (2) a Representational State Transfer (ReST) application programming interface operable to process data using the special-purpose processing resource in response to stateless commands based on a standard protocol selected from the group consisting of: (2a) a standard network protocol and (2b) a standard database query protocol.
Type: Application
Filed: January 7, 2014
Publication date: March 19, 2015
Applicant: Nvidia Corporation
Inventors: Jonathan Cohen, Michael Houston, Frank Jargstorff, Eric Young, Roy Kim
-
Publication number: 20150082074
Abstract: A transmitter is configured to scale up the lower bandwidth delivered by a first processing element to match the higher bandwidth associated with an interconnect. A receiver is configured to scale down the higher bandwidth delivered by the interconnect to match the lower bandwidth associated with a second processing element. The first processing element and the second processing element may thus communicate with one another across the interconnect via the transmitter and the receiver, respectively, despite the bandwidth mismatch between those processing elements and the interconnect.
Type: Application
Filed: September 19, 2013
Publication date: March 19, 2015
Applicant: NVIDIA CORPORATION
Inventors: Marvin A. DENMAN, Dennis K. MA, Stephen David GLASER
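One plausible way to picture the rate matching (the widths are chosen for illustration only and are not from the publication): the transmitter packs several narrow words into one wider interconnect word, and the receiver unpacks them again on the far side.

```c
/* Pack four 8-bit lanes from the slower element into one 32-bit
 * interconnect word; unpack back to 8-bit lanes at the receiver. */
#include <stdint.h>
#include <stdio.h>

static uint32_t tx_pack(const uint8_t lanes[4])
{
    return (uint32_t)lanes[0] | ((uint32_t)lanes[1] << 8) |
           ((uint32_t)lanes[2] << 16) | ((uint32_t)lanes[3] << 24);
}

static void rx_unpack(uint32_t word, uint8_t lanes[4])
{
    for (int i = 0; i < 4; ++i)
        lanes[i] = (uint8_t)(word >> (8 * i));
}

int main(void)
{
    uint8_t in[4] = { 0x11, 0x22, 0x33, 0x44 }, out[4];
    uint32_t flit = tx_pack(in);       /* one wide transfer per four narrow ones */
    rx_unpack(flit, out);
    printf("flit=0x%08x out=%02x %02x %02x %02x\n",
           flit, out[0], out[1], out[2], out[3]);
    return 0;
}
```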
-
Publication number: 20150081761
Abstract: A method includes executing an instance of a process on a data processing device and another data processing device, and setting up a Personal Area Network (PAN) through registering or pairing the another data processing device with the data processing device based on an identifier thereof. The method also includes initiating transfer of a multimedia file from the data processing device to the another data processing device through the instance of the process executing on the data processing device, and transmitting metadata associated with the multimedia file from the data processing device to the another data processing device. Further, the method includes determining format compatibility of the multimedia file with the another data processing device thereat based on the metadata and a list of supported formats available in the another data processing device through a continued execution of the instance of the process on the another data processing device.
Type: Application
Filed: September 17, 2013
Publication date: March 19, 2015
Applicant: NVIDIA Corporation
Inventors: Shounak Santosh Deshpande, Rahul Ulhas Marathe
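A sketch of the compatibility decision, assuming hypothetical metadata fields (container and video codec) and a hard-coded supported-format list on the receiving device.

```c
/* Decide whether a file can be handled by the receiving device by comparing
 * its metadata, sent ahead of the file, against the supported-format lists. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct media_metadata { const char *container; const char *video_codec; };

static const char *supported_containers[] = { "mp4", "mkv" };
static const char *supported_codecs[]     = { "h264", "vp8" };

static bool in_list(const char *s, const char *list[], int n)
{
    for (int i = 0; i < n; ++i)
        if (strcmp(s, list[i]) == 0)
            return true;
    return false;
}

static bool is_compatible(const struct media_metadata *m)
{
    return in_list(m->container, supported_containers, 2) &&
           in_list(m->video_codec, supported_codecs, 2);
}

int main(void)
{
    struct media_metadata m = { "mkv", "h264" };
    printf("compatible: %s\n", is_compatible(&m) ? "yes" : "no");
    return 0;
}
```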
-
Publication number: 20150082075
Abstract: A transmitter is configured to scale up the lower bandwidth delivered by a first processing element to match the higher bandwidth associated with an interconnect. A receiver is configured to scale down the higher bandwidth delivered by the interconnect to match the lower bandwidth associated with a second processing element. The first processing element and the second processing element may thus communicate with one another across the interconnect via the transmitter and the receiver, respectively, despite the bandwidth mismatch between those processing elements and the interconnect.
Type: Application
Filed: September 19, 2013
Publication date: March 19, 2015
Applicant: NVIDIA CORPORATION
Inventors: Marvin A. DENMAN, Dennis K. MA, Stephen David GLASER
-
Publication number: 20150079948
Abstract: A modem for use at a terminal, the modem comprising: a first interface arranged to connect to a communications network; a second interface arranged to connect to a host processor on the terminal; and a processing unit, the processing unit configured to: detect that a call is to be established over the communications network; in response to said detection, perform a call setup procedure; determine if the call setup procedure has been successful or has failed due to failure of a security procedure; and in response to determining that the call setup procedure has failed due to failure of a security procedure, repeat said call setup procedure without indicating failure of the call setup procedure to a user of said terminal.
Type: Application
Filed: September 13, 2013
Publication date: March 19, 2015
Applicant: Nvidia Corporation
Inventors: Alexander May-Weymann, Timothy Rogers, Susan Iversen, Stephen Molloy
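A sketch of the silent-retry behavior, with a toy perform_call_setup standing in for the modem's actual call setup procedure.

```c
/* Retry call setup silently when it fails because of a security-procedure
 * failure; report any other failure to the user as usual. */
#include <stdio.h>

enum setup_result { SETUP_OK, SETUP_SECURITY_FAILURE, SETUP_OTHER_FAILURE };

/* Toy stand-in for call setup: fails once due to a security procedure. */
static enum setup_result perform_call_setup(void)
{
    static int attempt = 0;
    return attempt++ == 0 ? SETUP_SECURITY_FAILURE : SETUP_OK;
}

int main(void)
{
    for (;;) {
        enum setup_result r = perform_call_setup();
        if (r == SETUP_OK) {
            printf("call established\n");
            break;
        }
        if (r == SETUP_SECURITY_FAILURE) {
            /* Silent retry: no failure indication reaches the user. */
            printf("security procedure failed; retrying setup silently\n");
            continue;
        }
        printf("call setup failed\n");   /* other failures are reported */
        break;
    }
    return 0;
}
```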
-
Publication number: 20150081937
Abstract: Systems and devices configured to implement techniques for ensuring the completion of transactions while minimizing latency and power consumption are described. A device may be operably coupled to a bidirectional communications bus. A bidirectional communications bus may include a clock line and a data line. The device may be configured to determine if an initiated transaction corresponds to a device in a low power state. The device may pause the transaction. The device may replay portions of the transaction when the device is in an appropriate power state. The device may replay portions of the transaction using an override interface.
Type: Application
Filed: September 18, 2013
Publication date: March 19, 2015
Applicant: NVIDIA Corporation
Inventors: Kevin WONG, Craig ROSS, Thomas DEWEY
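A simplified software model of the pause-and-replay behavior; the real mechanism is a bus-level protocol, so the pending queue and wake callback here are illustrative only.

```c
/* If the addressed device is in a low-power state, hold the transaction
 * instead of failing it, then replay it once the device is awake again. */
#include <stdbool.h>
#include <stdio.h>

struct transaction { int target; int payload; };

static bool device_awake[2] = { true, false };
static struct transaction pending[8];
static int n_pending;

static void issue(struct transaction t)
{
    if (!device_awake[t.target]) {
        pending[n_pending++] = t;             /* pause: hold for later replay */
        printf("target %d asleep; transaction paused\n", t.target);
        return;
    }
    printf("target %d handled payload %d\n", t.target, t.payload);
}

static void on_device_wake(int target)
{
    device_awake[target] = true;
    for (int i = 0; i < n_pending; ++i)       /* replay the held transactions */
        if (pending[i].target == target)
            printf("replaying payload %d to target %d\n",
                   pending[i].payload, target);
    n_pending = 0;
}

int main(void)
{
    issue((struct transaction){ .target = 1, .payload = 42 });
    on_device_wake(1);
    return 0;
}
```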
-
Publication number: 20150081753
Abstract: One embodiment of the present invention includes a method for performing arithmetic operations on arbitrary width integers using fixed width elements. The method includes receiving a plurality of input operands, segmenting each input operand into multiple sectors, performing a plurality of multiply-add operations based on the multiple sectors to generate a plurality of multiply-add operation results, and combining the multiply-add operation results to generate a final result. One advantage of the disclosed embodiments is that, by using a common fused floating point multiply-add unit to perform arithmetic operations on integers of arbitrary width, the method avoids the area and power penalty of having additional dedicated integer units.
Type: Application
Filed: September 13, 2013
Publication date: March 19, 2015
Applicant: NVIDIA CORPORATION
Inventors: Srinivasan (Vasu) IYER, Michael Alan FETTERMAN, David Conrad TANNENBAUM
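A software sketch of the sector idea (the publication uses a fused floating-point multiply-add datapath; plain integer multiply-adds are used here for illustration): two 64-bit operands are split into 16-bit sectors, partial products are accumulated, and the results are recombined. The low 64 bits of the recombined product match the direct 64-bit multiplication, which serves as the check in main.

```c
/* Multiply two 64-bit integers using only 16-bit sectors and 32-bit
 * multiply-adds, then recombine the partial products with carries. */
#include <stdint.h>
#include <stdio.h>

#define LIMBS 4            /* 64 bits split into four 16-bit sectors */

static void split(uint64_t x, uint32_t limb[LIMBS])
{
    for (int i = 0; i < LIMBS; ++i)
        limb[i] = (uint32_t)((x >> (16 * i)) & 0xFFFFu);
}

/* Returns the low 64 bits of a*b computed sector-by-sector. */
static uint64_t mul_wide(uint64_t a, uint64_t b)
{
    uint32_t la[LIMBS], lb[LIMBS];
    uint64_t acc[2 * LIMBS] = { 0 };
    split(a, la);
    split(b, lb);

    for (int i = 0; i < LIMBS; ++i)
        for (int j = 0; j < LIMBS; ++j)
            acc[i + j] += (uint64_t)la[i] * lb[j];   /* 16x16 -> 32-bit product */

    uint64_t result = 0, carry = 0;
    for (int k = 0; k < LIMBS; ++k) {                /* keep the low 64 bits */
        uint64_t v = acc[k] + carry;
        result |= (v & 0xFFFFu) << (16 * k);
        carry = v >> 16;
    }
    return result;
}

int main(void)
{
    uint64_t a = 0x123456789ABCDEFULL, b = 0xFEDCBA987654321ULL;
    printf("sector product: %llx\n", (unsigned long long)mul_wide(a, b));
    printf("direct:         %llx\n", (unsigned long long)(a * b));
    return 0;
}
```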
-
Patent number: 8982140
Abstract: One embodiment of the present invention sets forth a technique for addressing data in a hierarchical graphics processing unit cluster. A hierarchical address is constructed based on the location of a storage circuit where a target unit of data resides. The hierarchical address comprises a level field indicating a hierarchical level for the unit of data and a node identifier that indicates which GPU within the GPU cluster currently stores the unit of data. The hierarchical address may further comprise one or more identifiers that indicate which storage circuit in a particular hierarchical level currently stores the unit of data. The hierarchical address is constructed and interpreted based on the level field. The technique advantageously enables programs executing within the GPU cluster to efficiently access data residing in other GPUs using the hierarchical address.
Type: Grant
Filed: September 23, 2011
Date of Patent: March 17, 2015
Assignee: NVIDIA Corporation
Inventor: William James Dally
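A sketch of one possible encoding of such a hierarchical address; the field widths, level values, and bit layout below are hypothetical and not taken from the patent.

```c
/* A level field selects how the remaining bits are interpreted and a node
 * ID names the owning GPU; the rest is an offset within that level. */
#include <stdint.h>
#include <stdio.h>

enum level { LEVEL_LOCAL_SM = 0, LEVEL_GPU = 1, LEVEL_PEER_GPU = 2 };

static uint64_t make_addr(enum level lvl, unsigned node, uint64_t offset)
{
    return ((uint64_t)lvl << 62) | ((uint64_t)(node & 0x3F) << 56) |
           (offset & 0x00FFFFFFFFFFFFFFULL);
}

static void decode(uint64_t addr)
{
    unsigned lvl  = (unsigned)(addr >> 62);
    unsigned node = (unsigned)((addr >> 56) & 0x3F);
    uint64_t off  = addr & 0x00FFFFFFFFFFFFFFULL;
    printf("level=%u node=%u offset=0x%llx\n", lvl, node, (unsigned long long)off);
}

int main(void)
{
    decode(make_addr(LEVEL_PEER_GPU, 5, 0x1000));   /* data held by peer GPU 5 */
    return 0;
}
```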
-
Patent number: 8984183
Abstract: One embodiment of the present invention sets forth a technique that enables the insertion of generated tasks into a scheduling pipeline of a multiple processor system, allowing a compute task that is being executed to dynamically generate a dynamic task and notify a scheduling unit of the multiple processor system without intervention by a CPU. A reflected notification signal is generated in response to a write request when data for the dynamic task is written to a queue. Additional reflected notification signals are generated for other events that occur during execution of a compute task, e.g., to invalidate cache entries storing data for the compute task and to enable scheduling of another compute task.
Type: Grant
Filed: December 16, 2011
Date of Patent: March 17, 2015
Assignee: Nvidia Corporation
Inventors: Timothy John Purcell, Lacky V. Shah, Jerome F. Duluk, Jr., Sean J. Treichler, Karim M. Abdalla, Philip Alexander Cuadra, Brian Pharris
-
Patent number: 8983223
Abstract: A method includes implementing, through a processor communicatively coupled to a memory and/or a hardware block, a Bilateral Filter (BF) including a spatial filter component and a range filter component, and implementing the spatial filter component with a low-complexity function to allow for focus on the range filter component. The method also includes determining, through the processor, filter tap value(s) related to the range filter component as a function of radiometric distance between a pixel of a video frame and/or an image and other pixels thereof based on a pre-computed corpus of data related to execution of an application in accordance with a filtering requirement of the pixel by the application. Further, the method includes constraining, through the processor, the filter tap value(s) to the form i×base based on the BF implementation, where i is an integer and base is a floating-point base.
Type: Grant
Filed: July 23, 2013
Date of Patent: March 17, 2015
Assignee: NVIDIA Corporation
Inventors: Niranjan Avadhanam, Prashant Sohani
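A sketch of the tap constraint, assuming a Gaussian range kernel and arbitrary sigma and base values for illustration: the raw range-filter tap is computed from the radiometric distance between two pixel values and then snapped to the nearest integer multiple of base.

```c
/* Compute a range-filter tap from radiometric distance, then quantize it
 * to the form i * base (i an integer, base a floating-point step). */
#include <math.h>
#include <stdio.h>

static double range_tap(double center, double neighbor, double sigma_r)
{
    double d = center - neighbor;                     /* radiometric distance */
    return exp(-(d * d) / (2.0 * sigma_r * sigma_r));
}

static double constrain_tap(double tap, double base)
{
    double i = floor(tap / base + 0.5);               /* nearest integer multiple */
    return i * base;
}

int main(void)
{
    double tap = range_tap(128.0, 140.0, 12.0);
    printf("raw tap = %f, constrained = %f\n", tap, constrain_tap(tap, 0.0625));
    return 0;
}
```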
-
Patent number: 8984167
Abstract: A client computing device transmits commands and/or data to a software application executing on a server computing device. The server computing device includes one or more graphics processing units (GPUs) that render frames of graphic data associated with the software application. For each frame, the one or more GPUs copy the frame to memory. A server engine also executing on the server computing device divides the frame into subframes, compresses each subframe, and transmits compressed subframes to the client computing device. The client computing device decompresses and reassembles the frame for display to an end-user of the client computing device.
Type: Grant
Filed: December 10, 2009
Date of Patent: March 17, 2015
Assignee: NVIDIA Corporation
Inventor: Franck Diard
-
Patent number: 8984498
Abstract: One embodiment of the present invention sets forth a technique for translating application programs written using a parallel programming model for execution on a multi-core graphics processing unit (GPU) for execution by a general purpose central processing unit (CPU). Portions of the application program that rely on specific features of the multi-core GPU are converted by a translator for execution by a general purpose CPU. The application program is partitioned into regions of synchronization independent instructions. The instructions are classified as convergent or divergent and divergent memory references that are shared between regions are replicated. Thread loops are inserted to ensure correct sharing of memory between various threads during execution by the general purpose CPU.
Type: Grant
Filed: March 31, 2009
Date of Patent: March 17, 2015
Assignee: Nvidia Corporation
Inventors: Vinod Grover, Bastiaan Joannes Matheus Aarts, Michael Murphy
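A simplified illustration of the thread-loop idea (not the translator's actual output): a per-thread kernel region is wrapped in an explicit loop over thread indices on the CPU, and the loop is split at the synchronization point so every thread finishes the first region before any thread starts the second.

```c
/* A GPU-style block of 8 "threads" translated into two thread loops,
 * split where the original kernel had a barrier. */
#include <stdio.h>

#define BLOCK_SIZE 8

static int shared_buf[BLOCK_SIZE];   /* stands in for shared (on-chip) memory */

static void kernel_on_cpu(const int *in, int *out)
{
    /* Region 1: everything before the barrier, replicated per thread. */
    for (int tid = 0; tid < BLOCK_SIZE; ++tid)
        shared_buf[tid] = in[tid] * 2;

    /* The original barrier becomes the boundary between thread loops. */

    /* Region 2: everything after the barrier. */
    for (int tid = 0; tid < BLOCK_SIZE; ++tid)
        out[tid] = shared_buf[(tid + 1) % BLOCK_SIZE];
}

int main(void)
{
    int in[BLOCK_SIZE] = { 1, 2, 3, 4, 5, 6, 7, 8 }, out[BLOCK_SIZE];
    kernel_on_cpu(in, out);
    for (int i = 0; i < BLOCK_SIZE; ++i)
        printf("%d ", out[i]);
    printf("\n");
    return 0;
}
```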
-
Patent number: 8984484
Abstract: A method includes continuously capturing, through an application executing on a data processing device, images of a desktop of the data processing device as a background process as part of a testing session on the data processing device in an active mode thereof. The method also includes encoding, through a processor of the data processing device, the captured images of the desktop as a video sequence, and providing a capability to a user of the data processing device and/or another data processing device to detect a fault event related to the testing session based on access to the encoded video sequence.
Type: Grant
Filed: November 27, 2012
Date of Patent: March 17, 2015
Assignee: NVIDIA Corporation
Inventor: Shounak Santosh Deshpande
-
Patent number: 8984486
Abstract: A method, system, and computer-program product are provided for automatically performing stability testing on device firmware. The method includes the steps of copying a binary file corresponding to a version of a firmware to one or more nodes that each include a testbench, causing the one or more nodes to perform tests utilizing the version of the firmware, and determining whether a new build of the firmware is available. If the new build is available, then the steps include copying a second binary file corresponding to the new build to the one or more nodes and causing the one or more nodes to perform the tests utilizing the new build. However, if the new build is not available, then the steps include causing the one or more nodes to perform one or more further iterations of the tests utilizing the version of the firmware.
Type: Grant
Filed: July 12, 2013
Date of Patent: March 17, 2015
Assignee: NVIDIA Corporation
Inventor: Shiva Prasad Nayak
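A sketch of the test loop, with hypothetical stand-ins for the build system and testbench nodes: tests keep running on the current firmware build, and a new build is copied out and picked up whenever one becomes available.

```c
/* Run stability tests continuously, switching to a new firmware build
 * whenever the build system publishes one. */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the build system and testbench nodes. */
static int latest_build = 1;

static bool new_build_available(int current) { return latest_build > current; }
static void copy_to_nodes(int build)  { printf("copying build %d to nodes\n", build); }
static void run_tests(int build)      { printf("running tests on build %d\n", build); }

int main(void)
{
    int build = latest_build;
    copy_to_nodes(build);

    for (int iteration = 0; iteration < 5; ++iteration) {   /* bounded for the demo */
        run_tests(build);
        if (new_build_available(build)) {
            build = latest_build;          /* pick up the new build */
            copy_to_nodes(build);
        }
        if (iteration == 2)
            latest_build = 2;              /* simulate a new build appearing */
    }
    return 0;
}
```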