Patents by Inventor Franck R. Diard

Franck R. Diard has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10157492
    Abstract: One embodiment of the present invention sets forth a method for pre-computing Z-values using an IGPU and, subsequently, conveying these Z-values to a DGPU. The graphics driver partitions the display into rectangular M-by-N tiles of pixels. For each tile, the graphics driver generates a quad geometry that encompasses the corresponding pixels. For each image frame, the graphics driver configures the IGPU to generate and down-sample a Z-buffer, creating a coarse Z-texture that contains a Z-value for each tile. The graphics driver transfers the coarse Z-texture to the system memory and configures the DGPU to apply the coarse Z-texture to the quad geometries, thereby generating a coarse Z-buffer in which the M-by-N pixels included in each tile are assigned the Z-value for the particular tile. Among other things, this technique enables the IGPU to pre-compute Z-values for the DGPU without straining the system memory bandwidth or defeating the Z-buffer compression techniques used by the DGPU.
    Type: Grant
    Filed: October 2, 2008
    Date of Patent: December 18, 2018
    Assignee: NVIDIA Corporation
    Inventor: Franck R. Diard
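
The abstract above describes down-sampling a full-resolution Z-buffer into one Z-value per M-by-N tile and then replicating each tile's value over its pixels on the DGPU side. Below is a minimal NumPy sketch of those two steps under illustrative assumptions: the tile size, the array names, and the conservative max reduction per tile are choices made here for clarity, not details taken from the patent.

```python
import numpy as np

def coarse_z_texture(z_buffer: np.ndarray, tile_w: int, tile_h: int) -> np.ndarray:
    """Down-sample a full-resolution Z-buffer into one Z-value per tile.

    Uses a conservative max reduction so a tile's coarse value never claims
    the tile is closer than its farthest pixel (an illustrative choice).
    """
    h, w = z_buffer.shape
    assert h % tile_h == 0 and w % tile_w == 0, "display must tile evenly"
    tiles = z_buffer.reshape(h // tile_h, tile_h, w // tile_w, tile_w)
    return tiles.max(axis=(1, 3))

def expand_to_coarse_z_buffer(coarse: np.ndarray, tile_w: int, tile_h: int) -> np.ndarray:
    """Replicate each tile's Z-value over its M-by-N pixels, mimicking the
    DGPU rendering one quad per tile with the coarse Z-texture applied."""
    return np.repeat(np.repeat(coarse, tile_h, axis=0), tile_w, axis=1)

if __name__ == "__main__":
    z = np.random.rand(240, 320).astype(np.float32)     # stand-in IGPU Z-buffer
    coarse = coarse_z_texture(z, tile_w=16, tile_h=16)   # one value per 16x16 tile
    coarse_z = expand_to_coarse_z_buffer(coarse, 16, 16)
    assert coarse_z.shape == z.shape
```
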
  • Patent number: 10118095
    Abstract: One embodiment of the invention sets forth a method that includes receiving a request from a client device to launch an application program for execution on a server device, where the application program is configured to operate in a full-screen display mode, and, in response, creating an execution environment for the application program that disables the full-screen display mode. Within the execution environment, the application program is configured to generate the rendered image data for display on the client device. With the disclosed approach, application programs that are configured to execute on an application server computer system in a full-screen display mode can be executed through an execution environment that includes a shim layer configured to disable the full-screen display mode and transmit the rendered image data to a client device for display.
    Type: Grant
    Filed: December 14, 2012
    Date of Patent: November 6, 2018
    Assignee: NVIDIA Corporation
    Inventor: Franck R. Diard
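
As a rough illustration of the shim-layer idea in the entry above, the sketch below wraps a window-creation call so that full-screen requests are silently downgraded and rendered frames are forwarded to a client transport. Every name here (GameWindowAPI, WindowShim, send_frame) is hypothetical; none comes from the patent or from any real SDK.

```python
class GameWindowAPI:
    """Stand-in for the windowing call an application would normally make."""
    def create_window(self, width: int, height: int, fullscreen: bool):
        return {"width": width, "height": height, "fullscreen": fullscreen}

class WindowShim:
    """Wraps the real API and disables full-screen mode so the application
    renders off-screen on the server instead of taking over its display."""
    def __init__(self, real_api: GameWindowAPI, client):
        self._real = real_api
        self._client = client

    def create_window(self, width, height, fullscreen=True):
        # Ignore the application's full-screen request.
        return self._real.create_window(width, height, fullscreen=False)

    def present(self, frame_bytes: bytes):
        # Instead of scanning out locally, forward the rendered frame.
        self._client.send_frame(frame_bytes)

class FakeClient:
    def send_frame(self, frame_bytes: bytes):
        print(f"sent {len(frame_bytes)} bytes to client")

if __name__ == "__main__":
    shim = WindowShim(GameWindowAPI(), FakeClient())
    window = shim.create_window(1920, 1080, fullscreen=True)
    assert window["fullscreen"] is False
    shim.present(b"\x00" * 64)
```
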
  • Patent number: 10116943
    Abstract: One embodiment of the present invention sets forth a technique for adaptively compressing video frames. The technique includes encoding a first plurality of video frames based on a first video compression algorithm to generate first encoded video frames and transmitting the first encoded video frames to a client device. The technique further includes receiving a user input event, switching from the first video compression algorithm to a second video compression algorithm in response to the user input event, encoding a second plurality of video frames based on the second video compression algorithm to generate second encoded video frames, and transmitting the second encoded video frames to the client device.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: October 30, 2018
    Assignee: NVIDIA Corporation
    Inventor: Franck R. Diard
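
The adaptive-compression entry above boils down to choosing between two codecs based on recent user input. A toy sketch of that switch is below; the two "codecs" are stand-ins (zlib at different effort levels) and the interactive-window length is arbitrary, since the patent does not name specific algorithms or thresholds.

```python
import time
import zlib

class AdaptiveEncoder:
    """Encode frames with a high-ratio codec until user input arrives,
    then switch to a faster codec for a short interactive window."""
    def __init__(self, interactive_seconds: float = 0.5):
        self._interactive_until = 0.0
        self._window = interactive_seconds

    def on_user_input(self):
        self._interactive_until = time.monotonic() + self._window

    def encode(self, frame: bytes) -> bytes:
        if time.monotonic() < self._interactive_until:
            return zlib.compress(frame, 1)   # fast, lower compression ratio
        return zlib.compress(frame, 9)       # slower, higher compression ratio

if __name__ == "__main__":
    enc = AdaptiveEncoder()
    frame = bytes(64 * 64)            # stand-in for a rendered frame
    print(len(enc.encode(frame)))     # high-ratio path
    enc.on_user_input()
    print(len(enc.encode(frame)))     # fast path right after input
```
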
  • Patent number: 9727521
    Abstract: Techniques are disclosed for peer-to-peer data transfers where a source device receives a request to read data words from a target device. The source device creates a first and second read command for reading a first portion and a second portion of a plurality of data words from the target device, respectively. The source device transmits the first read command to the target device, and, before a first read operation associated with the first read command is complete, transmits the second read command to the target device. The first and second portions of the plurality of data words are stored in a first and a second portion of a buffer memory, respectively. Advantageously, an arbitrary number of multiple read operations may be in progress at a given time without using multiple peer-to-peer memory buffers. Performance for large data block transfers is improved without consuming peer-to-peer memory buffers needed by other peer GPUs.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: August 8, 2017
    Assignee: NVIDIA Corporation
    Inventors: Dennis K. Ma, Karan Gupta, Lei Tian, Franck R. Diard, Praveen Jain, Wei-Je Huang, Atul Kalambur
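
The key point of the entry above is issuing the second read command before the first read completes, with each half landing in its own region of a single staging buffer. The sketch below mimics that overlap with threads standing in for the peer-to-peer read engines; the function names and timing are illustrative only.

```python
import threading
import time

def read_from_target(target: bytes, offset: int, length: int,
                     buf: bytearray, buf_offset: int):
    time.sleep(0.01)   # pretend this is a slow peer-to-peer read
    buf[buf_offset:buf_offset + length] = target[offset:offset + length]

def pipelined_read(target: bytes) -> bytes:
    half = len(target) // 2
    staging = bytearray(len(target))            # one buffer, two portions
    first = threading.Thread(target=read_from_target,
                             args=(target, 0, half, staging, 0))
    second = threading.Thread(target=read_from_target,
                              args=(target, half, len(target) - half, staging, half))
    first.start()
    second.start()      # issued before the first read completes
    first.join()
    second.join()
    return bytes(staging)

if __name__ == "__main__":
    data = bytes(range(256)) * 4
    assert pipelined_read(data) == data
```
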
  • Patent number: 9536275
    Abstract: A system and method use the capabilities of a geometry shader unit within a multi-threaded graphics processor to implement algorithms with variable input and output.
    Type: Grant
    Filed: November 2, 2007
    Date of Patent: January 3, 2017
    Assignee: NVIDIA Corporation
    Inventor: Franck R. Diard
  • Patent number: 9489767
    Abstract: One embodiment of the present invention sets forth a technique to perform fine-grained rendering predication using an IGPU and a DGPU. A graphics driver divides a 3D object into batches of triangles. The IGPU processes each batch of triangles through a modified rendering pipeline to determine if the batch is culled. The IGPU writes bits into a bitstream corresponding to the visibility of the batches. The DGPU reads bits from the bitstream and performs full-blown rendering, including shading, but only on the batches of triangles whose bit indicates that the batch is visible. Advantageously, this approach to rendering predication provides fine-grained culling without adding unnecessary overhead, thereby optimizing both hardware resources and performance.
    Type: Grant
    Filed: December 13, 2007
    Date of Patent: November 8, 2016
    Assignee: NVIDIA Corporation
    Inventors: Cass W. Everitt, Franck R. Diard
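
The rendering-predication entry above reduces to a simple handshake: a cheap pass marks each batch of triangles visible or culled in a bitstream, and the full renderer shades only the batches whose bit is set. The sketch below shows that handshake; the bounding-volume test and all function names are illustrative stand-ins for the modified IGPU pipeline, not the patented implementation.

```python
def visibility_bits(batches, is_visible):
    """IGPU side: one visibility bit per batch of triangles."""
    return [is_visible(batch) for batch in batches]

def render_predicated(batches, bits, render_batch):
    """DGPU side: full rendering (including shading) only where the bit is set."""
    for batch, visible in zip(batches, bits):
        if visible:
            render_batch(batch)

if __name__ == "__main__":
    # Each batch is a list of (x, y, z) vertices; visibility is a toy test
    # against a [-1, 1] view volume.
    batches = [
        [(0.0, 0.0, 0.5), (0.2, 0.1, 0.5), (0.1, 0.3, 0.5)],   # on screen
        [(5.0, 5.0, 0.5), (5.2, 5.1, 0.5), (5.1, 5.3, 0.5)],   # far off screen
    ]
    in_view = lambda b: any(all(-1.0 <= c <= 1.0 for c in v) for v in b)
    bits = visibility_bits(batches, in_view)
    render_predicated(batches, bits, lambda b: print("shading batch", b[0]))
```
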
  • Patent number: 9324174
    Abstract: Circuits, methods, and apparatus are provided for multiple-graphics-processor systems in which specific graphics processors can be instructed not to perform certain rendering operations while continuing to receive state updates, where the state updates are included in the rendering commands for those rendering operations. One embodiment provides commands instructing a graphics processor to start or stop rendering geometries. These commands can be directed to one or more specific processors by use of a set-subsystem device mask.
    Type: Grant
    Filed: November 24, 2009
    Date of Patent: April 26, 2016
    Assignee: NVIDIA Corporation
    Inventor: Franck R. Diard
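
The entry above hinges on two ideas: a device mask that addresses commands to specific GPUs, and "stopped" GPUs that still consume state updates embedded in the stream. The sketch below models a command stream interpreted per GPU under those rules; the command names and mask encoding are illustrative, not the hardware format.

```python
from dataclasses import dataclass

@dataclass
class Command:
    kind: str           # "set_mask", "state", "draw", "start_render", "stop_render"
    payload: object = None

def execute(stream, gpu_id: int):
    mask = 0xFFFFFFFF   # by default every GPU listens
    rendering = True
    state = {}
    for cmd in stream:
        if cmd.kind == "set_mask":
            mask = cmd.payload
            continue
        if not (mask >> gpu_id) & 1:
            continue                      # command not addressed to this GPU
        if cmd.kind == "stop_render":
            rendering = False
        elif cmd.kind == "start_render":
            rendering = True
        elif cmd.kind == "state":
            state.update(cmd.payload)     # state updates are always applied
        elif cmd.kind == "draw" and rendering:
            print(f"GPU{gpu_id} draws with state {state}")
    return state

if __name__ == "__main__":
    stream = [
        Command("set_mask", 0b10),        # address GPU 1 only
        Command("stop_render"),
        Command("set_mask", 0b11),        # back to both GPUs
        Command("state", {"blend": "add"}),
        Command("draw"),                  # GPU 0 draws; GPU 1 only keeps state
    ]
    for gpu in (0, 1):
        execute(stream, gpu)
```
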
  • Patent number: 9159156
    Abstract: One embodiment of the present invention sets forth a technique to perform fine-grained rendering predication using an IGPU. A graphics driver divides a 3D object into batches of triangles. The IGPU processes each batch of triangles through a modified rendering pipeline to determine if the batch is culled. The IGPU writes bits into a bitstream corresponding to the visibility of the batches. Advantageously, this approach to rendering predication provides fine-grained culling without adding unnecessary overhead, thereby optimizing both hardware resources and performance.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: October 13, 2015
    Assignee: NVIDIA Corporation
    Inventors: Cass W. Everitt, Franck R. Diard
  • Publication number: 20150103880
    Abstract: One embodiment of the present invention sets forth a technique for adaptively compressing video frames. The technique includes encoding a first plurality of video frames based on a first video compression algorithm to generate first encoded video frames and transmitting the first encoded video frames to a client device. The technique further includes receiving a user input event, switching from the first video compression algorithm to a second video compression algorithm in response to the user input event, encoding a second plurality of video frames based on the second video compression algorithm to generate second encoded video frames, and transmitting the second encoded video frames to the client device.
    Type: Application
    Filed: October 16, 2013
    Publication date: April 16, 2015
    Applicant: NVIDIA Corporation
    Inventor: Franck R. Diard
  • Patent number: 9001134
    Abstract: Method, apparatuses, and systems are presented for processing a sequence of images for display using a display device involving operating a plurality of graphics devices, including at least one first graphics device that processes certain ones of the sequence of images, including a first image, and at least one second graphics device that processes certain other ones of the sequence of images, including a second image, delaying processing of the second image by the at least one second graphics device, by a specified duration, relative to processing of the first image by the at least one first graphics device, to stagger pixel data output for the first image and pixel data output for the second image, and selectively providing output from the at least one first graphics device and the at least one second graphics device to the display device.
    Type: Grant
    Filed: April 3, 2009
    Date of Patent: April 7, 2015
    Assignee: NVIDIA Corporation
    Inventors: Franck R. Diard, Wayne Douglas Young, Philip Browning Johnson
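
To make the staggering in the entry above concrete, the sketch below lays out a toy alternate-frame-rendering timeline in which odd frames start a fixed offset after the even frame they follow, so the two devices' pixel output is interleaved rather than bunched together. The frame time and stagger values are arbitrary illustration values.

```python
def schedule(num_frames: int, frame_time: float, stagger: float):
    """Return (frame, gpu, start_time) tuples for alternate-frame rendering,
    delaying the second GPU's work by 'stagger' seconds."""
    events = []
    for frame in range(num_frames):
        gpu = frame % 2
        base = (frame // 2) * frame_time
        start = base if gpu == 0 else base + stagger
        events.append((frame, gpu, start))
    return events

if __name__ == "__main__":
    for frame, gpu, start in schedule(6, frame_time=1 / 30, stagger=1 / 60):
        print(f"frame {frame}: GPU{gpu} starts at {start:.3f}s")
```
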
  • Patent number: 8938127
    Abstract: Rendered image data is encoded by a server computing device and transmitted to a remote client device that executes an interactive application program. The client device decodes and displays the image data and, when the user interacts with the application program, the client device provides input control signals to the server computing device. When input control signals are received by the server, the latency incurred for encoding and/or decoding the image data is reduced. Therefore, the user does not experience inconsistencies in the frame rate of images displayed on the client when the user interacts with the application program. The reduction in latency is achieved by dynamically switching from a hardware implemented encoding technique to a software implemented encoding technique. Latency may also be reduced by dynamically switching from a hardware implemented decoding technique to a software implemented decoding technique.
    Type: Grant
    Filed: September 18, 2012
    Date of Patent: January 20, 2015
    Assignee: NVIDIA Corporation
    Inventor: Franck R. Diard
  • Patent number: 8854380
    Abstract: One embodiment of the present invention sets forth a technique for displaying high-resolution images using multiple graphics processing units (GPUs). The graphics driver is configured to present one virtual display device, simulating a high-resolution mosaic display surface, to the operating system and the application programs. The graphics driver is also configured to partition the display surface amongst the GPUs and transmit commands and data to the local memory associated with the first GPU. A video bridge automatically broadcasts this information to the local memories associated with the remaining GPUs. Each GPU renders and displays only the partition of the display surface assigned to that particular GPU, and the GPUs are synchronized to ensure the continuity of the displayed images. This technique allows the system to display higher resolution images than the system hardware would otherwise support, transparently to the operating system and the application programs.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: October 7, 2014
    Assignee: NVIDIA Corporation
    Inventors: Franck R. Diard, Ian M. Williams, Eric Boucher
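
A small sketch of the partitioning step described above: one virtual high-resolution surface is split into per-GPU rectangles, and each GPU would render and scan out only its own partition. The grid layout and resolutions are illustrative assumptions, not values from the patent.

```python
def partition_mosaic(width: int, height: int, rows: int, cols: int):
    """Return one (x, y, w, h) rectangle per GPU for a rows x cols mosaic."""
    tile_w, tile_h = width // cols, height // rows
    return [(c * tile_w, r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(cols)]

if __name__ == "__main__":
    # A 2x2 mosaic of 1920x1080 panels exposed to the OS as one 3840x2160 surface.
    for gpu, (x, y, w, h) in enumerate(partition_mosaic(3840, 2160, 2, 2)):
        print(f"GPU{gpu} renders the {w}x{h} partition at ({x}, {y})")
```
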
  • Patent number: 8773445
    Abstract: One embodiment of the present invention sets forth a method, which includes the steps of generating a first rendered image associated with a first application, independently generating a second rendered image associated with a second application, applying a first set of blending weights to the first rendered image to establish a first weighted image, applying a second set of blending weights to the second rendered image to establish a second weighted image, and blending the first weighted image and the second weighted image before scanning out a blended result to a first display device.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: July 8, 2014
    Assignee: NVIDIA Corporation
    Inventor: Franck R. Diard
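
The blend described in the entry above is a straightforward weighted sum: each independently rendered image is scaled by its own set of blending weights and the results are combined before scan-out. The NumPy sketch below shows that arithmetic, assuming per-pixel weights and 8-bit color purely for illustration.

```python
import numpy as np

def blend(img_a: np.ndarray, w_a: np.ndarray,
          img_b: np.ndarray, w_b: np.ndarray) -> np.ndarray:
    """Apply each image's blending weights, sum, and clamp to the valid range."""
    weighted_a = img_a.astype(np.float32) * w_a[..., None]
    weighted_b = img_b.astype(np.float32) * w_b[..., None]
    return np.clip(np.rint(weighted_a + weighted_b), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    h, w = 4, 4
    app1 = np.full((h, w, 3), 200, dtype=np.uint8)   # e.g. a game frame
    app2 = np.full((h, w, 3), 40, dtype=np.uint8)    # e.g. a second application's UI
    w1 = np.full((h, w), 0.7, dtype=np.float32)      # first set of blending weights
    w2 = np.full((h, w), 0.3, dtype=np.float32)      # second set of blending weights
    out = blend(app1, w1, app2, w2)
    print(out[0, 0])                                 # -> [152 152 152]
```
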
  • Publication number: 20140171190
    Abstract: One embodiment of the invention sets forth a method that includes receiving a request from a client device to launch an application program for execution on a server device, where the application program is configured to operate in a full-screen display mode, and, in response, creating an execution environment for the application program that disables the full-screen display mode. Within the execution environment, the application program is configured to generate the rendered image data for display on the client device. With the disclosed approach, application programs that are configured to execute on an application server computer system in a full-screen display mode can be executed through an execution environment that includes a shim layer configured to disable the full-screen display mode and transmit the rendered image data to a client device for display.
    Type: Application
    Filed: December 14, 2012
    Publication date: June 19, 2014
    Applicant: NVIDIA Corporation
    Inventor: Franck R. Diard
  • Patent number: 8736617
    Abstract: A method of displaying graphics data is described. The method involves accessing the graphics data in a memory subsystem associated with one graphics subsystem. The graphics data is transmitted to a second graphics subsystem, where it is displayed on a monitor coupled to the second graphics subsystem.
    Type: Grant
    Filed: August 4, 2008
    Date of Patent: May 27, 2014
    Assignee: NVIDIA Corporation
    Inventors: Stephen Lew, Bruce R. Intihar, Abraham B. de Waal, David G. Reed, Tony Tamasi, David Wyatt, Franck R. Diard, Brad Simeral
  • Publication number: 20140078148
    Abstract: One embodiment of the present invention sets forth a technique for displaying high-resolution images using multiple graphics processing units (GPUs). The graphics driver is configured to present one virtual display device, simulating a high-resolution mosaic display surface, to the operating system and the application programs. The graphics driver is also configured to partition the display surface amongst the GPUs and transmit commands and data to the local memory associated with the first GPU. A video bridge automatically broadcasts this information to the local memories associated with the remaining GPUs. Each GPU renders and displays only the partition of the display surface assigned to that particular GPU, and the GPUs are synchronized to ensure the continuity of the displayed images. This technique allows the system to display higher resolution images than the system hardware would otherwise support, transparently to the operating system and the application programs.
    Type: Application
    Filed: September 13, 2013
    Publication date: March 20, 2014
    Applicant: NVIDIA Corporation
    Inventors: Franck R. Diard, Ian M. Williams, Eric Boucher
  • Publication number: 20140079327
    Abstract: Rendered image data is encoded by a server computing device and transmitted to a remote client device that executes an interactive application program. The client device decodes and displays the image data and, when the user interacts with the application program, the client device provides input control signals to the server computing device. When input control signals are received by the server, the latency incurred for encoding and/or decoding the image data is reduced. Therefore, the user does not experience inconsistencies in the frame rate of images displayed on the client when the user interacts with the application program. The reduction in latency is achieved by dynamically switching from a hardware implemented encoding technique to a software implemented encoding technique. Latency may also be reduced by dynamically switching from a hardware implemented decoding technique to a software implemented decoding technique.
    Type: Application
    Filed: September 18, 2012
    Publication date: March 20, 2014
    Applicant: NVIDIA Corporation
    Inventor: Franck R. Diard
  • Publication number: 20140082120
    Abstract: Techniques are disclosed for peer-to-peer data transfers where a source device receives a request to read data words from a target device. The source device creates a first and second read command for reading a first portion and a second portion of a plurality of data words from the target device, respectively. The source device transmits the first read command to the target device, and, before a first read operation associated with the first read command is complete, transmits the second read command to the target device. The first and second portions of the plurality of data words are stored in a first and a second portion of a buffer memory, respectively. Advantageously, an arbitrary number of multiple read operations may be in progress at a given time without using multiple peer-to-peer memory buffers. Performance for large data block transfers is improved without consuming peer-to-peer memory buffers needed by other peer GPUs.
    Type: Application
    Filed: September 14, 2012
    Publication date: March 20, 2014
    Inventors: Dennis K. Ma, Karan Gupta, Lei Tian, Franck R. Diard, Praveen Jain, Wei-Je Huang, Atul Kalambur
  • Publication number: 20130311548
    Abstract: User inputs are received from end user devices. The user inputs are associated with applications executing in parallel on a computer system. Responsive to the user inputs, data is generated using a graphics processing unit (GPU) configured as multiple virtual GPUs that are concurrently utilized by the applications. The data is then directed to the proper end user devices for display.
    Type: Application
    Filed: December 26, 2012
    Publication date: November 21, 2013
    Applicant: NVIDIA Corporation
    Inventors: Jen-Hsun Huang, Franck R. Diard, Andrew Currid
  • Publication number: 20130300754
    Abstract: One embodiment of the present invention sets forth a technique to perform fine-grained rendering predication using an IGPU. A graphics driver divides a 3D object into batches of triangles. The IGPU processes each batch of triangles through a modified rendering pipeline to determine if the batch is culled. The IGPU writes bits into a bitstream corresponding to the visibility of the batches. Advantageously, this approach to rendering predication provides fine-grained culling without adding unnecessary overhead, thereby optimizing both hardware resources and performance.
    Type: Application
    Filed: May 14, 2012
    Publication date: November 14, 2013
    Inventors: Cass W. Everitt, Franck R. Diard