Patents by Inventor Gernot Schaufler

Gernot Schaufler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8786618
    Abstract: One embodiment of the present invention sets forth a technique for configuring a graphics processing pipeline (GPP) to process data according to one or more shader programs. The method includes receiving a plurality of pointers, where each pointer references a different shader program header (SPH) included in a plurality of SPHs, and each SPH is associated with a different shader program that executes within the GPP. For each SPH included in the plurality of SPHs, one or more GPP configuration parameters included in the SPH are identified, and the GPP is adjusted based on the one or more GPP configuration parameters.
    Type: Grant
    Filed: October 6, 2010
    Date of Patent: July 22, 2014
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Jesse David Hall, Patrick R. Brown, Gernot Schaufler, Mark D. Stadler
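A minimal C++ sketch of the configuration step described in the entry above (patent 8786618). The ShaderProgramHeader and PipelineConfig types, their fields, and the configurePipeline helper are illustrative assumptions; the patent does not specify the header layout or which configuration parameters an SPH actually carries.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative stand-in for a shader program header (SPH); the real header
// layout is hardware-defined and not given in the abstract.
struct ShaderProgramHeader {
    std::uint32_t stage;           // pipeline stage the associated shader occupies
    std::uint32_t attributeCount;  // attributes the shader reads or writes
    bool          needsLateDepth;  // example per-shader configuration flag
};

// Hypothetical aggregate pipeline state adjusted from the SPH parameters.
struct PipelineConfig {
    std::uint32_t maxAttributes = 0;
    bool          earlyDepthTest = true;
};

// For each SPH referenced by a pointer, identify its configuration
// parameters and adjust the pipeline configuration accordingly.
PipelineConfig configurePipeline(const std::vector<const ShaderProgramHeader*>& sphs) {
    PipelineConfig cfg;
    for (const ShaderProgramHeader* sph : sphs) {
        cfg.maxAttributes = std::max(cfg.maxAttributes, sph->attributeCount);
        if (sph->needsLateDepth) cfg.earlyDepthTest = false;
    }
    return cfg;
}

int main() {
    ShaderProgramHeader vertexSph{0, 8, false};
    ShaderProgramHeader pixelSph{4, 16, true};
    PipelineConfig cfg = configurePipeline({&vertexSph, &pixelSph});
    std::cout << "max attributes: " << cfg.maxAttributes
              << ", early depth test: " << std::boolalpha << cfg.earlyDepthTest << "\n";
}
```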
  • Patent number: 8760460
    Abstract: One embodiment of the present invention sets forth a technique for using a shared memory to store hardware-managed virtual buffers. A circular buffer is allocated within a general-purpose multi-use cache for storage of primitive attribute data rather than having a dedicated buffer for that storage. The general-purpose multi-use cache is also configured to store other graphics data, since the space requirement for primitive attribute data storage is highly variable, depending on the number of attributes and the size of primitives. Entries in the circular buffer are allocated as needed and are released and invalidated after the primitive attribute data has been consumed. An address of the circular buffer entry is transmitted along with primitive descriptors from object-space processing to the distributed processing in screen-space.
    Type: Grant
    Filed: May 4, 2010
    Date of Patent: June 24, 2014
    Assignee: NVIDIA Corporation
    Inventors: Emmett M. Kilgariff, Steven E. Molnar, Sean J. Treichler, Johnny S. Rhoades, Gernot Schaufler, Dale L. Kirkland, Cynthia Ann Edgeworth Allison, Karl M. Wurstner, Timothy John Purcell
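A minimal C++ sketch of the circular-buffer idea from patent 8760460 above, modeled purely in software. The CircularAttributeBuffer class, its entry count, and in-order retirement are assumptions made for illustration; the actual buffer lives in a hardware-managed multi-use cache and is not described at this level of detail in the abstract.

```cpp
#include <cstddef>
#include <iostream>
#include <optional>

// Software model of a circular buffer carved out of a shared, multi-use
// cache. Entry contents and sizing are placeholders, not the hardware layout.
class CircularAttributeBuffer {
public:
    explicit CircularAttributeBuffer(std::size_t entries) : capacity_(entries) {}

    // Allocate the next entry if space remains. The returned index is what
    // would travel with the primitive descriptors to screen-space processing.
    std::optional<std::size_t> allocate() {
        if (count_ == capacity_) return std::nullopt;  // full: producer waits
        std::size_t idx = head_;
        head_ = (head_ + 1) % capacity_;
        ++count_;
        return idx;
    }

    // Release (and conceptually invalidate) the oldest entry once its
    // primitive attribute data has been consumed; this simplified model
    // retires entries in allocation order.
    void releaseOldest() {
        if (count_ == 0) return;
        tail_ = (tail_ + 1) % capacity_;
        --count_;
    }

private:
    std::size_t capacity_;
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
};

int main() {
    CircularAttributeBuffer buffer(4);
    if (auto entry = buffer.allocate()) {   // primitive attributes written here
        std::cout << "allocated entry " << *entry << "\n";
        buffer.releaseOldest();             // consumers finished reading
    }
}
```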
  • Patent number: 8619087
    Abstract: One embodiment of the present invention sets forth a technique for reducing the amount of memory required to store vertex data processed within a processing pipeline that includes a plurality of shading engines. The method includes determining a first active shading engine and a second active shading engine included within the processing pipeline, wherein the second active shading engine receives vertex data output by the first active shading engine. An output map is received and indicates one or more attributes that are included in the vertex data and output by the first active shading engine. An input map is received and indicates one or more attributes that are included in the vertex data and received by the second active shading engine from the first active shading engine.
    Type: Grant
    Filed: September 30, 2010
    Date of Patent: December 31, 2013
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Gernot Schaufler
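A minimal C++ sketch of the output-map/input-map intersection implied by patent 8619087 above. Modeling the maps as fixed-width bitmasks and the attributesToStore helper are assumptions; the abstract does not give the map encoding.

```cpp
#include <bitset>
#include <cstddef>
#include <iostream>

// The output and input maps are modeled as bitmasks over a fixed attribute
// space; the width and encoding are illustrative, not the hardware format.
constexpr std::size_t kMaxAttributes = 32;
using AttributeMap = std::bitset<kMaxAttributes>;

// Only attributes that the upstream shading engine writes AND the downstream
// engine reads need to be kept in memory between the two stages.
AttributeMap attributesToStore(const AttributeMap& outputMap,
                               const AttributeMap& inputMap) {
    return outputMap & inputMap;
}

int main() {
    AttributeMap outputMap("00001111");  // upstream engine writes attributes 0-3
    AttributeMap inputMap ("00000110");  // downstream engine reads attributes 1-2
    std::cout << "store " << attributesToStore(outputMap, inputMap).count()
              << " of " << outputMap.count() << " written attributes\n";
}
```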
  • Publication number: 20130132711
    Abstract: One embodiment of the present invention sets forth a technique for instruction-level and compute thread array granularity execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline: no new instructions are issued, and the context state is unloaded from the processing pipeline. When preemption is performed at a compute thread array boundary, the amount of context state to be stored is reduced because execution units within the processing pipeline complete execution of in-flight instructions and become idle. If the amount of time needed to complete execution of the in-flight instructions exceeds a threshold, then the preemption may dynamically change to be performed at the instruction level instead of at compute thread array granularity.
    Type: Application
    Filed: November 22, 2011
    Publication date: May 23, 2013
    Inventors: Lacky V. Shah, Gregory Scott Palmer, Gernot Schaufler, Samuel H. Duncan, Philip Browning Johnson, Shirish Gadre, Timothy John Purcell
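A minimal C++ sketch of the dynamic preemption-granularity fallback described in publication 20130132711 above. The preempt function, the pipelineIdle stub, and the polling loop are illustrative assumptions standing in for hardware behavior; they are not a real driver or GPU API.

```cpp
#include <chrono>
#include <iostream>

// The two preemption granularities described in the abstract.
enum class PreemptionLevel { ThreadArrayBoundary, InstructionLevel };

// Stub standing in for the hardware: reports whether all in-flight
// instructions have retired. Here it simply drains after a few polls.
bool pipelineIdle() {
    static int pollsRemaining = 3;
    return --pollsRemaining <= 0;
}

// Prefer preemption at a compute thread array boundary (less context state to
// save, because execution units finish in-flight work and go idle). If the
// drain takes longer than the threshold, fall back to instruction-level
// preemption, which stops issuing new instructions and unloads context state
// without draining the pipeline.
PreemptionLevel preempt(std::chrono::microseconds drainThreshold) {
    auto start = std::chrono::steady_clock::now();
    while (!pipelineIdle()) {
        if (std::chrono::steady_clock::now() - start > drainThreshold) {
            return PreemptionLevel::InstructionLevel;  // dynamic fallback
        }
    }
    return PreemptionLevel::ThreadArrayBoundary;
}

int main() {
    PreemptionLevel level = preempt(std::chrono::microseconds(500));
    std::cout << (level == PreemptionLevel::InstructionLevel
                      ? "preempted at instruction level\n"
                      : "preempted at thread array boundary\n");
}
```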
  • Publication number: 20130124838
    Abstract: One embodiment of the present invention sets forth a technique for instruction-level and compute thread array granularity execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline: no new instructions are issued, and the context state is unloaded from the processing pipeline. When preemption is performed at a compute thread array boundary, the amount of context state to be stored is reduced because execution units within the processing pipeline complete execution of in-flight instructions and become idle. If the amount of time needed to complete execution of the in-flight instructions exceeds a threshold, then the preemption may dynamically change to be performed at the instruction level instead of at compute thread array granularity.
    Type: Application
    Filed: November 10, 2011
    Publication date: May 16, 2013
    Inventors: Lacky V. Shah, Gregory Scott Palmer, Gernot Schaufler, Samuel H. Duncan, Philip Browning Johnson, Shirish Gadre, Robert Ohannessian, Nicholas Wang, Christopher Lamb, Philip Alexander Cuadra, Timothy John Purcell
  • Publication number: 20110084976
    Abstract: One embodiment of the present invention sets forth a technique for configuring a graphics processing pipeline (GPP) to process data according to one or more shader programs. The method includes receiving a plurality of pointers, where each pointer references a different shader program header (SPH) included in a plurality of SPHs, and each SPH is associated with a different shader program that executes within the GPP. For each SPH included in the plurality of SPHs, one or more GPP configuration parameters included in the SPH are identified, and the GPP is adjusted based on the one or more GPP configuration parameters.
    Type: Application
    Filed: October 6, 2010
    Publication date: April 14, 2011
    Inventors: Jerome F. Duluk, Jr., Jesse David Hall, Patrick R. Brown, Gernot Schaufler, Mark D. Stadler
  • Publication number: 20110080415
    Abstract: One embodiment of the present invention sets forth a technique for reducing the amount of memory required to store vertex data processed within a processing pipeline that includes a plurality of shading engines. The method includes determining a first active shading engine and a second active shading engine included within the processing pipeline, wherein the second active shading engine receives vertex data output by the first active shading engine. An output map is received and indicates one or more attributes that are included in the vertex data and output by the first active shading engine. An input map is received and indicates one or more attributes that are included in the vertex data and received by the second active shading engine from the first active shading engine.
    Type: Application
    Filed: September 30, 2010
    Publication date: April 7, 2011
    Inventors: Jerome F. Duluk, Jr., Gernot Schaufler