Patents by Inventor Jerome F. Duluk

Jerome F. Duluk has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9575892
    Abstract: One embodiment of the present invention is a parallel processing unit (PPU) that includes one or more streaming multiprocessors (SMs) and implements a replay unit per SM. Upon detecting a page fault associated with a memory transaction issued by a particular SM, the corresponding replay unit causes the SM, but not any unaffected SMs, to cease issuing new memory transactions. The replay unit then stores the faulting memory transaction and any faulting in-flight memory transaction in a replay buffer. As page faults are resolved, the replay unit replays the memory transactions in the replay buffer—removing successful memory transactions from the replay buffer—until all of the stored memory transactions have successfully executed. Advantageously, the overall performance of the PPU is improved compared to conventional PPUs that, upon detecting a page fault, stop performing memory transactions across all SMs included in the PPU until the fault is resolved.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: February 21, 2017
    Assignee: NVIDIA Corporation
    Inventors: James Leroy Deming, Jerome F. Duluk, Jr., John Mashey, Mark Hairgrove, Lucien Dunning, Jonathon Stuart Ramsey Evans, Samuel H. Duncan, Cameron Buschardt, Brian Fahs
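    A minimal C++ sketch of the per-SM replay policy described in the abstract above, assuming a hypothetical MemTx type and an issue() callback standing in for the memory system; it models only the buffer-and-replay logic, not the actual SM or fault hardware.
    ```cpp
    #include <deque>
    #include <functional>

    // Hypothetical memory transaction; issue() returns false when it page-faults.
    struct MemTx { unsigned long long addr; };

    class ReplayUnit {                                  // one instance per SM
    public:
        explicit ReplayUnit(std::function<bool(const MemTx&)> issue) : issue_(issue) {}

        // Called for every transaction the owning SM wants to issue.
        void submit(const MemTx& tx) {
            if (stalled_ || !issue_(tx)) {              // faulted, or already replaying
                buffer_.push_back(tx);                  // hold it in the replay buffer
                stalled_ = true;                        // only this SM stops issuing
            }
        }

        // Called as page faults are resolved; replays until the buffer drains.
        void on_fault_resolved() {
            while (!buffer_.empty()) {
                if (!issue_(buffer_.front())) return;   // still faulting, retry later
                buffer_.pop_front();                    // success: remove from buffer
            }
            stalled_ = false;                           // the SM may issue new work again
        }

    private:
        std::function<bool(const MemTx&)> issue_;
        std::deque<MemTx> buffer_;                      // faulting in-flight transactions
        bool stalled_ = false;
    };

    int main() {
        bool page_resident = false;
        ReplayUnit ru([&](const MemTx&) { return page_resident; });
        ru.submit({0x1000});        // faults and is buffered; this SM stalls
        page_resident = true;       // the page fault is resolved elsewhere
        ru.on_fault_resolved();     // the buffered transaction replays and succeeds
    }
    ```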
  • Publication number: 20160357482
    Abstract: One embodiment of the present invention sets forth a computer-implemented method for migrating a memory page from a first memory to a second memory. The method includes determining a first page size supported by the first memory. The method also includes determining a second page size supported by the second memory. The method further includes determining a use history of the memory page based on an entry in a page state directory associated with the memory page. The method also includes migrating the memory page between the first memory and the second memory based on the first page size, the second page size, and the use history.
    Type: Application
    Filed: August 22, 2016
    Publication date: December 8, 2016
    Inventors: Jerome F. DULUK, JR., Cameron BUSCHARDT, James Leroy DEMING, Lucien DUNNING, Brian FAHS, Mark HAIRGROVE, Chenghuan JIA, John MASHEY, James M. VAN DYKE
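    A small C++ sketch of the kind of migration decision the abstract above describes, using hypothetical PageState fields for the use history and made-up page sizes; it is a policy illustration, not the patented implementation.
    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical per-page record, standing in for a page state directory entry.
    struct PageState {
        uint64_t virt_addr;
        int      recent_gpu_accesses;   // use history kept with the directory entry
        int      recent_cpu_accesses;
    };

    enum class Location { CpuMemory, GpuMemory };

    // Pick a destination from the use history; leave the page alone on a tie.
    Location choose_placement(const PageState& p, Location current) {
        if (p.recent_gpu_accesses > p.recent_cpu_accesses) return Location::GpuMemory;
        if (p.recent_cpu_accesses > p.recent_gpu_accesses) return Location::CpuMemory;
        return current;
    }

    // Pick a transfer granularity from the page sizes the two memories support,
    // e.g. coalescing 4 KiB source pages into one 64 KiB destination page.
    size_t migration_granularity(size_t src_page_size, size_t dst_page_size) {
        return dst_page_size > src_page_size ? dst_page_size : src_page_size;
    }

    int main() {
        PageState p{0x7f0000000000ull, /*gpu*/ 12, /*cpu*/ 1};
        Location target = choose_placement(p, Location::CpuMemory);
        size_t bytes = migration_granularity(4096, 65536);
        std::printf("migrate %zu bytes toward %s\n", bytes,
                    target == Location::GpuMemory ? "GPU memory" : "CPU memory");
    }
    ```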
  • Patent number: 9513975
    Abstract: One embodiment of the present invention sets forth a technique for performing nested kernel execution within a parallel processing subsystem. The technique involves enabling a parent thread to launch a nested child grid on the parallel processing subsystem, and enabling the parent thread to perform a thread synchronization barrier on the child grid for proper execution semantics between the parent thread and the child grid. This technique advantageously enables the parallel processing subsystem to perform a richer set of programming constructs, such as conditionally executed and nested operations and externally defined library functions without the additional complexity of CPU involvement.
    Type: Grant
    Filed: May 2, 2012
    Date of Patent: December 6, 2016
    Assignee: NVIDIA Corporation
    Inventors: Stephen Jones, Philip Alexander Cuadra, Daniel Elliot Wexler, Ignacio Llamas, Lacky V. Shah, Jerome F. Duluk, Jr., Christopher Lamb
  • Patent number: 9507638
    Abstract: One embodiment of the present invention sets forth a technique for managing the allocation and release of resources during multi-threaded program execution. Programmable reference counters are initialized to values that limit the amount of resources for allocation to tasks that share the same reference counter. Resource parameters are specified for each task to define the amount of resources allocated for consumption by each array of execution threads that is launched to execute the task. The resource parameters also specify the behavior of the array for acquiring and releasing resources. Finally, during execution of each thread in the array, an exit instruction may be configured to override the release of the resources that were allocated to the array. The resources may then be retained for use by a child task that is generated during execution of a thread.
    Type: Grant
    Filed: November 8, 2011
    Date of Patent: November 29, 2016
    Assignee: NVIDIA Corporation
    Inventors: Philip Alexander Cuadra, Karim M. Abdalla, Jerome F. Duluk, Jr., Luke Durant, Gerald F. Luiz, Timothy John Purcell, Lacky V. Shah
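    A toy C++ model of the acquire/release behavior described in the abstract above; RefCounter, TaskParams, and the retain_on_exit flag are illustrative names, not the hardware interface.
    ```cpp
    #include <cassert>

    // A programmable reference counter initialized to the resource budget that is
    // shared by all tasks charged against it.
    struct RefCounter {
        int available;                  // remaining units of the shared resource
    };

    struct TaskParams {
        RefCounter* counter;            // which counter this task charges against
        int         per_array_cost;     // resources each thread array consumes
        bool        retain_on_exit;     // exit instruction overrides the release
    };

    // Launch of one thread array: acquire resources up front, or wait.
    bool launch_array(TaskParams& t) {
        if (t.counter->available < t.per_array_cost) return false;
        t.counter->available -= t.per_array_cost;
        return true;
    }

    // Exit of the array: release unless the exit was configured to retain the
    // allocation for a child task generated during execution.
    void exit_array(TaskParams& t) {
        if (!t.retain_on_exit)
            t.counter->available += t.per_array_cost;
    }

    int main() {
        RefCounter c{4};                                    // shared budget
        TaskParams parent{&c, 2, /*retain_on_exit=*/true};
        TaskParams child {&c, 2, /*retain_on_exit=*/false};
        assert(launch_array(parent));   // 2 of 4 units in use
        exit_array(parent);             // exit overrides release: still 2 in use
        assert(launch_array(child));    // child runs within the remaining budget
        exit_array(child);              // child releases normally
        assert(c.available == 2);       // parent's retained units are still held
    }
    ```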
  • Patent number: 9495721
    Abstract: Techniques for dispatching pixel information in a graphics processing pipeline. A fragment processing unit generates a pixel that includes multiple samples based on a first portion of a graphics primitive received by a first thread. The fragment processing unit calculates a first value for the first pixel, where the first value is calculated only once for the pixel. The fragment processing unit calculates a first set of values for the samples, where each value in the first set of values corresponds to a different sample and is calculated only once for the corresponding sample. The fragment processing unit combines the first value with each value in the first set of values to create a second set of values. The fragment processing unit creates one or more dispatch messages to store the second set of values in a set of output registers.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: November 15, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Rouslan Dimitrov, Eric Lum, Rui Bastos
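    A short C++ illustration of the pixel-rate/sample-rate split described in the abstract above; the four-sample pixel and the multiply used to combine the values are assumptions made for the example.
    ```cpp
    #include <array>
    #include <cstdio>

    constexpr int kSamplesPerPixel = 4;

    int main() {
        float pixel_rate_value = 0.75f;                      // computed once per pixel
        std::array<float, kSamplesPerPixel> sample_rate = {  // computed once per sample
            0.20f, 0.40f, 0.60f, 0.80f };

        // Combine the per-pixel value with each per-sample value to form the
        // "second set of values" that a dispatch message would place in the
        // output registers.
        std::array<float, kSamplesPerPixel> combined{};
        for (int s = 0; s < kSamplesPerPixel; ++s)
            combined[s] = pixel_rate_value * sample_rate[s];

        for (float v : combined) std::printf("%.2f ", v);
        std::printf("\n");
    }
    ```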
  • Patent number: 9442759
    Abstract: A time slice group (TSG) is a grouping of different streams of work (referred to herein as “channels”) that share the same context information. The set of channels belonging to a TSG are processed in a pre-determined order. However, when a channel stalls while processing, the next channel with independent work can be switched in to keep the parallel processing unit fully loaded. Importantly, because each channel in the TSG shares the same context information, a context switch operation is not needed when the processing of a particular channel in the TSG stops and the processing of a next channel in the TSG begins. Therefore, multiple independent streams of work are allowed to run concurrently within a single context, increasing the utilization of parallel processing units.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: September 13, 2016
    Assignee: NVIDIA Corporation
    Inventors: Samuel H. Duncan, Lacky V. Shah, Sean J. Treichler, Daniel Elliot Wexler, Jerome F. Duluk, Jr., Philip Browning Johnson, Jonathon Stuart Ramsay Evans
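    An illustrative C++ sketch of why a TSG avoids context switches; the Channel and TimeSliceGroup types and the scheduling loop are stand-ins invented for the example, not the actual host scheduler.
    ```cpp
    #include <cstdio>
    #include <string>
    #include <vector>

    // Channels in the same time slice group share one context, so switching
    // between them needs no context switch operation.
    struct Channel { std::string name; bool stalled; };
    struct TimeSliceGroup { int context_id; std::vector<Channel> channels; };

    void run_timeslice(const TimeSliceGroup& tsg, int& bound_context) {
        if (bound_context != tsg.context_id) {
            std::printf("context switch -> %d\n", tsg.context_id);  // once per TSG
            bound_context = tsg.context_id;
        }
        // Channels are visited in a fixed order; a stalled channel simply yields
        // to the next channel with independent work, with no context switch.
        for (const Channel& ch : tsg.channels) {
            if (ch.stalled) { std::printf("  %s stalled, skipping\n", ch.name.c_str()); continue; }
            std::printf("  running %s (same context)\n", ch.name.c_str());
        }
    }

    int main() {
        int bound_context = -1;
        TimeSliceGroup gfx{7, {{"3d", false}, {"compute", true}, {"copy", false}}};
        run_timeslice(gfx, bound_context);
    }
    ```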
  • Patent number: 9430400
    Abstract: One embodiment of the present invention sets forth a computer-implemented method for altering migration rules for a unified virtual memory system. The method includes detecting that a migration rule trigger has been satisfied. The method also includes identifying a migration rule action that is associated with the migration rule trigger. The method further includes executing the migration rule action. Other embodiments of the present invention include a computer-readable medium, a computing device, and a unified virtual memory subsystem. One advantage of the disclosed approach is that various settings of the unified virtual memory system may be modified during program execution. This ability to alter the settings allows for an application to vary the manner in which memory pages are migrated and otherwise manipulated, which provides the application the ability to optimize the unified virtual memory system for efficient execution.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: August 30, 2016
    Assignee: NVIDIA Corporation
    Inventor: Jerome F. Duluk, Jr.
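    A minimal C++ sketch of a trigger/action rule of the kind the abstract above describes; the UvmSettings fields and the specific rule are hypothetical examples of settings an application might alter during execution.
    ```cpp
    #include <cstddef>
    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical tunables of a unified virtual memory system.
    struct UvmSettings { size_t migration_chunk = 4096; bool prefetch = false; };

    struct MigrationRule {
        std::function<bool(const UvmSettings&)> trigger;   // migration rule trigger
        std::function<void(UvmSettings&)>       action;    // associated rule action
        std::string                             name;
    };

    int main() {
        UvmSettings settings;
        std::vector<MigrationRule> rules = {
            {[](const UvmSettings& s) { return s.migration_chunk < 64 * 1024; },
             [](UvmSettings& s) { s.migration_chunk = 64 * 1024; s.prefetch = true; },
             "coarsen-migration-and-prefetch"},
        };
        for (auto& r : rules)
            if (r.trigger(settings)) {      // trigger satisfied ...
                r.action(settings);         // ... execute its action
                std::printf("applied rule: %s\n", r.name.c_str());
            }
    }
    ```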
  • Patent number: 9424201
    Abstract: One embodiment of the present invention sets forth a computer-implemented method for migrating a memory page from a first memory to a second memory. The method includes determining a first page size supported by the first memory. The method also includes determining a second page size supported by the second memory. The method further includes determining a use history of the memory page based on an entry in a page state directory associated with the memory page. The method also includes migrating the memory page between the first memory and the second memory based on the first page size, the second page size, and the use history.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: August 23, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Cameron Buschardt, James Leroy Deming, Lucien Dunning, Brian Fahs, Mark Hairgrove, Chenghuan Jia, John Mashey, James M. Van Dyke
  • Patent number: 9418616
    Abstract: A graphics processing unit includes a set of geometry processing units each configured to process graphics primitives in parallel with one another. A given geometry processing unit generates one or more graphics primitives or geometry objects and buffers the associated vertex data locally. The geometry processing unit also buffers different sets of indices to those vertices, where each such set represents a different graphics primitive or geometry object. The geometry processing units may then stream the buffered vertices and indices to global buffers in parallel with one another. A stream output synchronization unit coordinates the parallel streaming of vertices and indices by providing each geometry processing unit with a different base address within a global vertex buffer where vertices may be written. The stream output synchronization unit also provides each geometry processing unit with a different base address within a global index buffer where indices may be written.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: August 16, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Ziyad S. Hakura, Henry Packard Moreton
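    A small C++ sketch of the base-address coordination described in the abstract above, done here as exclusive prefix sums over per-unit output counts; the types and counts are illustrative.
    ```cpp
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Each geometry unit reports how many vertices and indices it buffered
    // locally; the synchronization step hands every unit a distinct base offset
    // into the global vertex and index buffers, so all units can stream out in
    // parallel without overlapping.
    struct UnitOutput { size_t vertex_count; size_t index_count; };
    struct BaseAddrs  { size_t vertex_base;  size_t index_base;  };

    std::vector<BaseAddrs> assign_bases(const std::vector<UnitOutput>& units) {
        std::vector<BaseAddrs> bases(units.size());
        size_t v = 0, i = 0;
        for (size_t u = 0; u < units.size(); ++u) {
            bases[u] = {v, i};              // exclusive prefix sums of the counts
            v += units[u].vertex_count;
            i += units[u].index_count;
        }
        return bases;
    }

    int main() {
        std::vector<UnitOutput> units = {{12, 18}, {4, 6}, {30, 45}};
        std::vector<BaseAddrs> bases = assign_bases(units);
        for (size_t u = 0; u < units.size(); ++u)
            std::printf("unit %zu: vertex base %zu, index base %zu\n",
                        u, bases[u].vertex_base, bases[u].index_base);
    }
    ```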
  • Patent number: 9396515
    Abstract: One embodiment sets forth a method for transforming 3-D images into 2-D rendered images using render target sample masks. A software application creates multiple render targets associated with a surface. For each render target, the software application also creates an associated render target sample mask configured to select one or more samples included in each pixel. Within the graphics pipeline, a pixel shader processes each pixel individually and outputs multiple render target-specific color values. For each render target, a ROP unit uses the associated render target sample mask to select covered samples included in the pixel. Subsequently, the ROP unit uses the render target-specific color value to update the selected samples in the render target, thereby achieving sample-level color granularity.
    Type: Grant
    Filed: August 16, 2013
    Date of Patent: July 19, 2016
    Assignee: NVIDIA Corporation
    Inventors: Eric B. Lum, Jerome F. Duluk, Jr., Yury Y. Uralsky, Rouslan Dimitrov, Rui M. Bastos
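    A short C++ sketch of the sample-selection step described in the abstract above: the fragment's coverage mask is ANDed with a per-render-target sample mask before that target's color is written. The four-sample pixel and float color are simplifications for the example.
    ```cpp
    #include <cstdint>
    #include <cstdio>

    constexpr int kSamples = 4;

    // For one render target, write its render target-specific color only into
    // the samples that are both covered by the fragment and selected by the
    // render target sample mask.
    void update_render_target(float* rt_samples,        // kSamples colors in the RT
                              uint32_t coverage,        // samples covered by the fragment
                              uint32_t rt_sample_mask,  // samples this RT owns
                              float rt_color) {         // pixel shader output for this RT
        uint32_t selected = coverage & rt_sample_mask;  // covered AND owned
        for (int s = 0; s < kSamples; ++s)
            if (selected & (1u << s))
                rt_samples[s] = rt_color;               // sample-level color granularity
    }

    int main() {
        float rt0[kSamples] = {0, 0, 0, 0};
        float rt1[kSamples] = {0, 0, 0, 0};
        uint32_t coverage = 0b1111;                           // all four samples covered
        update_render_target(rt0, coverage, 0b0011, 0.25f);   // RT0 owns samples 0,1
        update_render_target(rt1, coverage, 0b1100, 0.90f);   // RT1 owns samples 2,3
        std::printf("rt0: %.2f %.2f %.2f %.2f\n", rt0[0], rt0[1], rt0[2], rt0[3]);
    }
    ```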
  • Patent number: 9378139
    Abstract: A system, method, and computer program product for low-latency scheduling and launch of memory defined tasks. The method includes the steps of receiving a task metadata data structure to be stored in a memory associated with a processor, transmitting the task metadata data structure to a scheduling unit of the processor, storing the task metadata data structure in a cache unit included in the scheduling unit, and copying the task metadata data structure from the cache unit to the memory.
    Type: Grant
    Filed: May 8, 2013
    Date of Patent: June 28, 2016
    Assignee: NVIDIA Corporation
    Inventors: Scott Ricketts, Brian Scott Pharris, Nicholas Wang, Luke David Durant, Philip Alexander Cuadra, Jerome F. Duluk, Jr.
  • Patent number: 9355041
    Abstract: One embodiment of the present invention is a memory subsystem that includes a sliding window tracker that tracks memory accesses associated with a sliding window of memory page groups. When the sliding window tracker detects an access operation associated with a memory page group within the sliding window, the sliding window tracker sets a reference bit that is associated with the memory page group and is included in a reference vector that represents accesses to the memory page groups within the sliding window. Based on the values of the reference bits, the sliding window tracker causes a memory page in a memory page group that has fallen into disuse to be migrated from a first memory to a second memory. Because the sliding window tracker tunes the memory pages that are resident in the first memory to reflect memory access patterns, the overall performance of the memory subsystem is improved.
    Type: Grant
    Filed: December 12, 2013
    Date of Patent: May 31, 2016
    Assignee: NVIDIA Corporation
    Inventors: John Mashey, Cameron Buschardt, James Leroy Deming, Jerome F. Duluk, Jr., Brian Fahs
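    An illustrative C++ model of the reference-vector bookkeeping described in the abstract above; the window size, group numbering, and eviction policy are assumptions made for the example.
    ```cpp
    #include <bitset>
    #include <cstddef>
    #include <cstdio>
    #include <deque>

    constexpr size_t kWindowGroups = 8;   // page groups tracked by the window

    class SlidingWindowTracker {
    public:
        explicit SlidingWindowTracker(size_t first_group) : first_group_(first_group) {}

        // An access to a page group inside the window sets its reference bit.
        void on_access(size_t group) {
            if (group >= first_group_ && group < first_group_ + kWindowGroups)
                reference_.set(group - first_group_);
        }

        // Slide the window forward; groups whose bit is still clear fell into
        // disuse and become candidates to migrate out of the first memory.
        void slide(size_t new_first_group, std::deque<size_t>& evict_candidates) {
            for (size_t i = 0; i < kWindowGroups; ++i)
                if (!reference_.test(i))
                    evict_candidates.push_back(first_group_ + i);
            reference_.reset();
            first_group_ = new_first_group;
        }

    private:
        size_t first_group_;
        std::bitset<kWindowGroups> reference_;   // the reference vector
    };

    int main() {
        SlidingWindowTracker t(0);
        t.on_access(1);
        t.on_access(3);
        std::deque<size_t> cold;
        t.slide(8, cold);
        std::printf("%zu page groups fell into disuse\n", cold.size());
    }
    ```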
  • Patent number: 9355430
    Abstract: One embodiment sets forth a method for allocating memory to surfaces. A software application specifies surface data, including interleaving state data. Based on the interleaving state data, a surface access unit bloats addresses derived from discrete coordinates associated with the surface, creating a bloated virtual address space with a predictable pattern of addresses that do not correspond to data. Advantageously, by creating predictable regions of addresses that do not correspond to data, the software application program may configure the surface to share physical memory space with one or more other surfaces. In particular, the software application may map the virtual address space together with one or more virtual address spaces corresponding to complementary data patterns to the same physical base address. And, by overlapping the virtual address spaces onto the same pages in physical address space, the physical memory may be more densely packed than by using prior-art allocation techniques.
    Type: Grant
    Filed: September 20, 2013
    Date of Patent: May 31, 2016
    Assignee: NVIDIA Corporation
    Inventors: Eric B. Lum, Cass W. Everitt, Henry Packard Moreton, Yury Y. Uralsky, Cyril Crassin, Jerome F. Duluk, Jr.
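    A minimal C++ illustration of the address-bloating idea in the abstract above: a stride-2 bloat leaves every other slot unused, so a second surface using the complementary phase can share the same physical base address. The stride and element size are arbitrary choices for the example.
    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <cstdio>

    constexpr uint64_t kElemSize = 4;   // bytes per element
    constexpr uint64_t kStride   = 2;   // bloat factor: every other slot is a hole

    // Bloat an address derived from an element index; phase 0 uses the even
    // slots, phase 1 the odd slots, giving complementary data patterns.
    uint64_t bloated_offset(uint64_t element_index, uint64_t phase) {
        return (element_index * kStride + phase) * kElemSize;
    }

    int main() {
        uint64_t shared_base = 0x10000;          // same physical base for both surfaces

        // Surface A (phase 0) and surface B (phase 1) interleave without colliding.
        for (uint64_t i = 0; i < 16; ++i)
            assert(bloated_offset(i, 0) != bloated_offset(i, 1));

        uint64_t a3 = shared_base + bloated_offset(3, 0);
        uint64_t b3 = shared_base + bloated_offset(3, 1);
        assert(a3 + kElemSize == b3);            // adjacent slots, never overlapping
        std::printf("A[3] at 0x%llx, B[3] at 0x%llx\n",
                    (unsigned long long)a3, (unsigned long long)b3);
    }
    ```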
  • Patent number: 9311097
    Abstract: A graphics processing system configured to track per-tile event counts in a tile-based architecture. A tiling unit in the graphics processing system is configured to cause a screen-space pipeline to load a count value associated with a first cache tile into a count memory and to cause the screen-space pipeline to process a first set of primitives that intersect the first cache tile. The tiling unit is further configured to cause the screen-space pipeline to store a second count value in a report memory location. The tiling unit is also configured to cause the screen-space pipeline to process a second set of primitives that intersect the first cache tile and to cause the screen-space pipeline to store a third count value in the first accumulating memory. Conditional rendering operations may be performed on a per-cache tile basis, based on the per-tile event count.
    Type: Grant
    Filed: October 23, 2013
    Date of Patent: April 12, 2016
    Assignee: NVIDIA Corporation
    Inventors: Ziyad S. Hakura, Jerome F. Duluk, Jr.
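    A rough C++ sketch of per-tile count bookkeeping in the spirit of the abstract above, with ordinary maps standing in for the count, report, and accumulating memories; the conditional-rendering check at the end is an example use, not the patented mechanism.
    ```cpp
    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    struct Tile { int id; };

    int main() {
        std::unordered_map<int, long> tile_counts;      // stands in for accumulating memory
        std::unordered_map<int, long> reports;          // stands in for report memory

        Tile tile{42};
        tile_counts[tile.id] = 0;                       // load the count for this cache tile

        std::vector<int> first_batch  = {1, 2, 3};      // primitives intersecting the tile
        std::vector<int> second_batch = {4, 5};

        tile_counts[tile.id] += (long)first_batch.size();
        reports[tile.id] = tile_counts[tile.id];        // store a snapshot to the report slot
        tile_counts[tile.id] += (long)second_batch.size();

        // Per-tile conditional rendering: skip later passes if nothing was counted.
        if (reports[tile.id] == 0)
            std::printf("tile %d: skip dependent rendering\n", tile.id);
        else
            std::printf("tile %d: %ld events reported\n", tile.id, reports[tile.id]);
    }
    ```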
  • Patent number: 9293109
    Abstract: A graphics processing unit includes a set of geometry processing units each configured to process graphics primitives in parallel with one another. A given geometry processing unit generates one or more graphics primitives or geometry objects and buffers the associated vertex data locally. The geometry processing unit also buffers different sets of indices to those vertices, where each such set represents a different graphics primitive or geometry object. The geometry processing units may then stream the buffered vertices and indices to global buffers in parallel with one another. A stream output synchronization unit coordinates the parallel streaming of vertices and indices by providing each geometry processing unit with a different base address within a global vertex buffer where vertices may be written. The stream output synchronization unit also provides each geometry processing unit with a different base address within a global index buffer where indices may be written.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: March 22, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Ziyad S. Hakura, Henry Packard Moreton
  • Patent number: 9275491
    Abstract: One embodiment of the present invention sets forth a method for generating work to be processed by a graphics pipeline residing within a graphics processor. The method includes the steps of receiving an indication that a first graphics workload is to be submitted to a command queue associated with the graphics processor, allocating a first portion of shader accessible memory for one or more units of state information that are necessary for processing the first graphics workload, populating the first portion of shader accessible memory with the one or more units of state information, and transmitting to the command queue of the graphics processor the one or more units of state information stored within the first portion of shader accessible memory, wherein the first graphics workload is processed within the graphics pipeline based on the one or more units of state information.
    Type: Grant
    Filed: April 1, 2011
    Date of Patent: March 1, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jeffrey A. Bolz, Jesse David Hall, Jerome F. Duluk, Jr., Patrick R. Brown, Gregory Scott Palmer
  • Publication number: 20150339799
    Abstract: One embodiment sets forth a method for associating each stencil value included in a stencil buffer with multiple fragments. Components within a graphics processing pipeline use a set of stencil masks to partition the bits of each stencil value. Each stencil mask selects a different subset of bits, and each fragment is strategically associated with both a stencil value and a stencil mask. Before performing stencil actions associated with a fragment, the raster operations unit performs stencil mask operations on the operands. No fragments are associated with both the same stencil mask and the same stencil value. Consequently, no fragments are associated with the same stencil bits included in the stencil buffer. Advantageously, by reducing the number of stencil bits associated with each fragment, certain classes of software applications may reduce the wasted memory associated with stencil buffers in which each stencil value is associated with a single fragment.
    Type: Application
    Filed: August 3, 2015
    Publication date: November 26, 2015
    Inventors: Eric B. LUM, Jerome F. DULUK, JR.
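    A small C++ illustration of the bit-partitioning described in the abstract above (the same abstract also appears in the two related granted patents further down this list): two fragments share one 8-bit stencil value through disjoint stencil masks. The equal-compare test and the particular masks are just examples.
    ```cpp
    #include <cstdint>
    #include <cstdio>

    // Write only the bits selected by this fragment's stencil mask.
    uint8_t stencil_update(uint8_t stored, uint8_t mask, uint8_t new_bits) {
        return (uint8_t)((stored & ~mask) | (new_bits & mask));
    }

    // Compare only under the mask, as the raster operations unit would before
    // performing its stencil actions.
    bool stencil_test_equal(uint8_t stored, uint8_t mask, uint8_t ref) {
        return (stored & mask) == (ref & mask);
    }

    int main() {
        uint8_t stencil = 0x00;
        const uint8_t maskA = 0x0F, maskB = 0xF0;         // disjoint subsets of bits

        stencil = stencil_update(stencil, maskA, 0x05);   // fragment A's bits
        stencil = stencil_update(stencil, maskB, 0x30);   // fragment B's bits

        std::printf("stored stencil value: 0x%02X\n", stencil);   // 0x35
        std::printf("A passes: %d, B passes: %d\n",
                    stencil_test_equal(stencil, maskA, 0x05),
                    stencil_test_equal(stencil, maskB, 0x30));
    }
    ```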
  • Patent number: 9183609
    Abstract: A technique for efficiently rendering content reduces each complex blend mode to a series of basic blend operations. The series of basic blend operations are executed within a recirculating pipeline until a final blended value is computed. The recirculating pipeline is positioned within a color raster operations unit of a graphics processing unit for efficient access to image buffer data.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: November 10, 2015
    Assignee: NVIDIA Corporation
    Inventors: Rui Bastos, Mark J. Kilgard, William Craig McKnight, Jerome F. Duluk, Jr., Pierre Souillot, Dale L. Kirkland, Christian Amsinck, Joseph Detmer, Christian Rouet, Don Bittel
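    An illustrative C++ reduction of one complex blend mode ("screen") into the kind of basic-operation sequence the abstract above describes; the pass breakdown is an example, not the unit's actual micro-operations.
    ```cpp
    #include <cstdio>

    // "Screen" blending: result = 1 - (1 - src) * (1 - dst), expressed as a
    // series of basic operations, one per trip through a recirculating pipeline.
    float blend_screen_recirculated(float src, float dst) {
        float a = 1.0f - src;    // pass 1: subtract
        float b = 1.0f - dst;    // pass 2: subtract
        float c = a * b;         // pass 3: multiply
        return 1.0f - c;         // pass 4: subtract -> final blended value
    }

    int main() {
        float direct = 1.0f - (1.0f - 0.6f) * (1.0f - 0.3f);
        float looped = blend_screen_recirculated(0.6f, 0.3f);
        std::printf("direct=%.3f recirculated=%.3f\n", direct, looped);
    }
    ```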
  • Patent number: 9098925
    Abstract: One embodiment sets forth a method for associating each stencil value included in a stencil buffer with multiple fragments. Components within a graphics processing pipeline use a set of stencil masks to partition the bits of each stencil value. Each stencil mask selects a different subset of bits, and each fragment is strategically associated with both a stencil value and a stencil mask. Before performing stencil actions associated with a fragment, the raster operations unit performs stencil mask operations on the operands. No fragments are associated with both the same stencil mask and the same stencil value. Consequently, no fragments are associated with the same stencil bits included in the stencil buffer. Advantageously, by reducing the number of stencil bits associated with each fragment, certain classes of software applications may reduce the wasted memory associated with stencil buffers in which each stencil value is associated with a single fragment.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: August 4, 2015
    Assignee: NVIDIA Corporation
    Inventors: Eric B. Lum, Jerome F. Duluk, Jr.
  • Patent number: 9098924
    Abstract: One embodiment sets forth a method for associating each stencil value included in a stencil buffer with multiple fragments. Components within a graphics processing pipeline use a set of stencil masks to partition the bits of each stencil value. Each stencil mask selects a different subset of bits, and each fragment is strategically associated with both a stencil value and a stencil mask. Before performing stencil actions associated with a fragment, the raster operations unit performs stencil mask operations on the operands. No fragments are associated with both the same stencil mask and the same stencil value. Consequently, no fragments are associated with the same stencil bits included in the stencil buffer. Advantageously, by reducing the number of stencil bits associated with each fragment, certain classes of software applications may reduce the wasted memory associated with stencil buffers in which each stencil value is associated with a single fragment.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: August 4, 2015
    Assignee: NVIDIA Corporation
    Inventors: Eric B. Lum, Jerome F. Duluk, Jr.