Patents by Inventor Christopher Ryan Wyman

Christopher Ryan Wyman has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11948246
    Abstract: Apparatuses, systems, and techniques to render computer graphics. In at least one embodiment, a first one or more lights are selected from among lights in a virtual scene to be rendered as a frame of graphics, and a second one or more lights are selected from among lights used to render one or more pixels in at least one of a prior frame or the current frame. A pixel of the current frame is rendered using the first and second one or more lights, and a light is selected for reuse in rendering a subsequent frame from among the first and second one or more lights.
    Type: Grant
    Filed: March 24, 2022
    Date of Patent: April 2, 2024
    Assignee: NVIDIA Corporation
    Inventor: Christopher Ryan Wyman
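
The selection-and-reuse loop this abstract describes maps naturally onto single-sample weighted reservoir sampling with temporal reuse. Below is a minimal C++ sketch under that assumption; the Reservoir type, the candidate weights, and the function names are illustrative and not taken from the patent.

```cpp
#include <random>
#include <vector>

// One light candidate with an (unnormalized) weight, e.g. its estimated
// contribution to the pixel being shaded.
struct LightSample {
    int   lightIndex = -1;
    float weight     = 0.0f;
};

// Single-sample weighted reservoir: streams candidates through update() and
// keeps one of them with probability proportional to its weight.
struct Reservoir {
    LightSample selected;
    float       weightSum = 0.0f;
    int         count     = 0;

    void update(const LightSample& s, float w, std::mt19937& rng) {
        weightSum += w;
        ++count;
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        if (weightSum > 0.0f && u(rng) < w / weightSum)
            selected = s;
    }
};

// First, lights are selected from the scene as fresh candidates; second, the
// light kept for this pixel in the prior frame is merged back in (temporal
// reuse).  The surviving light shades the pixel and is carried forward so a
// subsequent frame can reuse it in turn.
Reservoir selectLightForPixel(const std::vector<LightSample>& freshCandidates,
                              const Reservoir& previousFrame,
                              std::mt19937& rng) {
    Reservoir r;
    for (const LightSample& s : freshCandidates)
        r.update(s, s.weight, rng);

    if (previousFrame.count > 0)
        r.update(previousFrame.selected, previousFrame.weightSum, rng);

    return r;
}
```

Merging the previous reservoir with its accumulated weightSum lets a single carried-over sample stand in for the many candidates it has already survived, which is what makes the frame-to-frame reuse cheap.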
  • Patent number: 11941745
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: March 26, 2024
    Assignee: NVIDIA Corporation
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
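
One way to read the geometric construction in this abstract: mirror the BRDF lobe below the reflecting surface, measure where it crosses the surface, and let the view angle stretch that footprint into the major axis of the filter kernel. The sketch below treats the lobe as a cone; the half-angle-to-width mapping and the cosine clamp are assumptions for illustration, not the patent's construction.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Axis lengths (in world units at the shading point) for an anisotropic filter
// kernel derived from a cone standing in for the BRDF lobe.
struct KernelFootprint { float minorAxis; float majorAxis; };

KernelFootprint brdfLobeFootprint(float lobeHalfAngle,   // radians, from surface roughness
                                  float hitDistance,     // surface point -> reflection hit
                                  const Vec3& viewDir,   // unit vector, camera -> surface
                                  const Vec3& normal) {  // unit surface normal
    // Width of the mirrored lobe where it crosses the reflecting surface.
    float lobeWidth = 2.0f * hitDistance * std::tan(lobeHalfAngle);

    // Viewing the footprint along the view vector elongates it by roughly
    // 1 / cos(theta); the clamp avoids an unbounded kernel at grazing angles.
    float cosTheta = std::max(0.05f, std::fabs(dot(viewDir, normal)));

    return { lobeWidth, lobeWidth / cosTheta };
}
```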
  • Patent number: 11925860
    Abstract: This application discloses techniques for generating and querying projective hash maps. More specifically, projective hash maps can be used for spatial hashing of data related to N-dimensional points. Each point is projected onto a projection surface to convert the three-dimensional (3D) coordinates for the point to two-dimensional (2D) coordinates associated with the projection surface. Hash values based on the 2D coordinates are then used as an index to store data in the projective hash map. Utilizing the 2D coordinates rather than the 3D coordinates allows for more efficient searches to be performed to locate points in the 3D space. In particular, projective hash maps can be utilized by graphics applications for generating images, and the improved efficiency can, for example, enable a game streaming application on a server to render images transmitted to a user device via a network at faster frame rates.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Marco Salvi, Jacopo Pantaleoni, Aaron Eliot Lefohn, Christopher Ryan Wyman, Pascal Gautron
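
A minimal sketch of the project-then-hash indexing this abstract describes, assuming a pinhole projection surface at z = 1 and a simple two-prime hash of the quantized 2D cell; the cell size, hash constants, and the ProjectiveHashMap class shape are illustrative.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Project a 3D point onto a planar projection surface at z = 1 (a pinhole
// camera at the origin looking down +z); assumes z > 0.
Vec2 projectToSurface(const Vec3& p) { return { p.x / p.z, p.y / p.z }; }

// Quantize the 2D coordinates into cells and mix the two cell indices into a
// single key used to index the map.
uint64_t hashCell(const Vec2& uv, float cellSize) {
    int64_t cu = static_cast<int64_t>(std::floor(uv.u / cellSize));
    int64_t cv = static_cast<int64_t>(std::floor(uv.v / cellSize));
    return static_cast<uint64_t>(cu) * 73856093ull ^ static_cast<uint64_t>(cv) * 19349663ull;
}

// Data for 3D points is stored under the hash of their projected 2D cell, so
// points that land nearby on the projection surface share a bucket and can be
// found without searching in 3D.
template <typename Payload>
class ProjectiveHashMap {
public:
    explicit ProjectiveHashMap(float cellSize) : cellSize_(cellSize) {}

    void insert(const Vec3& point, const Payload& data) {
        buckets_[hashCell(projectToSurface(point), cellSize_)].push_back(data);
    }

    const std::vector<Payload>& query(const Vec3& point) const {
        static const std::vector<Payload> empty;
        auto it = buckets_.find(hashCell(projectToSurface(point), cellSize_));
        return it != buckets_.end() ? it->second : empty;
    }

private:
    float cellSize_;
    std::unordered_map<uint64_t, std::vector<Payload>> buckets_;
};
```

A query then reduces to one 2D hash lookup per point, which is the efficiency gain over searching for neighbors directly in 3D.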
  • Publication number: 20230360319
    Abstract: Devices, systems, and techniques to incorporate lighting effects into computer-generated graphics. In at least one embodiment, a virtual scene comprising a plurality of lights is rendered by randomly sampling a set of lights from among the plurality of lights prior to rendering a frame of graphics. A subset of the set of lights is selected and used to render pixels within one or more portions of the frame.
    Type: Application
    Filed: May 3, 2023
    Publication date: November 9, 2023
    Inventors: Christopher Ryan Wyman, Robert Anthony Alfieri, William Parsons Newhall, Jr., Peter Schuyler Shirley
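
Read literally, this abstract describes a two-stage selection: a per-frame candidate set drawn at random from all scene lights, and a smaller per-region subset drawn from those candidates when pixels are shaded. A minimal sketch under that reading; the candidate count, sampling with replacement, and the tile granularity are assumptions for illustration.

```cpp
#include <random>
#include <vector>

// Before a frame is rendered, draw a candidate set of light indices at random
// from all lights in the scene (here uniformly, with replacement).
std::vector<int> presampleLights(int totalLights, int candidateCount, std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, totalLights - 1);
    std::vector<int> candidates(candidateCount);
    for (int& c : candidates) c = pick(rng);
    return candidates;
}

// Each tile (a portion of the frame) then shades its pixels with a small
// subset drawn from the presampled candidates rather than from all lights.
std::vector<int> subsetForTile(const std::vector<int>& candidates, int lightsPerTile,
                               std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, static_cast<int>(candidates.size()) - 1);
    std::vector<int> subset(lightsPerTile);
    for (int& s : subset) s = candidates[pick(rng)];
    return subset;
}
```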
  • Patent number: 11663773
    Abstract: Devices, systems, and techniques to incorporate lighting effects into computer-generated graphics. In at least one embodiment, a virtual scene comprising a plurality of lights is rendered by randomly sampling a set of lights from among the plurality of lights prior to rendering a frame of graphics. A subset of the set of lights is selected and used to render pixels within one or more portions of the frame.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: May 30, 2023
    Assignee: NVIDIA Corporation
    Inventors: Christopher Ryan Wyman, Robert Anthony Alfieri, William Parsons Newhall, Jr., Peter Schuyler Shirley
  • Publication number: 20220395748
    Abstract: This application discloses techniques for generating and querying projective hash maps. More specifically, projective hash maps can be used for spatial hashing of data related to N-dimensional points. Each point is projected onto a projection surface to convert the three-dimensional (3D) coordinates for the point to two-dimensional (2D) coordinates associated with the projection surface. Hash values based on the 2D coordinates are then used as an index to store data in the projective hash map. Utilizing the 2D coordinates rather than the 3D coordinates allows for more efficient searches to be performed to locate points in the 3D space. In particular, projective hash maps can be utilized by graphics applications for generating images, and the improved efficiency can, for example, enable a game streaming application on a server to render images transmitted to a user device via a network at faster frame rates.
    Type: Application
    Filed: June 9, 2021
    Publication date: December 15, 2022
    Inventors: Marco Salvi, Jacopo Pantaleoni, Aaron Eliot Lefohn, Christopher Ryan Wyman, Pascal Gautron
  • Publication number: 20220327765
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Application
    Filed: June 28, 2022
    Publication date: October 13, 2022
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
  • Publication number: 20220327770
    Abstract: Apparatuses, systems, and techniques to render computer graphics. In at least one embodiment, a first one or more lights are selected from among lights in a virtual scene to be rendered as a frame of graphics, and a second one or more lights are selected from among lights used to render one or more pixels in at least one of a prior frame or the current frame. A pixel of the current frame is rendered using the first and second one or more lights, and a light is selected for reuse in rendering a subsequent frame from among the first and second one or more lights.
    Type: Application
    Filed: March 24, 2022
    Publication date: October 13, 2022
    Inventor: Christopher Ryan Wyman
  • Patent number: 11373359
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: June 28, 2022
    Assignee: NVIDIA Corporation
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
  • Publication number: 20220198746
    Abstract: A global illumination data structure (e.g., a data structure created to store global illumination information for geometry within a scene to be rendered) is computed for the scene. Additionally, reservoir-based spatiotemporal importance resampling (RESTIR) is used to perform illumination gathering, utilizing the global illumination data structure. The illumination gathering includes identifying light values for points within the scene, where one or more points are selected within the scene based on the light values in order to perform ray tracing during the rendering of the scene.
    Type: Application
    Filed: March 11, 2022
    Publication date: June 23, 2022
    Inventors: Christopher Ryan Wyman, Morgan McGuire, Peter Schuyler Shirley, Aaron Eliot Lefohn
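
The gathering step this abstract describes pairs a lookup into a global illumination data structure with a resampling-style choice of which points to spend rays on. A toy C++ sketch under that reading; the uniform-grid cache, the hash constants, and the single-sample selection are illustrative stand-ins, not the patent's data structure.

```cpp
#include <cmath>
#include <cstdint>
#include <random>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Toy global-illumination cache: a light value stored per spatial cell, so a
// gather pass can ask how bright a candidate point is before tracing to it.
struct GICache {
    float cellSize = 1.0f;
    std::unordered_map<int64_t, float> lightValue;   // cell key -> cached radiance

    int64_t key(const Vec3& p) const {
        auto q = [&](float v) { return static_cast<int64_t>(std::floor(v / cellSize)); };
        return (q(p.x) * 73856093) ^ (q(p.y) * 19349663) ^ (q(p.z) * 83492791);
    }
    float lookup(const Vec3& p) const {
        auto it = lightValue.find(key(p));
        return it != lightValue.end() ? it->second : 0.0f;
    }
};

// Reservoir-style gather: stream candidate points through, keeping one with
// probability proportional to its cached light value; a ray is then traced
// toward the surviving point during rendering.
int gatherPoint(const std::vector<Vec3>& candidates, const GICache& cache, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    float weightSum = 0.0f;
    int kept = -1;
    for (int i = 0; i < static_cast<int>(candidates.size()); ++i) {
        float w = cache.lookup(candidates[i]);
        weightSum += w;
        if (weightSum > 0.0f && u(rng) < w / weightSum) kept = i;
    }
    return kept;   // -1 if every candidate had no cached light
}
```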
  • Patent number: 11315310
    Abstract: A global illumination data structure (e.g., a data structure created to store global illumination information for geometry within a scene to be rendered) is computed for the scene. Additionally, reservoir-based spatiotemporal importance resampling (RESTIR) is used to perform illumination gathering, utilizing the global illumination data structure. The illumination gathering includes identifying light values for points within the scene, where one or more points are selected within the scene based on the light values in order to perform ray tracing during the rendering of the scene.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: April 26, 2022
    Assignee: NVIDIA Corporation
    Inventors: Christopher Ryan Wyman, Morgan McGuire, Peter Schuyler Shirley, Aaron Eliot Lefohn
  • Publication number: 20220058861
    Abstract: Devices, systems, and techniques to incorporate lighting effects into computer-generated graphics. In at least one embodiment, a virtual scene comprising a plurality of lights is rendered by randomly sampling a set of lights from among the plurality of lights prior to rendering a frame of graphics. A subset of the set of lights is selected and used to render pixels within one or more portions of the frame.
    Type: Application
    Filed: February 24, 2021
    Publication date: February 24, 2022
    Inventors: Christopher Ryan Wyman, Robert Anthony Alfieri, William Parsons Newhall, Jr., Peter Schuyler Shirley
  • Publication number: 20220058851
    Abstract: Devices, systems, and techniques to incorporate lighting effects into computer-generated graphics. In at least one embodiment, a virtual scene comprising a plurality of lights is rendered by subdividing the virtual area and storing, in a record corresponding to a subdivision of the virtual area, information indicative of one or more lights in the virtual area selected based on a stochastic model. Pixels near a subdivision are rendered based on the light information stored in the subdivision.
    Type: Application
    Filed: September 24, 2020
    Publication date: February 24, 2022
    Inventors: Jakub Boksansky, Paula Eveliina Jukarainen, Christopher Ryan Wyman
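
A minimal sketch of the per-subdivision light records this abstract describes, assuming a regular 3D grid over the scene and a contribution-weighted stochastic pick of one light per cell; the grid layout and the inverse-squared-distance weighting are illustrative, not the patent's stochastic model.

```cpp
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 position; float intensity; };   // intensity assumed > 0

struct LightGrid {
    Vec3 origin{};
    float cellSize = 1.0f;
    int nx = 1, ny = 1, nz = 1;
    std::vector<int> storedLight;                    // one chosen light per cell
};

// For each subdivision, pick one light with probability proportional to a
// cheap contribution estimate at the cell center and record its index.
void buildLightGrid(LightGrid& grid, const std::vector<Light>& lights, std::mt19937& rng) {
    grid.storedLight.assign(static_cast<size_t>(grid.nx) * grid.ny * grid.nz, -1);
    if (lights.empty()) return;
    for (int iz = 0; iz < grid.nz; ++iz)
        for (int iy = 0; iy < grid.ny; ++iy)
            for (int ix = 0; ix < grid.nx; ++ix) {
                Vec3 c{grid.origin.x + (ix + 0.5f) * grid.cellSize,
                       grid.origin.y + (iy + 0.5f) * grid.cellSize,
                       grid.origin.z + (iz + 0.5f) * grid.cellSize};
                std::vector<float> w(lights.size());
                for (size_t li = 0; li < lights.size(); ++li) {
                    float dx = lights[li].position.x - c.x;
                    float dy = lights[li].position.y - c.y;
                    float dz = lights[li].position.z - c.z;
                    w[li] = lights[li].intensity / (dx * dx + dy * dy + dz * dz + 1e-4f);
                }
                std::discrete_distribution<int> pick(w.begin(), w.end());
                size_t cell = (static_cast<size_t>(iz) * grid.ny + iy) * grid.nx + ix;
                grid.storedLight[cell] = pick(rng);
            }
}
// Shading then reads grid.storedLight for the cell containing (or nearest to)
// the shaded point and uses that light's stored information.
```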
  • Publication number: 20210287426
    Abstract: A global illumination data structure (e.g., a data structure created to store global illumination information for geometry within a scene to be rendered) is computed for the scene. Additionally, reservoir-based spatiotemporal importance resampling (RESTIR) is used to perform illumination gathering, utilizing the global illumination data structure. The illumination gathering includes identifying light values for points within the scene, where one or more points are selected within the scene based on the light values in order to perform ray tracing during the rendering of the scene.
    Type: Application
    Filed: January 19, 2021
    Publication date: September 16, 2021
    Inventors: Christopher Ryan Wyman, Morgan McGuire, Peter Schuyler Shirley, Aaron Eliot Lefohn
  • Publication number: 20200349755
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Application
    Filed: July 22, 2020
    Publication date: November 5, 2020
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
  • Patent number: 10776985
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: September 15, 2020
    Assignee: NVIDIA Corporation
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
  • Patent number: 10600167
    Abstract: A method, computer readable medium, and system are disclosed for performing spatiotemporal filtering. The method includes the steps of applying, utilizing a processor, a temporal filter of a filtering pipeline to a current image frame, using a temporal reprojection, to obtain a color and auxiliary information for each pixel within the current image frame, providing the auxiliary information for each pixel within the current image frame to one or more subsequent filters of the filtering pipeline, and creating a reconstructed image for the current image frame, utilizing the one or more subsequent filters of the filtering pipeline.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: March 24, 2020
    Assignee: NVIDIA Corporation
    Inventors: Christoph H. Schied, Marco Salvi, Anton S. Kaplanyan, Aaron Eliot Lefohn, John Matthew Burgess, Anjul Patney, Christopher Ryan Wyman
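
The pipeline this abstract describes, temporal accumulation first and auxiliary data handed to later filters, can be sketched on a single scanline: the temporal pass blends the reprojected history and records color moments, and a subsequent spatial pass reads those moments to decide how hard to blur. The moment-based variance, the blend factor, and the identity reprojection are assumptions for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Per-pixel history: the filtered color plus auxiliary information (first and
// second raw-color moments) produced by the temporal filter.
struct Pixel {
    float color   = 0.0f;
    float moment1 = 0.0f;
    float moment2 = 0.0f;
};

// Temporal filter: blend the new raw frame into the (reprojected) history and
// update the auxiliary moments; identity reprojection is assumed here.
void temporalFilter(std::vector<Pixel>& history, const std::vector<float>& rawFrame,
                    float alpha = 0.2f) {
    for (std::size_t i = 0; i < rawFrame.size(); ++i) {
        Pixel& p = history[i];
        p.color   = (1.0f - alpha) * p.color   + alpha * rawFrame[i];
        p.moment1 = (1.0f - alpha) * p.moment1 + alpha * rawFrame[i];
        p.moment2 = (1.0f - alpha) * p.moment2 + alpha * rawFrame[i] * rawFrame[i];
    }
}

// Subsequent filter: use the per-pixel variance from the auxiliary moments to
// blur noisier pixels harder, producing the reconstructed image.
std::vector<float> spatialFilter(const std::vector<Pixel>& history) {
    std::vector<float> out(history.size());
    for (std::size_t i = 0; i < history.size(); ++i) {
        float mean = history[i].moment1;
        float var  = std::max(0.0f, history[i].moment2 - mean * mean);
        float w    = std::min(1.0f, 4.0f * var);
        float left  = history[i > 0 ? i - 1 : i].color;
        float right = history[i + 1 < history.size() ? i + 1 : i].color;
        out[i] = (1.0f - w) * history[i].color + w * 0.5f * (left + right);
    }
    return out;
}
```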
  • Patent number: 10438400
    Abstract: A method, computer readable medium, and system are disclosed for rendering images utilizing a foveated rendering algorithm with post-process filtering to enhance a contrast of the foveated image. The method includes the steps of receiving a three-dimensional (3D) scene, rendering the 3D scene according to a foveated rendering algorithm to generate a foveated image, and filtering the foveated image using a contrast-enhancing filter to generate a filtered foveated image. The foveated rendering algorithm may incorporate aspects of coarse pixel shading, mipmapped texture maps, linear efficient anti-aliased normal maps, exponential variance shadow maps, and specular anti-aliasing techniques. The foveated rendering algorithm may also be combined with temporal anti-aliasing techniques to further reduce artifacts in the foveated image.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: October 8, 2019
    Assignee: NVIDIA Corporation
    Inventors: Anjul Patney, Marco Salvi, Joohwan Kim, Anton S. Kaplanyan, Christopher Ryan Wyman, Nir Benty, David Patrick Luebke, Aaron Eliot Lefohn
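
The two stages named in this abstract, foveated shading followed by a contrast-enhancing post filter, are sketched below with a distance-based shading-rate choice and an unsharp mask standing in for the contrast filter. The eccentricity thresholds and the filter gain are illustrative values, not from the patent.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Choose a coarser shading rate with distance from the gaze point: full-rate
// pixels in the fovea, 2x2 then 4x4 coarse pixels toward the periphery.
int shadingRateForPixel(int x, int y, int gazeX, int gazeY) {
    float dx = static_cast<float>(x - gazeX);
    float dy = static_cast<float>(y - gazeY);
    float eccentricity = std::sqrt(dx * dx + dy * dy);   // in pixels
    if (eccentricity < 200.0f) return 1;
    if (eccentricity < 500.0f) return 2;
    return 4;
}

// Contrast-enhancing post filter (unsharp mask on one scanline): add back a
// fraction of the detail that coarse shading washed out.
std::vector<float> enhanceContrast(const std::vector<float>& row, float gain = 0.5f) {
    std::vector<float> out(row.size());
    for (std::size_t i = 0; i < row.size(); ++i) {
        float left    = row[i > 0 ? i - 1 : i];
        float right   = row[i + 1 < row.size() ? i + 1 : i];
        float blurred = 0.25f * left + 0.5f * row[i] + 0.25f * right;
        out[i] = row[i] + gain * (row[i] - blurred);
    }
    return out;
}
```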
  • Publication number: 20190287294
    Abstract: Disclosed approaches may leverage the actual spatial and reflective properties of a virtual environment—such as the size, shape, and orientation of a bidirectional reflectance distribution function (BRDF) lobe of a light path and its position relative to a reflection surface, a virtual screen, and a virtual camera—to produce, for a pixel, an anisotropic kernel filter having dimensions and weights that accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface. In order to accomplish this, geometry may be computed that corresponds to a projection of a reflection of the BRDF lobe below the surface along a view vector to the pixel. Using this approach, the dimensions of the anisotropic filter kernel may correspond to the BRDF lobe to accurately reflect the spatial characteristics of the virtual environment as well as the reflective properties of the surface.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 19, 2019
    Inventors: Shiqiu Liu, Christopher Ryan Wyman, Jon Hasselgren, Jacob Munkberg, Ignacio Llamas
  • Patent number: 10417813
    Abstract: A method for generating temporally stable hash values reduces visual artifacts associated with stochastic sampling of data for graphics applications. A given hash value can be generated from a scaled and discretized object-space for a geometric object within a scene. Through appropriate scaling, the hash value can be discretized and remain constant within a threshold distance from a pixel center. As the geometric object moves within the scene, a hash value associated with a given feature of the geometric object remains constant because the hash value is generated using an object-space coordinate anchored to the feature. In one embodiment, alpha testing threshold values are assigned random, but temporally stable hash output values generated using object-space coordinate positions for primitive fragments undergoing alpha testing. Alpha tested fragments are temporally stable, beneficially improving image quality.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: September 17, 2019
    Assignee: NVIDIA Corporation
    Inventors: Christopher Ryan Wyman, Morgan McGuire
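
A compact sketch of the object-space hashing this abstract describes: the alpha-test threshold for a fragment is a hash of its scaled, discretized object-space position, so it does not change as the object moves through the scene. The hash mix, the fixed modulus, and the scale parameter are illustrative; in practice the scale would be tied to the on-screen size of a discretization cell.

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Threshold from a temporally stable hash of the fragment's object-space
// position.  Within one discretization cell the threshold is constant, and
// because the coordinate is anchored to the object, it survives motion.
float stableHashThreshold(const Vec3& objectPos, float scale) {
    int64_t ix = static_cast<int64_t>(std::floor(objectPos.x * scale));
    int64_t iy = static_cast<int64_t>(std::floor(objectPos.y * scale));
    int64_t iz = static_cast<int64_t>(std::floor(objectPos.z * scale));

    uint64_t h = static_cast<uint64_t>(ix) * 0x9E3779B97F4A7C15ull
               ^ static_cast<uint64_t>(iy) * 0xC2B2AE3D27D4EB4Full
               ^ static_cast<uint64_t>(iz) * 0x165667B19E3779F9ull;
    h ^= h >> 33;

    // Map the hash to a repeatable pseudo-random threshold in [0, 1).
    return static_cast<float>(h % 10000u) / 10000.0f;
}

// Hashed alpha test: the fragment survives if its alpha exceeds its threshold.
bool alphaTest(float alpha, const Vec3& objectPos, float scale) {
    return alpha > stableHashThreshold(objectPos, scale);
}
```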