Patents by Inventor Alex John Bauld Evans

Alex John Bauld Evans has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240257437
    Abstract: Embodiments of the present disclosure relate to real-time neural appearance models. Using a neural decoder, scenes are rendered in real time with complex material appearance previously reserved for offline use. Learned hierarchical textures representing the material properties are encoded as latent codes. When a ray is cast and intersects with geometry in the scene, the intersection point is mapped to one of the latent codes. The latent code is interpreted using neural decoders, which produce reflectance values and importance-sampled directions that can be used to determine a pixel color.
    Type: Application
    Filed: January 22, 2024
    Publication date: August 1, 2024
    Inventors: Karthik Vaidyanathan, Alex John Bauld Evans, Jan Novák, Andrea Weidlich, Fabrice Pierre Armand Rousselle, Aaron Eliot Lefohn, Franz Petrik Clarberg, Benedikt Bitterli, Tizian Lucien Zeltner
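    The entry above describes mapping a ray-geometry intersection to a learned latent code and decoding it into reflectance. The following is a minimal illustrative sketch of that pipeline shape, not the patented method: the latent texture, the tiny one-layer "decoder", and all weights here are hypothetical stand-ins for learned quantities.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical learned latent texture: an 8x8 grid of 4-D latent codes.
    LATENT_TEX = rng.standard_normal((8, 8, 4))

    # Hypothetical decoder weights: latent (4) + view dir (3) + light dir (3) -> RGB.
    W = rng.standard_normal((10, 3)) * 0.1
    b = np.zeros(3)

    def fetch_latent(u, v):
        """Nearest-neighbor lookup of the latent code at texture coords (u, v) in [0, 1)."""
        x = min(int(u * 8), 7)
        y = min(int(v * 8), 7)
        return LATENT_TEX[y, x]

    def decode_reflectance(latent, view_dir, light_dir):
        """Decode a latent code plus directions into an RGB reflectance value."""
        features = np.concatenate([latent, view_dir, light_dir])
        # Sigmoid keeps each reflectance channel in (0, 1).
        return 1.0 / (1.0 + np.exp(-(features @ W + b)))

    # At shading time: map the hit point's texture coords to a latent code, then decode.
    latent = fetch_latent(0.3, 0.7)
    rgb = decode_reflectance(latent, np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]))
    ```

    In the real system the decoder is a trained neural network and the latent textures are hierarchical; this sketch only shows the lookup-then-decode control flow.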
  • Publication number: 20240257460
    Abstract: Apparatuses, systems, and techniques to generate pixels based on other pixels. In at least one embodiment, one or more neural networks are used to generate one or more pixels based, at least in part, on sets of pixels surrounding the one or more pixels.
    Type: Application
    Filed: November 18, 2022
    Publication date: August 1, 2024
    Inventors: Chen-Hsuan Lin, Zhaoshuo Li, Thomas Müller-Höhne, Alex John Bauld Evans, Ming-Yu Liu, Alexander Georg Keller
  • Patent number: 11967024
    Abstract: A technique is described for extracting or constructing a three-dimensional (3D) model from multiple two-dimensional (2D) images. In an embodiment, a foreground segmentation mask or depth field may be provided as an additional supervision input with each 2D image. In an embodiment, the foreground segmentation mask or depth field is automatically generated for each 2D image. The constructed 3D model comprises a triangular mesh topology, materials, and environment lighting. The constructed 3D model is represented in a format that can be directly edited and/or rendered by conventional application programs, such as digital content creation (DCC) tools. For example, the constructed 3D model may be represented as a triangular surface mesh (with arbitrary topology), a set of 2D textures representing spatially-varying material parameters, and an environment map. Furthermore, the constructed 3D model may be included in 3D scenes and interact realistically with other objects.
    Type: Grant
    Filed: May 30, 2022
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex John Bauld Evans, Thomas Müller-Höhne, Sanja Fidler
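    The granted patent above supervises 3D reconstruction with per-image foreground masks in addition to the images themselves. A common way to use such mask supervision (shown here as an illustrative sketch, not necessarily the patented formulation) is to add a silhouette term to the image loss, pushing the rendered coverage toward the segmentation mask; the function name and weighting are hypothetical.

    ```python
    import numpy as np

    def reconstruction_loss(rendered_rgb, target_rgb, rendered_alpha, target_mask,
                            mask_weight=1.0):
        """Image loss plus silhouette (foreground-mask) supervision.

        rendered_rgb / target_rgb: H x W x 3 arrays of pixel colors.
        rendered_alpha / target_mask: H x W arrays of coverage in [0, 1].
        """
        # Photometric term: how well the rendered image matches the photograph.
        image_term = np.mean((rendered_rgb - target_rgb) ** 2)
        # Silhouette term: how well the rendered coverage matches the segmentation mask.
        mask_term = np.mean((rendered_alpha - target_mask) ** 2)
        return image_term + mask_weight * mask_term
    ```

    In an inverse-rendering loop, this scalar would be minimized with respect to mesh, material, and lighting parameters via a differentiable renderer.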
  • Publication number: 20230140460
    Abstract: A technique is described for extracting or constructing a three-dimensional (3D) model from multiple two-dimensional (2D) images. In an embodiment, a foreground segmentation mask or depth field may be provided as an additional supervision input with each 2D image. In an embodiment, the foreground segmentation mask or depth field is automatically generated for each 2D image. The constructed 3D model comprises a triangular mesh topology, materials, and environment lighting. The constructed 3D model is represented in a format that can be directly edited and/or rendered by conventional application programs, such as digital content creation (DCC) tools. For example, the constructed 3D model may be represented as a triangular surface mesh (with arbitrary topology), a set of 2D textures representing spatially-varying material parameters, and an environment map. Furthermore, the constructed 3D model may be included in 3D scenes and interact realistically with other objects.
    Type: Application
    Filed: May 30, 2022
    Publication date: May 4, 2023
    Inventors: Carl Jacob Munkberg, Jon Niklas Theodor Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex John Bauld Evans, Thomas Müller-Höhne, Sanja Fidler
  • Publication number: 20230052645
    Abstract: Neural network performance is improved in terms of training speed and/or accuracy by encoding (mapping) inputs to the neural network into a higher dimensional space via a hash function. The input comprises coordinates used to identify a point within a d-dimensional space (e.g., 3D space). The point is quantized and a set of vertex coordinates corresponding to the point are input to a hash function. For example, for d=3, space may be partitioned into axis-aligned voxels of identical size and vertex coordinates of a voxel containing the point are input to the hash function to produce a set of encoded coordinates. The set of encoded coordinates is used to lookup D-dimensional feature vectors in a table of size T that have been learned. The learned feature vectors are filtered (e.g., linearly interpolated, etc.) based on the coordinates of the point to compute a feature vector corresponding to the point.
    Type: Application
    Filed: February 15, 2022
    Publication date: February 16, 2023
    Inventors: Alexander Georg Keller, Alex John Bauld Evans, Thomas Müller-Höhne, Faycal Ait Aoudia, Nikolaus Binder, Jakob Hoydis, Christoph Hermann Schied, Sebastian Cammerer, Matthijs van Keirsbilck, Guillermo Anibal Marcus Martinez