Patents by Inventor Jonathan Tilton Barron

Jonathan Tilton Barron has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12354300
    Abstract: Provided are systems and methods that invert a trained NeRF model, which stores the structure of a scene or object, to estimate the 6D pose from an image taken with a novel view. 6D pose estimation has a wide range of applications, including visual localization and object pose estimation for robot manipulation.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: July 8, 2025
    Assignee: GOOGLE LLC
    Inventors: Tsung-Yi Lin, Peter Raymond Florence, Yen-Chen Lin, Jonathan Tilton Barron
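
The inversion described in the abstract above amounts to gradient descent on camera-pose parameters through a frozen, differentiable NeRF renderer. A minimal PyTorch sketch of that loop, assuming a differentiable `nerf_render(pose)` is available; all names here are illustrative, not taken from the filing:

```python
import torch

def estimate_pose(nerf_render, observed, pose_init, steps=300, lr=1e-2):
    """Invert a trained NeRF: optimize a 6D camera pose (3 rotation +
    3 translation parameters) so the rendered image matches an observed
    image under a photometric loss. The NeRF weights stay frozen; only
    the pose receives gradients."""
    pose = pose_init.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = nerf_render(pose)              # differentiable rendering
        loss = torch.mean((rendered - observed) ** 2)
        loss.backward()                           # gradients flow to pose only
        optimizer.step()
    return pose.detach()
```
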
  • Publication number: 20250148567
    Abstract: Systems and methods for training a machine-learned model are disclosed herein. The method can include obtaining, by a processor, a plurality of images, each image having a set of parameter values comprising values for a plurality of camera parameters and determining a covariance matrix for the plurality of camera parameters with respect to a plurality of projected points generated via evaluation of a projection function. The method can also include performing a whitening algorithm to identify a preconditioning matrix that, when applied to the sets of parameter values, results in the covariance matrix being approximately equal to an identity matrix and performing an optimization algorithm on the plurality of sets of parameter values. Performing the optimization algorithm can include applying an inverse of the preconditioning matrix to the plurality of sets of parameters in a forward prediction pass and applying the preconditioning matrix in a backward gradient pass.
    Type: Application
    Filed: November 6, 2023
    Publication date: May 8, 2025
    Inventors: Keunhong Park, Ricardo Martin-Brualla, Jonathan Tilton Barron, Philipp Henzler, Benjamin Joseph Mildenhall
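
The whitening step in this abstract can be sketched concretely: differentiate the projection function with respect to the camera parameters, form the covariance of the projected points, and take an inverse matrix square root. A PyTorch sketch under those assumptions; `project_fn`, the sample points, and the eigendecomposition route are illustrative choices, not taken from the filing:

```python
import torch

def whitening_preconditioner(project_fn, params, points, eps=1e-8):
    """Build a preconditioning matrix for camera parameters: whiten the
    covariance of projected points with respect to the parameters so it
    is approximately the identity.

    project_fn: differentiable (camera params, 3D points) -> 2D points
    params:     camera parameter vector, shape (P,)
    points:     sample 3D points, shape (N, 3)
    """
    # Jacobian of the flattened projections w.r.t. the camera parameters.
    jac = torch.autograd.functional.jacobian(
        lambda p: project_fn(p, points).reshape(-1), params)  # (2N, P)
    cov = jac.T @ jac / jac.shape[0]                          # (P, P) covariance
    # Whitening: preconditioner = cov^(-1/2), via eigendecomposition.
    evals, evecs = torch.linalg.eigh(cov)
    precond = evecs @ torch.diag((evals + eps).rsqrt()) @ evecs.T
    # Optimize in the whitened space q, with params = precond @ q, so the
    # inverse preconditioner acts in the forward pass and the
    # preconditioner acts on gradients in the backward pass.
    return precond
```
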
  • Publication number: 20250037244
    Abstract: Systems and methods for training a neural radiance field model for noisy scenes can leverage raw noisy images in linear high dynamic range color space to train a neural radiance field model to generate view synthesis of low light and/or high contrast scenes. The trained model can then be utilized to accurately complete view rendering tasks without the preprocessing used for generating low dynamic range images. In some implementations, training on unprocessed data of a low light scene can allow for training a neural radiance field model to generate high quality view renderings of a low light scene.
    Type: Application
    Filed: October 21, 2022
    Publication date: January 30, 2025
    Inventors: Benjamin Joseph Mildenhall, Pratul Preeti Srinivasan, Jonathan Tilton Barron, Ricardo Martin-Brualla, Lars Peter Johannes Hedman
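
Training directly in linear high dynamic range space raises one practical issue the abstract hints at: a plain L2 loss is dominated by bright pixels, drowning out the low-light regions the method targets. A hedged PyTorch sketch of one such weighted loss in raw space; the exact weighting used in the filing may differ:

```python
import torch

def raw_photometric_loss(rendered, raw_target, eps=1e-3):
    """Photometric loss in linear HDR (raw) color space.

    Each pixel's error is down-weighted by its (stop-gradient) rendered
    value, roughly equalizing the loss across the dynamic range so dark
    regions of a low-light capture still drive training."""
    weight = 1.0 / (rendered.detach() + eps)   # stop-gradient weighting
    return torch.mean((weight * (rendered - raw_target)) ** 2)
```
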
  • Publication number: 20250014236
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Application
    Filed: September 20, 2024
    Publication date: January 9, 2025
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin-Brualla, Jonathan Tilton Barron, Noha Radwan, Seyed Mohammad Mehdi Sajjadi
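
A common way to make a volumetric model learnable from unconstrained photo collections, consistent with this abstract, is to share geometry across images while conditioning radiance on a per-image appearance embedding. A toy PyTorch sketch under that assumption; the architecture details are illustrative:

```python
import torch
from torch import nn

class WildNeRF(nn.Module):
    """Toy NeRF-style MLP for "in the wild" photo collections: density
    depends only on position (shared geometry), while radiance is also
    conditioned on a per-image appearance embedding that can absorb
    lighting and exposure differences between photographs."""

    def __init__(self, num_images, pos_dim=63, embed_dim=16, hidden=256):
        super().__init__()
        self.appearance = nn.Embedding(num_images, embed_dim)
        self.trunk = nn.Sequential(nn.Linear(pos_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + embed_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid())

    def forward(self, encoded_pos, image_ids):
        h = self.trunk(encoded_pos)
        sigma = torch.relu(self.density_head(h))          # shared geometry
        emb = self.appearance(image_ids)                  # per-photo appearance
        rgb = self.color_head(torch.cat([h, emb], dim=-1))
        return sigma, rgb
```
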
  • Publication number: 20240420413
    Abstract: Systems and methods for view synthesis and three-dimensional reconstruction can learn an environment by utilizing a plurality of images of an environment and depth data. The use of depth data can be helpful when the quantity of images and different angles may be limited. For example, large outdoor environments can be difficult to learn due to the size, the varying image exposures, and the limited variance in view direction changes. The systems and methods can leverage a plurality of panoramic images and corresponding lidar data to accurately learn a large outdoor environment to then generate view synthesis outputs and three-dimensional reconstruction outputs. Training may include the use of an exposure correction network to address lighting exposure differences between training images.
    Type: Application
    Filed: August 23, 2024
    Publication date: December 19, 2024
    Inventors: Konstantinos Rematas, Thomas Allen Funkhouser, Vittorio Carlo Ferrari, Andrew Huaming Liu, Andrea Tagliasacchi, Pratul Preeti Srinivasan, Jonathan Tilton Barron
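
Two ingredients from this abstract lend themselves to compact sketches: supervising rendered depth with lidar returns, and a learned per-image exposure correction. Minimal PyTorch versions follow; `exposure_net` and its affine-color-transform output are assumed interfaces, not taken from the filing:

```python
import torch

def lidar_depth_loss(rendered_depth, lidar_depth, valid_mask):
    """Supervise the model's expected ray-termination depth with lidar
    returns, compensating for the limited view-direction diversity of
    street-level panoramas."""
    return torch.mean(valid_mask * (rendered_depth - lidar_depth) ** 2)

def exposure_correct(rgb, exposure_code, exposure_net):
    """Map a per-image latent exposure code through a small network to a
    3x3 affine color transform, correcting exposure differences between
    training panoramas (hypothetical interface)."""
    affine = exposure_net(exposure_code).reshape(3, 3)
    return rgb @ affine.T
```
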
  • Patent number: 12106428
    Abstract: Systems and methods for view synthesis and three-dimensional reconstruction can learn an environment by utilizing a plurality of images of an environment and depth data. The use of depth data can be helpful when the quantity of images and different angles may be limited. For example, large outdoor environments can be difficult to learn due to the size, the varying image exposures, and the limited variance in view direction changes. The systems and methods can leverage a plurality of panoramic images and corresponding lidar data to accurately learn a large outdoor environment to then generate view synthesis outputs and three-dimensional reconstruction outputs. Training may include the use of an exposure correction network to address lighting exposure differences between training images.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: October 1, 2024
    Assignee: GOOGLE LLC
    Inventors: Konstantinos Rematas, Thomas Allen Funkhouser, Vittorio Carlo Ferrari, Andrew Huaming Liu, Andrea Tagliasacchi, Pratul Preeti Srinivasan, Jonathan Tilton Barron
  • Publication number: 20240320912
    Abstract: A fractional training process can be performed with training images on an instance of a machine-learned generative image model to obtain a partially trained instance of the model. A fractional optimization process can be performed with the partially trained instance on an instance of a machine-learned three-dimensional (3D) implicit representation model to obtain a partially optimized instance of the model. Based on the plurality of training images, pseudo multi-view subject images can be generated with the partially optimized instance of the 3D implicit representation model and a fully trained instance of the generative image model. The partially trained instance of the model can be trained with a set of training data. The partially optimized instance of the machine-learned 3D implicit representation model can be trained with the machine-learned multi-view image model.
    Type: Application
    Filed: March 20, 2024
    Publication date: September 26, 2024
    Inventors: Yuanzhen Li, Amit Raj, Varun Jampani, Benjamin Joseph Mildenhall, Benjamin Michael Poole, Jonathan Tilton Barron, Kfir Aberman, Michael Niemeyer, Michael Rubinstein, Nataniel Ruiz Gutierrez, Shiran Elyahu Zada, Srinivas Kaza
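
The staged recipe in this abstract (partial generator training, partial 3D optimization, pseudo multi-view generation, then full training) can be outlined as a driver loop. A speculative Python sketch in which every callable is user-supplied; the step ordering follows the abstract, but all interfaces are hypothetical:

```python
def staged_3d_personalization(images, finetune_gen, optimize_3d,
                              render_views, translate_view, frac=0.3):
    """Sketch of the staged pipeline. Assumed signatures:
    finetune_gen(images, fraction) -> generative image model
    optimize_3d(guidance, extra_data, fraction) -> 3D implicit model
    render_views(model_3d) -> list of rendered views
    translate_view(view, gen_model) -> subject image
    """
    # Stage 1: fractional training of the generative image model.
    partial_gen = finetune_gen(images, fraction=frac)
    # Stage 2: fractional optimization of the 3D implicit representation
    # against the partially trained generator.
    partial_3d = optimize_3d(guidance=partial_gen, extra_data=None,
                             fraction=frac)
    # Stage 3: render views from the partial 3D model and translate them
    # with a fully trained generator into pseudo multi-view subject images.
    full_gen = finetune_gen(images, fraction=1.0)
    pseudo_multiview = [translate_view(v, full_gen)
                        for v in render_views(partial_3d)]
    # Final stage: full 3D optimization using the pseudo multi-view data.
    return optimize_3d(guidance=full_gen, extra_data=pseudo_multiview,
                       fraction=1.0)
```
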
  • Patent number: 12100074
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Grant
    Filed: June 1, 2023
    Date of Patent: September 24, 2024
    Assignee: GOOGLE LLC
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin-Brualla, Jonathan Tilton Barron, Noha Radwan, Seyed Mohammad Mehdi Sajjadi
  • Publication number: 20240273811
    Abstract: Systems and methods for training a neural radiance field model can include the use of image patches for ground truth training. For example, the systems and methods can include generating patch renderings with a neural radiance field model, comparing the patch renderings to ground truth patches from ground truth images, and adjusting one or more parameters based on the comparison. Additionally and/or alternatively, the systems and methods can include the utilization of a flow model for mitigating and/or minimizing artifact generation.
    Type: Application
    Filed: October 24, 2022
    Publication date: August 15, 2024
    Inventors: Noha Radwan, Jonathan Tilton Barron, Benjamin Joseph Mildenhall, Seyed Mohammad Mehdi Sajjadi, Michael Niemeyer
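
Patch-based supervision, as opposed to the per-ray pixel losses typical of radiance-field training, can be sketched in a few lines: render a small patch, crop the matching ground-truth region, and compare. A minimal PyTorch sketch; `render_patch_fn` is an assumed hook into the renderer, and the flow-model regularizer mentioned in the abstract is omitted:

```python
import torch

def patch_loss(render_patch_fn, gt_images, patch_size=16, num_patches=8):
    """Compare rendered patches to matching crops of ground-truth images,
    so the loss sees local image structure rather than only independent
    pixels. Assumes each image is an (H, W, 3) tensor with H, W larger
    than patch_size."""
    loss = 0.0
    for img in gt_images[:num_patches]:
        h, w = img.shape[:2]
        y = torch.randint(0, h - patch_size, (1,)).item()
        x = torch.randint(0, w - patch_size, (1,)).item()
        gt = img[y:y + patch_size, x:x + patch_size]
        pred = render_patch_fn(y, x, patch_size)   # hypothetical renderer hook
        loss = loss + torch.mean((pred - gt) ** 2)
    return loss / num_patches
```
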
  • Publication number: 20240005590
    Abstract: Techniques of image synthesis using a neural radiance field (NeRF) includes generating a deformation model of movement experienced by a subject in a non-rigidly deforming scene. For example, when an image synthesis system uses NeRFs, the system takes as input multiple poses of subjects for training data. In contrast to conventional NeRFs, the technical solution first expresses the positions of the subjects from various perspectives in an observation frame. The technical solution then involves deriving a deformation model, i.e., a mapping between the observation frame and a canonical frame in which the subject's movements are taken into account. This mapping is accomplished using latent deformation codes for each pose that are determined using a multilayer perceptron (MLP). A NeRF is then derived from positions and cast ray directions in the canonical frame using another MLP. New poses for the subject may then be derived using the NeRF.
    Type: Application
    Filed: January 14, 2021
    Publication date: January 4, 2024
    Inventors: Ricardo Martin Brualla, Keunhong Park, Utkarsh Sinha, Sofien Bouaziz, Daniel Goldman, Jonathan Tilton Barron, Steven Maxwell Seitz
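
The two-MLP structure in this abstract, a deformation network conditioned on per-pose latent codes feeding a canonical NeRF, maps directly onto a small module. A toy PyTorch sketch; the layer sizes and the simple translational warp are illustrative simplifications of the richer warps used in practice:

```python
import torch
from torch import nn

class DeformableNeRF(nn.Module):
    """A deformation MLP warps observation-frame points into a canonical
    frame, conditioned on a per-pose latent deformation code; a canonical
    NeRF MLP is then queried at the warped points."""

    def __init__(self, num_poses, code_dim=8, pos_dim=3, hidden=128):
        super().__init__()
        self.codes = nn.Embedding(num_poses, code_dim)   # one code per pose
        self.deform = nn.Sequential(
            nn.Linear(pos_dim + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))                        # offset to canonical frame
        self.canonical = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))                        # density + RGB

    def forward(self, points, pose_ids):
        code = self.codes(pose_ids)
        offset = self.deform(torch.cat([points, code], dim=-1))
        canonical_points = points + offset               # observation -> canonical
        out = self.canonical(canonical_points)
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])
```
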
  • Publication number: 20230360182
    Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. An example method includes applying a geometry model to an input image to determine a surface orientation map indicative of a distribution of lighting on an object based on a surface geometry. The method further includes applying an environmental light estimation model to the input image to determine a direction of synthetic lighting to be applied to the input image. The method also includes applying, based on the surface orientation map and the direction of synthetic lighting, a light energy model to determine a quotient image indicative of an amount of light energy to be applied to each pixel of the input image. The method additionally includes enhancing, based on the quotient image, a portion of the input image. One or more neural networks can be trained to perform one or more of the aforementioned aspects.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 9, 2023
    Inventors: Sean Ryan Francesco Fanello, Yun-Ta Tsai, Rohit Kumar Pandey, Paul Debevec, Michael Milne, Chloe LeGendre, Jonathan Tilton Barron, Christoph Rhemann, Sofien Bouaziz, Navin Padman Sarma
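
The quotient-image idea in this abstract, a per-pixel light-energy gain multiplied into the input, has a simple Lambertian toy version. A NumPy sketch assuming a unit-normal surface orientation map and an estimated light direction; the constants and the Lambertian term are illustrative stand-ins for the learned light energy model:

```python
import numpy as np

def relight(image, normals, light_dir, ambient=0.7, strength=0.5):
    """Toy quotient-image relighting: per-pixel synthetic light energy
    from a Lambertian term (surface normals dotted with the estimated
    light direction) forms a multiplicative "quotient image" applied to
    the input. All constants are illustrative only.

    image:     (H, W, 3) float array in [0, 1]
    normals:   (H, W, 3) unit surface-orientation map (geometry model)
    light_dir: (3,) unit vector (environment-light estimation model)
    """
    lambert = np.clip(normals @ light_dir, 0.0, 1.0)   # (H, W) light energy
    quotient = ambient + strength * lambert             # per-pixel gain
    return np.clip(image * quotient[..., None], 0.0, 1.0)
```
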
  • Publication number: 20230306655
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Application
    Filed: June 1, 2023
    Publication date: September 28, 2023
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin-Brualla, Jonathan Tilton Barron, Noha Radwan, Seyed Mohammad Mehdi Sajjadi
  • Publication number: 20230281913
    Abstract: Systems and methods for view synthesis and three-dimensional reconstruction can learn an environment by utilizing a plurality of images of an environment and depth data. The use of depth data can be helpful when the quantity of images and different angles may be limited. For example, large outdoor environments can be difficult to learn due to the size, the varying image exposures, and the limited variance in view direction changes. The systems and methods can leverage a plurality of panoramic images and corresponding lidar data to accurately learn a large outdoor environment to then generate view synthesis outputs and three-dimensional reconstruction outputs. Training may include the use of an exposure correction network to address lighting exposure differences between training images.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 7, 2023
    Inventors: Konstantinos Rematas, Thomas Allen Funkhouser, Vittorio Carlo Ferrari, Andrew Huaming Liu, Andrea Tagliasacchi, Pratul Preeti Srinivasan, Jonathan Tilton Barron
  • Publication number: 20230230275
    Abstract: Provided are systems and methods that invert a trained NeRF model, which stores the structure of a scene or object, to estimate the 6D pose from an image taken with a novel view. 6D pose estimation has a wide range of applications, including visual localization and object pose estimation for robot manipulation.
    Type: Application
    Filed: November 15, 2021
    Publication date: July 20, 2023
    Inventors: Tsung-Yi Lin, Peter Raymond Florence, Yen-Chen Lin, Jonathan Tilton Barron
  • Patent number: 11704844
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: July 18, 2023
    Assignee: GOOGLE LLC
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin Brualla, Jonathan Tilton Barron, Noha Waheed Ahmed Radwan, Seyed Mohammad Mehdi Sajjadi
  • Publication number: 20230177822
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for rendering a new image that depicts a scene from a perspective of a camera at a new camera viewpoint.
    Type: Application
    Filed: December 2, 2022
    Publication date: June 8, 2023
    Inventors: Vincent Michael Casser, Henrik Kretzschmar, Matthew Justin Tancik, Sabeek Mani Pradhan, Benjamin Joseph Mildenhall, Pratul Preeti Srinivasan, Jonathan Tilton Barron
  • Publication number: 20220237834
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Application
    Filed: April 18, 2022
    Publication date: July 28, 2022
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin Brualla, Jonathan Tilton Barron, Noha Waheed Ahmed Radwan, Seyed Mohammad Mehdi Sajjadi
  • Patent number: 11308659
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: April 19, 2022
    Assignee: GOOGLE LLC
    Inventors: Daniel Christopher Duckworth, Seyed Mohammad Mehdi Sajjadi, Jonathan Tilton Barron, Noha Radwan, Alexey Dosovitskiy, Ricardo Martin-Brualla
  • Publication number: 20220036602
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Application
    Filed: July 30, 2021
    Publication date: February 3, 2022
    Inventors: Daniel Christopher Duckworth, Seyed Mohammad Mehdi Sajjadi, Jonathan Tilton Barron, Noha Waheed Ahmed Radwan, Alexey Dosovitskiy, Ricardo Martin-Brualla
  • Patent number: 10897609
    Abstract: The present disclosure relates to methods and systems that may improve and/or modify images captured using multiscopic image capture systems. In an example embodiment, burst image data is captured via a multiscopic image capture system. The burst image data may include at least one image pair. The at least one image pair is aligned based on at least one rectifying homography function. The at least one aligned image pair is warped based on a stereo disparity between the respective images of the image pair. The warped and aligned images are then stacked and a denoising algorithm is applied. Optionally, a high dynamic range algorithm may be applied to at least one output image of the aligned, warped, and denoised images.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: January 19, 2021
    Assignee: Google LLC
    Inventors: Jonathan Tilton Barron, Stephen Joseph DiVerdi, Ryan Geiss
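
The pipeline in this final abstract (rectify, warp by disparity, stack, denoise) can be approximated with standard OpenCV primitives. A sketch assuming a precomputed rectifying homography and per-pixel disparity map; the real system's alignment and denoising steps are more robust than the median used here:

```python
import numpy as np
import cv2

def denoise_burst_pair(left_burst, right_burst, homography, disparity):
    """Align a multiscopic burst and denoise by stacking.

    left_burst, right_burst: lists of (H, W, 3) frames from the two cameras
    homography: 3x3 rectifying homography mapping right frames onto left
    disparity:  (H, W) per-pixel stereo disparity in the rectified frame
    """
    h, w = left_burst[0].shape[:2]
    aligned = list(left_burst)                 # left view is the reference
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    for frame in right_burst:
        rectified = cv2.warpPerspective(frame, homography, (w, h))
        # Warp by disparity: sample each pixel from its matching column.
        warped = cv2.remap(rectified, xs - disparity.astype(np.float32), ys,
                           interpolation=cv2.INTER_LINEAR)
        aligned.append(warped)
    stack = np.stack(aligned, axis=0).astype(np.float32)
    return np.median(stack, axis=0)            # simple stack-based denoise
```
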