Patents by Inventor Thomas Allen Funkhouser

Thomas Allen Funkhouser has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240096001
    Abstract: Provided are machine learning models that generate geometry-free neural scene representations through efficient object-centric novel-view synthesis. In particular, one example aspect of the present disclosure provides a novel framework in which an encoder model (e.g., an encoder transformer network) processes one or more RGB images (with or without pose) to produce a fully latent scene representation that can be passed to a decoder model (e.g., a decoder transformer network). Given one or more target poses, the decoder model can synthesize images in a single forward pass. In some example implementations, because transformers are used rather than convolutional or MLP networks, the encoder can learn an attention model that extracts enough 3D information about a scene from a small set of images to render novel views with correct projections, parallax, occlusions, and even semantics, without explicit geometry.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 21, 2024
    Inventors: Seyed Mohammad Mehdi Sajjadi, Henning Meyer, Etienne François Régis Pot, Urs Michael Bergmann, Klaus Greff, Noha Radwan, Suhani Deepak-Ranu Vora, Mario Lučić, Daniel Christopher Duckworth, Thomas Allen Funkhouser, Andrea Tagliasacchi
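The pipeline described in the abstract above can be sketched very schematically: an encoder self-attends over patch tokens from a few input views to form a fully latent scene representation, and a decoder cross-attends into that latent set with per-ray queries for a target pose, producing colors in a single forward pass. Everything below (array sizes, single-head attention, the linear color head, the absence of positional/pose embeddings) is an illustrative assumption, not the patented architecture.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: each query attends over all keys.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
d = 16  # toy token width

# "Encoder": patch tokens from 3 posed input views (64 patches each);
# one self-attention pass stands in for the encoder transformer that
# produces a fully latent, geometry-free scene representation.
patches = rng.normal(size=(3 * 64, d))
scene_latent = attention(patches, patches, patches)

# "Decoder": query the latent set with embeddings of target-view rays;
# one cross-attention pass plus a linear head predicts per-ray colors
# for a 32x32 target image in a single forward pass.
ray_queries = rng.normal(size=(32 * 32, d))
W_rgb = rng.normal(size=(d, 3)) * 0.1            # toy color head
features = attention(ray_queries, scene_latent, scene_latent)
rgb = 1.0 / (1.0 + np.exp(-(features @ W_rgb)))  # sigmoid to [0, 1]

print(rgb.shape)  # (1024, 3)
```

The point of the sketch is only the data flow: no explicit geometry or per-scene optimization appears anywhere; all 3D reasoning would have to be learned inside the attention weights.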
  • Publication number: 20230281913
    Abstract: Systems and methods for view synthesis and three-dimensional reconstruction can learn an environment by utilizing a plurality of images of the environment and depth data. The use of depth data can be helpful when the quantity of images and different angles may be limited. For example, large outdoor environments can be difficult to learn due to their size, varying image exposures, and limited variance in view-direction changes. The systems and methods can leverage a plurality of panoramic images and corresponding lidar data to accurately learn a large outdoor environment and then generate view synthesis outputs and three-dimensional reconstruction outputs. Training may include the use of an exposure correction network to address lighting exposure differences between training images.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 7, 2023
    Inventors: Konstantinos Rematas, Thomas Allen Funkhouser, Vittorio Carlo Ferrari, Andrew Huaming Liu, Andrea Tagliasacchi, Pratul Preeti Srinivasan, Jonathan Tilton Barron
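The training signal described in this abstract can be illustrated with a toy objective: volume-render one ray to get a color and an expected depth, apply a per-image exposure correction before the photometric loss, and add a lidar depth supervision term. The compositing math is standard volume rendering; the scalar exposure gain, the specific loss weighting, and all toy values are assumptions for illustration, not the claimed system.

```python
import numpy as np

def composite(sigmas, colors, deltas):
    # Standard alpha compositing along one ray: densities -> weights,
    # then weighted sums give the rendered color and expected depth.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], (1.0 - alphas)[:-1]]))
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * np.cumsum(deltas)).sum()
    return rgb, depth

rng = np.random.default_rng(1)
n = 64                                    # samples along one ray
sigmas = rng.uniform(0.0, 2.0, size=n)    # toy predicted densities
colors = rng.uniform(0.0, 1.0, size=(n, 3))
deltas = np.full(n, 0.05)                 # uniform step size

rgb, depth = composite(sigmas, colors, deltas)

# Per-image exposure correction (here a single scalar gain, standing in
# for the exposure correction network) applied before the photometric
# loss, plus a lidar depth term that supervises the rendered depth.
exposure_gain = 1.2
target_rgb = rng.uniform(0.0, 1.0, size=3)
lidar_depth = 1.5
loss = np.sum((exposure_gain * rgb - target_rgb) ** 2) \
       + (depth - lidar_depth) ** 2
print(float(loss) >= 0.0)
```

The depth term is what lets the method cope with limited view-direction variance: even where photometric parallax is weak, lidar pins down the geometry directly.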