Patents by Inventor Seyed Mohammad Mehdi Sajjadi

Seyed Mohammad Mehdi Sajjadi has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Brief illustrative code sketches of the main techniques described in these filings appear after the listing.

  • Publication number: 20240119697
    Abstract: Example embodiments of the present disclosure provide an example computer-implemented method for constructing a three-dimensional semantic segmentation of a scene from two-dimensional inputs. The example method includes obtaining, by a computing system comprising one or more processors, an image set comprising one or more views of a subject scene. The example method includes generating, by the computing system and based at least in part on the image set, a scene representation describing the subject scene in three dimensions. The example method includes generating, by the computing system and using a machine-learned semantic segmentation model framework, a multidimensional field of probability distributions over semantic categories, the multidimensional field defined over the three dimensions of the subject scene. The example method includes outputting, by the computing system, classification data for at least one location in the subject scene.
    Type: Application
    Filed: October 10, 2022
    Publication date: April 11, 2024
    Inventors: Daniel Christopher Duckworth, Suhani Deepak-Ranu Vora, Noha Radwan, Klaus Greff, Henning Meyer, Kyle Adam Genova, Seyed Mohammad Mehdi Sajjadi, Etienne François Régis Pot, Andrea Tagliasacchi
  • Publication number: 20240096001
    Abstract: Provided are machine learning models that generate geometry-free neural scene representations through efficient object-centric novel-view synthesis. In particular, one example aspect of the present disclosure provides a novel framework in which an encoder model (e.g., an encoder transformer network) processes one or more RGB images (with or without pose) to produce a fully latent scene representation that can be passed to a decoder model (e.g., a decoder transformer network). Given one or more target poses, the decoder model can synthesize images in a single forward pass. In some example implementations, because transformers are used rather than convolutional or MLP networks, the encoder can learn an attention model that extracts enough 3D information about a scene from a small set of images to render novel views with correct projections, parallax, occlusions, and even semantics, without explicit geometry.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 21, 2024
    Inventors: Seyed Mohammad Mehdi Sajjadi, Henning Meyer, Etienne François Régis Pot, Urs Michael Bergmann, Klaus Greff, Noha Radwan, Suhani Deepak-Ranu Vora, Mario Lučić, Daniel Christopher Duckworth, Thomas Allen Funkhouser, Andrea Tagliasacchi
  • Publication number: 20230306655
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Application
    Filed: June 1, 2023
    Publication date: September 28, 2023
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin-Brualla, Jonathan Tilton Barron, Noha Radwan, Seyed Mohammad Mehdi Sajjadi
  • Patent number: 11704844
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: July 18, 2023
    Assignee: Google LLC
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin Brualla, Jonathan Tilton Barron, Noha Waheed Ahmed Radwan, Seyed Mohammad Mehdi Sajjadi
  • Publication number: 20220237834
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Application
    Filed: April 18, 2022
    Publication date: July 28, 2022
    Inventors: Daniel Christopher Duckworth, Alexey Dosovitskiy, Ricardo Martin Brualla, Jonathan Tilton Barron, Noha Waheed Ahmed Radwan, Seyed Mohammad Mehdi Sajjadi
  • Patent number: 11308659
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: April 19, 2022
    Assignee: Google LLC
    Inventors: Daniel Christopher Duckworth, Seyed Mohammad Mehdi Sajjadi, Jonathan Tilton Barron, Noha Radwan, Alexey Dosovitskiy, Ricardo Martin-Brualla
  • Publication number: 20220036602
    Abstract: Provided are systems and methods for synthesizing novel views of complex scenes (e.g., outdoor scenes). In some implementations, the systems and methods can include or use machine-learned models that are capable of learning from unstructured and/or unconstrained collections of imagery such as, for example, “in the wild” photographs. In particular, example implementations of the present disclosure can learn a volumetric scene density and radiance represented by a machine-learned model such as one or more multilayer perceptrons (MLPs).
    Type: Application
    Filed: July 30, 2021
    Publication date: February 3, 2022
    Inventors: Daniel Christopher Duckworth, Seyed Mohammad Mehdi Sajjadi, Jonathan Tilton Barron, Noha Waheed Ahmed Radwan, Alexey Dosovitskiy, Ricardo Martin-Brualla
  • Patent number: 10783611
    Abstract: The present disclosure provides systems and methods to increase resolution of imagery. In one example embodiment, a computer-implemented method includes obtaining a current low-resolution image frame. The method includes obtaining a previous estimated high-resolution image frame, the previous estimated high-resolution frame being a high-resolution estimate of a previous low-resolution image frame. The method includes warping the previous estimated high-resolution image frame based on the current low-resolution image frame. The method includes inputting the warped previous estimated high-resolution image frame and the current low-resolution image frame into a machine-learned frame estimation model. The method includes receiving a current estimated high-resolution image frame as an output of the machine-learned frame estimation model, the current estimated high-resolution image frame being a high-resolution estimate of the current low-resolution image frame.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: September 22, 2020
    Assignee: Google LLC
    Inventors: Raviteja Vemulapalli, Matthew Brown, Seyed Mohammad Mehdi Sajjadi
  • Publication number: 20190206026
    Abstract: The present disclosure provides systems and methods to increase resolution of imagery. In one example embodiment, a computer-implemented method includes obtaining a current low-resolution image frame. The method includes obtaining a previous estimated high-resolution image frame, the previous estimated high-resolution frame being a high-resolution estimate of a previous low-resolution image frame. The method includes warping the previous estimated high-resolution image frame based on the current low-resolution image frame. The method includes inputting the warped previous estimated high-resolution image frame and the current low-resolution image frame into a machine-learned frame estimation model. The method includes receiving a current estimated high-resolution image frame as an output of the machine-learned frame estimation model, the current estimated high-resolution image frame being a high-resolution estimate of the current low-resolution image frame.
    Type: Application
    Filed: January 2, 2018
    Publication date: July 4, 2019
    Inventors: Raviteja Vemulapalli, Matthew Brown, Seyed Mohammad Mehdi Sajjadi
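
The sketches below illustrate, in broad strokes, the kinds of techniques the abstracts above describe. Module names, layer sizes, and architectures are assumptions chosen for brevity, not the claimed methods. The first sketch relates to publication 20240119697: a small MLP stands in for the machine-learned semantic segmentation model framework, mapping 3D locations (conditioned on hypothetical scene features) to probability distributions over semantic categories, from which classification data can be read off per location.

```python
# Minimal sketch, assuming a hypothetical SemanticField module: a small MLP
# maps a 3D location plus a scene feature vector to a probability distribution
# over semantic categories. Illustrative only, not the patented method.
import torch
import torch.nn as nn

class SemanticField(nn.Module):
    def __init__(self, num_classes: int = 10, feature_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),   # logits over semantic categories
        )

    def forward(self, xyz: torch.Tensor, scene_features: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) query locations; scene_features: (N, feature_dim) features
        # assumed to come from a 3D scene representation built from the image set.
        logits = self.mlp(torch.cat([xyz, scene_features], dim=-1))
        return torch.softmax(logits, dim=-1)   # per-location class distribution

field = SemanticField()
points = torch.rand(4, 3)          # hypothetical 3D query locations
features = torch.randn(4, 32)      # hypothetical scene features at those locations
probs = field(points, features)    # (4, 10) field of probability distributions
labels = probs.argmax(dim=-1)      # classification data for each location
```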
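
For publication 20240096001, a minimal sketch of a geometry-free, set-latent approach: an encoder transformer turns image patch tokens into a latent scene set, and a decoder cross-attends into that set for each target ray to synthesize colors in a single forward pass. The patch and ray parameterizations and all layer sizes here are illustrative assumptions.

```python
# Minimal sketch of an encoder/decoder transformer for novel-view synthesis.
# Shapes and module names are assumptions, not the filed architecture.
import torch
import torch.nn as nn

class TinySRT(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.patch_embed = nn.Linear(3 * 8 * 8, d_model)    # 8x8 RGB patches -> tokens
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.ray_embed = nn.Linear(6, d_model)               # ray origin + direction
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.to_rgb = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                    nn.Linear(d_model, 3), nn.Sigmoid())

    def forward(self, patches: torch.Tensor, target_rays: torch.Tensor) -> torch.Tensor:
        # patches: (B, num_patches, 3*8*8) flattened patches from the input views
        # target_rays: (B, num_rays, 6) rays of the novel view to synthesize
        scene_set = self.encoder(self.patch_embed(patches))  # set-latent scene representation
        queries = self.ray_embed(target_rays)
        attended, _ = self.cross_attn(queries, scene_set, scene_set)
        return self.to_rgb(attended)                          # (B, num_rays, 3) colors

model = TinySRT()
patches = torch.rand(1, 16, 3 * 8 * 8)   # hypothetical tokens from a few input views
rays = torch.rand(1, 32, 6)              # hypothetical rays for a target pose
colors = model(patches, rays)            # one forward pass per novel view
```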
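
For publication 20230306655, patent 11704844, publication 20220237834, patent 11308659, and publication 20220036602, a minimal sketch of a volumetric density-and-radiance model rendered by alpha compositing along camera rays. It deliberately omits the handling of unstructured, "in the wild" photo collections that the filings focus on; the network and the sampling scheme are illustrative assumptions.

```python
# Minimal sketch: an MLP gives volume density and radiance at 3D points, and
# colors are alpha-composited along each ray. Illustrative only.
import torch
import torch.nn as nn

class RadianceField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),     # (density, r, g, b)
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        density = torch.relu(out[..., :1])     # non-negative volume density
        rgb = torch.sigmoid(out[..., 1:])      # radiance in [0, 1]
        return density, rgb

def render_ray(field, origin, direction, near=0.0, far=1.0, n_samples=32):
    # Sample points along the ray and alpha-composite their colors.
    t = torch.linspace(near, far, n_samples).unsqueeze(-1)          # (S, 1)
    points = origin + t * direction                                  # (S, 3)
    density, rgb = field(points, direction.expand_as(points))
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-density.squeeze(-1) * delta)            # (S,)
    # Transmittance: product of (1 - alpha) over all earlier samples.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)                  # composited color

field = RadianceField()
color = render_ray(field, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```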
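
For patent 10783611 and publication 20190206026, a minimal recurrent super-resolution sketch: the previous high-resolution estimate is warped (here with a placeholder flow field; in practice the warp would be derived from the frames) and fed, together with the current low-resolution frame, into a small convolutional network standing in for the machine-learned frame estimation model, which outputs the current high-resolution estimate.

```python
# Minimal sketch: warp the previous HR estimate, fold it into the LR grid,
# and predict the new HR frame with a small conv net. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEstimator(nn.Module):
    def __init__(self, scale: int = 4):
        super().__init__()
        self.scale = scale
        # Input: current LR frame (3 ch) + previous HR estimate folded into the LR grid.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 * scale * scale, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),                  # back up to HR resolution
        )

    def forward(self, lr_frame, warped_prev_hr):
        # Space-to-depth so the warped HR estimate aligns with the LR grid.
        prev_lr_grid = F.pixel_unshuffle(warped_prev_hr, self.scale)
        return self.net(torch.cat([lr_frame, prev_lr_grid], dim=1))

def warp(prev_hr, flow):
    # Warp the previous HR estimate with a dense flow field via grid_sample.
    b, _, h, w = prev_hr.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0) + flow     # (1, H, W, 2)
    return F.grid_sample(prev_hr, grid.expand(b, -1, -1, -1), align_corners=True)

model = FrameEstimator(scale=4)
lr = torch.rand(1, 3, 16, 16)            # current low-resolution frame
prev_hr = torch.rand(1, 3, 64, 64)       # previous estimated high-resolution frame
flow = torch.zeros(1, 64, 64, 2)         # placeholder flow (identity warp here)
hr = model(lr, warp(prev_hr, flow))      # current estimated high-resolution frame
```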