Patents by Inventor Oisin MAC AODHA

Oisin MAC AODHA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230196690
    Abstract: A scene reconstruction model is disclosed that outputs a heightfield for a series of input images. The model, for each input image, predicts a depth map and extracts a feature map. The model builds a 3D model utilizing the predicted depth maps and camera poses for the images. The model raycasts the 3D model to determine a raw heightfield for the scene. The model utilizes the raw heightfield to sample features from the feature maps corresponding to positions on the heightfield. The model aggregates the sampled features into an aggregate feature map. The model regresses a refined heightfield based on the aggregate feature map. The model determines the final heightfield based on a combination of the raw heightfield and the refined heightfield. With the final heightfield, a client device may generate virtual content augmented on real-world images captured by the client device.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 22, 2023
    Inventors: James Watson, Sara Alexandra Gomes Vicente, Oisin Mac Aodha, Clément Godard, Gabriel J. Brostow, Michael David Firman
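    The abstract above describes raycasting a reconstructed 3D model into a raw heightfield, sampling features from the per-image feature maps at heightfield positions, aggregating them, and regressing a refined heightfield that is combined with the raw one. The snippet below is a minimal PyTorch sketch of that sample-aggregate-refine step only; the tensor shapes, the mean aggregation, the small convolutional refinement head, and the additive combination of raw and refined heights are illustrative assumptions rather than the claimed implementation.

      # Hedged sketch of heightfield feature sampling and refinement (not the patented code).
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class HeightfieldRefiner(nn.Module):
          """Toy head: raw heightfield + aggregated image features -> final heightfield."""
          def __init__(self, feat_dim=32):
              super().__init__()
              self.head = nn.Sequential(
                  nn.Conv2d(feat_dim + 1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1),
              )

          def forward(self, raw_height, feats, projections, img_size):
              """raw_height: (B,1,H,W) heights over an x-z ground grid spanning [-1,1].
              feats: (B,N,C,h,w) feature maps, one per input image.
              projections: (B,N,3,4) matrices mapping homogeneous world points to pixels.
              img_size: (width, height) of the original images in pixels.
              """
              B, _, H, W = raw_height.shape
              N = feats.shape[1]
              zs, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                      torch.linspace(-1, 1, W), indexing="ij")
              # 3D point for every heightfield cell: (x, height, z, 1) in homogeneous coords.
              pts = torch.stack([xs.expand(B, H, W), raw_height[:, 0],
                                 zs.expand(B, H, W), torch.ones(B, H, W)], dim=1)
              sampled = []
              for n in range(N):
                  uvw = torch.einsum("bij,bjhw->bihw", projections[:, n], pts)  # project
                  uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)                 # pixel coords
                  uv = 2 * uv / torch.tensor(img_size, dtype=torch.float32).view(1, 2, 1, 1) - 1
                  sampled.append(F.grid_sample(feats[:, n], uv.permute(0, 2, 3, 1),
                                               align_corners=False))
              agg = torch.stack(sampled).mean(dim=0)                     # aggregate features
              residual = self.head(torch.cat([agg, raw_height], dim=1))  # refined component
              return raw_height + residual                               # final heightfield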
  • Publication number: 20220327730
    Abstract: A method for training a first neural network to detect the viewpoint of an object that is visible in an image and belongs to a given category of object when the image is input to the first neural network, the method including: providing a dataset of pairs of images under different viewpoints, providing a second neural network configured to deliver appearance information of an object, providing a third neural network configured to deliver a synthetic image of an object of the category using appearance information and a viewpoint, and jointly training the first neural network, the second neural network, and the third neural network.
    Type: Application
    Filed: April 11, 2022
    Publication date: October 13, 2022
    Inventors: Sven Meier, Octave Mariotti, Hakan Bilen, Oisin Mac Aodha
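    The joint training summarised above couples a viewpoint network with an appearance encoder and an image generator, so that reconstructing one view of an object from the other view's appearance supervises all three networks at once. The snippet below sketches one plausible training step under that reading; the placeholder fully connected architectures, the 3-dimensional viewpoint vector, the 128-dimensional appearance code, and the plain MSE reconstruction loss are assumptions for illustration only.

      # Hedged sketch of jointly training viewpoint, appearance, and synthesis networks.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      # Stand-in architectures; the real networks are not specified in the abstract.
      viewpoint_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 3))     # first net
      appearance_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # second net
      generator = nn.Sequential(nn.Linear(128 + 3, 3 * 64 * 64), nn.Sigmoid())   # third net

      params = (list(viewpoint_net.parameters()) + list(appearance_net.parameters())
                + list(generator.parameters()))
      optimizer = torch.optim.Adam(params, lr=1e-4)

      def train_step(img_a, img_b):
          """img_a, img_b: (B,3,64,64) views of the same object under different viewpoints."""
          viewpoint = viewpoint_net(img_a)       # viewpoint predicted from the first view
          appearance = appearance_net(img_b)     # appearance information from the other view
          recon = generator(torch.cat([appearance, viewpoint], dim=1)).view_as(img_a)
          loss = F.mse_loss(recon, img_a)        # reconstruction supervises all three nets
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
          return loss.item()

      # Example call with random tensors standing in for a real image-pair dataset.
      train_step(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))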
  • Publication number: 20220189049
    Abstract: A multi-frame depth estimation model is disclosed. The model is trained and configured to receive an input image and an additional image. The model outputs a depth map for the input image based on the input image and the additional image. The model may extract a feature map for the input image and an additional feature map for the additional image. For each of a plurality of depth planes, the model warps the feature map to the depth plane based on relative pose between the input image and the additional image, the depth plane, and camera intrinsics. The model builds a cost volume from the warped feature maps for the plurality of depth planes. A decoder of the model inputs the cost volume and the input image to output the depth map.
    Type: Application
    Filed: December 8, 2021
    Publication date: June 16, 2022
    Inventors: James Watson, Oisin Mac Aodha, Victor Adrian Prisacariu, Gabriel J. Brostow, Michael David Firman
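    The cost volume described above is built by a plane sweep: for each candidate depth plane, the additional image's features are warped into the input view using the relative pose and camera intrinsics, and the discrepancy with the input image's features forms one slice of the volume. Below is a hedged PyTorch sketch of that step; the L1 feature difference, the pixel-grid conventions, and the function signature are assumptions, and the decoder that turns the cost volume and input image into a depth map is omitted.

      # Hedged sketch of a plane-sweep cost volume (decoder omitted).
      import torch
      import torch.nn.functional as F

      def build_cost_volume(feat_ref, feat_src, K, K_inv, T_src_ref, depth_planes):
          """feat_ref, feat_src: (B,C,H,W) features of the input and additional image.
          K, K_inv: (B,3,3) camera intrinsics and inverse.
          T_src_ref: (B,4,4) relative pose taking reference-camera points to the source camera.
          depth_planes: iterable of candidate depths.
          """
          B, C, H, W = feat_ref.shape
          ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                                  torch.arange(W, dtype=torch.float32), indexing="ij")
          pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(1, 3, -1)  # (1,3,HW)
          rays = K_inv @ pix                                        # back-projected pixel rays
          costs = []
          for d in depth_planes:
              pts = torch.cat([rays * d, torch.ones(B, 1, H * W)], dim=1)  # points at depth d
              uvw = (K @ T_src_ref[:, :3]) @ pts                    # project into source image
              uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)
              u = 2 * uv[:, 0] / (W - 1) - 1                        # normalise to [-1,1]
              v = 2 * uv[:, 1] / (H - 1) - 1
              grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
              warped = F.grid_sample(feat_src, grid, align_corners=True)
              costs.append((feat_ref - warped).abs().mean(dim=1))   # per-plane matching cost
          return torch.stack(costs, dim=1)                          # cost volume (B,D,H,W)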
  • Publication number: 20210352261
    Abstract: A computer system generates stereo image data from monocular images. The system generates depth maps for single images using a monocular depth estimation method. The system converts the depth maps to disparity maps and uses the disparity maps to generate additional images forming stereo pairs with the monocular images. The stereo pairs can be used to form a stereo image training data set for training various models, including depth estimation models or stereo matching models.
    Type: Application
    Filed: May 11, 2021
    Publication date: November 11, 2021
    Inventors: James Watson, Oisin Mac Aodha, Daniyar Turmukhambetov, Gabriel J. Brostow, Michael David Firman
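    Converting each predicted depth map to a disparity map and warping the monocular image by that disparity yields the second view of a synthetic stereo pair, which is the core of the abstract above. The sketch below shows a deliberately naive version of that conversion and forward warp; the pinhole disparity formula (focal * baseline / depth), the nearest-pixel scatter, and the absence of occlusion or hole handling are simplifying assumptions, not the patented procedure.

      # Hedged sketch: depth -> disparity -> naive forward warp to a synthetic right view.
      import torch

      def synthesize_right_view(left_img, left_depth, focal, baseline):
          """left_img: (B,3,H,W) monocular image; left_depth: (B,1,H,W) depth in metres.
          focal is in pixels and baseline in metres (both assumed values)."""
          B, _, H, W = left_img.shape
          disparity = focal * baseline / left_depth.clamp(min=1e-3)   # depth -> disparity (px)
          right = torch.zeros_like(left_img)
          xs = torch.arange(W).view(1, 1, 1, W).expand(B, 1, H, W)
          new_x = (xs - disparity).round().long()                     # pixels shift leftwards
          valid = (new_x >= 0) & (new_x < W)
          b_idx, _, y_idx, x_idx = torch.nonzero(valid, as_tuple=True)
          # Nearest-pixel scatter; overlapping writes and holes are ignored in this sketch.
          right[b_idx, :, y_idx, new_x[valid]] = left_img[b_idx, :, y_idx, x_idx]
          return right, disparity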
  • Publication number: 20210314550
    Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images is acquired and input into a depth model to extract a depth map for each of the plurality of images based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the plurality of parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
    Type: Application
    Filed: June 22, 2021
    Publication date: October 7, 2021
    Inventors: Clément Godard, Oisin Mac Aodha, Michael Firman, Gabriel J. Brostow
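    The loss described above compares synthetic frames, generated from the predicted depth maps and poses, against the input images while accounting for occlusion and motion. The abstract does not give the loss in closed form, so the sketch below follows a common self-supervised recipe: a per-pixel minimum over the reprojected frames, plus identity-reprojection terms that down-weight pixels showing no apparent motion. The plain L1 photometric term and the single-scale formulation are assumptions for illustration, and the warping that produces the synthetic frames is omitted.

      # Hedged sketch of an occlusion- and motion-aware reprojection loss (one reading only).
      import torch

      def photometric_error(pred, target):
          """Per-pixel L1 error; a real loss may also mix in an SSIM term."""
          return (pred - target).abs().mean(dim=1, keepdim=True)     # (B,1,H,W)

      def occlusion_motion_aware_loss(target, synthesized, sources):
          """target: (B,3,H,W) input frame.
          synthesized: list of (B,3,H,W) frames warped into the target view via depth + pose.
          sources: list of the corresponding un-warped neighbouring frames.
          """
          # Per-pixel minimum over synthesized frames ignores pixels occluded in one source.
          reproj = torch.cat([photometric_error(s, target) for s in synthesized], dim=1)
          # Identity (un-warped) errors mask pixels that do not appear to move, e.g. static
          # frames or objects moving with the camera.
          identity = torch.cat([photometric_error(s, target) for s in sources], dim=1)
          combined, _ = torch.cat([reproj, identity + 1e-7], dim=1).min(dim=1)
          return combined.mean()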
  • Patent number: 11082681
    Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images is acquired and input into a depth model to extract a depth map for each of the plurality of images based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the plurality of parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: August 3, 2021
    Assignee: Niantic, Inc.
    Inventors: Clément Godard, Oisin Mac Aodha, Michael Firman, Gabriel J. Brostow
  • Publication number: 20190356905
    Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images is acquired and input into a depth model to extract a depth map for each of the plurality of images based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the plurality of parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
    Type: Application
    Filed: May 16, 2019
    Publication date: November 21, 2019
    Inventors: Clément Godard, Oisin Mac Aodha, Michael Firman, Gabriel J. Brostow
  • Publication number: 20190213481
    Abstract: Systems and methods are described for predicting depth from colour image data using a statistical model such as a convolutional neural network (CNN). The model is trained on binocular stereo pairs of images, enabling depth data to be predicted from a single source colour image. The model is trained to predict, for each image of an input binocular stereo pair, corresponding disparity values that enable reconstruction of the other image when applied to the image. The model is updated based on a cost function that enforces consistency between the predicted disparity values for each image in the stereo pair.
    Type: Application
    Filed: September 12, 2017
    Publication date: July 11, 2019
    Inventors: Clément GODARD, Oisin MAC AODHA, Gabriel BROSTOW
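    The cost function described above enforces consistency between the disparities predicted for the two images of a stereo pair. The sketch below shows one common way to express such a left-right consistency term: the right-view disparity map is warped into the left view using the left disparities and compared against them. The horizontal sampling convention, the sign of the shift, and the plain L1 penalty are assumptions for illustration rather than the patented formulation.

      # Hedged sketch of a left-right disparity consistency term.
      import torch
      import torch.nn.functional as F

      def warp_horizontally(img, disp):
          """Resample `img` along x by `disp` pixels. img: (B,C,H,W), disp: (B,1,H,W)."""
          B, _, H, W = img.shape
          ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                                  torch.arange(W, dtype=torch.float32), indexing="ij")
          x_shifted = xs.unsqueeze(0) + disp[:, 0]                   # shifted sample locations
          grid = torch.stack([2 * x_shifted / (W - 1) - 1,
                              2 * ys.unsqueeze(0).expand(B, H, W) / (H - 1) - 1], dim=-1)
          return F.grid_sample(img, grid, align_corners=True)

      def left_right_consistency_loss(disp_left, disp_right):
          """Penalise disagreement between the left disparities and the right disparity
          map warped into the left view (assumed sign convention)."""
          right_to_left = warp_horizontally(disp_right, -disp_left)
          return (disp_left - right_to_left).abs().mean()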