Patents by Inventor Clément GODARD
Clément GODARD has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240340400
Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images are acquired and input into a depth model to extract a depth map for each image based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
Type: Application
Filed: April 15, 2024
Publication date: October 10, 2024
Inventors: Clément Godard, Oisin Mac Aodha, Michael Firman, Gabriel J. Brostow
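The "occlusion and motion aware" loss in this family of filings can be read as taking, for each pixel, the minimum photometric error over the synthetic frames, so a pixel only needs to be explained well by one synthesized view and occluded regions are not penalized. A minimal NumPy sketch under that reading (the function name and array shapes are illustrative, not taken from the patent):

```python
import numpy as np

def min_reprojection_loss(target, synthetic_frames):
    """Photometric loss robust to occlusion: each pixel takes the minimum
    error over all synthetic (reprojected) frames, so a pixel only needs
    to be explained well by one source view."""
    # per-frame, per-pixel absolute error, averaged over colour channels
    errors = [np.abs(target - s).mean(axis=-1) for s in synthetic_frames]
    per_pixel = np.min(np.stack(errors, axis=0), axis=0)  # min over frames
    return float(per_pixel.mean())
```

In a training loop this scalar would be backpropagated to adjust the depth-model parameters; here it only illustrates the per-pixel minimum.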
-
Patent number: 11991342
Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images are acquired and input into a depth model to extract a depth map for each image based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
Type: Grant
Filed: June 22, 2021
Date of Patent: May 21, 2024
Assignee: NIANTIC, INC.
Inventors: Clément Godard, Oisin MacAodha, Michael Firman, Gabriel J. Brostow
-
Publication number: 20230421985
Abstract: A reference image and recorded sound of an environment of a client device are obtained. The recorded sound may be captured by a microphone of the client device in a period of time after generation of a localization sound by the client device. The location of the client device in the environment may be determined using the reference image and the recorded sound.
Type: Application
Filed: June 22, 2023
Publication date: December 28, 2023
Inventors: Karren Dai Yang, Michael David Firman, Eric Brachmann, Clément Godard
-
Publication number: 20230360241
Abstract: A depth estimation module may receive a reference image and a set of source images of an environment. The depth module may receive image features of the reference image and the set of source images. The depth module may generate a 4D feature volume that includes the image features and metadata associated with the reference image and set of source images. The image features and the metadata may be arranged in the feature volume based on relative pose distances between the reference image and the set of source images. The depth module may reduce the 4D feature volume to generate a 3D cost volume. The depth module may apply a depth estimation model to the 3D cost volume and data based on the reference image to generate a two-dimensional (2D) depth map for the reference image.
Type: Application
Filed: May 5, 2023
Publication date: November 9, 2023
Inventors: Mohamed Sayed, John Gibson, James Watson, Victor Adrian Prisacariu, Michael David Firman, Clément Godard
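The reduction from a 4D feature volume to a 3D cost volume can be pictured as scoring reference features against source features warped to a set of depth hypotheses, after which the best-scoring plane per pixel gives the 2D depth map. A toy NumPy sketch (the shapes and the dot-product score are assumptions for illustration; the filing does not specify this reduction):

```python
import numpy as np

def reduce_to_cost_volume(ref_feats, warped_src_feats):
    """Collapse a 4D feature volume (D depth planes x C channels x H x W)
    into a 3D cost volume (D x H x W) by scoring how well the warped
    source features match the reference features at each hypothesis."""
    # ref_feats: (C, H, W); warped_src_feats: (D, C, H, W)
    return (ref_feats[None] * warped_src_feats).sum(axis=1)

def depth_from_cost(cost, depth_values):
    """Pick, per pixel, the depth hypothesis with the highest matching
    score, yielding a 2D depth map (H x W)."""
    return depth_values[cost.argmax(axis=0)]
```

A learned depth estimation model would replace the bare argmax, but the data flow (4D volume, 3D cost volume, 2D map) is the same.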
-
Publication number: 20230239575
Abstract: In some examples, a computing device receives, from an unmanned aerial vehicle (UAV), a first image from a first camera on the UAV and a plurality of second images from a plurality of second cameras on the UAV. The plurality of second cameras may be positioned on the UAV for providing a plurality of different fields of view in a plurality of different directions around the UAV. Further, the first camera has a longer focal length than the second cameras. The computing device presents, on a display, a composite image including at least a portion of the first image within a merged image generated from the plurality of second images. The presented composite image enables a user to at least one of: zoom out from the first image to the merged image, or zoom in from the merged image to the first image.
Type: Application
Filed: March 17, 2023
Publication date: July 27, 2023
Inventors: Peter Benjamin HENRY, Hayk MARTIROSYAN, Abraham Galton BACHRACH, Clement GODARD, Adam Parker BRY, Ryan David KENNEDY
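The composite presentation above amounts to registering the narrow, long-focal-length image inside the merged wide-angle image so the display can move continuously between the two. A simplified sketch of the overlay step (the placement is assumed precomputed; a real system would warp and blend rather than paste):

```python
import numpy as np

def composite_zoom_image(wide, tele, top_left):
    """Place the telephoto (narrow field-of-view) image inside the merged
    wide-angle image at its registered location, producing one composite
    the display can zoom into or out of."""
    out = wide.copy()
    y, x = top_left
    h, w = tele.shape[:2]
    out[y:y + h, x:x + w] = tele
    return out
```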
-
Publication number: 20230196690
Abstract: A scene reconstruction model is disclosed that outputs a heightfield for a series of input images. The model, for each input image, predicts a depth map and extracts a feature map. The model builds a 3D model utilizing the predicted depth maps and camera poses for the images. The model raycasts the 3D model to determine a raw heightfield for the scene. The model utilizes the raw heightfield to sample features from the feature maps corresponding to positions on the heightfield. The model aggregates the sampled features into an aggregate feature map. The model regresses a refined heightfield based on the aggregate feature map. The model determines the final heightfield based on a combination of the raw heightfield and the refined heightfield. With the final heightfield, a client device may generate virtual content overlaid on real-world images captured by the client device.
Type: Application
Filed: December 14, 2022
Publication date: June 22, 2023
Inventors: James Watson, Sara Alexandra Gomes Vicente, Oisin Mac Aodha, Clément Godard, Gabriel J. Brostow, Michael David Firman
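The raw-heightfield step can be pictured as casting vertical rays through the reconstructed 3D model and keeping the first surface each ray hits. A toy sketch with a binary occupancy grid standing in for the 3D model (the filing's actual model representation and raycasting details are not specified here):

```python
import numpy as np

def raycast_heightfield(occupancy, voxel_size=1.0):
    """Cast a vertical ray down each (row, col) column of a (Z, H, W)
    occupancy grid and record the height of the topmost occupied voxel;
    columns with no occupied voxel get height 0."""
    num_z = occupancy.shape[0]
    # scanning from the top of the grid down, find the first occupied voxel
    top = (num_z - 1) - np.argmax(occupancy[::-1], axis=0)
    hit = occupancy.any(axis=0)
    return np.where(hit, (top + 1) * voxel_size, 0.0)
```

The resulting raw heightfield would then be refined by the learned regression stage the abstract describes.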
-
Patent number: 11611700
Abstract: In some examples, an unmanned aerial vehicle (UAV) may control a position of a first camera to cause the first camera to capture a first image of a target. The UAV may receive a plurality of second images from a plurality of second cameras, the plurality of second cameras positioned on the UAV for providing a plurality of different fields of view in a plurality of different directions around the UAV, the first camera having a longer focal length than the second cameras. The UAV may combine at least some of the plurality of second images to generate a composite image corresponding to the first image and having a wider-angle field of view than the first image. The UAV may send the first image and the composite image to a computing device.
Type: Grant
Filed: July 12, 2021
Date of Patent: March 21, 2023
Assignee: SKYDIO, INC.
Inventors: Peter Benjamin Henry, Hayk Martirosyan, Abraham Galton Bachrach, Clement Godard, Adam Parker Bry, Ryan David Kennedy
-
Publication number: 20220014675
Abstract: In some examples, an unmanned aerial vehicle (UAV) may control a position of a first camera to cause the first camera to capture a first image of a target. The UAV may receive a plurality of second images from a plurality of second cameras, the plurality of second cameras positioned on the UAV for providing a plurality of different fields of view in a plurality of different directions around the UAV, the first camera having a longer focal length than the second cameras. The UAV may combine at least some of the plurality of second images to generate a composite image corresponding to the first image and having a wider-angle field of view than the first image. The UAV may send the first image and the composite image to a computing device.
Type: Application
Filed: July 12, 2021
Publication date: January 13, 2022
Inventors: Peter Benjamin HENRY, Hayk MARTIROSYAN, Abraham Galton BACHRACH, Clement GODARD, Adam Parker BRY, Ryan David KENNEDY
-
Publication number: 20210314550
Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images are acquired and input into a depth model to extract a depth map for each image based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
Type: Application
Filed: June 22, 2021
Publication date: October 7, 2021
Inventors: Clément Godard, Oisin Mac Aodha, Michael Firman, Gabriel J. Brostow
-
Patent number: 11100401
Abstract: Systems and methods are described for predicting depth from colour image data using a statistical model such as a convolutional neural network (CNN). The model is trained on binocular stereo pairs of images, enabling depth data to be predicted from a single source colour image. The model is trained to predict, for each image of an input binocular stereo pair, corresponding disparity values that enable reconstruction of the other image when applied to the image. The model is updated based on a cost function that enforces consistency between the predicted disparity values for each image in the stereo pair.
Type: Grant
Filed: September 12, 2017
Date of Patent: August 24, 2021
Assignee: Niantic, Inc.
Inventors: Clément Godard, Oisin MacAodha, Gabriel Brostow
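The consistency cost described above can be sketched as sampling each disparity map at the locations predicted by the other and penalizing disagreement. A simplified nearest-neighbour version of one direction of that term (real implementations use differentiable bilinear sampling; the names and shapes here are illustrative):

```python
import numpy as np

def lr_consistency_loss(disp_left, disp_right):
    """One direction of a left-right consistency term: compare the left
    disparity map with the right disparity map sampled where each left
    pixel projects, using nearest-neighbour sampling for simplicity."""
    height, width = disp_left.shape
    cols = np.tile(np.arange(width), (height, 1))
    rows = np.tile(np.arange(height)[:, None], (1, width))
    # column each left pixel lands on in the right view
    sample = np.clip(np.round(cols - disp_left).astype(int), 0, width - 1)
    projected = disp_right[rows, sample]
    return float(np.abs(disp_left - projected).mean())
```

The full cost would add the mirrored right-to-left term plus the photometric reconstruction losses; this fragment only shows the consistency idea.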
-
Patent number: 11082681
Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images are acquired and input into a depth model to extract a depth map for each image based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
Type: Grant
Filed: May 16, 2019
Date of Patent: August 3, 2021
Assignee: Niantic, Inc.
Inventors: Clément Godard, Oisin Mac Aodha, Michael Firman, Gabriel J. Brostow
-
Publication number: 20190356905
Abstract: A method for training a depth estimation model and methods for use thereof are described. A plurality of images are acquired and input into a depth model to extract a depth map for each image based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
Type: Application
Filed: May 16, 2019
Publication date: November 21, 2019
Inventors: Clément Godard, Oisin Mac Aodha, Michael Firman, Gabriel J. Brostow
-
Publication number: 20190213481
Abstract: Systems and methods are described for predicting depth from colour image data using a statistical model such as a convolutional neural network (CNN). The model is trained on binocular stereo pairs of images, enabling depth data to be predicted from a single source colour image. The model is trained to predict, for each image of an input binocular stereo pair, corresponding disparity values that enable reconstruction of the other image when applied to the image. The model is updated based on a cost function that enforces consistency between the predicted disparity values for each image in the stereo pair.
Type: Application
Filed: September 12, 2017
Publication date: July 11, 2019
Inventors: Clément GODARD, Oisin MAC AODHA, Gabriel BROSTOW