Patents by Inventor Sara Alexandra Gomes Vicente

Sara Alexandra Gomes Vicente has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250054255
    Abstract: A computer-implemented method is disclosed for generating scene reconstructions from image data. The method includes: receiving image data of a scene captured by a camera; inputting the image data of the scene into a scene reconstruction model; receiving, from the scene reconstruction model, a final spatial model of the scene, wherein the scene reconstruction model generates the final spatial model by: predicting a depth map for each image of the image data, extracting a feature map for each image of the image data, generating a first spatial model based on the predicted depth maps of the images, generating a second spatial model based on the extracted feature maps of the images, and determining the final spatial model by combining the first spatial model and the second spatial model; and providing functionality on a computing device related to the scene and based on the final spatial model.
    Type: Application
    Filed: October 30, 2024
    Publication date: February 13, 2025
    Inventors: James Watson, Sara Alexandra Gomes Vicente, Oisin Mac Aodha, Clément Godard, Gabriel J. Brostow, Michael David Firman
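    A minimal, non-authoritative sketch in toy Python of the two-branch idea in the abstract above. Every function here (predict_depth, extract_features, the spatial-model builders) is a hypothetical placeholder, not the patented scene reconstruction model; the point is only the structure: one spatial model built from predicted depth maps, one from extracted feature maps, and a weighted combination as the final model.
    ```python
    # Toy sketch only: placeholder stand-ins, not the patented model.
    import numpy as np

    def predict_depth(image):
        # Placeholder "depth network": constant-depth map per image.
        return np.full(image.shape[:2], 2.0, dtype=np.float32)

    def extract_features(image):
        # Placeholder feature extractor: image gradients as a crude 2-channel feature map.
        gy, gx = np.gradient(image.astype(np.float32))
        return np.stack([gx, gy], axis=-1)

    def spatial_model_from_depths(depth_maps, grid_shape=(32, 32, 32)):
        # Stand-in for building an occupancy volume from per-image depth maps.
        occ = np.zeros(grid_shape, dtype=np.float32)
        for d in depth_maps:
            z = int(np.clip(d.mean(), 0, grid_shape[2] - 1))
            occ[:, :, z] += 1.0
        return occ / max(len(depth_maps), 1)

    def spatial_model_from_features(feature_maps, grid_shape=(32, 32, 32)):
        # Stand-in for lifting 2D features into the same volume.
        occ = np.zeros(grid_shape, dtype=np.float32)
        for f in feature_maps:
            occ += np.abs(f).mean() * 0.01
        return np.clip(occ, 0.0, 1.0)

    def reconstruct(images, alpha=0.5):
        depths = [predict_depth(im) for im in images]
        feats = [extract_features(im) for im in images]
        model_a = spatial_model_from_depths(depths)    # first spatial model (depth branch)
        model_b = spatial_model_from_features(feats)   # second spatial model (feature branch)
        return alpha * model_a + (1.0 - alpha) * model_b  # combined final spatial model

    if __name__ == "__main__":
        frames = [np.random.rand(64, 64) for _ in range(4)]
        print(reconstruct(frames).shape)
    ```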
  • Patent number: 12159358
    Abstract: A scene reconstruction model is disclosed that outputs a heightfield for a series of input images. The model, for each input image, predicts a depth map and extracts a feature map. The model builds a 3D model utilizing the predicted depth maps and camera poses for the images. The model raycasts the 3D model to determine a raw heightfield for the scene. The model utilizes the raw heightfield to sample features from the feature maps corresponding to positions on the heightfield. The model aggregates the sampled features into an aggregate feature map. The model regresses a refined heightfield based on the aggregate feature map. The model determines the final heightfield based on a combination of the raw heightfield and the refined heightfield. With the final heightfield, a client device may generate virtual content augmented on real-world images captured by the client device.
    Type: Grant
    Filed: December 14, 2022
    Date of Patent: December 3, 2024
    Assignee: Niantic, Inc.
    Inventors: James Watson, Sara Alexandra Gomes Vicente, Oisin MacAodha, Clément Godard, Gabriel J. Brostow, Michael David Firman
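    A hedged Python sketch of the heightfield pipeline summarized above, using toy stand-ins for every stage (voxel fusion, raycasting, feature sampling, refinement). Nothing here reproduces the patented model; it only illustrates how a raw heightfield and a refined heightfield can be combined into a final one. All function names are assumptions.
    ```python
    # Toy stand-ins for each stage of a heightfield-style pipeline; not the patented model.
    import numpy as np

    def build_voxel_grid(depth_maps, grid=(32, 32, 16)):
        # Toy fusion: mark voxels near each image's mean depth as occupied.
        vol = np.zeros(grid, dtype=np.float32)
        for d in depth_maps:
            z = int(np.clip(d.mean() * (grid[2] - 1), 0, grid[2] - 1))
            vol[:, :, z] = 1.0
        return vol

    def raycast_heightfield(vol):
        # "Raycast" straight down each vertical column: highest occupied voxel index.
        occupied = vol > 0.5
        top = occupied.shape[2] - 1 - np.argmax(occupied[:, :, ::-1], axis=2)
        return np.where(occupied.any(axis=2), top, 0).astype(np.float32)

    def sample_and_aggregate(feature_maps, raw_height):
        # Stand-in for sampling per-image features at heightfield positions and averaging them.
        sampled = [np.resize(f.mean(axis=-1), raw_height.shape) for f in feature_maps]
        return np.mean(sampled, axis=0)

    def regress_refined(aggregate):
        # Toy "regressor": a smoothed version of the aggregate feature map.
        return (aggregate + np.roll(aggregate, 1, axis=0) + np.roll(aggregate, 1, axis=1)) / 3.0

    def final_heightfield(depth_maps, feature_maps, beta=0.7):
        raw = raycast_heightfield(build_voxel_grid(depth_maps))
        refined = regress_refined(sample_and_aggregate(feature_maps, raw))
        return beta * raw + (1.0 - beta) * refined  # combine raw and refined heightfields

    if __name__ == "__main__":
        depths = [np.random.rand(64, 64) for _ in range(3)]
        feats = [np.random.rand(64, 64, 8) for _ in range(3)]
        print(final_heightfield(depths, feats).shape)
    ```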
  • Publication number: 20240185478
    Abstract: A system generates augmented reality content by generating an occlusion mask via implicit depth estimation. The system receives input image(s) of a real-world environment captured by a camera assembly. The system generates a feature map from the input image(s), wherein the feature map comprises abstract features representing depth of object(s) in the real-world environment. The system generates an occlusion mask from the feature map and a depth map for the virtual object. The depth map for the virtual object indicates a depth of each pixel of the virtual object. The occlusion mask indicates pixel(s) of the virtual object that are occluded by an object in the real-world environment. The system generates the composite image based on a first input image at a current timestamp, the virtual object, and the occlusion mask. The composite image may then be displayed on an electronic display.
    Type: Application
    Filed: December 5, 2023
    Publication date: June 6, 2024
    Inventors: James Watson, Mohamed Sayed, Zawar Imam Qureshi, Gabriel J. Brostow, Sara Alexandra Gomes Vicente, Oisin Mac Aodha, Michael David Firman
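    A minimal compositing sketch related to the abstract above. The patent describes deriving occlusion implicitly from a learned feature map; this toy version instead assumes an explicit real-world depth estimate is already available and only shows how an occlusion mask gates a virtual object when blending it over the camera frame. All names are illustrative.
    ```python
    # Toy compositing example; explicit depth comparison used for illustration only.
    import numpy as np

    def occlusion_mask(real_depth, virtual_depth):
        # True where the real-world surface is closer to the camera than the virtual object,
        # i.e. where the virtual pixel should be hidden.
        return real_depth < virtual_depth

    def composite(background_rgb, virtual_rgb, virtual_alpha, mask):
        # Zero out virtual pixels flagged as occluded, then alpha-blend over the camera frame.
        alpha = virtual_alpha * (~mask).astype(np.float32)
        return background_rgb * (1.0 - alpha[..., None]) + virtual_rgb * alpha[..., None]

    if __name__ == "__main__":
        h, w = 120, 160
        frame = np.random.rand(h, w, 3)                 # current camera image
        obj = np.zeros((h, w, 3)); obj[:, :, 1] = 1.0   # green virtual object
        obj_alpha = np.zeros((h, w), np.float32); obj_alpha[40:80, 60:100] = 1.0
        real_d = np.full((h, w), 3.0); real_d[:, :80] = 1.0  # a nearby real surface on the left
        virt_d = np.full((h, w), 2.0)                        # virtual object at 2 m
        out = composite(frame, obj, obj_alpha, occlusion_mask(real_d, virt_d))
        print(out.shape)
    ```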
  • Publication number: 20230410349
    Abstract: A method or a system for map-free visual relocalization of a device. The system obtains a reference image of an environment captured by a reference camera from a reference pose. The system also receives a query image taken by a camera of the device. The system determines a relative pose of the camera of the device relative to the reference camera based in part on the reference image and the query image. The system determines a pose of the query camera in the environment based on the reference pose and the relative pose.
    Type: Application
    Filed: June 20, 2023
    Publication date: December 21, 2023
    Inventors: Eduardo Henrique Arnold, Jamie Michael Wynn, Guillermo Garcia-Hernando, Sara Alexandra Gomes Vicente, Aron Monszpart, Victor Adrian Prisacariu, Daniyar Turmukhambetov, Eric Brachmann, Axel Barroso-Laguna
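    A short sketch of the pose-chaining step described above: given the reference camera's pose in the environment and an estimated pose of the query camera relative to it, compose the two to obtain the query camera's pose in the environment. The relative-pose estimation itself (from the reference and query images) is out of scope here; the 4x4 homogeneous-matrix representation and function names are assumptions for illustration.
    ```python
    # Pose composition sketch; the relative pose would come from comparing the two images.
    import numpy as np

    def pose_matrix(rotation, translation):
        # Build a 4x4 camera-to-world transform from a 3x3 rotation and a 3-vector translation.
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    def query_pose_in_world(reference_pose, relative_pose):
        # world_from_query = world_from_reference @ reference_from_query
        return reference_pose @ relative_pose

    if __name__ == "__main__":
        ref = pose_matrix(np.eye(3), [1.0, 0.0, 0.0])   # reference camera at x = 1 m
        theta = np.deg2rad(90)
        rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0, 0.0, 1.0]])
        rel = pose_matrix(rot, [0.0, 0.5, 0.0])         # estimated query-relative-to-reference pose
        print(query_pose_in_world(ref, rel))
    ```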
  • Publication number: 20230196690
    Abstract: A scene reconstruction model is disclosed that outputs a heightfield for a series of input images. The model, for each input image, predicts a depth map and extracts a feature map. The model builds a 3D model utilizing the predicted depth maps and camera poses for the images. The model raycasts the 3D model to determine a raw heightfield for the scene. The model utilizes the raw heightfield to sample features from the feature maps corresponding to positions on the heightfield. The model aggregates the sampled features into an aggregate feature map. The model regresses a refined heightfield based on the aggregate feature map. The model determines the final heightfield based on a combination of the raw heightfield and the refined heightfield. With the final heightfield, a client device may generate virtual content augmented on real-world images captured by the client device.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 22, 2023
    Inventors: James Watson, Sara Alexandra Gomes Vicente, Oisin Mac Aodha, Clément Godard, Gabriel J. Brostow, Michael David Firman
  • Publication number: 20210042975
    Abstract: This disclosure relates to methods of transforming an image. Disclosed herein is a method for manipulating an image using at least one image control handle. The image comprises pixels, and at least one set of constrained pixels defines a constrained region having a transformation constraint. The method comprises transforming pixels of the image based on input received from the manipulation of the at least one image control handle. The transformation constraint applies to the pixels inside the constrained region, and the degree to which the transformation constraint applies to any respective pixel outside the constrained region is conditional on the distance between the respective pixel and the constrained region.
    Type: Application
    Filed: September 7, 2018
    Publication date: February 11, 2021
    Inventors: Ivor James Alexander Simpson, Sara Alexandra Gomes Vicente, Simon Jeremy Damion Prince
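    A toy Python sketch of the distance-conditional constraint described in the abstract above, not the patented method. A control-handle drag defines a displacement field; inside the constrained region the constraint fully suppresses the displacement, and outside it the suppression decays with each pixel's distance to the region. The exponential falloff and all names are illustrative assumptions.
    ```python
    # Illustrative sketch: constraint weight is 1 inside the region and decays with distance outside.
    import numpy as np

    def distance_to_region(mask):
        # Brute-force distance from every pixel to the nearest constrained (True) pixel.
        ys, xs = np.nonzero(mask)
        region = np.stack([ys, xs], axis=1).astype(np.float32)
        yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
        pix = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(np.float32)
        d = np.sqrt(((pix[:, None, :] - region[None, :, :]) ** 2).sum(-1)).min(axis=1)
        return d.reshape(mask.shape)

    def handle_displacement_field(shape, handle_drag):
        # A single control handle dragged by `handle_drag` displaces every pixel equally here;
        # a real editor would interpolate between several handles.
        field = np.zeros(shape + (2,), dtype=np.float32)
        field[...] = handle_drag
        return field

    def apply_constraint(displacement, constrained_mask, falloff=10.0):
        # Constraint weight: 1 inside the constrained region, exponential decay with distance outside.
        w = np.exp(-distance_to_region(constrained_mask) / falloff)
        w[constrained_mask] = 1.0
        return displacement * (1.0 - w)[..., None]  # constrained pixels stay put

    if __name__ == "__main__":
        mask = np.zeros((40, 40), dtype=bool); mask[15:25, 15:25] = True
        disp = handle_displacement_field(mask.shape, handle_drag=(3.0, -2.0))
        out = apply_constraint(disp, mask)
        print(out[0, 0], out[20, 20])   # far pixel moves almost fully; constrained pixel stays put
    ```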