Patents by Inventor Matheus Gadelha

Matheus Gadelha has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240161320
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Matheus Gadelha, Radomir Mech
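An illustrative sketch of the sampling-and-tessellation stage this abstract describes, under stated assumptions: the first neural network's disparity estimate is replaced by a synthetic disparity map, the disparity-gradient-to-density heuristic is invented for the example, and SciPy's Delaunay triangulation stands in for whatever tessellation the disclosed system actually builds.

```python
# Illustrative sketch only: samples 2D image points with probability proportional
# to a per-pixel density derived from disparity, then builds a triangulation.
# The neural networks from the abstract are replaced by placeholder assumptions.
import numpy as np
from scipy.spatial import Delaunay

def fake_disparity(h, w):
    """Stand-in for the first neural network's disparity estimate (assumption)."""
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    return 1.0 / (0.2 + ys)          # nearer (larger disparity) toward the bottom

def sample_points(density, n_samples, rng):
    """Draw pixel locations with probability proportional to the density values."""
    probs = density.ravel() / density.sum()
    idx = rng.choice(density.size, size=n_samples, replace=False, p=probs)
    ys, xs = np.unravel_index(idx, density.shape)
    return np.stack([xs, ys], axis=1).astype(np.float64)

h, w = 120, 160
disparity = fake_disparity(h, w)
# Assumed heuristic: use the disparity gradient magnitude as the sampling density,
# so depth discontinuities receive more vertices.
gy, gx = np.gradient(disparity)
density = np.hypot(gx, gy) + 1e-3

rng = np.random.default_rng(0)
points = sample_points(density, n_samples=500, rng=rng)
tri = Delaunay(points)               # tessellation over the sampled image points
print(points.shape, tri.simplices.shape)
```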
  • Publication number: 20240161405
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
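A minimal sketch of the camera-parameter stage described in the abstract: lifting image-space vertices into a 3D mesh once intrinsics are known. The focal length and principal point below are fixed assumptions rather than outputs of the abstract's second neural network, and depth is approximated as the inverse of disparity.

```python
# Illustrative sketch: unproject 2D mesh vertices into 3D using pinhole intrinsics.
# The camera parameters are hard-coded assumptions standing in for the abstract's
# second neural network; depth is approximated as the inverse of disparity.
import numpy as np

def unproject(vertices_2d, disparity_at_vertices, fx, fy, cx, cy):
    """Lift (x, y) pixel vertices to camera-space 3D points."""
    z = 1.0 / np.clip(disparity_at_vertices, 1e-6, None)   # depth ~ 1 / disparity
    x = (vertices_2d[:, 0] - cx) * z / fx
    y = (vertices_2d[:, 1] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy data: a handful of image-space vertices and their disparities.
verts_2d = np.array([[40.0, 30.0], [80.0, 30.0], [60.0, 70.0], [100.0, 90.0]])
disp = np.array([0.8, 0.9, 1.2, 1.5])

# Assumed intrinsics for a 160x120 image (not estimated by a network here).
fx = fy = 140.0
cx, cy = 80.0, 60.0

verts_3d = unproject(verts_2d, disp, fx, fy, cx, cy)
print(verts_3d)
```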
  • Publication number: 20240161366
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
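The abstract's final sentence mentions modifying a two-dimensional image according to a displacement input. The sketch below shows only the geometric core of such an edit, with an invented selection mask, displacement vector, and intrinsics: selected 3D vertices are translated and reprojected, giving the image-space motion a displacement edit would induce.

```python
# Illustrative sketch: apply a 3D displacement to a subset of mesh vertices and
# reproject them, approximating how a displacement input could drive a 2D edit.
# The selection mask, displacement, and intrinsics are all assumptions.
import numpy as np

def project(points_3d, fx, fy, cx, cy):
    """Pinhole projection of camera-space points back to pixel coordinates."""
    x = fx * points_3d[:, 0] / points_3d[:, 2] + cx
    y = fy * points_3d[:, 1] / points_3d[:, 2] + cy
    return np.stack([x, y], axis=1)

fx = fy = 140.0
cx, cy = 80.0, 60.0

verts_3d = np.array([[-0.3, -0.2, 1.0],
                     [ 0.2, -0.2, 1.1],
                     [ 0.0,  0.3, 0.9],
                     [ 0.4,  0.4, 1.4]])
selected = np.array([False, False, True, True])    # vertices the user "grabs"
displacement = np.array([0.05, 0.0, 0.2])          # push them away from the camera

edited = verts_3d.copy()
edited[selected] += displacement

before = project(verts_3d, fx, fy, cx, cy)
after = project(edited, fx, fy, cx, cy)
print(np.round(after - before, 2))                 # image-space motion of each vertex
```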
  • Publication number: 20240161406
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Publication number: 20240144586
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
    Type: Application
    Filed: April 20, 2023
    Publication date: May 2, 2024
    Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
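Among the operations listed in this abstract is generating and modifying shadows from a three-dimensional representation. As a heavily simplified stand-in for shadow mapping, the sketch below casts a hard shadow by projecting object vertices onto a ground plane along a directional light; the geometry and light direction are invented for the example.

```python
# Illustrative sketch: hard shadow of a 3D object on the plane y = 0, cast by a
# directional light. This is a simplification of shadow mapping, not the
# disclosed system's method; all inputs are invented for the example.
import numpy as np

def project_onto_ground(points, light_dir):
    """Intersect rays from each point along light_dir with the plane y = 0."""
    t = -points[:, 1] / light_dir[1]          # solve p.y + t * d.y = 0
    return points + t[:, None] * light_dir

object_verts = np.array([[0.0, 1.0, 2.0],
                         [0.5, 1.2, 2.1],
                         [-0.4, 0.8, 1.9]])
light_dir = np.array([0.3, -1.0, 0.2])        # pointing down toward the ground

shadow_verts = project_onto_ground(object_verts, light_dir)
print(np.round(shadow_verts, 3))              # all y components are ~0
```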
  • Publication number: 20240135612
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
    Type: Application
    Filed: April 20, 2023
    Publication date: April 25, 2024
    Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
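This abstract also mentions scene scale estimation via scale fields. Under an assumed reading of that term, the sketch below converts a relative (unitless) depth map into metric depth with a per-pixel scale field; both inputs are synthetic.

```python
# Illustrative sketch: apply a per-pixel scale field to relative depth to obtain
# metric depth. The relative depth and scale field are synthetic; this is an
# assumed interpretation of "scale fields", not the disclosed system's method.
import numpy as np

h, w = 4, 6
relative_depth = np.linspace(0.5, 2.0, h * w).reshape(h, w)   # unitless
scale_field = np.full((h, w), 1.8)                            # assumed metres per unit

metric_depth = relative_depth * scale_field                    # metres
print(metric_depth.round(2))
```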
  • Patent number: 11900558
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that tune a 3D-object-reconstruction-machine-learning model to reconstruct 3D models of objects from real images using real images as training data. For instance, the disclosed systems can determine a depth map for a real two-dimensional (2D) image and then reconstruct a 3D model of a digital object in the real 2D image based on the depth map. By using a depth map for a real 2D image, the disclosed systems can generate reconstructed 3D models that better conform to the shape of digital objects in real images than existing systems and use such reconstructed 3D models to generate more realistic-looking visual effects (e.g., shadows, relighting).
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Marissa Ramirez de Chanlatte, Radomir Mech, Matheus Gadelha, Thibault Groueix
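A minimal sketch of the depth-map-to-geometry step this abstract relies on: back-projecting every pixel of a depth map into a camera-space point cloud. The depth map and intrinsics are synthetic, and the patented training procedure for the reconstruction model is not reproduced here.

```python
# Illustrative sketch: back-project a depth map into a 3D point cloud, one simple
# way to turn a per-pixel depth estimate into 3D geometry. The depth map and
# intrinsics are synthetic assumptions for the example.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project every pixel of a depth map into camera-space 3D points."""
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = (xs - cx) * depth / fx
    y = (ys - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

h, w = 60, 80
depth = np.full((h, w), 2.0)
depth[20:40, 30:50] = 1.2                     # a box-shaped object closer to the camera

points = depth_to_points(depth, fx=70.0, fy=70.0, cx=w / 2, cy=h / 2)
print(points.shape)                           # (4800, 3)
```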
  • Publication number: 20230274040
    Abstract: Certain aspects and features of this disclosure relate to modeling shapes using differentiable, signed distance functions. 3D modeling software can edit a 3D model represented using the differentiable, signed distance functions while displaying the model in a manner that is computing resource efficient and fast. Further, such 3D modeling software can automatically create such an editable 3D model from a reference representation that can be obtained in various ways and stored in a variety of formats. For example, a real-world object can be scanned using LiDAR and a reference representation can be produced from the LiDAR data. Candidate procedural models from a library of curated procedural models are optimized to obtain the best procedural model for editing. A selected procedural model provides an editable, reconstructed shape based on the reference representation of the object.
    Type: Application
    Filed: February 28, 2022
    Publication date: August 31, 2023
    Inventors: Adrien Kaiser, Vojtech Krs, Thibault Groueix, Tamy Boubekeur, Pierre Gueth, Mathieu Gaillard, Matheus Gadelha
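A small sketch of fitting a differentiable signed distance function to a reference point cloud by gradient descent. A single sphere SDF stands in for the abstract's library of curated procedural models, and the reference points play the role of a scanned (e.g., LiDAR-derived) representation; everything here is an assumption made for illustration.

```python
# Illustrative sketch: fit the parameters of a differentiable sphere SDF to a
# reference point cloud by gradient descent. A single sphere stands in for the
# library of curated procedural models mentioned in the abstract (assumption).
import numpy as np

rng = np.random.default_rng(1)

# Reference representation: noisy points on a sphere of radius 0.7 centred at (1, 2, 3).
true_center, true_radius = np.array([1.0, 2.0, 3.0]), 0.7
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
ref_points = true_center + true_radius * dirs + 0.01 * rng.normal(size=(500, 3))

# Procedural model parameters to optimise.
center = np.zeros(3)
radius = 1.0
lr = 0.1

for step in range(200):
    offsets = ref_points - center
    dist = np.linalg.norm(offsets, axis=1)
    sdf = dist - radius                        # signed distance of each reference point
    # Analytic gradients of mean(sdf**2) w.r.t. the sphere parameters.
    grad_center = np.mean(2 * sdf[:, None] * (-offsets / dist[:, None]), axis=0)
    grad_radius = np.mean(-2 * sdf)
    center -= lr * grad_center
    radius -= lr * grad_radius

print(np.round(center, 3), round(radius, 3))   # should approach (1, 2, 3) and 0.7
```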
  • Publication number: 20230147722
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that tune a 3D-object-reconstruction-machine-learning model to reconstruct 3D models of objects from real images using real images as training data. For instance, the disclosed systems can determine a depth map for a real two-dimensional (2D) image and then reconstruct a 3D model of a digital object in the real 2D image based on the depth map. By using a depth map for a real 2D image, the disclosed systems can generate reconstructed 3D models that better conform to the shape of digital objects in real images than existing systems and use such reconstructed 3D models to generate more realistic-looking visual effects (e.g., shadows, relighting).
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Marissa Ramirez de Chanlatte, Radomir Mech, Matheus Gadelha, Thibault Groueix
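The visual effects this abstract mentions (shadows, relighting) need surface orientation from the reconstructed geometry. As a generic illustration rather than the patented method, the sketch below estimates normals from a depth map's gradients and applies Lambertian shading under a new light direction.

```python
# Illustrative sketch: approximate surface normals from a depth map and relight the
# scene with simple Lambertian shading. This is a generic technique shown for
# intuition about "relighting" effects; it is not the patented method.
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel normals from depth gradients (camera-space approximation)."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

h, w = 64, 64
ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
depth = 2.0 - 0.5 * np.exp(-(xs**2 + ys**2) * 4)        # a bump facing the camera

normals = normals_from_depth(depth)
light_dir = np.array([0.5, -0.5, 1.0])
light_dir = light_dir / np.linalg.norm(light_dir)

shading = np.clip(normals @ light_dir, 0.0, 1.0)        # Lambertian N . L per pixel
print(shading.shape, float(shading.max()))
```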