Patents by Inventor Matheus Gadelha

Matheus Gadelha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12367626
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: July 22, 2025
    Assignee: Adobe Inc.
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Publication number: 20250225733
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: February 27, 2025
    Publication date: July 10, 2025
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Patent number: 12347124
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: July 1, 2025
    Assignee: Adobe Inc.
    Inventors: Matheus Gadelha, Radomir Mech
  • Publication number: 20250166307
    Abstract: A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a condition input and an adherence parameter, where the condition input indicates an image attribute and the adherence parameter indicates a level of the condition input, generating an intermediate output based on the condition input and the adherence parameter, where the intermediate output includes the image attribute, and generating a synthetic image based on the intermediate output, where the synthetic image includes the image attribute based on the level indicated by the adherence parameter.
    Type: Application
    Filed: November 14, 2024
    Publication date: May 22, 2025
    Inventors: Matheus Gadelha, Kevin James Blackburn-Matzen, Radomir Mech
  • Patent number: 12277652
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: April 15, 2025
    Assignee: Adobe Inc.
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Publication number: 20250061660
    Abstract: Systems and methods for extracting 3D shapes from unstructured and unannotated datasets are described. Embodiments are configured to obtain a first image and a second image, where the first image depicts an object and the second image includes a corresponding object of a same object category as the object. Embodiments are further configured to generate, using an image encoder, image features for portions of the first image and for portions of the second image; identify a keypoint correspondence between a first keypoint in the first image and a second keypoint in the second image by clustering the image features corresponding to the portions of the first image and the portions of the second image; and generate, using an occupancy network, a 3D model of the object based on the keypoint correspondence.
    Type: Application
    Filed: August 18, 2023
    Publication date: February 20, 2025
    Inventors: Ta-Ying Cheng, Matheus Gadelha, Soren Pirk, Radomir Mech, Thibault Groueix
  • Publication number: 20250061650
    Abstract: An image processing system is configured to receive a three-dimensional (3D) model and a text prompt that describes a scene corresponding to the 3D model. The system may then generate a depth map of the 3D model and generate an output image based on the depth map and the text prompt. The output image may depict a view of the scene that includes textures described by the text prompt. The output image may be generated using an image generation model.
    Type: Application
    Filed: August 17, 2023
    Publication date: February 20, 2025
    Inventors: Matheus Gadelha, Tomasz Opasinski, Kevin James Blackburn-Matzen, Mathieu Kevin Pascal Gaillard, Giorgio Gori, Radomir Mech
  • Publication number: 20240161366
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Publication number: 20240161405
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Publication number: 20240161320
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Matheus Gadelha, Radomir Mech
  • Publication number: 20240161406
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 16, 2024
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Publication number: 20240144586
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
    Type: Application
    Filed: April 20, 2023
    Publication date: May 2, 2024
    Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
  • Publication number: 20240135612
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
    Type: Application
    Filed: April 20, 2023
    Publication date: April 25, 2024
    Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
  • Patent number: 11900558
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that tune a 3D-object-reconstruction-machine-learning model to reconstruct 3D models of objects from real images using real images as training data. For instance, the disclosed systems can determine a depth map for a real two-dimensional (2D) image and then reconstruct a 3D model of a digital object in the real 2D image based on the depth map. By using a depth map for a real 2D image, the disclosed systems can generate reconstructed 3D models that better conform to the shape of digital objects in real images than existing systems and use such reconstructed 3D models to generate more realistic looking visual effects (e.g., shadows, relighting).
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Marissa Ramirez de Chanlatte, Radomir Mech, Matheus Gadelha, Thibault Groueix
  • Publication number: 20230274040
    Abstract: Certain aspects and features of this disclosure relate to modeling shapes using differentiable, signed distance functions. 3D modeling software can edit a 3D model represented using the differentiable, signed distance functions while displaying the model in a manner that is computing resource efficient and fast. Further, such 3D modeling software can automatically create such an editable 3D model from a reference representation that can be obtained in various ways and stored in a variety of formats. For example, a real-world object can be scanned using LiDAR and a reference representation can be produced from the LiDAR data. Candidate procedural models from a library of curated procedural models are optimized to obtain the best procedural model for editing. A selected procedural model provides an editable, reconstructed shape based on the reference representation of the object.
    Type: Application
    Filed: February 28, 2022
    Publication date: August 31, 2023
    Inventors: Adrien Kaiser, Vojtech Krs, Thibault Groueix, Tamy Boubekeur, Pierre Gueth, Mathieu Gaillard, Matheus Gadelha
  • Publication number: 20230147722
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that tune a 3D-object-reconstruction-machine-learning model to reconstruct 3D models of objects from real images using real images as training data. For instance, the disclosed systems can determine a depth map for a real two-dimensional (2D) image and then reconstruct a 3D model of a digital object in the real 2D image based on the depth map. By using a depth map for a real 2D image, the disclosed systems can generate reconstructed 3D models that better conform to the shape of digital objects in real images than existing systems and use such reconstructed 3D models to generate more realistic looking visual effects (e.g., shadows, relighting).
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Marissa Ramirez de Chanlatte, Radomir Mech, Matheus Gadelha, Thibault Groueix
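
The mesh-generation abstracts above (e.g., patent 12367626) describe sampling points in a 2D image with probability proportional to a per-pixel density map, then tessellating the sampled points. A minimal sketch of the density-proportional sampling step follows; the 4x4 density grid is hypothetical toy data standing in for the first neural network's disparity-based output, and the actual patents use learned models, not this code.

```python
import random

def sample_points(density, n_samples, seed=0):
    """Sample pixel coordinates with probability proportional to density."""
    rng = random.Random(seed)
    coords = [(x, y) for y, row in enumerate(density) for x, _ in enumerate(row)]
    weights = [density[y][x] for (x, y) in coords]
    return rng.choices(coords, weights=weights, k=n_samples)

# Hypothetical density map: high values (e.g., a depth discontinuity)
# concentrated in the right half of the image.
density = [
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
]
points = sample_points(density, n_samples=100)
right_half = sum(1 for (x, _) in points if x >= 2)
# Most samples land in the high-density right half, so a subsequent
# tessellation would be finer where the scene geometry varies more.
```

A triangulation (e.g., Delaunay) over the sampled points would then yield the mesh that the second network's camera parameters position in 3D.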
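
Publication 20250061660 identifies keypoint correspondences by comparing image-encoder features across two images. A toy sketch of the matching idea follows; the 2-D feature vectors are invented, and the patent uses clustering over learned encoder features rather than this brute-force nearest-neighbor search:

```python
def nearest(feat, candidates):
    """Index of the candidate feature closest in squared Euclidean distance."""
    return min(range(len(candidates)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(feat, candidates[i])))

# Hypothetical per-keypoint features from two images of the same object class.
feats_a = [(1.0, 0.0), (0.0, 1.0)]
feats_b = [(0.1, 0.9), (0.9, 0.1)]

# Pair each keypoint in image A with its most similar feature in image B.
matches = [(i, nearest(f, feats_a and feats_b)) for i, f in enumerate(feats_a)]
```

Such correspondences would then condition the occupancy network that produces the 3D model.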
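
Publications 20240144586 and 20240135612 generate and modify shadows using shadow maps over a 3D representation of the image. The standard shadow-map test, sketched with hypothetical 1-D toy data, is: a point is lit only if nothing sits closer to the light along the same ray:

```python
def in_shadow(shadow_map, column, depth_from_light, bias=1e-3):
    """True when a stored occluder lies closer to the light than the point.

    The bias term is the usual guard against self-shadowing artifacts.
    """
    return shadow_map[column] + bias < depth_from_light

# Hypothetical 1-D shadow map: nearest occluder depth per column, as seen
# from the light. Only the middle column has an occluder at depth 2.0.
shadow_map = [5.0, 2.0, 5.0]
lit = [not in_shadow(shadow_map, c, 4.0) for c in range(3)]
```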
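
Patent 11900558 and publication 20230147722 reconstruct 3D models from a depth map of a real 2D image. One geometric step implied by that pipeline is unprojecting per-pixel depths into 3D points with a pinhole camera model; the sketch below uses a hypothetical 2x2 depth map and focal lengths, whereas the patent's reconstruction relies on a tuned machine-learning model:

```python
def unproject(depth, fx, fy, cx, cy):
    """Lift each pixel (u, v) with depth d to a camera-space point (X, Y, Z)."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points

# Hypothetical depth map: the right column is twice as far from the camera.
depth = [[1.0, 2.0],
         [1.0, 2.0]]
pts = unproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The resulting point set is the kind of geometry a reconstruction network could refine into a full mesh for relighting or shadow effects.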
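
Publication 20230274040 models shapes with differentiable signed distance functions: negative inside the shape, zero on the surface, positive outside. The sphere primitive and min-union below are standard SDF building blocks shown only for illustration; they are not the patent's curated procedural models:

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere surface."""
    return math.dist(p, center) - radius

def union(d1, d2):
    """SDF of the union of two shapes: the closer surface wins."""
    return min(d1, d2)

# Query the origin against two unit spheres, one centered at the origin
# and one offset along the x axis.
d = union(sphere_sdf((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0),
          sphere_sdf((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), 1.0))
# The origin lies inside the first sphere, so the combined SDF is negative.
```

Because both primitives and the union are simple arithmetic, such a representation is cheap to evaluate and amenable to gradient-based optimization against a scanned reference shape.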