Patents by Inventor Emiliano Gambaretto

Emiliano Gambaretto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11694416
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set. (An illustrative sketch of the feature-set handle idea appears after this listing.)
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: July 4, 2023
    Assignee: Adobe, Inc.
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
  • Patent number: 11443412
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder. (An illustrative sketch of the two-phase training appears after this listing.)
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: September 13, 2022
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Patent number: 11170558
    Abstract: A system and method for automatic rigging of three-dimensional characters for facial animation provide a rigged mesh for an original three-dimensional mesh. A representative mesh is generated from the original mesh. Segments, key points, a bone set, and skinning weights are then determined for the representative mesh. The skinning weights and bone set are placed in the original mesh to generate the rigged mesh. (An illustrative sketch of the weight-transfer step appears after this listing.)
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: November 9, 2021
    Assignee: ADOBE INC.
    Inventors: Stefano Corazza, Emiliano Gambaretto, Prasanna Vasudevan
  • Publication number: 20210256775
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Application
    Filed: March 22, 2021
    Publication date: August 19, 2021
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
  • Patent number: 10979640
    Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions. (An illustrative sketch of the CNN-plus-sky-model pipeline appears after this listing.)
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: April 13, 2021
    Assignee: ADOBE INC.
    Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
  • Patent number: 10964060
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. (An illustrative sketch of the panorama-crop data generation appears after this listing.)
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: March 30, 2021
    Assignee: ADOBE INC.
    Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
  • Patent number: 10957117
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
  • Patent number: 10936909
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped to roughly the same distribution. Such a trained neural network system can be used to determine high-dynamic range lighting parameters from an input low-dynamic range image. (An illustrative sketch of the render-based loss appears after this listing.)
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: March 2, 2021
    Assignee: Adobe Inc.
    Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Jonathan Eisenmann, Jinsong Zhang, Emiliano Gambaretto
  • Publication number: 20200334892
    Abstract: A system and method for automatic rigging of three-dimensional characters for facial animation provide a rigged mesh for an original three-dimensional mesh. A representative mesh is generated from the original mesh. Segments, key points, a bone set, and skinning weights are then determined for the representative mesh. The skinning weights and bone set are placed in the original mesh to generate the rigged mesh.
    Type: Application
    Filed: July 2, 2020
    Publication date: October 22, 2020
    Inventors: Stefano Corazza, Emiliano Gambaretto, Prasanna Vasudevan
  • Patent number: 10748325
    Abstract: A system and method for automatic rigging of three-dimensional characters for facial animation provide a rigged mesh for an original three-dimensional mesh. A representative mesh is generated from the original mesh. Segments, key points, a bone set, and skinning weights are then determined for the representative mesh. The skinning weights and bone set are placed in the original mesh to generate the rigged mesh.
    Type: Grant
    Filed: November 19, 2012
    Date of Patent: August 18, 2020
    Assignee: ADOBE INC.
    Inventors: Stefano Corazza, Emiliano Gambaretto, Prasanna Vasudevan
  • Publication number: 20200186714
    Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
    Type: Application
    Filed: February 12, 2020
    Publication date: June 11, 2020
    Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
  • Publication number: 20200151509
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped to roughly the same distribution. Such a trained neural network system can be used to determine high-dynamic range lighting parameters from an input low-dynamic range image.
    Type: Application
    Filed: November 12, 2018
    Publication date: May 14, 2020
    Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Jonathan Eisenmann, Jinsong Zhang, Emiliano Gambaretto
  • Publication number: 20200118347
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Application
    Filed: November 29, 2018
    Publication date: April 16, 2020
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
  • Patent number: 10607329
    Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction. (An illustrative sketch of the two-headed network appears after this listing.)
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: March 31, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
  • Patent number: 10609286
    Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
  • Publication number: 20200074600
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Application
    Filed: November 8, 2019
    Publication date: March 5, 2020
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Publication number: 20200074682
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
    Type: Application
    Filed: November 6, 2019
    Publication date: March 5, 2020
    Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
  • Patent number: 10565768
    Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation. (An illustrative sketch of the recommendation flow appears after this listing.)
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: February 18, 2020
    Assignee: Adobe Inc.
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Patent number: 10521970
    Abstract: Certain embodiments involve refining local parameterizations that apply two-dimensional (“2D”) images to three-dimensional (“3D”) models. For instance, a particular parameterization-initialization process is selected based on one or more features of a target mesh region. An initial local parameterization for a 2D image is generated from this parameterization-initialization process. A quality metric for the initial local parameterization is computed, and the local parameterization is modified to improve the quality metric. The 3D model is modified by applying image points from the 2D image to the target mesh region in accordance with the modified local parameterization. (An illustrative sketch of the refinement loop appears after this listing.)
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: December 31, 2019
    Assignee: Adobe Inc.
    Inventors: Emiliano Gambaretto, Vladimir Kim, Qingnan Zhou, Mehmet Ersin Yumer
  • Patent number: 10515460
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: December 24, 2019
    Assignee: ADOBE INC.
    Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
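
The short code sketches below are informal illustrations of some of the inventions listed above; they were written for this page and none of them reproduce the patents' actual implementations.

For the feature-set editing patents (11694416 and 10957117), the sketch groups salient geometric features whose attributes are similar into a feature set and routes a single handle manipulation to every member of the set. The SalientFeature layout, the attribute-distance grouping rule, and the translation-only edit are assumptions made for illustration.

```python
"""Toy illustration of the feature-set editing idea in patents 11694416 and
10957117: related salient geometric features are grouped and edited together
through one handle.  The data layout and the grouping rule are assumptions
made for this sketch, not the patents' algorithms."""

from dataclasses import dataclass, field
import numpy as np

@dataclass
class SalientFeature:
    centroid: np.ndarray              # 3D position of the feature
    attributes: np.ndarray            # e.g. [radius, depth] of a drilled hole
    vertex_ids: list = field(default_factory=list)   # mesh vertices it covers

def group_features(features, attr_tol=0.1):
    """Group features whose attribute vectors are within attr_tol of each other
    (a deliberately simple stand-in for the patents' relationship analysis)."""
    feature_sets, used = [], set()
    for i, f in enumerate(features):
        if i in used:
            continue
        members = [i]
        for j in range(i + 1, len(features)):
            if j not in used and np.linalg.norm(f.attributes - features[j].attributes) < attr_tol:
                members.append(j)
        used.update(members)
        feature_sets.append(members)
    return feature_sets

def apply_handle_translation(vertices, features, feature_set, delta):
    """Dragging the shared handle by `delta` moves every feature in the set."""
    for idx in feature_set:
        for vid in features[idx].vertex_ids:
            vertices[vid] += delta
    return vertices

if __name__ == "__main__":
    verts = np.zeros((6, 3))
    feats = [
        SalientFeature(np.array([0.0, 0.0, 0.0]), np.array([1.00, 0.20]), [0, 1]),
        SalientFeature(np.array([2.0, 0.0, 0.0]), np.array([1.02, 0.21]), [2, 3]),
        SalientFeature(np.array([5.0, 0.0, 0.0]), np.array([3.00, 0.90]), [4, 5]),
    ]
    sets = group_features(feats)      # the two similar holes share one handle
    verts = apply_handle_translation(verts, feats, sets[0], np.array([0.0, 0.0, 0.5]))
    print(sets, verts[:4])
```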
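
For patent 11443412, the sketch below shows the two-phase training the abstract describes: an encoder/decoder pair is first trained as a light-mask predictor on low dynamic range data, and the same decoder is then fine-tuned on high dynamic range intensity targets. The layer sizes, the loss choices, and the random tensors standing in for the two datasets are assumptions of this sketch.

```python
"""Minimal PyTorch sketch of the two-phase training described in patent
11443412.  Network sizes, losses, and the random tensors standing in for the
LDR/HDR datasets are assumptions of this sketch, not the patent's values."""

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)                  # intermediate representation

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)                  # per-pixel light map (logits)

def train_step(opt, loss):
    opt.zero_grad()
    loss.backward()
    opt.step()

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# Phase 1: train the decoder as a binary light-mask predictor on stand-in LDR data.
ldr_imgs = torch.rand(8, 3, 64, 64)
ldr_masks = (torch.rand(8, 1, 64, 64) > 0.9).float()
for _ in range(5):
    train_step(opt, nn.functional.binary_cross_entropy_with_logits(
        decoder(encoder(ldr_imgs)), ldr_masks))

# Phase 2: fine-tune the same decoder on stand-in HDR intensity targets,
# turning the light-mask decoder into the intensity decoder.
hdr_imgs = torch.rand(8, 3, 64, 64)
hdr_intensity = torch.rand(8, 1, 64, 64) * 100.0
for _ in range(5):
    train_step(opt, nn.functional.mse_loss(decoder(encoder(hdr_imgs)), hdr_intensity))
```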
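
For the automatic-rigging patents (10748325 and 11170558), the sketch below illustrates only the final transfer step: skinning weights computed on a simplified representative mesh are copied onto the original mesh. Inverse-distance weights and nearest-vertex transfer are simplifications; the patents' segmentation and key-point steps are not modeled.

```python
"""Sketch of the weight-transfer step described in patents 10748325 and
11170558: skinning weights computed on a simplified "representative" mesh are
carried back onto the original, denser mesh.  The inverse-distance weighting
and nearest-vertex transfer below are simplifications, not the patents'
actual key-point/segmentation pipeline."""

import numpy as np

def skinning_weights(rep_vertices, bone_positions, eps=1e-6):
    """Inverse-distance weights of each representative vertex w.r.t. each bone,
    normalized so every row sums to 1."""
    d = np.linalg.norm(rep_vertices[:, None, :] - bone_positions[None, :, :], axis=2)
    w = 1.0 / (d + eps)
    return w / w.sum(axis=1, keepdims=True)

def transfer_weights(orig_vertices, rep_vertices, rep_weights):
    """Copy each original vertex's weights from its nearest representative vertex."""
    d = np.linalg.norm(orig_vertices[:, None, :] - rep_vertices[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return rep_weights[nearest]

if __name__ == "__main__":
    rep = np.random.rand(50, 3)          # representative (simplified) mesh
    orig = np.random.rand(500, 3)        # original facial mesh
    bones = np.array([[0.2, 0.5, 0.5], [0.8, 0.5, 0.5]])   # toy jaw/brow bones
    rigged = transfer_weights(orig, rep, skinning_weights(rep, bones))
    assert rigged.shape == (500, 2) and np.allclose(rigged.sum(axis=1), 1.0)
```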
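
For the outdoor-lighting patents (10609286 and 10979640), the sketch below wires a small CNN that regresses a few sky parameters from one low-dynamic range image to a toy analytic sky function that turns those parameters into a high-dynamic range sky dome. Both the network and the sun-lobe sky function are stand-ins; the patents refer to a sky model algorithm, not to this sketch's specific form.

```python
"""Sketch of the pipeline in patents 10609286 and 10979640: a CNN looks at one
LDR photo and predicts a handful of sky parameters, and a parametric sky model
turns those parameters into HDR lighting.  The tiny CNN and the toy sun-lobe
sky function below are stand-ins for the patents' network and sky model."""

import numpy as np
import torch
import torch.nn as nn

class SkyParamNet(nn.Module):
    """Regresses [sun azimuth, sun elevation, sky brightness] from an image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def toy_sky(azimuth, elevation, brightness, size=64):
    """HDR sky dome as a bright lobe around the predicted sun direction
    (a toy stand-in for a physically based sky model)."""
    theta = np.linspace(0, np.pi / 2, size)          # elevation samples
    phi = np.linspace(-np.pi, np.pi, size)           # azimuth samples
    T, P = np.meshgrid(theta, phi, indexing="ij")
    sun = np.array([np.cos(elevation) * np.cos(azimuth),
                    np.cos(elevation) * np.sin(azimuth),
                    np.sin(elevation)])
    dirs = np.stack([np.cos(T) * np.cos(P), np.cos(T) * np.sin(P), np.sin(T)], axis=-1)
    return brightness * np.exp(50.0 * (dirs @ sun - 1.0))   # HDR-range intensities

if __name__ == "__main__":
    net = SkyParamNet()
    az, el, b = net(torch.rand(1, 3, 64, 64))[0].tolist()
    env = toy_sky(az, abs(el), abs(b) + 1.0)
    print(env.shape, env.max())
```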
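
For the camera-calibration patents (10515460 and 10964060), the sketch below generates the kind of training data the abstract describes: perspective crops are cut from a single equirectangular panorama over a sampled range of field of view and pitch, and each crop keeps its known parameters as its label. Nearest-neighbour sampling and the tiny resolutions are simplifications.

```python
"""Sketch of the training-data generation in patents 10515460 and 10964060:
many perspective crops are cut from one equirectangular panorama under sampled
camera parameters, and each crop keeps its known parameters as the label.
The nearest-neighbour sampling and tiny resolutions are simplifications."""

import numpy as np

def crop_from_panorama(pano, fov_deg, pitch_deg, roll_deg, out_size=64):
    """Render a perspective view with the given field of view, pitch and roll
    from an equirectangular panorama (H x W x 3)."""
    h, w, _ = pano.shape
    f = 0.5 * out_size / np.tan(np.radians(fov_deg) / 2)
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    rays = np.stack([xs - out_size / 2, ys - out_size / 2,
                     np.full_like(xs, f, dtype=float)], -1)
    rays = rays / np.linalg.norm(rays, axis=-1, keepdims=True)

    roll, pitch = np.radians(roll_deg), np.radians(pitch_deg)
    Rz = np.array([[np.cos(roll), -np.sin(roll), 0], [np.sin(roll), np.cos(roll), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Rx @ Rz).T

    lon = np.arctan2(rays[..., 0], rays[..., 2])         # azimuth of each ray
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))        # elevation of each ray
    u = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (h - 1)).astype(int)
    return pano[v, u]                                    # nearest-neighbour lookup

if __name__ == "__main__":
    pano = np.random.rand(256, 512, 3)                   # stand-in panorama
    dataset = []
    for fov in (45, 60, 90):                             # sampled plausible parameters
        for pitch in (-10, 0, 10):
            crop = crop_from_panorama(pano, fov, pitch, roll_deg=0)
            dataset.append((crop, {"fov": fov, "pitch": pitch, "roll": 0}))
    print(len(dataset), dataset[0][0].shape)
```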
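
For patent 10936909, the sketch below shows a render-based loss: a simple scene (here a Lambertian sphere under a few directional lights) is rendered with predicted and with ground-truth lighting parameters, and the two renders are compared so gradients flow back to the predicted parameters. The sphere shading and the directional-light parameterization are assumptions, and the patent's real/synthetic domain-alignment term is omitted.

```python
"""Sketch of the render-based loss in patent 10936909: instead of comparing
lighting parameters directly, a simple scene (a Lambertian sphere under a few
directional lights) is rendered with predicted and ground-truth parameters and
the renders are compared.  The sphere shading and light parameterization are
this sketch's assumptions; the patent's domain-alignment term is omitted."""

import torch

def render_sphere(light_dirs, light_colors, res=32):
    """Differentiable Lambertian shading of a unit sphere's visible hemisphere."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, res), torch.linspace(-1, 1, res), indexing="ij")
    mask = (xs ** 2 + ys ** 2) < 1.0
    zs = torch.sqrt(torch.clamp(1.0 - xs ** 2 - ys ** 2, min=0.0))
    normals = torch.stack([xs, ys, zs], dim=-1)                    # (res, res, 3)
    dirs = light_dirs / light_dirs.norm(dim=-1, keepdim=True)      # (L, 3)
    ndotl = torch.clamp(normals @ dirs.T, min=0.0)                 # (res, res, L)
    img = ndotl @ light_colors                                     # (res, res, 3)
    return img * mask.unsqueeze(-1)

# Toy "network output": 3 directional lights, each with a direction and RGB color.
pred_dirs = torch.randn(3, 3, requires_grad=True)
pred_cols = torch.rand(3, 3, requires_grad=True)
gt_dirs, gt_cols = torch.randn(3, 3), torch.rand(3, 3)

loss = torch.nn.functional.mse_loss(render_sphere(pred_dirs, pred_cols),
                                    render_sphere(gt_dirs, gt_cols))
loss.backward()          # gradients flow to the predicted lighting parameters
print(loss.item(), pred_dirs.grad.shape)
```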
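
For patent 10607329, the sketch below gives a shared encoder two heads: one producing a per-pixel probability that each location of the wider panoramic environment is a light source, and one producing a low-frequency RGB estimate of that environment. The layer sizes and the 32x64 and 8x16 output resolutions are assumptions of this sketch.

```python
"""Sketch of the two outputs described in patent 10607329: from one limited
field-of-view photo, a shared encoder feeds (a) a light-mask head giving the
probability that each pixel of the wider panoramic environment is a light
source and (b) a head producing a low-frequency RGB estimate of that
environment.  Layer sizes and output resolutions are assumptions."""

import torch
import torch.nn as nn

class IndoorLightNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mask_head = nn.Linear(64, 32 * 64)       # light-source probabilities
        self.rgb_head = nn.Linear(64, 3 * 8 * 16)     # low-frequency RGB environment

    def forward(self, x):
        z = self.encoder(x)
        mask = torch.sigmoid(self.mask_head(z)).view(-1, 1, 32, 64)
        rgb = self.rgb_head(z).view(-1, 3, 8, 16)
        return mask, rgb

if __name__ == "__main__":
    mask, rgb = IndoorLightNet()(torch.rand(2, 3, 128, 128))
    print(mask.shape, rgb.shape)     # (2, 1, 32, 64), (2, 3, 8, 16)
```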
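
For patent 10565768, the sketch below stands in for the animation server and its database: metadata and compatibility scores for ordered pairs of animations drive a recommendation for a follow-up clip, which is then concatenated to the first. The in-memory dicts and example clips are placeholders, not the patent's data model.

```python
"""Sketch of the recommendation flow in patent 10565768: a database stores
animation metadata plus compatibility scores for ordered pairs of animations;
given a first animation the server recommends compatible follow-ups and
concatenates the selection.  The in-memory dicts and example clips below are
placeholders for the patent's database and animation data."""

animations = {
    "walk": {"frames": ["w1", "w2", "w3"]},
    "run":  {"frames": ["r1", "r2"]},
    "jump": {"frames": ["j1", "j2", "j3"]},
    "wave": {"frames": ["v1"]},
}

# Compatibility of ordered pairs: how well the second clip follows the first.
pair_compatibility = {
    ("walk", "run"): 0.9,
    ("walk", "jump"): 0.7,
    ("walk", "wave"): 0.2,
    ("run", "jump"): 0.8,
}

def recommend(first, top_k=2):
    """Return the animations most compatible as a follow-up to `first`."""
    scored = [(b, s) for (a, b), s in pair_compatibility.items() if a == first]
    return [name for name, _ in sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]]

def concatenate(first, selected):
    """Naive concatenation of the two clips' frames (no blending)."""
    return animations[first]["frames"] + animations[selected]["frames"]

if __name__ == "__main__":
    options = recommend("walk")          # -> ['run', 'jump']
    print(options, concatenate("walk", options[0]))
```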
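
For patent 10521970, the sketch below runs the loop the abstract describes: pick an initial local parameterization for a mesh region, score it with a quality metric, and modify it to improve the score. The plane-projection initializer, the edge-stretch metric, and the one-dimensional scale search are this sketch's simplifications of the patent's process.

```python
"""Sketch of the refinement loop in patent 10521970: choose an initial local
parameterization for the target mesh region, score it with a quality metric,
then adjust it to improve the score.  The plane-projection initializer, the
edge-stretch metric and the simple scale search below are simplifications."""

import numpy as np

def init_parameterization(vertices):
    """Project the region onto its two principal directions (a simple
    initializer, reasonable for near-planar regions)."""
    centered = vertices - vertices.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T                      # (N, 2) UV coordinates

def stretch_metric(vertices, uvs, edges):
    """Quality metric: how far 2D edge lengths deviate from 3D edge lengths."""
    d3 = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    d2 = np.linalg.norm(uvs[edges[:, 0]] - uvs[edges[:, 1]], axis=1)
    return np.mean((d2 / d3 - 1.0) ** 2)

def refine(vertices, uvs, edges, scales=np.linspace(0.5, 2.0, 31)):
    """Improve the metric with a 1-D search over a global UV scale."""
    best = min(scales, key=lambda s: stretch_metric(vertices, uvs * s, edges))
    return uvs * best

if __name__ == "__main__":
    verts = np.random.rand(20, 3) * [1, 1, 0.05]    # a nearly flat patch
    edges = np.array([[i, i + 1] for i in range(19)])
    uv0 = init_parameterization(verts)
    uv1 = refine(verts, uv0, edges)
    print(stretch_metric(verts, uv0, edges), ">=", stretch_metric(verts, uv1, edges))
```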