Patents by Inventor Emiliano Gambaretto

Emiliano Gambaretto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10475169
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: November 12, 2019
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
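The two-phase training process described in this abstract can be illustrated with a toy model. The sketch below, which uses single linear maps and random vectors in place of the real convolutional network and image data (all names and dimensions are illustrative, not from the patent), shows the structural idea: a light-mask decoder is first trained on LDR data, then its parameters are adjusted on HDR data to yield the intensity decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the real network operates on images, not vectors.
D_IMG, D_CODE, D_MAP = 32, 8, 16
encoder = rng.normal(size=(D_CODE, D_IMG)) * 0.1   # stands in for the encoder
decoder = rng.normal(size=(D_MAP, D_CODE)) * 0.1   # light-mask decoder weights

def train_decoder(decoder, xs, ys, lr=0.01, steps=100):
    """Per-sample gradient descent on the decoder weights only (MSE loss)."""
    for _ in range(steps):
        for x, y in zip(xs, ys):
            code = encoder @ x
            grad = np.outer(decoder @ code - y, code)   # dMSE/d(decoder)
            decoder = decoder - lr * grad
    return decoder

# Phase 1: train the light-mask decoder on (synthetic) LDR pairs.
ldr_x = [rng.normal(size=D_IMG) for _ in range(20)]
ldr_y = [rng.normal(size=D_MAP) for _ in range(20)]
decoder = train_decoder(decoder, ldr_x, ldr_y)

# Phase 2: adjust the same parameters on (synthetic) HDR intensity maps,
# producing the "intensity decoder" of the abstract.
hdr_x = [rng.normal(size=D_IMG) for _ in range(5)]
hdr_y = [rng.normal(size=D_MAP) for _ in range(5)]
intensity_decoder = train_decoder(decoder, hdr_x, hdr_y, lr=0.005, steps=20)
```

The key point the toy preserves is that phase 2 starts from the phase-1 weights rather than from scratch, so the HDR data fine-tunes rather than retrains the decoder.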
  • Publication number: 20190259216
    Abstract: Certain embodiments involve refining local parameterizations that apply two-dimensional (“2D”) images to three-dimensional (“3D”) models. For instance, a particular parameterization-initialization process is selected based on one or more features of a target mesh region. An initial local parameterization for a 2D image is generated from this parameterization-initialization process. A quality metric for the initial local parameterization is computed, and the local parameterization is modified to improve the quality metric. The 3D model is modified by applying image points from the 2D image to the target mesh region in accordance with the modified local parameterization.
    Type: Application
    Filed: February 21, 2018
    Publication date: August 22, 2019
    Inventors: Emiliano Gambaretto, Vladimir Kim, Qingnan Zhou, Mehmet Ersin Yumer
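The abstract's "quality metric" for a local parameterization can take many forms; one simple example is area distortion, the ratio of a triangle's UV-space area to its surface area. The sketch below (toy data; the metric and the "modification" step are illustrative stand-ins, not the patent's method) computes that metric for one triangle and improves it with the simplest possible modification, a uniform UV rescale.

```python
import numpy as np

def area2d(a, b, c):
    """Area of a 2D (UV-space) triangle via the cross-product determinant."""
    return 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

def area3d(a, b, c):
    """Area of a 3D (surface) triangle via the cross product."""
    u, v = b - a, c - a
    cx, cy, cz = u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]
    return 0.5 * (cx*cx + cy*cy + cz*cz) ** 0.5

# One triangle of a target mesh region and its initial UV coordinates.
tri3d  = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 1)]]
tri_uv = [np.array(p, float) for p in [(0, 0), (0.5, 0), (0, 0.5)]]

surface = area3d(*tri3d)
quality = area2d(*tri_uv) / surface    # area-distortion metric, ideal = 1.0

# "Modify the local parameterization to improve the quality metric":
# here, a uniform rescale of the UVs that drives the ratio to 1.
scale = (surface / area2d(*tri_uv)) ** 0.5
tri_uv = [scale * p for p in tri_uv]
```

A real system would optimize a metric like this over the whole mesh region rather than one triangle, and would also account for angular (conformal) distortion.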
  • Publication number: 20190164261
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Application
    Filed: November 28, 2017
    Publication date: May 30, 2019
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Publication number: 20190164312
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
    Type: Application
    Filed: November 29, 2017
    Publication date: May 30, 2019
    Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
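The data-generation step this abstract describes, extracting many pinhole-camera views with known parameters from one panorama, reduces to mapping each virtual-camera pixel to a ray and that ray to equirectangular coordinates. A minimal geometric sketch, assuming an equirectangular panorama and a yaw-only camera (function and parameter names are illustrative, not from the patent):

```python
import math

def pano_lookup(x, y, width, height, fov_deg, pan_deg, pano_w, pano_h):
    """Map pixel (x, y) of a virtual pinhole camera (horizontal FOV fov_deg,
    yaw pan_deg) to pixel coordinates of an equirectangular panorama."""
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length, px
    # Ray direction in camera space (z forward, x right, y down).
    dx, dy, dz = x - width / 2, y - height / 2, f
    # Rotate the ray by the yaw (pan) angle about the vertical axis.
    yaw = math.radians(pan_deg)
    rx = dx * math.cos(yaw) + dz * math.sin(yaw)
    rz = -dx * math.sin(yaw) + dz * math.cos(yaw)
    # Convert the ray to spherical (longitude, latitude) coordinates.
    lon = math.atan2(rx, rz)
    lat = math.atan2(dy, math.hypot(rx, rz))
    # Spherical coordinates to panorama pixel coordinates.
    u = (lon / math.pi + 1) / 2 * pano_w
    v = (lat / (math.pi / 2) + 1) / 2 * pano_h
    return u, v
```

Sampling `fov_deg` and `pan_deg` (plus pitch and roll in a full implementation) over a range of plausible values and resampling the panorama through this mapping yields many training crops, each with a known ground-truth parameter set.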
  • Publication number: 20180359416
    Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
    Type: Application
    Filed: June 13, 2017
    Publication date: December 13, 2018
    Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
  • Publication number: 20180315231
    Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.
    Type: Application
    Filed: July 2, 2018
    Publication date: November 1, 2018
    Inventors: Stefano Corazza, Emiliano Gambaretto
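The recommendation mechanism this abstract describes, scoring ordered pairs of animations stored in a database and concatenating the selection, can be sketched with an in-memory dictionary standing in for the database (clip names and scores are hypothetical):

```python
# Hypothetical compatibility metadata: a score for each ordered pair of
# clips, where higher means a smoother transition when concatenated.
compat = {
    ("walk", "run"):  0.9,
    ("walk", "jump"): 0.7,
    ("walk", "swim"): 0.1,
    ("run",  "jump"): 0.8,
}

def recommend(first_clip, top_k=2):
    """Rank clips that can follow first_clip by stored compatibility."""
    scored = [(clip, s) for (a, clip), s in compat.items() if a == first_clip]
    return [clip for clip, _ in sorted(scored, key=lambda t: -t[1])[:top_k]]

def concatenate(*clips):
    """Stand-in for joining the selected animations into one sequence."""
    return list(clips)
```

Because compatibility is stored for *ordered* pairs, ("walk", "run") and ("run", "walk") can score differently, which matches the abstract's emphasis on the direction of the transition.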
  • Publication number: 20180260975
    Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as image compositing, modeling, and reconstruction.
    Type: Application
    Filed: March 13, 2017
    Publication date: September 13, 2018
    Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
  • Patent number: 10049482
    Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.
    Type: Grant
    Filed: July 23, 2012
    Date of Patent: August 14, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Patent number: 9978175
    Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameter, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
    Type: Grant
    Filed: March 16, 2015
    Date of Patent: May 22, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Patent number: 9911220
    Abstract: The present disclosure is directed to integrating external 3D models into a character creation system. In general, a character creation system imports an external 3D model by determining correspondence values for each vertex within the 3D model. Once imported, a user can customize the 3D character by adding texture to the character, adjusting character features, swapping out one or more character features, adding clothes and accessories to the character, automatically rigging the character, and/or animating the character.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: March 6, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Stefano Corazza, Emiliano Gambaretto, Charles Piña, Daniel Babcock
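The import step this abstract mentions, "determining correspondence values for each vertex" of an external model, could be realized in many ways; the simplest is nearest-neighbor matching against a template mesh. A minimal sketch under that assumption (the function name and approach are illustrative, not the patent's method):

```python
import numpy as np

def correspondence_values(ext_verts, template_verts):
    """For each vertex of an imported external model, return the index of
    the nearest template-mesh vertex (brute-force nearest neighbor)."""
    ext = np.asarray(ext_verts, float)[:, None, :]      # (N, 1, 3)
    tmpl = np.asarray(template_verts, float)[None, :, :]  # (1, M, 3)
    return np.argmin(((ext - tmpl) ** 2).sum(-1), axis=1)  # (N,)
```

Once every external vertex is tied to a template vertex, template-side data such as texture regions, feature labels, or rigging can be transferred to the imported model.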
  • Patent number: 9619914
    Abstract: Systems and methods are described for animating 3D characters using synthetic motion data generated by motion models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, the synthetic motion data is streamed to a user device that includes a rendering engine and the user device renders an animation of a 3D character using the streamed synthetic motion data. In several embodiments, an animator can upload a custom model of a 3D character or a custom 3D character is generated by the server system in response to a high level description of a desired 3D character provided by the user and the synthetic motion data generated by the generative model is retargeted to animate the custom 3D character.
    Type: Grant
    Filed: December 2, 2013
    Date of Patent: April 11, 2017
    Assignee: Facebook, Inc.
    Inventors: Edilson de Aguiar, Emiliano Gambaretto, Stefano Corazza
  • Patent number: 9460539
    Abstract: Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters.
    Type: Grant
    Filed: June 6, 2014
    Date of Patent: October 4, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Edilson de Aguiar, Stefano Corazza, Emiliano Gambaretto
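The spatial compression this abstract describes, replacing per-vertex motion with per-cluster motion, can be quantified with a toy example. The sketch below uses random data and pure translations per cluster (the real system handles general rigid transforms and lossy fits; all names here are illustrative), so the reconstruction is exact and the compression ratio is easy to read off:

```python
import numpy as np

rng = np.random.default_rng(1)
V, F, K = 300, 10, 4                      # vertices, frames, clusters

rest = rng.normal(size=(V, 3))            # rest-pose vertex positions
labels = rng.integers(0, K, size=V)       # toy vertex-to-cluster assignment
# Synthetic motion: each cluster translates rigidly in every frame.
offsets = rng.normal(size=(F, K, 3))
frames = rest[None] + offsets[:, labels]  # (F, V, 3) per-vertex animation

# Spatial compression: store one translation per cluster per frame
# instead of a position per vertex per frame.
compressed = np.stack([
    np.stack([frames[f][labels == k].mean(0) - rest[labels == k].mean(0)
              for k in range(K)])
    for f in range(F)
])                                        # (F, K, 3)

reconstructed = rest[None] + compressed[:, labels]
ratio = frames.size / (rest.size + compressed.size)
```

With 300 vertices in 4 clusters over 10 frames, the cluster representation is roughly an order of magnitude smaller than the raw per-vertex data; the abstract's temporal compression would then further compress the time series of cluster transforms.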
  • Patent number: 9305387
    Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameter and clothing selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the clothing selection via a user interface.
    Type: Grant
    Filed: February 24, 2014
    Date of Patent: April 5, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Publication number: 20160027200
    Abstract: The present disclosure is directed to integrating external 3D models into a character creation system. In general, a character creation system imports an external 3D model by determining correspondence values for each vertex within the 3D model. Once imported, a user can customize the 3D character by adding texture to the character, adjusting character features, swapping out one or more character features, adding clothes and accessories to the character, automatically rigging the character, and/or animating the character.
    Type: Application
    Filed: July 28, 2015
    Publication date: January 28, 2016
    Inventors: Stefano Corazza, Emiliano Gambaretto, Charles Piña, Daniel Babcock
  • Publication number: 20150193975
    Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameter, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
    Type: Application
    Filed: March 16, 2015
    Publication date: July 9, 2015
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Publication number: 20150145859
    Abstract: Systems and methods for animating 3D characters using a non-rigged mesh or a group of non-rigged meshes that define the appearance of the character are illustrated. Certain embodiments disclose a process for the automatic rigging or autorigging of a non-rigged mesh or meshes. In one embodiment, a method of automatically rigging at least one mesh defining the external appearance of a 3D character includes creating a 3D representation of the external appearance of the 3D character defined by the at least one mesh, where the 3D representation is a single closed form mesh, identifying salient points of the 3D representation, fitting a reference skeleton to the 3D representation, calculating skinning weights for the 3D representation based upon the fitted skeleton, and automatically rigging the 3D character by transferring the skeleton and skinning weights generated with respect to the 3D representation to the at least one mesh defining the external appearance of the 3D character.
    Type: Application
    Filed: August 4, 2014
    Publication date: May 28, 2015
    Inventors: Stefano Corazza, Emiliano Gambaretto
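The "calculating skinning weights for the 3D representation based upon the fitted skeleton" step in this abstract can be illustrated with a common heuristic: weight each vertex by inverse distance to each bone segment, then normalize. This is only a sketch of that one step, not the patent's actual weighting scheme:

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the bone segment with endpoints a, b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def skinning_weights(verts, bones, power=2.0):
    """Inverse-distance skinning weights: one row per vertex, normalized
    to sum to 1 across bones, so nearby bones dominate a vertex's motion."""
    w = np.array([[1.0 / (point_segment_dist(v, a, b) ** power + 1e-8)
                   for a, b in bones] for v in verts])
    return w / w.sum(axis=1, keepdims=True)
```

Per the abstract, weights computed this way on the single closed-form 3D representation would then be transferred back to the original mesh (or meshes) that define the character's external appearance.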
  • Patent number: 8982122
    Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameter, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
    Type: Grant
    Filed: March 25, 2011
    Date of Patent: March 17, 2015
    Assignee: Mixamo, Inc.
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Patent number: 8928672
    Abstract: Systems and methods for generating and concatenating 3D character animations are described including systems in which recommendations are made by the animation system concerning motions that smoothly transition when concatenated. One embodiment includes a server system connected to a communication network and configured to communicate with a user device that is also connected to the communication network.
    Type: Grant
    Filed: April 28, 2011
    Date of Patent: January 6, 2015
    Assignee: Mixamo, Inc.
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Publication number: 20140313192
    Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameter and clothing selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the clothing selection via a user interface.
    Type: Application
    Filed: February 24, 2014
    Publication date: October 23, 2014
    Applicant: Mixamo, Inc.
    Inventors: Stefano Corazza, Emiliano Gambaretto
  • Publication number: 20140285496
    Abstract: Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters.
    Type: Application
    Filed: June 6, 2014
    Publication date: September 25, 2014
    Applicant: Mixamo, Inc.
    Inventors: Edilson de Aguiar, Stefano Corazza, Emiliano Gambaretto