Patents by Inventor Emiliano Gambaretto
Emiliano Gambaretto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10475169
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 28, 2017
Date of Patent: November 12, 2019
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
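The two-phase training described in this abstract can be sketched very loosely as follows. Everything here is a toy stand-in: linear maps play the roles of the convolutional encoder and decoder, and the dimensions, learning rate, and synthetic LDR/HDR data are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the networks: a linear "encoder" mapping an image vector
# to an intermediate representation, and a linear "decoder" mapping that
# representation to a per-pixel light map. (Hypothetical sizes.)
D_IMG, D_LATENT, D_MAP = 16, 4, 16
enc = rng.normal(0, 0.1, (D_LATENT, D_IMG))
dec = rng.normal(0, 0.1, (D_MAP, D_LATENT))

def train_step(x, target, lr=0.01):
    """One MSE gradient step through decoder(encoder(x))."""
    global enc, dec
    z = enc @ x
    err = dec @ z - target            # d(loss)/d(output), up to a constant
    g_dec = np.outer(err, z)
    g_enc = np.outer(dec.T @ err, x)
    dec -= lr * g_dec
    enc -= lr * g_enc

# Phase 1: train a light *mask* decoder from plentiful LDR images
# (toy labels: bright pixels count as light sources).
ldr_images = rng.uniform(0, 1, (200, D_IMG))
ldr_masks = (ldr_images > 0.7).astype(float)
for x, m in zip(ldr_images, ldr_masks):
    train_step(x, m)

# Phase 2: fine-tune on scarcer HDR data so the mask decoder becomes an
# *intensity* decoder (toy HDR intensities).
hdr_images = rng.uniform(0, 1, (50, D_IMG))
hdr_intensity = np.maximum(hdr_images - 0.7, 0) * 10
for x, t in zip(hdr_images, hdr_intensity):
    train_step(x, t)

pred = dec @ (enc @ hdr_images[0])    # predicted light intensity map
print(pred.shape)
```

The design point the abstract makes is that LDR panoramas are abundant while HDR captures are rare, so the bulk of training happens in phase one and phase two only adjusts the already-trained decoder.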
-
Publication number: 20190259216
Abstract: Certain embodiments involve refining local parameterizations that apply two-dimensional ("2D") images to three-dimensional ("3D") models. For instance, a particular parameterization-initialization process is selected based on one or more features of a target mesh region. An initial local parameterization for a 2D image is generated from this parameterization-initialization process. A quality metric for the initial local parameterization is computed, and the local parameterization is modified to improve the quality metric. The 3D model is modified by applying image points from the 2D image to the target mesh region in accordance with the modified local parameterization.
Type: Application
Filed: February 21, 2018
Publication date: August 22, 2019
Inventors: Emiliano Gambaretto, Vladimir Kim, Qingnan Zhou, Mehmet Ersin Yumer
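The abstract does not say which quality metric is used, but the idea of scoring a candidate parameterization and keeping the better one can be sketched with a toy metric: the variance of the per-triangle area scale factor between the 2D layout and the 3D surface (zero means perfectly uniform scaling). The mesh, UV candidates, and metric below are all illustrative assumptions.

```python
import numpy as np

def area3d(v0, v1, v2):
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

def area2d(u0, u1, u2):
    e1, e2 = u1 - u0, u2 - u0
    return 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])

def stretch_metric(verts, uvs, faces):
    """Toy quality metric: variance of the per-triangle area scale factor
    between the 2D parameterization and the 3D surface (0 == uniform)."""
    scales = np.array([area2d(uvs[i], uvs[j], uvs[k]) /
                       area3d(verts[i], verts[j], verts[k])
                       for i, j, k in faces])
    return float(np.var(scales / scales.mean()))

# A small "tent" patch: two triangles sharing an edge.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]], float)
faces = [(0, 1, 2), (1, 3, 2)]

# Two candidate initial parameterizations (hypothetical initializers):
uvs_a = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)    # planar drop-z
uvs_b = np.array([[0, 0], [1, 0], [0, 1], [1, 0.5]], float)  # skewed

metric_a = stretch_metric(verts, uvs_a, faces)
metric_b = stretch_metric(verts, uvs_b, faces)
best = uvs_a if metric_a <= metric_b else uvs_b  # keep the better candidate
```

A refinement loop in the spirit of the abstract would then perturb the UVs of `best` and accept changes that lower the metric.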
-
Publication number: 20190164261
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Application
Filed: November 28, 2017
Publication date: May 30, 2019
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Publication number: 20190164312
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
Type: Application
Filed: November 29, 2017
Publication date: May 30, 2019
Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
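The core trick here, extracting many labeled crops from one panorama, can be sketched with a standard equirectangular-to-pinhole resampler. The sampling ranges, output size, and nearest-neighbor lookup below are illustrative assumptions, not the patent's actual parameters.

```python
import numpy as np

def crop_from_panorama(pano, yaw, pitch, fov_deg, out_w=64, out_h=64):
    """Sample a rectilinear (pinhole) view from an equirectangular panorama.
    The crop is labeled with the camera parameters used to generate it."""
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # focal length in px
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    # Camera-space rays, rotated by pitch (x-axis) then yaw (y-axis).
    rays = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rays = rays @ (Ry @ Rx).T
    # Ray direction -> equirectangular pixel coordinates (nearest neighbor).
    lon = np.arctan2(rays[..., 0], rays[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))       # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[v, u]

# Many labeled training images from one panorama (hypothetical ranges):
pano = np.random.default_rng(1).uniform(size=(256, 512, 3))
dataset = [(crop_from_panorama(pano, yaw, 0.1, 60),
            {"yaw": yaw, "pitch": 0.1, "fov": 60})
           for yaw in np.linspace(-np.pi, np.pi, 8)]
print(len(dataset), dataset[0][0].shape)
```

Because every crop's yaw, pitch, and field of view are chosen by the sampler, the labels are exact by construction, which is what makes a single panorama yield many supervised training pairs.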
-
Publication number: 20180359416
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
Type: Application
Filed: June 13, 2017
Publication date: December 13, 2018
Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
-
Publication number: 20180315231
Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.
Type: Application
Filed: July 2, 2018
Publication date: November 1, 2018
Inventors: Stefano Corazza, Emiliano Gambaretto
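The recommendation flow this abstract describes, look up ordered-pair compatibility metadata, rank candidates, then concatenate the chosen clips, can be sketched in a few lines. The clip names, scores, and threshold are invented for illustration, and real concatenation would blend the clips at the transition rather than just listing them.

```python
# Hypothetical compatibility metadata: score for the ordered pair (a, b),
# meaning "b transitions smoothly when played after a".
compat = {
    ("walk", "run"): 0.9,
    ("walk", "jump"): 0.6,
    ("walk", "swim"): 0.1,
    ("run", "jump"): 0.8,
}

def recommend(first, k=2, threshold=0.5):
    """Rank animations that can follow `first`, best transitions first."""
    candidates = [(b, s) for (a, b), s in compat.items()
                  if a == first and s >= threshold]
    return [b for b, _ in sorted(candidates, key=lambda t: -t[1])][:k]

def concatenate(*clips):
    """Stand-in for blending clips at the transition; here just a sequence."""
    return list(clips)

print(recommend("walk"))                           # ['run', 'jump']
print(concatenate("walk", recommend("walk")[0]))   # ['walk', 'run']
```

Storing compatibility per *ordered* pair matters because transitions are not symmetric: walk-to-run may blend smoothly while run-to-walk needs a different deceleration clip.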
-
Publication number: 20180260975
Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as image compositing, modeling, and reconstruction.
Type: Application
Filed: March 13, 2017
Publication date: September 13, 2018
Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
-
Patent number: 10049482
Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.
Type: Grant
Filed: July 23, 2012
Date of Patent: August 14, 2018
Assignee: Adobe Systems Incorporated
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Patent number: 9978175
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
Type: Grant
Filed: March 16, 2015
Date of Patent: May 22, 2018
Assignee: Adobe Systems Incorporated
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Patent number: 9911220
Abstract: The present disclosure is directed to integrating external 3D models into a character creation system. In general, a character creation system imports an external 3D model by determining correspondence values for each vertex within the 3D model. Once imported, a user can customize the 3D character by adding texture to the character, adjusting character features, swapping out one or more character features, adding clothes and accessories to the character, automatically rigging the character, and/or animating the character.
Type: Grant
Filed: July 28, 2015
Date of Patent: March 6, 2018
Assignee: Adobe Systems Incorporated
Inventors: Stefano Corazza, Emiliano Gambaretto, Charles Piña, Daniel Babcock
-
Patent number: 9619914
Abstract: Systems and methods are described for animating 3D characters using synthetic motion data generated by motion models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, the synthetic motion data is streamed to a user device that includes a rendering engine and the user device renders an animation of a 3D character using the streamed synthetic motion data. In several embodiments, an animator can upload a custom model of a 3D character or a custom 3D character is generated by the server system in response to a high level description of a desired 3D character provided by the user, and the synthetic motion data generated by the generative model is retargeted to animate the custom 3D character.
Type: Grant
Filed: December 2, 2013
Date of Patent: April 11, 2017
Assignee: Facebook, Inc.
Inventors: Edilson de Aguiar, Emiliano Gambaretto, Stefano Corazza
-
Patent number: 9460539
Abstract: Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion, allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters.
Type: Grant
Filed: June 6, 2014
Date of Patent: October 4, 2016
Assignee: Adobe Systems Incorporated
Inventors: Edilson de Aguiar, Stefano Corazza, Emiliano Gambaretto
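The spatial-then-temporal compression idea can be sketched on toy data: group vertices whose motion is similar into clusters (spatial compression, here a crude 2-means over trajectories rather than the patent's graph construction), then subsample each cluster's motion track over time (temporal compression). All sizes, the clustering method, and the keyframe spacing are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
V, T = 100, 60                                   # vertices, frames
# Toy motion: half the vertices translate along x, half along y.
base = rng.uniform(size=(V, 3))
traj = np.repeat(base[:, None, :], T, axis=1)    # (V, T, 3) positions
t = np.linspace(0, 1, T)
traj[:50, :, 0] += t
traj[50:, :, 1] += t

# --- Spatial compression: cluster vertices by motion similarity and keep
# one representative motion track per cluster (plus the rest pose).
motion = traj - traj[:, :1, :]                   # motion relative to frame 0
flat = motion.reshape(V, -1)
centers = flat[[0, -1]].copy()                   # crude 2-means init
for _ in range(5):
    d = np.linalg.norm(flat[:, None] - centers[None], axis=-1)
    labels = d.argmin(axis=1)
    centers = np.stack([flat[labels == c].mean(axis=0) for c in range(2)])

# --- Temporal compression: keep every 6th frame of each cluster track;
# the rest would be reconstructed by interpolation at playback.
keys = np.arange(0, T, 6)
compressed = centers.reshape(2, T, 3)[:, keys]

raw_floats = traj.size
compressed_floats = base.size + compressed.size + labels.size
ratio = raw_floats / compressed_floats
print(round(ratio, 1))
```

The two stages compound: clustering removes redundancy across vertices that move together, and keyframing removes redundancy across frames where each cluster moves smoothly.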
-
Patent number: 9305387
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters and clothing selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the clothing selection via a user interface.
Type: Grant
Filed: February 24, 2014
Date of Patent: April 5, 2016
Assignee: Adobe Systems Incorporated
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Publication number: 20160027200
Abstract: The present disclosure is directed to integrating external 3D models into a character creation system. In general, a character creation system imports an external 3D model by determining correspondence values for each vertex within the 3D model. Once imported, a user can customize the 3D character by adding texture to the character, adjusting character features, swapping out one or more character features, adding clothes and accessories to the character, automatically rigging the character, and/or animating the character.
Type: Application
Filed: July 28, 2015
Publication date: January 28, 2016
Inventors: Stefano Corazza, Emiliano Gambaretto, Charles Piña, Daniel Babcock
-
Publication number: 20150193975
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
Type: Application
Filed: March 16, 2015
Publication date: July 9, 2015
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Publication number: 20150145859
Abstract: Systems and methods for animating 3D characters using a non-rigged mesh or a group of non-rigged meshes that define the appearance of the character are illustrated. Certain embodiments disclose a process for the automatic rigging or autorigging of a non-rigged mesh or meshes. In one embodiment, a method of automatically rigging at least one mesh defining the external appearance of a 3D character includes creating a 3D representation of the external appearance of the 3D character defined by the at least one mesh, where the 3D representation is a single closed form mesh, identifying salient points of the 3D representation, fitting a reference skeleton to the 3D representation, calculating skinning weights for the 3D representation based upon the fitted skeleton, and automatically rigging the 3D character by transferring the skeleton and skinning weights generated with respect to the 3D representation to the at least one mesh defining the external appearance of the 3D character.
Type: Application
Filed: August 4, 2014
Publication date: May 28, 2015
Inventors: Stefano Corazza, Emiliano Gambaretto
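The "calculating skinning weights" step can be illustrated with the simplest possible scheme: an inverse-distance falloff from each vertex to each joint, normalized so weights sum to one. This is a deliberately naive stand-in; production autorigging typically uses heat diffusion or geodesic distances on the mesh surface, and the limb geometry below is invented.

```python
import numpy as np

def skinning_weights(vertices, joints, power=2.0):
    """Toy skinning weights: inverse-distance falloff to each joint,
    normalized per vertex. Rows sum to 1; a vertex at a joint gets
    (almost) all of its weight from that joint."""
    d = np.linalg.norm(vertices[:, None, :] - joints[None, :, :], axis=-1)
    w = 1.0 / (d ** power + 1e-8)          # epsilon avoids divide-by-zero
    return w / w.sum(axis=1, keepdims=True)

# A vertical "limb" of 5 vertices with a joint at each end.
verts = np.stack([np.zeros(5), np.zeros(5), np.linspace(0, 1, 5)], axis=1)
joints = np.array([[0, 0, 0.0], [0, 0, 1.0]])
W = skinning_weights(verts, joints)
print(np.round(W, 3))
```

Once such weights exist on the watertight 3D representation, the transfer step the abstract describes amounts to copying each original-mesh vertex's weights from its nearest point on that representation.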
-
Patent number: 8982122
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
Type: Grant
Filed: March 25, 2011
Date of Patent: March 17, 2015
Assignee: Mixamo, Inc.
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Patent number: 8928672
Abstract: Systems and methods for generating and concatenating 3D character animations are described, including systems in which recommendations are made by the animation system concerning motions that smoothly transition when concatenated. One embodiment includes a server system connected to a communication network and configured to communicate with a user device that is also connected to the communication network.
Type: Grant
Filed: April 28, 2011
Date of Patent: January 6, 2015
Assignee: Mixamo, Inc.
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Publication number: 20140313192
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters and clothing selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the clothing selection via a user interface.
Type: Application
Filed: February 24, 2014
Publication date: October 23, 2014
Applicant: Mixamo, Inc.
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Publication number: 20140285496
Abstract: Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion, allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters.
Type: Application
Filed: June 6, 2014
Publication date: September 25, 2014
Applicant: Mixamo, Inc.
Inventors: Edilson de Aguiar, Stefano Corazza, Emiliano Gambaretto