Patents by Inventor Stefano Corazza
Stefano Corazza has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11763506
Abstract: The present disclosure relates to an AR animation generation system that detects a change in position of a mobile computing system in a real-world environment, determines that a position for a virtual object in an augmented reality (AR) scene is to be changed from a first position in the AR scene to a second position in the AR scene, identifies an animation profile to be used for animating the virtual object, wherein the animation profile is associated with the virtual object, and animates the virtual object in the AR scene using the animation profile. Animating the virtual object in the AR scene includes moving the virtual object in the AR scene from the first position to the second position along a path, wherein the path and a movement of the virtual object along the path are determined based on the animation profile.
Type: Grant
Filed: April 15, 2021
Date of Patent: September 19, 2023
Assignee: Adobe Inc.
Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
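The abstract above describes a virtual object driven along a path whose shape and timing come from an animation profile. A minimal sketch of that idea follows; the `AnimationProfile` fields, the easing curves, and the straight-line path are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class AnimationProfile:
    """Per-object settings that shape how a virtual object travels (assumed fields)."""
    duration_s: float = 1.0        # total travel time
    easing: str = "ease_in_out"    # timing curve applied along the path

def _ease(t: float, easing: str) -> float:
    # Map normalized time t in [0, 1] through the profile's timing curve.
    if easing == "linear":
        return t
    if easing == "ease_in_out":
        return 3 * t * t - 2 * t * t * t  # smoothstep
    raise ValueError(f"unknown easing: {easing}")

def animate(start, end, profile: AnimationProfile, steps: int):
    """Return intermediate positions from start to end along a straight path.

    In the patented system the path shape would also come from the profile;
    this sketch keeps the path linear for brevity.
    """
    positions = []
    for i in range(steps + 1):
        t = _ease(i / steps, profile.easing)
        positions.append(tuple(s + (e - s) * t for s, e in zip(start, end)))
    return positions

# Move an object from the origin to (1, 0, 2) in five samples.
path = animate((0.0, 0.0, 0.0), (1.0, 0.0, 2.0), AnimationProfile(), steps=4)
```

Swapping the profile's `easing` (or, in a fuller version, its path generator) changes the motion without touching the object or the endpoints, which is the point of keeping the profile as separate data.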
-
Patent number: 11488342
Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes the image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of the material-map type and similarity analysis. The technology described herein may generate synthetic maps to be used in place of the missing material maps. The synthetic maps may be generated using a Generative Adversarial Network (GAN).
Type: Grant
Filed: May 27, 2021
Date of Patent: November 1, 2022
Assignee: ADOBE INC.
Inventors: Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan, Zexiang Xu, Yu-Ying Yeh, Stefano Corazza
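The core idea above is assigning a material-map type from image content alone. The patent uses a convolutional neural network for this; the toy stand-in below substitutes two hand-written image statistics purely to make the classification step concrete, and its map-type names and thresholds are invented.

```python
def classify_material_map(pixels):
    """Guess a material-map type from a flat list of RGB pixels in [0, 1].

    Stand-in for the patent's CNN classifier: normal maps encode unit
    vectors (blue-dominant, near (0.5, 0.5, 1.0)), and grayscale images
    are commonly roughness/metallic masks.
    """
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    if mean[2] > 0.8 and abs(mean[0] - 0.5) < 0.2 and abs(mean[1] - 0.5) < 0.2:
        return "normal"
    if all(abs(p[0] - p[1]) < 1e-6 and abs(p[1] - p[2]) < 1e-6 for p in pixels):
        return "roughness"
    return "base_color"

# A perfectly flat tangent-space normal map is solid (0.5, 0.5, 1.0).
label = classify_material_map([(0.5, 0.5, 1.0)] * 16)
```

A learned classifier replaces these brittle heuristics in practice, but the interface is the same: pixels in, map type out, after which the maps can be grouped into material definitions.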
-
Patent number: 11170558
Abstract: A system and method for automatic rigging of three dimensional characters for facial animation provide a rigged mesh for an original three dimensional mesh. A representative mesh is generated from the original mesh. Segments, key points, a bone set, and skinning weights are then determined for the representative mesh. The skinning weights and bone set are placed in the original mesh to generate the rigged mesh.
Type: Grant
Filed: July 2, 2020
Date of Patent: November 9, 2021
Assignee: ADOBE INC.
Inventors: Stefano Corazza, Emiliano Gambaretto, Prasanna Vasudevan
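The abstract's pipeline computes skinning weights on a simplified representative mesh and then carries them back to the original mesh. The sketch below shows only that final transfer step, using nearest-vertex lookup as an assumed (simplest possible) matching strategy; the segment, key-point, and bone-set computation is omitted.

```python
def nearest_index(p, points):
    # Index of the point closest to p (squared Euclidean distance).
    return min(range(len(points)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(p, points[i])))

def transfer_skinning(rep_vertices, rep_weights, orig_vertices):
    """Copy per-vertex skinning weights from the representative mesh onto
    the original mesh by nearest-vertex lookup (an illustrative assumption;
    the patent does not specify the transfer rule)."""
    return [rep_weights[nearest_index(v, rep_vertices)] for v in orig_vertices]

rep_vertices = [(0.0, 0.0), (1.0, 0.0)]        # low-res representative mesh
rep_weights = [{"hip": 1.0}, {"knee": 1.0}]    # bone -> influence, per vertex
orig_vertices = [(0.1, 0.0), (0.9, 0.1), (0.55, 0.0)]
weights = transfer_skinning(rep_vertices, rep_weights, orig_vertices)
```

Working on the representative mesh keeps the expensive rigging analysis independent of the original mesh's resolution; only this cheap transfer touches the full vertex count.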
-
Publication number: 20210256751
Abstract: The present disclosure relates to an AR animation generation system that detects a change in position of a mobile computing system in a real-world environment, determines that a position for a virtual object in an augmented reality (AR) scene is to be changed from a first position in the AR scene to a second position in the AR scene, identifies an animation profile to be used for animating the virtual object, wherein the animation profile is associated with the virtual object, and animates the virtual object in the AR scene using the animation profile. Animating the virtual object in the AR scene includes moving the virtual object in the AR scene from the first position to the second position along a path, wherein the path and a movement of the virtual object along the path are determined based on the animation profile.
Type: Application
Filed: April 15, 2021
Publication date: August 19, 2021
Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
-
Patent number: 10984574
Abstract: The present disclosure relates to an AR animation generation system that identifies an animation profile for animating a virtual object displayed in an augmented reality (AR) scene. The AR animation generation system creates a link between the virtual object and the mobile computing system based upon a position of the virtual object within the AR scene and a position of a mobile device in a real-world environment. The link enables determining, for each position of the mobile device in the real-world environment, a corresponding position for the virtual object in the AR scene.
Type: Grant
Filed: November 22, 2019
Date of Patent: April 20, 2021
Assignee: Adobe Inc.
Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
-
Publication number: 20200334892
Abstract: A system and method for automatic rigging of three dimensional characters for facial animation provide a rigged mesh for an original three dimensional mesh. A representative mesh is generated from the original mesh. Segments, key points, a bone set, and skinning weights are then determined for the representative mesh. The skinning weights and bone set are placed in the original mesh to generate the rigged mesh.
Type: Application
Filed: July 2, 2020
Publication date: October 22, 2020
Inventors: Stefano Corazza, Emiliano Gambaretto, Prasanna Vasudevan
-
Patent number: 10748325
Abstract: A system and method for automatic rigging of three dimensional characters for facial animation provide a rigged mesh for an original three dimensional mesh. A representative mesh is generated from the original mesh. Segments, key points, a bone set, and skinning weights are then determined for the representative mesh. The skinning weights and bone set are placed in the original mesh to generate the rigged mesh.
Type: Grant
Filed: November 19, 2012
Date of Patent: August 18, 2020
Assignee: ADOBE INC.
Inventors: Stefano Corazza, Emiliano Gambaretto, Prasanna Vasudevan
-
Patent number: 10565768
Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.
Type: Grant
Filed: July 2, 2018
Date of Patent: February 18, 2020
Assignee: Adobe Inc.
Inventors: Stefano Corazza, Emiliano Gambaretto
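The recommendation scheme above hinges on storing compatibility for *ordered* pairs of animations (walk-then-run need not score the same as run-then-walk). A small sketch, with invented clip names and scores standing in for the database metadata:

```python
# Ordered-pair compatibility scores, as the metadata database might hold them.
# Note the asymmetry: ("walk", "run") and ("run", "walk") score differently.
compat = {
    ("walk", "run"): 0.9,
    ("walk", "jump"): 0.7,
    ("run", "walk"): 0.4,
    ("jump", "roll"): 0.8,
}

def recommend(first, k=2):
    """Return up to k animations most compatible as a follow-up to `first`."""
    candidates = [(b, s) for (a, b), s in compat.items() if a == first]
    candidates.sort(key=lambda bs: bs[1], reverse=True)
    return [b for b, _ in candidates[:k]]

def concatenate(*clips):
    """Join the selected clips into one sequence (transition blending omitted)."""
    return [frame for clip in clips for frame in clip]

recs = recommend("walk")
```

Once the animator picks from `recs`, the server concatenates the first clip and the selection; a production system would also blend frames across the seam.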
-
Publication number: 20180315231
Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.
Type: Application
Filed: July 2, 2018
Publication date: November 1, 2018
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Patent number: 10049482
Abstract: Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.
Type: Grant
Filed: July 23, 2012
Date of Patent: August 14, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Patent number: 9978175
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameters, clothing selections, and texture-region color component selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the at least one texture selection via a user interface. In addition, the application server includes a generative model and the application server is configured to generate a 3D mesh based upon the user defined model parameters using the generative model and to apply texture to the generated mesh based upon the at least one texture selection.
Type: Grant
Filed: March 16, 2015
Date of Patent: May 22, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Stefano Corazza, Emiliano Gambaretto
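The abstract leaves the generative model unspecified. One common form for parameter-driven character meshes is a linear morphable model, where user parameters weight a set of basis displacement fields added to a base mesh; the sketch below assumes that form purely for illustration.

```python
def generate_mesh(base, bases, params):
    """Deform a base mesh by a weighted sum of basis displacement fields.

    A linear morphable model is an assumed stand-in for the patent's
    'generative model'; each entry of `bases` is a per-vertex displacement
    field and each entry of `params` is its user-chosen weight.
    """
    mesh = []
    for vi, v in enumerate(base):
        mesh.append(tuple(
            v[c] + sum(w * basis[vi][c] for w, basis in zip(params, bases))
            for c in range(len(v))
        ))
    return mesh

base = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
taller = [(0.0, 0.25, 0.0), (0.0, 0.5, 0.0)]   # a "height" displacement field
mesh = generate_mesh(base, [taller], params=[2.0])
```

Each slider in the user interface would map to one weight in `params`, so the server can rebuild the mesh deterministically from the saved parameter vector.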
-
Patent number: 9911220
Abstract: The present disclosure is directed to integrating external 3D models into a character creation system. In general, a character creation system imports an external 3D model by determining correspondence values for each vertex within the 3D model. Once imported, a user can customize the 3D character by adding texture to the character, adjusting character features, swapping out one or more character features, adding clothes and accessories to the character, automatically rigging the character, and/or animating the character.
Type: Grant
Filed: July 28, 2015
Date of Patent: March 6, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Stefano Corazza, Emiliano Gambaretto, Charles Piña, Daniel Babcock
-
Patent number: 9747495
Abstract: Systems and methods in accordance with embodiments of the invention enable collaborative creation, transmission, sharing, non-linear exploration, and modification of animated video messages. One embodiment includes a video camera, a processor, a network interface, and storage containing an animated message application, and a 3D character model. In addition, the animated message application configures the processor to: capture a video sequence using the video camera; detect a human face within a sequence of video frames; track changes in human facial expression of a human face detected within a sequence of video frames; map tracked changes in human facial expression to motion data, where the motion data is generated to animate the 3D character model; apply motion data to animate the 3D character model; render an animation of the 3D character model into a file as encoded video; and transmit the encoded video to a remote device via the network interface.
Type: Grant
Filed: March 6, 2013
Date of Patent: August 29, 2017
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Stefano Corazza, Daniel Babcock, Charles Pina, Sylvio Drouin
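The pivotal step in the pipeline above is mapping tracked facial-expression changes to motion data for the character. The patent only states that such a mapping exists; the sketch below assumes the simplest concrete form, a calibrated linear map from landmark displacements to named animation controls, with all landmark indices and control names invented.

```python
def expression_to_motion(neutral, tracked, calibration):
    """Map tracked facial-landmark displacements to animation-control weights.

    `calibration` entries are (landmark_index, axis, control_name, scale);
    a linear, clamped mapping is an illustrative assumption, not the
    patented method.
    """
    motion = {}
    for idx, axis, control, scale in calibration:
        delta = tracked[idx][axis] - neutral[idx][axis]
        motion[control] = max(0.0, min(1.0, delta * scale))  # clamp to [0, 1]
    return motion

neutral = [(0.0, 0.0), (0.0, 0.0)]     # e.g. mouth corner, jaw tip at rest
tracked = [(0.0, 0.02), (0.0, 0.05)]   # landmark positions in the current frame
calibration = [(0, 1, "smile", 25.0), (1, 1, "jaw_open", 10.0)]
motion = expression_to_motion(neutral, tracked, calibration)
```

Running this per frame yields a stream of control weights that can drive the 3D character model before the animation is rendered and encoded for transmission.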
-
Patent number: 9626788
Abstract: Systems and methods in accordance with embodiments of the invention enable collaborative creation, transmission, sharing, non-linear exploration, and modification of animated video messages. One embodiment includes a video camera, a processor, a network interface, and storage containing an animated message application, and a 3D character model. In addition, the animated message application configures the processor to: capture a video sequence using the video camera; detect a human face within a sequence of video frames; track changes in human facial expression of a human face detected within a sequence of video frames; map tracked changes in human facial expression to motion data, where the motion data is generated to animate the 3D character model; apply motion data to animate the 3D character model; render an animation of the 3D character model into a file as encoded video; and transmit the encoded video to a remote device via the network interface.
Type: Grant
Filed: February 16, 2016
Date of Patent: April 18, 2017
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Stefano Corazza, Daniel J. Babcock, Charles Pina, Sylvio Drouin
-
Patent number: 9619914
Abstract: Systems and methods are described for animating 3D characters using synthetic motion data generated by motion models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, the synthetic motion data is streamed to a user device that includes a rendering engine and the user device renders an animation of a 3D character using the streamed synthetic motion data. In several embodiments, an animator can upload a custom model of a 3D character or a custom 3D character is generated by the server system in response to a high level description of a desired 3D character provided by the user and the synthetic motion data generated by the generative model is retargeted to animate the custom 3D character.
Type: Grant
Filed: December 2, 2013
Date of Patent: April 11, 2017
Assignee: FACEBOOK, INC.
Inventors: Edilson de Aguiar, Emiliano Gambaretto, Stefano Corazza
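The last sentence of the abstract describes retargeting synthetic motion data onto a custom character. Full retargeting handles rotations and differing joint hierarchies; the sketch below assumes only the minimal core of renaming bones and scaling translations, with the bone names and scale invented for illustration.

```python
def retarget(motion, bone_map, scale):
    """Retarget per-bone translations from a source skeleton to a custom one.

    `motion` is a list of frames, each mapping source bone name -> position.
    Bones absent from `bone_map` are dropped; a uniform `scale` compensates
    for the size difference between the two characters (an assumption).
    """
    out = []
    for frame in motion:
        out.append({
            bone_map[bone]: tuple(c * scale for c in pos)
            for bone, pos in frame.items() if bone in bone_map
        })
    return out

# Two frames of source motion for a skeleton whose root is named "Hips",
# retargeted onto a 20% larger character whose root is named "pelvis".
source_motion = [{"Hips": (0.0, 1.0, 0.0)}, {"Hips": (0.0, 1.0, 0.1)}]
retargeted = retarget(source_motion, {"Hips": "pelvis"}, scale=1.2)
```

Because retargeting happens server-side, the same library of synthetic motion can animate any uploaded character before the result is streamed to the rendering device.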
-
Patent number: 9460539
Abstract: Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters.
Type: Grant
Filed: June 6, 2014
Date of Patent: October 4, 2016
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Edilson de Aguiar, Stefano Corazza, Emiliano Gambaretto
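The spatial compression described above replaces per-vertex motion with per-cluster motion. A crude stand-in for the patented interconnected-graph representation: approximate each cluster of vertices by the average displacement of its members relative to the rest frame, so each frame stores one translation per cluster instead of one position per vertex.

```python
def compress(frames, clusters):
    """Spatially compress per-vertex motion into per-cluster translations.

    `frames` is a list of per-vertex position lists (frame 0 is the rest
    pose); `clusters` partitions vertex indices into mesh clusters. Using
    a single average translation per cluster is an illustrative
    simplification of driving clusters from an interconnected graph.
    """
    rest = frames[0]
    dims = len(rest[0])
    compressed = []
    for frame in frames:
        per_cluster = []
        for cluster in clusters:
            avg = tuple(
                sum(frame[i][c] - rest[i][c] for i in cluster) / len(cluster)
                for c in range(dims)
            )
            per_cluster.append(avg)
        compressed.append(per_cluster)
    return compressed

frames = [
    [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)],   # rest pose, three vertices
    [(0.0, 1.0), (1.0, 1.0), (5.0, 0.0)],   # first two vertices move up
]
clusters = [[0, 1], [2]]                    # two mesh clusters
compact = compress(frames, clusters)        # 2 translations/frame, not 3 positions
```

The savings grow with mesh resolution: a million-vertex character driven by a few hundred clusters stores orders of magnitude less data per frame, and the per-cluster time series can then be compressed further temporally.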
-
Patent number: 9373185
Abstract: Systems and methods are described for animating 3D characters using synthetic motion data generated by generative models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, an animation system is accessible via a server system that utilizes the ability of generative models to generate synthetic motion data across a continuum to enable multiple animators to effectively reuse the same set of previously recorded motion capture data to produce a wide variety of desired animation sequences. One embodiment of the invention includes a server system configured to communicate with a database containing motion data including repeated sequences of motion, where the differences between the repeated sequences of motion are described using at least one high level characteristic.
Type: Grant
Filed: April 21, 2014
Date of Patent: June 21, 2016
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Graham Taylor, Stefano Corazza, Nazim Kareemi, Edilson de Aguiar
-
Publication number: 20160163084
Abstract: Systems and methods in accordance with embodiments of the invention enable collaborative creation, transmission, sharing, non-linear exploration, and modification of animated video messages. One embodiment includes a video camera, a processor, a network interface, and storage containing an animated message application, and a 3D character model. In addition, the animated message application configures the processor to: capture a video sequence using the video camera; detect a human face within a sequence of video frames; track changes in human facial expression of a human face detected within a sequence of video frames; map tracked changes in human facial expression to motion data, where the motion data is generated to animate the 3D character model; apply motion data to animate the 3D character model; render an animation of the 3D character model into a file as encoded video; and transmit the encoded video to a remote device via the network interface.
Type: Application
Filed: February 16, 2016
Publication date: June 9, 2016
Inventors: Stefano Corazza, Daniel J. Babcock, Charles Pina, Sylvio Drouin
-
Patent number: 9305387
Abstract: Systems and methods for automatically generating animation-ready 3D character models based upon model parameter and clothing selections are described. One embodiment of the invention includes an application server configured to receive the user defined model parameters and the clothing selection via a user interface.
Type: Grant
Filed: February 24, 2014
Date of Patent: April 5, 2016
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Stefano Corazza, Emiliano Gambaretto
-
Publication number: 20160027200
Abstract: The present disclosure is directed to integrating external 3D models into a character creation system. In general, a character creation system imports an external 3D model by determining correspondence values for each vertex within the 3D model. Once imported, a user can customize the 3D character by adding texture to the character, adjusting character features, swapping out one or more character features, adding clothes and accessories to the character, automatically rigging the character, and/or animating the character.
Type: Application
Filed: July 28, 2015
Publication date: January 28, 2016
Inventors: Stefano Corazza, Emiliano Gambaretto, Charles Piña, Daniel Babcock