Patents by Inventor Ian Sachs
Ian Sachs has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20240428492
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video and audio captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of video FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a microphone may capture audio uttered by a user while a trained facial movement detection model and a trained regression model output a set of audio FACS weights. Additionally, a blending term is provided for identification of lapses in audio. A modality mixing component fuses the video FACS weights and the audio FACS weights based on the blending term to create final FACS weights for animating the user's avatar, a character rig, or another animation-capable construct.
Type: Application
Filed: January 25, 2024
Publication date: December 26, 2024
Applicant: Roblox Corporation
Inventors: Iñaki NAVARRO, Dario KNEUBUEHLER, Tijmen VERHULSDONCK, Eloi DU BOIS, William WELCH, Charles SHANG, Ian SACHS, Kiran BHAT
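
To make the fusion step concrete, here is a minimal Python sketch of mixing video and audio FACS weights under a blending term that drops during lapses in audio. The linear mixing rule, function names, and example values are assumptions for illustration, not the patented implementation.

```python
import numpy as np

# Hypothetical sketch of the modality-mixing step: fuse per-frame FACS
# weights from a video model and an audio model using a blending term
# that falls toward zero during lapses in audio (e.g., silence).

def mix_facs_weights(video_w: np.ndarray,
                     audio_w: np.ndarray,
                     blend: float) -> np.ndarray:
    """Linearly fuse two sets of FACS weights.

    video_w, audio_w: arrays of shape (num_action_units,), values in [0, 1].
    blend: 1.0 = trust audio fully; 0.0 = trust video fully (audio lapse).
    """
    blend = float(np.clip(blend, 0.0, 1.0))
    return blend * audio_w + (1.0 - blend) * video_w

# During silence the blending term is low, so the video weights dominate
# the final weights used to animate the avatar or character rig.
video_w = np.array([0.2, 0.8, 0.1])   # e.g., brow raise, jaw open, smile
audio_w = np.array([0.0, 0.6, 0.0])
print(mix_facs_weights(video_w, audio_w, blend=0.1))
```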

Publication number: 20240378836
Abstract: Some implementations relate to methods, systems, and computer-readable media to create a variant of a template avatar. In some implementations, the method includes obtaining a template avatar that includes a template geometry obtained from a mesh of the template avatar, generating a template cage associated with the template avatar as a low-resolution approximation wrapped around the template geometry, creating a target cage from the template cage by modifying the template cage based on input from a user, and morphing the template geometry with the target cage to generate a target avatar that is a variant of the template avatar. The method may also include adjusting a rigging and a skinning of the target avatar to enable animation for the target avatar. Using these techniques makes it more efficient and less labor-intensive to create a variant of a template avatar.
Type: Application
Filed: May 10, 2024
Publication date: November 14, 2024
Applicant: Roblox Corporation
Inventors: Maurice Kyojin CHU, Ronald Matthew GRISWOLD, Jihyun YOON, Michael Vincent PALLESCHI, Adam Tucker BURR, Adrian Paul LONGLAND, Ian SACHS, Kiran BHAT, Andrew Alan JOHNSON
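
A rough sketch of the cage-based workflow: bind each high-resolution mesh vertex to the low-resolution cage, edit the cage, and let the mesh follow. Production systems typically use generalized barycentric coordinates such as mean value or harmonic coordinates; the inverse-distance weights below are a simplified stand-in, and all names are illustrative.

```python
import numpy as np

# Minimal sketch of cage-based morphing: a template cage wraps the
# template geometry; deforming the cage deforms the mesh through
# precomputed per-vertex weights.

def bind_to_cage(mesh_v: np.ndarray, cage_v: np.ndarray) -> np.ndarray:
    """Weights of shape (num_mesh, num_cage) tying mesh to cage vertices."""
    d = np.linalg.norm(mesh_v[:, None, :] - cage_v[None, :, :], axis=-1)
    w = 1.0 / (d + 1e-8)                  # stand-in for cage coordinates
    return w / w.sum(axis=1, keepdims=True)

def morph(weights: np.ndarray, target_cage_v: np.ndarray) -> np.ndarray:
    """Reconstruct the mesh under the user-modified (target) cage."""
    return weights @ target_cage_v

# Template geometry (3 vertices) inside a 4-vertex template cage.
mesh = np.array([[0.5, 0.5, 0.0], [0.2, 0.1, 0.0], [0.8, 0.9, 0.0]])
cage = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
W = bind_to_cage(mesh, cage)
target_cage = cage * [1.5, 1.0, 1.0]   # user widens the cage
print(morph(W, target_cage))           # mesh stretches with the cage
```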

Publication number: 20240355028
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
Type: Application
Filed: April 30, 2024
Publication date: October 24, 2024
Applicant: Roblox Corporation
Inventors: Inaki NAVARRO, Dario KNEUBUHLER, Tijmen VERHULSDONCK, Eloi DU BOIS, Will WELCH, Vivek VERMA, Ian SACHS, Kiran BHAT
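
The level-of-detail selection could look roughly like the following sketch, which picks a heavier or lighter face-tracking model from user preference and device load. The model names and thresholds are invented for illustration; the application does not specify them.

```python
from dataclasses import dataclass

# Hedged sketch: choose a face-tracking level of detail from user
# preference and computing conditions at the client device.

@dataclass
class ClientConditions:
    prefers_high_detail: bool
    battery_fraction: float   # 0.0 - 1.0
    gpu_utilization: float    # 0.0 - 1.0

LOD_MODELS = {
    "high": "face_tracker_full",     # landmarks + head pose + full FACS set
    "medium": "face_tracker_lite",   # reduced FACS set
    "low": "face_tracker_minimal",   # head pose + a few key action units
}

def select_lod(c: ClientConditions) -> str:
    """Return the model name for the highest sustainable level of detail."""
    if c.prefers_high_detail and c.battery_fraction > 0.3 and c.gpu_utilization < 0.7:
        return LOD_MODELS["high"]
    if c.battery_fraction > 0.15 and c.gpu_utilization < 0.9:
        return LOD_MODELS["medium"]
    return LOD_MODELS["low"]

print(select_lod(ClientConditions(True, 0.8, 0.4)))   # face_tracker_full
print(select_lod(ClientConditions(True, 0.1, 0.95)))  # face_tracker_minimal
```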

Patent number: 12002139
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
Type: Grant
Filed: February 22, 2022
Date of Patent: June 4, 2024
Assignee: Roblox Corporation
Inventors: Inaki Navarro, Dario Kneubuhler, Tijmen Verhulsdonck, Eloi Du Bois, Will Welch, Vivek Verma, Ian Sachs, Kiran Bhat

Patent number: 11551393
Abstract: Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, and/or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.
Type: Grant
Filed: July 23, 2020
Date of Patent: January 10, 2023
Assignee: LoomAi, Inc.
Inventors: Chong Shang, Eloi Henri Homere Du Bois, Inaki Navarro, Will Welch, Rishabh Battulwar, Ian Sachs, Vivek Verma, Kiran Bhat
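
A minimal PyTorch sketch of the multi-task layout this abstract describes: a shared embedding of the input audio feeds separate heads for blendshape weights, event detection, and voice activity detection, and a final prediction combines them. Layer types, sizes, and the gate-by-VAD rule are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

# Sketch: one shared audio embedding, several task heads, and a final
# prediction that damps blendshape weights when no voice is detected.

class AudioToAnimation(nn.Module):
    def __init__(self, n_mels=80, embed_dim=128, n_blendshapes=51, n_events=8):
        super().__init__()
        self.encoder = nn.GRU(n_mels, embed_dim, batch_first=True)
        self.blendshape_head = nn.Linear(embed_dim, n_blendshapes)
        self.event_head = nn.Linear(embed_dim, n_events)
        self.vad_head = nn.Linear(embed_dim, 1)

    def forward(self, mel_frames):                 # (batch, time, n_mels)
        h, _ = self.encoder(mel_frames)            # shared embedding
        blend = torch.sigmoid(self.blendshape_head(h))
        events = self.event_head(h)                # event logits
        vad = torch.sigmoid(self.vad_head(h))      # voice activity in [0, 1]
        # Final prediction: suppress blendshapes during detected silence.
        return blend * vad, events, vad

model = AudioToAnimation()
mel = torch.randn(1, 100, 80)                      # 100 frames of mel features
final_blend, events, vad = model(mel)
print(final_blend.shape)                           # torch.Size([1, 100, 51])
```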

Publication number: 20220270314
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
Type: Application
Filed: February 22, 2022
Publication date: August 25, 2022
Applicant: Roblox Corporation
Inventors: Inaki NAVARRO, Dario KNEUBUHLER, Tijmen VERHULSDONCK, Eloi DU BOIS, Will WELCH, Vivek VERMA, Ian SACHS, Kiran BHAT

Publication number: 20210027511
Abstract: Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, and/or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.
Type: Application
Filed: July 23, 2020
Publication date: January 28, 2021
Applicant: LoomAi, Inc.
Inventors: Chong Shang, Eloi Henri Homere Du Bois, Inaki Navarro, Will Welch, Rishabh Battulwar, Ian Sachs, Vivek Verma, Kiran Bhat

Patent number: 10559111
Abstract: Systems and methods for computer animations of 3D models of heads generated from images of faces are disclosed. A 2D captured image that includes an image of a face can be received and used to generate a static 3D model of a head. A rig can be fit to the static 3D model to generate an animation-ready 3D generative model. Sets of rig parameters can each map to particular sounds or particular facial movements observed in a video. These mappings can be used to generate playlists of sets of rig parameters based upon received audio or video content. The playlist may be played in synchronization with an audio rendition of the audio content. Methods can receive a captured image, identify taxonomy attributes from the captured image, select a template model for the captured image, and perform a shape solve for the selected template model based on the identified taxonomy attributes.
Type: Grant
Filed: December 14, 2018
Date of Patent: February 11, 2020
Assignee: LoomAi, Inc.
Inventors: Ian Sachs, Kiran Bhat, Dominic Monn, Senthil Radhakrishnan, Will Welch
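
The playlist idea can be sketched as a lookup from sounds to stored sets of rig parameters, assembled into a timed sequence that plays in sync with the audio. The phoneme inventory, parameter names, and timings below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: map sounds (here, phonemes) to rig-parameter sets and
# build a timed playlist from audio content for synchronized playback.

RIG_PARAMS_BY_SOUND = {
    "AA": {"jaw_open": 0.9, "lip_round": 0.1},
    "M":  {"jaw_open": 0.0, "lip_press": 1.0},
    "OO": {"jaw_open": 0.4, "lip_round": 0.9},
}

def build_playlist(timed_phonemes):
    """timed_phonemes: list of (start_seconds, phoneme) pairs."""
    return [(t, RIG_PARAMS_BY_SOUND[p])
            for t, p in timed_phonemes if p in RIG_PARAMS_BY_SOUND]

# Phoneme timings would normally come from forced alignment of the audio.
playlist = build_playlist([(0.00, "M"), (0.12, "AA"), (0.31, "OO")])
for start, params in playlist:
    print(f"{start:.2f}s -> {params}")   # apply to the rig at this timestamp
```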

Publication number: 20190122411
Abstract: Systems and methods for computer animations of 3D models of heads generated from images of faces are disclosed. A 2D captured image that includes an image of a face can be received and used to generate a static 3D model of a head. A rig can be fit to the static 3D model to generate an animation-ready 3D generative model. Sets of rig parameters can each map to particular sounds or particular facial movements observed in a video. These mappings can be used to generate playlists of sets of rig parameters based upon received audio or video content. The playlist may be played in synchronization with an audio rendition of the audio content. Methods can receive a captured image, identify taxonomy attributes from the captured image, select a template model for the captured image, and perform a shape solve for the selected template model based on the identified taxonomy attributes.
Type: Application
Filed: December 14, 2018
Publication date: April 25, 2019
Applicant: LoomAi, Inc.
Inventors: Ian Sachs, Kiran Bhat, Dominic Monn, Senthil Radhakrishnan, Will Welch

Patent number: 10198845
Abstract: Systems and methods for animating expressions of 3D models from captured images of a user's face in accordance with various embodiments of the invention are disclosed. In many embodiments, expressions are identified based on landmarks from images of a user's face. In certain embodiments, weights for morph targets of a 3D model are calculated based on identified landmarks and/or weights for predefined facial expressions to animate expressions for the 3D model.
Type: Grant
Filed: May 29, 2018
Date of Patent: February 5, 2019
Assignee: LoomAi, Inc.
Inventors: Kiran Bhat, Mahesh Ramasubramanian, Michael Palleschi, Andrew A. Johnson, Ian Sachs
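
One plausible reading of the weight calculation, sketched in Python: treat each morph target as a vector of landmark offsets from the neutral face and solve for the non-negative combination that best explains the observed landmarks. The projected least-squares solver here is a stand-in for whatever method the patent actually claims; all values are illustrative.

```python
import numpy as np

# Sketch: solve morph-target (blendshape) weights from detected landmarks
# by projected gradient descent on a least-squares objective, keeping
# weights in [0, 1].

def solve_weights(neutral, observed, targets, iters=200, lr=0.1):
    """neutral, observed: (num_landmarks, 2); targets: (k, num_landmarks, 2)."""
    k = targets.shape[0]
    B = targets.reshape(k, -1).T          # basis matrix, one column per target
    d = (observed - neutral).ravel()      # observed landmark displacement
    w = np.zeros(k)
    for _ in range(iters):                # projected gradient descent
        grad = B.T @ (B @ w - d)
        w = np.clip(w - lr * grad, 0.0, 1.0)
    return w

neutral = np.zeros((4, 2))
smile = np.array([[0, 0], [0, 0], [0.5, 0.2], [-0.5, 0.2]])  # mouth corners up
jaw = np.array([[0, 0], [0, -0.8], [0, -0.3], [0, -0.3]])    # jaw drops
observed = neutral + 0.7 * smile + 0.2 * jaw
print(solve_weights(neutral, observed, np.stack([smile, jaw])))  # ~[0.7, 0.2]
```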

Patent number: 9978003
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Grant
Filed: August 17, 2017
Date of Patent: May 22, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
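
A hedged sketch of the multi-channel input this family of patents describes: the probe image's color channels are stacked with a position channel and a shape channel before being passed to the trained segmentation network. The concrete channel definitions below (distance from a face center, a coarse person-shaped prior) are assumptions for illustration only.

```python
import numpy as np

# Sketch: assemble color, position, and shape input channels for a
# person-segmentation network operating on a probe image.

H, W = 64, 64
rgb = np.random.rand(H, W, 3)                      # color channels

# Position channel: normalized distance from an assumed face center.
ys, xs = np.mgrid[0:H, 0:W]
face_y, face_x = 20, 32                            # e.g., from a face detector
pos = np.hypot(ys - face_y, xs - face_x)
pos /= pos.max()

# Shape channel: a coarse prior mask of where a person usually appears.
shape = np.zeros((H, W))
shape[16:, 20:44] = 1.0                            # head/torso box prior

net_input = np.dstack([rgb, pos, shape])           # (H, W, 5) network input
print(net_input.shape)
```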

Publication number: 20170344860
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Application
Filed: August 17, 2017
Publication date: November 30, 2017
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price

Patent number: 9773196
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Grant
Filed: January 25, 2016
Date of Patent: September 26, 2017
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price

Publication number: 20170213112
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Application
Filed: January 25, 2016
Publication date: July 27, 2017
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price

Patent number: 9576351
Abstract: Techniques are disclosed for automatically transferring a style of at least two reference images to an input image. The resulting transformation of the input image matches the visual styles of the reference images without changing the identity of the subject of the input image. Each image is decomposed into levels of detail with corresponding energy levels and a residual. A style transfer operation is performed at each energy level and residual using the reference image that most closely matches the input image at each energy level. The transformations of each level of detail and the residual of the input image are aggregated to generate an output image having the styles of the reference images. In some cases, the transformations are performed on the foreground of the input image, and the background can be transformed by an amount that is proportional to the aggregated transformations of the foreground.
Type: Grant
Filed: November 19, 2015
Date of Patent: February 21, 2017
Assignee: Adobe Systems Incorporated
Inventors: Ronen Barzel, Sylvain Paris, Robert Bailey, Ian Sachs
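
The decomposition-and-matching idea can be approximated as follows: split input and reference images into levels of detail, rescale each input level so its energy matches the corresponding reference level, and sum the transformed levels with an adjusted residual. This sketch simplifies to one grayscale reference and global energy matching; the patent selects the closest of several references per level and treats foreground and background separately.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: Laplacian-style decomposition into levels of detail, per-level
# energy matching against a reference, then aggregation with the residual.

def decompose(img, sigmas=(1, 2, 4)):
    """Return detail levels (band-pass layers) and the coarse residual."""
    levels, current = [], img
    for s in sigmas:
        blurred = gaussian_filter(current, s)
        levels.append(current - blurred)   # detail at this scale
        current = blurred
    return levels, current

def transfer_style(inp, ref):
    in_levels, in_res = decompose(inp)
    ref_levels, ref_res = decompose(ref)
    out = ref_res.mean() + (in_res - in_res.mean())   # residual: shift tone
    for li, lr in zip(in_levels, ref_levels):
        gain = (lr.std() + 1e-8) / (li.std() + 1e-8)  # match level energy
        out += li * gain
    return out

inp = np.random.rand(128, 128)
ref = np.random.rand(128, 128) ** 2        # reference with different contrast
print(transfer_style(inp, ref).shape)      # (128, 128) stylized output
```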