Patents by Inventor Ian Sachs

Ian Sachs has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12002139
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: June 4, 2024
    Assignee: Roblox Corporation
    Inventors: Inaki Navarro, Dario Kneubuhler, Tijmen Verhulsdonck, Eloi Du Bois, Will Welch, Vivek Verma, Ian Sachs, Kiran Bhat
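The pipeline in the abstract above can be sketched in a few lines: a per-frame result (FACS weights, head pose, landmarks) is translated into an avatar update, with a level-of-detail chosen from user preference and device conditions. All names, thresholds, and the four-AU cutoff below are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class FrameResult:
    facs_weights: dict  # e.g. {"AU12_lip_corner_puller": 0.7, ...}
    head_pose: tuple    # (yaw, pitch, roll) in degrees
    landmarks: list     # [(x, y), ...] in image coordinates

def select_lod(user_pref: str, battery_level: float, fps: float) -> str:
    """Pick a level-of-detail from user preference and device conditions.
    Thresholds here are illustrative."""
    if user_pref == "low" or battery_level < 0.2 or fps < 20:
        return "low"
    if user_pref == "high" and fps >= 45:
        return "high"
    return "medium"

def animate_frame(result: FrameResult, lod: str) -> dict:
    """Translate model outputs into one avatar animation update."""
    frame = {"pose": result.head_pose, "lod": lod}
    if lod == "low":
        # Keep only the strongest few action units to save compute.
        top = sorted(result.facs_weights.items(), key=lambda kv: -kv[1])[:4]
        frame["facs"] = dict(top)
    else:
        frame["facs"] = dict(result.facs_weights)
    return frame
```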
  • Patent number: 11551393
    Abstract: Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: January 10, 2023
    Assignee: LoomAi, Inc.
    Inventors: Chong Shang, Eloi Henri Homere Du Bois, Inaki Navarro, Will Welch, Rishabh Battulwar, Ian Sachs, Vivek Verma, Kiran Bhat
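The multi-task flow described in the abstract above can be sketched as follows: one shared embedding feeds several task heads, and their outputs are fused into a final set of blendshape weights. The toy energy-based "embedding," the head heuristics, and the blendshape names are all illustrative assumptions standing in for learned models.

```python
def embed_audio(samples: list) -> list:
    """Toy embedding: windowed mean absolute amplitude standing in for
    a learned audio encoder."""
    win = 4
    return [sum(abs(s) for s in samples[i:i + win]) / win
            for i in range(0, len(samples) - win + 1, win)]

def predict_tasks(embedding: list) -> dict:
    """Per-task heads over the shared embedding (illustrative heuristics)."""
    energy = sum(embedding) / max(len(embedding), 1)
    return {
        "blendshapes": {"jaw_open": min(1.0, energy),
                        "lips_funnel": min(1.0, energy / 2)},
        "voice_activity": energy > 0.05,
        "event": "speech" if energy > 0.05 else "silence",
    }

def fuse(predictions: dict) -> dict:
    """Final prediction: gate the blendshape weights by voice activity."""
    if not predictions["voice_activity"]:
        return {k: 0.0 for k in predictions["blendshapes"]}
    return predictions["blendshapes"]
```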
  • Publication number: 20220270314
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
    Type: Application
    Filed: February 22, 2022
    Publication date: August 25, 2022
    Applicant: Roblox Corporation
    Inventors: Inaki Navarro, Dario Kneubuhler, Tijmen Verhulsdonck, Eloi Du Bois, Will Welch, Vivek Verma, Ian Sachs, Kiran Bhat
  • Publication number: 20210027511
    Abstract: Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.
    Type: Application
    Filed: July 23, 2020
    Publication date: January 28, 2021
    Applicant: LoomAi, Inc.
    Inventors: Chong Shang, Eloi Henri Homere Du Bois, Inaki Navarro, Will Welch, Rishabh Battulwar, Ian Sachs, Vivek Verma, Kiran Bhat
  • Patent number: 10559111
    Abstract: Systems and methods for computer animation of 3D models of heads generated from images of faces are disclosed. A 2D captured image that includes an image of a face can be received and used to generate a static 3D model of a head. A rig can be fit to the static 3D model to generate an animation-ready 3D generative model. Sets of rig parameters can each map to particular sounds or facial movements observed in a video. These mappings can be used to generate playlists of sets of rig parameters based upon received audio or video content. The playlist may be played in synchronization with an audio rendition of the audio content. Methods can receive a captured image, identify taxonomy attributes from the captured image, select a template model for the captured image, and perform a shape solve for the selected template model based on the identified taxonomy attributes.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: February 11, 2020
    Assignee: LoomAi, Inc.
    Inventors: Ian Sachs, Kiran Bhat, Dominic Monn, Senthil Radhakrishnan, Will Welch
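The playlist idea in the abstract above, mapping recognized sounds to sets of rig parameters and sequencing them for synchronized playback, can be sketched minimally. The viseme names, parameter values, and fixed frame duration are illustrative assumptions, not the patented mappings.

```python
# Illustrative sound-to-rig-parameter mappings (not from the patent).
VISEME_RIG_PARAMS = {
    "AA":   {"jaw_open": 0.8, "lip_round": 0.1},
    "OO":   {"jaw_open": 0.4, "lip_round": 0.9},
    "MM":   {"jaw_open": 0.0, "lip_press": 0.8},
    "REST": {"jaw_open": 0.0, "lip_round": 0.0},
}

def build_playlist(phonemes, frame_duration=0.1):
    """Turn a recognized phoneme sequence into a timed playlist of
    rig-parameter sets, ready to play in sync with the audio."""
    playlist = []
    t = 0.0
    for ph in phonemes:
        params = VISEME_RIG_PARAMS.get(ph, VISEME_RIG_PARAMS["REST"])
        playlist.append({"time": round(t, 3), "params": params})
        t += frame_duration
    return playlist
```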
  • Publication number: 20190122411
    Abstract: Systems and methods for computer animation of 3D models of heads generated from images of faces are disclosed. A 2D captured image that includes an image of a face can be received and used to generate a static 3D model of a head. A rig can be fit to the static 3D model to generate an animation-ready 3D generative model. Sets of rig parameters can each map to particular sounds or facial movements observed in a video. These mappings can be used to generate playlists of sets of rig parameters based upon received audio or video content. The playlist may be played in synchronization with an audio rendition of the audio content. Methods can receive a captured image, identify taxonomy attributes from the captured image, select a template model for the captured image, and perform a shape solve for the selected template model based on the identified taxonomy attributes.
    Type: Application
    Filed: December 14, 2018
    Publication date: April 25, 2019
    Applicant: LoomAi, Inc.
    Inventors: Ian Sachs, Kiran Bhat, Dominic Monn, Senthil Radhakrishnan, Will Welch
  • Patent number: 10198845
    Abstract: Systems and methods for animating expressions of 3D models from captured images of a user's face in accordance with various embodiments of the invention are disclosed. In many embodiments, expressions are identified based on landmarks from images of a user's face. In certain embodiments, weights for morph targets of a 3D model are calculated based on identified landmarks and/or weights for predefined facial expressions to animate expressions for the 3D model.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: February 5, 2019
    Assignee: LoomAi, Inc.
    Inventors: Kiran Bhat, Mahesh Ramasubramanian, Michael Palleschi, Andrew A. Johnson, Ian Sachs
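The weighting step in the abstract above, deriving morph-target weights from identified landmarks and predefined expressions, can be sketched as a soft match against expression templates. The two-value landmark layout and the templates below are illustrative assumptions; the patent does not specify this scheme.

```python
import math

# Toy templates: two features per expression (mouth-corner x-spread,
# jaw y-drop). Values are illustrative, not from the patent.
EXPRESSION_TEMPLATES = {
    "smile":    [0.9, 0.1],
    "jaw_open": [0.5, 0.8],
    "neutral":  [0.5, 0.1],
}

def expression_weights(landmarks):
    """Soft weights over predefined expressions from inverse distance
    to each template, normalized to sum to 1; these could then drive
    the corresponding morph targets of a 3D model."""
    inv = {}
    for name, template in EXPRESSION_TEMPLATES.items():
        d = math.dist(landmarks, template)
        inv[name] = 1.0 / (d + 1e-6)
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}
```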
  • Patent number: 9978003
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: May 22, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
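The input-channel construction described in the abstract above can be sketched by stacking color, position, and shape channels into the tensor a segmentation network would consume. The six-channel layout (RGB + normalized x/y position + shape prior) is an illustrative assumption, not the patented configuration.

```python
def build_input_channels(image, shape_prior):
    """image: H x W nested lists of (r, g, b); shape_prior: H x W mask
    in [0, 1]. Returns an H x W x 6 array-like structure combining
    color, position, and shape channels for a segmentation network."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r, g, b = image[y][x]
            # Normalized coordinates serve as the position channels.
            row.append([r, g, b,
                        x / max(w - 1, 1),
                        y / max(h - 1, 1),
                        shape_prior[y][x]])
        out.append(row)
    return out
```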
  • Publication number: 20170344860
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Application
    Filed: August 17, 2017
    Publication date: November 30, 2017
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
  • Patent number: 9773196
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: September 26, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
  • Publication number: 20170213112
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Application
    Filed: January 25, 2016
    Publication date: July 27, 2017
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
  • Patent number: 9576351
    Abstract: Techniques are disclosed for automatically transferring a style of at least two reference images to an input image. The resulting transformation of the input image matches the visual styles of the reference images without changing the identity of the subject of the input image. Each image is decomposed into levels of detail with corresponding energy levels and a residual. A style transfer operation is performed at each energy level and residual using the reference image that most closely matches the input image at each energy level. The transformations of each level of detail and the residual of the input image are aggregated to generate an output image having the styles of the reference images. In some cases, the transformations are performed on the foreground of the input image, and the background can be transformed by an amount that is proportional to the aggregated transformations of the foreground.
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: February 21, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Ronen Barzel, Sylvain Paris, Robert Bailey, Ian Sachs
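The multi-scale transfer in the abstract above can be sketched on a 1-D signal: decompose the input and each reference into detail levels plus a residual, match each input level to the reference level closest in energy (RMS), rescale, and sum the levels back. Real images would use a 2-D pyramid and richer statistics; the smoothing kernel and energy matching below are simplifying assumptions.

```python
def smooth(sig):
    """Three-tap box filter with edge clamping."""
    return [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, len(sig) - 1)]) / 3
            for i in range(len(sig))]

def decompose(sig, levels=2):
    """Split a signal into detail levels plus a low-frequency residual."""
    details, cur = [], list(sig)
    for _ in range(levels):
        low = smooth(cur)
        details.append([c - l for c, l in zip(cur, low)])
        cur = low
    return details, cur

def rms(sig):
    """Root-mean-square 'energy' of a level (floored to avoid /0)."""
    return (sum(v * v for v in sig) / len(sig)) ** 0.5 or 1e-9

def transfer_style(inp, refs, levels=2):
    """Rescale each input detail level toward the energy of the closest
    reference level, then reassemble the signal from residual + details."""
    in_details, in_res = decompose(inp, levels)
    ref_decomps = [decompose(r, levels) for r in refs]
    out = list(in_res)
    for k, d in enumerate(in_details):
        # Pick the reference whose level-k energy is closest to the input's.
        best = min(ref_decomps, key=lambda rd: abs(rms(rd[0][k]) - rms(d)))
        gain = rms(best[0][k]) / rms(d)
        out = [o + gain * v for o, v in zip(out, d)]
    return out
```

Because the decomposition telescopes (each level is the difference of successive smoothings), using the input itself as the only reference reconstructs the input exactly, which makes the identity case a handy sanity check.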