Patents by Inventor Prashanth Chandran

Prashanth Chandran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11836860
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: December 5, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
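The patch-combination step described in the abstract can be sketched as a least-squares solve. Below is a minimal NumPy illustration under the assumption of an unconstrained linear patch combination; the function name is hypothetical, and the anatomical local model solver that stitches the target patches back together is omitted.

```python
import numpy as np

def retarget_patch(source_patch, src_refs, tgt_refs):
    """Express a source patch as a linear combination of source reference
    patches, then apply the same weights to the target reference patches."""
    # Stack each (n_vertices x 3) reference patch into a column vector.
    A = np.stack([r.ravel() for r in src_refs], axis=1)
    b = source_patch.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)   # the "patch combination"
    T = np.stack([t.ravel() for t in tgt_refs], axis=1)
    return (T @ w).reshape(source_patch.shape), w
```

If the source patch exactly matches one source reference, the solve recovers a one-hot weight vector and returns the corresponding target reference patch.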
  • Publication number: 20230260186
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: August 17, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley
  • Publication number: 20230252714
    Abstract: One embodiment of the present invention sets forth a technique for performing shape and appearance reconstruction. The technique includes generating a first set of renderings associated with an object based on a set of parameters that represent a reconstruction of the object in a first target image. The technique also includes producing, via a neural network, a first set of corrections associated with at least a portion of the set of parameters based on the first target image and the first set of renderings. The technique further includes generating an updated reconstruction of the object based on the first set of corrections.
    Type: Application
    Filed: February 10, 2022
    Publication date: August 10, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Christopher Andreas OTTO, Agon SERIFI, Gaspard ZOSS
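The render-correct-update loop this abstract describes can be sketched as follows. `render_fn` and `corrector` are toy stand-ins (the patent uses a renderer and a trained neural network), so the names and the fractional-correction rule are purely illustrative.

```python
import numpy as np

def refine(params, target, render_fn, corrector, n_iters=10):
    """Iteratively render the current estimate, predict parameter
    corrections from (target, rendering), and update the parameters."""
    for _ in range(n_iters):
        rendering = render_fn(params)
        params = params + corrector(target, rendering)
    return params

# Toy stand-ins: rendering is the identity, and the "network" nudges
# the parameters halfway toward the target each step.
render_fn = lambda p: p
corrector = lambda target, rendering: 0.5 * (target - rendering)
```

With these stand-ins the parameters converge geometrically toward the target, which is the qualitative behavior an iterative corrector aims for.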
  • Publication number: 20230237739
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
  • Publication number: 20230237753
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Prashanth Chandran, Sebastian Winberg
  • Publication number: 20230196664
    Abstract: Various embodiments include a system for rendering an object, such as human skin or a human head, from captured appearance data. The system includes a processor executing a near field lighting reconstruction module. The system determines at least one of a three-dimensional (3D) position or a 3D orientation of a lighting unit based on a plurality of captured images of a mirror sphere. For each point light source in a plurality of point light sources included in the lighting unit, the system determines an intensity associated with the point light source. The system captures appearance data of the object, where the object is illuminated by the lighting unit. The system renders an image of the object based on the appearance data and the intensities associated with each point light source in the plurality of point light sources.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 22, 2023
    Inventors: Paulo Fabiano URNAU GOTARDO, Derek Edward BRADLEY, Gaspard ZOSS, Jeremy RIVIERE, Prashanth CHANDRAN, Yingyan XU
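Once per-light positions and intensities have been recovered, rendering amounts to summing contributions over the point lights. A minimal Lambertian sketch, assuming inverse-square falloff (the abstract does not specify a shading model, and the function name is illustrative):

```python
import numpy as np

def shade_point(pos, normal, albedo, light_positions, light_intensities):
    """Lambertian shading of one surface point under reconstructed point
    lights, with inverse-square distance falloff."""
    total = 0.0
    for lp, intensity in zip(light_positions, light_intensities):
        to_light = lp - pos
        dist2 = to_light @ to_light                 # squared distance to the light
        n_dot_l = max(normal @ (to_light / np.sqrt(dist2)), 0.0)
        total += intensity * n_dot_l / dist2
    return albedo * total
```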
  • Publication number: 20230196665
    Abstract: Various embodiments include a system for rendering an object, such as human skin or a human head, from captured appearance data comprising a plurality of texels. The system includes a processor executing a texture space indirect illumination module. The system determines texture coordinates of a vector originating from a first texel where the vector intersects a second texel. The system renders the second texel from the viewpoint of the first texel based on appearance data at the second texel. Based on the rendering of the second texel, the system determines an indirect lighting intensity incident to the first texel from the second texel. The system updates appearance data at the first texel based on a direct lighting intensity and the indirect lighting intensity. The system renders the first texel based on the updated appearance data at the first texel.
    Type: Application
    Filed: December 14, 2022
    Publication date: June 22, 2023
    Inventors: Paulo Fabiano URNAU GOTARDO, Derek Edward BRADLEY, Gaspard ZOSS, Jeremy RIVIERE, Prashanth CHANDRAN, Yingyan XU
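The gather step can be caricatured as one radiosity-style update in texture space: each texel adds to its direct lighting the indirect light arriving from other texels. The transport matrix below is a stand-in for the per-texel renderings the abstract describes, so this is a sketch of the data flow rather than the patented method.

```python
import numpy as np

def gather_indirect(direct, transport):
    """One texture-space gather: each texel receives its direct lighting
    plus indirect light from other texels, weighted by a transport matrix."""
    indirect = transport @ direct   # light arriving from every other texel
    return direct + indirect
```

A texel with no direct light still brightens if a lit texel transports light to it, which is the effect the module is designed to capture.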
  • Publication number: 20230154090
    Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
    Type: Application
    Filed: November 15, 2021
    Publication date: May 18, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
  • Publication number: 20230154089
    Abstract: A technique for generating a sequence of geometries includes converting, via an encoder neural network, one or more input geometries corresponding to one or more frames within an animation into one or more latent vectors. The technique also includes generating the sequence of geometries corresponding to a sequence of frames within the animation based on the one or more latent vectors. The technique further includes causing output related to the animation to be generated based on the sequence of geometries.
    Type: Application
    Filed: November 15, 2021
    Publication date: May 18, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
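One plausible reading of "generating the sequence of geometries based on the one or more latent vectors" is interpolation in latent space, sketched below with identity stand-ins for the trained encoder and decoder; the abstract does not spell out the generation rule, so this is only an assumption.

```python
import numpy as np

def interpolate_sequence(encode, decode, g0, g1, n_frames):
    """Encode two keyframe geometries to latent vectors, interpolate
    linearly in latent space, and decode one geometry per frame."""
    z0, z1 = encode(g0), encode(g1)
    return [decode((1 - t) * z0 + t * z1) for t in np.linspace(0.0, 1.0, n_frames)]
```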
  • Publication number: 20230154101
    Abstract: Techniques are disclosed for generating photorealistic images of objects, such as heads, from multiple viewpoints. In some embodiments, a morphable radiance field (MoRF) model that generates images of heads includes an identity model that maps an identifier (ID) code associated with a head into two codes: a deformation ID code encoding a geometric deformation from a canonical head geometry, and a canonical ID code encoding a canonical appearance within a shape-normalized space. The MoRF model also includes a deformation field model that maps a world space position to a shape-normalized space position based on the deformation ID code. Further, the MoRF model includes a canonical neural radiance field (NeRF) model that includes a density multi-layer perceptron (MLP) branch, a diffuse MLP branch, and a specular MLP branch that output densities, diffuse colors, and specular colors, respectively. The MoRF model can be used to render images of heads from various viewpoints.
    Type: Application
    Filed: November 8, 2022
    Publication date: May 18, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Daoye WANG, Gaspard ZOSS
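The MoRF structure above (ID code split into deformation and canonical codes, a warp into shape-normalized space, then three output branches) can be sketched with random linear maps standing in for the trained MLPs; every weight matrix and dimension here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random linear maps stand in for the trained networks in the patent.
W_id   = rng.normal(size=(8, 4))   # ID code -> [deformation code | canonical code]
W_def  = rng.normal(size=(3, 7))   # (world position, deformation code) -> warp offset
W_dens = rng.normal(size=(1, 7))   # density branch
W_diff = rng.normal(size=(3, 7))   # diffuse-color branch
W_spec = rng.normal(size=(3, 7))   # specular-color branch

def morf_query(x_world, id_code):
    """Query a MoRF-style model at one 3D point: split the ID code, warp
    the point into the shape-normalized space, then evaluate the density,
    diffuse, and specular branches of the canonical field."""
    codes = W_id @ id_code
    deform_code, canon_code = codes[:4], codes[4:]
    x_canon = x_world + W_def @ np.concatenate([x_world, deform_code])
    feat = np.concatenate([x_canon, canon_code])
    return float(W_dens @ feat), W_diff @ feat, W_spec @ feat
```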
  • Publication number: 20230104702
    Abstract: A technique for synthesizing a shape includes generating a first plurality of offset tokens based on a first shape code and a first plurality of position tokens, wherein the first shape code represents a variation of a canonical shape, and wherein the first plurality of position tokens represent a first plurality of positions on the canonical shape. The technique also includes generating a first plurality of offsets associated with the first plurality of positions on the canonical shape based on the first plurality of offset tokens. The technique further includes generating the shape based on the first plurality of offsets and the first plurality of positions.
    Type: Application
    Filed: February 18, 2022
    Publication date: April 6, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
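The token pipeline can be sketched per position: combine each canonical position with the shape code into an offset token, decode the token to a 3D offset, and add the offset back to the position. Plain linear maps replace the trained networks, and all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W_tok = rng.normal(size=(8, 7), scale=0.1)  # (position, shape code) -> offset token
W_off = rng.normal(size=(3, 8), scale=0.1)  # offset token -> 3D offset

def synthesize(positions, shape_code):
    """For each canonical position, form an offset token from the position
    and the shape code, decode it to a 3D offset, and add the offset back."""
    out = []
    for p in positions:
        token = W_tok @ np.concatenate([p, shape_code])
        out.append(p + W_off @ token)
    return np.array(out)
```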
  • Publication number: 20230080639
    Abstract: Techniques are disclosed for re-aging images of faces and three-dimensional (3D) geometry representing faces. In some embodiments, an image of a face, an input age, and a target age, are input into a re-aging model, which outputs a re-aging delta image that can be combined with the input image to generate a re-aged image of the face. In some embodiments, 3D geometry representing a face is re-aged using local 3D re-aging models that each include a blendshape model for finding a linear combination of sample patches from geometries of different facial identities and generating a new shape for the patch at a target age based on the linear combination. In some embodiments, 3D geometry representing a face is re-aged by performing a shape-from-shading technique using re-aged images of the face captured from different viewpoints, which can optionally be constrained to linear combinations of sample patches from local blendshape models.
    Type: Application
    Filed: September 13, 2021
    Publication date: March 16, 2023
    Inventors: Gaspard ZOSS, Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Eftychios SIFAKIS
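The image-space path reduces to predicting a delta image and adding it to the input. A sketch, with a toy delta model in place of the trained re-aging network (the uniform-darkening rule is invented purely for illustration):

```python
import numpy as np

def reage(image, delta_model, input_age, target_age):
    """Predict a re-aging delta image and add it to the input image."""
    return np.clip(image + delta_model(image, input_age, target_age), 0.0, 1.0)

# Invented toy delta model: uniform darkening proportional to the age gap.
toy_delta = lambda img, a0, a1: -0.001 * (a1 - a0) * np.ones_like(img)
```

When the input and target ages match, the delta is zero and the image is unchanged.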
  • Publication number: 20220301348
    Abstract: Embodiment of the present invention sets forth techniques for performing face reconstruction. The techniques include generating an identity mesh based on an identity encoding that represents an identity associated with a face in one or more images. The techniques also include generating an expression mesh based on an expression encoding that represents an expression associated with the face in the one or more images. The techniques also include generating, by a machine learning model, an output mesh of the face based on the identity mesh and the expression mesh.
    Type: Application
    Filed: March 17, 2022
    Publication date: September 22, 2022
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Simone FOTI, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
  • Publication number: 20220237751
    Abstract: Techniques are disclosed for generating photorealistic images of head portraits. A rendering application renders a set of images that include the skin of a face and corresponding masks indicating pixels associated with the skin in the images. An inpainting application performs a neural projection technique to optimize a set of parameters that, when input into a generator model, produces a set of projection images, each of which includes a head portrait in which (1) skin regions resemble the skin regions of the face in a corresponding rendered image; and (2) non-skin regions match the non-skin regions in the other projection images when the rendered images are standalone images, or transition smoothly between consecutive projection images when the rendered images are frames of a video. The rendered images can then be blended with corresponding projection images to generate composite images that are photorealistic.
    Type: Application
    Filed: November 29, 2021
    Publication date: July 28, 2022
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Jeremy RIVIERE, Sebastian Valentin WINBERG, Gaspard ZOSS
  • Publication number: 20220156987
    Abstract: A technique for performing style transfer between a content sample and a style sample is disclosed. The technique includes applying one or more neural network layers to a first latent representation of the style sample to generate one or more convolutional kernels. The technique also includes generating convolutional output by convolving a second latent representation of the content sample with the one or more convolutional kernels. The technique further includes applying one or more decoder layers to the convolutional output to produce a style transfer result that comprises one or more content-based attributes of the content sample and one or more style-based attributes of the style sample.
    Type: Application
    Filed: April 6, 2021
    Publication date: May 19, 2022
    Inventors: Prashanth CHANDRAN, Derek Edward BRADLEY, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
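The core idea, kernels predicted from the style latent and then convolved with the content latent, can be shown in 1D. The linear map `W` stands in for the trained kernel-prediction layers, and the decoder layers are omitted; all names are illustrative.

```python
import numpy as np

def predicted_kernel(style_latent, W):
    """Map a style latent to a convolution kernel (a trained layer in
    the patent; a fixed linear map W here)."""
    return W @ style_latent

def stylize(content_latent, kernel):
    """Convolve the content latent with the style-predicted kernel."""
    return np.convolve(content_latent, kernel, mode="same")
```

A style latent that maps to the identity kernel leaves the content latent unchanged, which makes the content/style split easy to check.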
  • Patent number: 11276231
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
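The encode-concatenate-decode structure can be sketched end to end. Random linear maps stand in for the trained encoders and decoder, and the dimensions are arbitrary, so this shows only the data flow of the model described above.

```python
import numpy as np

rng = np.random.default_rng(2)
W_idn = rng.normal(size=(4, 6), scale=0.1)  # identity encoder (neutral minus reference)
W_exp = rng.normal(size=(4, 5), scale=0.1)  # expression encoder (blendweights)
W_dec = rng.normal(size=(6, 8), scale=0.1)  # decoder -> vertex displacements

def decode_face(neutral_delta, blendweights, reference):
    """Encode identity and expression separately, concatenate the two
    codes, decode vertex displacements, and deform the reference mesh."""
    code = np.concatenate([W_idn @ neutral_delta, W_exp @ blendweights])
    return reference + W_dec @ code
```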
  • Patent number: 11257276
    Abstract: Techniques are disclosed for generating digital faces. In some examples, a style-based generator receives as inputs initial tensor(s) and style vector(s) corresponding to user-selected semantic attribute styles, such as the desired expression, gender, age, identity, and/or ethnicity of a digital face. The style-based generator is trained to process such inputs and output low-resolution appearance map(s) for the digital face, such as a texture map, a normal map, and/or a specular roughness map. The low-resolution appearance map(s) are further processed using a super-resolution generator that is trained to take the low-resolution appearance map(s) and low-resolution 3D geometry of the digital face as inputs and output high-resolution appearance map(s) that align with high-resolution 3D geometry of the digital face. Such high-resolution appearance map(s) and high-resolution 3D geometry can then be used to render standalone images or the frames of a video that include the digital face.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: February 22, 2022
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
  • Publication number: 20210279956
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Application
    Filed: March 4, 2020
    Publication date: September 9, 2021
    Inventors: Prashanth CHANDRAN, Dominik Thabo BEELER, Derek Edward BRADLEY
  • Publication number: 20210279938
    Abstract: Techniques are disclosed for generating digital faces. In some examples, a style-based generator receives as inputs initial tensor(s) and style vector(s) corresponding to user-selected semantic attribute styles, such as the desired expression, gender, age, identity, and/or ethnicity of a digital face. The style-based generator is trained to process such inputs and output low-resolution appearance map(s) for the digital face, such as a texture map, a normal map, and/or a specular roughness map. The low-resolution appearance map(s) are further processed using a super-resolution generator that is trained to take the low-resolution appearance map(s) and low-resolution 3D geometry of the digital face as inputs and output high-resolution appearance map(s) that align with high-resolution 3D geometry of the digital face. Such high-resolution appearance map(s) and high-resolution 3D geometry can then be used to render standalone images or the frames of a video that include the digital face.
    Type: Application
    Filed: June 8, 2020
    Publication date: September 9, 2021
    Inventors: Prashanth CHANDRAN, Dominik Thabo BEELER, Derek Edward BRADLEY
  • Patent number: 10970849
    Abstract: According to one implementation, a pose estimation and body tracking system includes a computing platform having a hardware processor and a system memory storing a software code including a tracking module trained to track motions. The software code receives a series of images of motion by a subject, and for each image, uses the tracking module to determine locations corresponding respectively to two-dimensional (2D) skeletal landmarks of the subject based on constraints imposed by features of a hierarchical skeleton model intersecting at each 2D skeletal landmark. The software code further uses the tracking module to infer joint angles of the subject based on the locations and determine a three-dimensional (3D) pose of the subject based on the locations and the joint angles, resulting in a series of 3D poses. The software code outputs a tracking image corresponding to the motion by the subject based on the series of 3D poses.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: April 6, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Ahmet Cengiz Öztireli, Prashanth Chandran, Markus Gross