Patents by Inventor Prashanth Chandran

Prashanth Chandran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250118102
    Abstract: One embodiment of the present invention sets forth a technique for performing landmark detection. The technique includes generating, via execution of a first machine learning model, a first set of displacements associated with a first set of query points on a canonical shape based on a first annotation style associated with the first set of query points. The technique also includes determining, via execution of a second machine learning model, a first set of landmarks on a first face depicted in a first image based on the first set of displacements. The technique further includes training the first machine learning model based on one or more losses associated with the first set of landmarks to generate a first trained machine learning model.
    Type: Application
    Filed: October 4, 2024
    Publication date: April 10, 2025
    Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY
  • Publication number: 20250118025
    Abstract: One embodiment of the present invention sets forth a technique for performing landmark detection. The technique includes determining a first set of parameters associated with a depiction of a first face in a first image. The technique also includes generating, via execution of a first machine learning model, a first set of three-dimensional (3D) landmarks on the first face based on the first set of parameters, and projecting, based on the first set of parameters, the first set of 3D landmarks onto the first image to generate a first set of two-dimensional (2D) landmarks. The technique further includes training the first machine learning model based on one or more losses associated with the first set of 2D landmarks to generate a first trained machine learning model.
    Type: Application
    Filed: October 4, 2024
    Publication date: April 10, 2025
    Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY
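    As an illustrative aside (this is generic pinhole-camera projection, not the patented training method), the step of projecting 3D landmarks onto an image to obtain 2D landmarks can be sketched as:

    ```python
    import numpy as np

    def project_landmarks(points_3d, K, R, t):
        """Project Nx3 world-space landmarks to Nx2 image coordinates
        using a pinhole camera (intrinsics K, rotation R, translation t)."""
        cam = points_3d @ R.T + t          # world -> camera space
        uvw = cam @ K.T                    # apply intrinsics
        return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

    # Toy camera: focal length 100, principal point (50, 50), identity pose
    K = np.array([[100.0, 0.0, 50.0],
                  [0.0, 100.0, 50.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.zeros(3)
    pts = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.0]])
    print(project_landmarks(pts, K, R, t))
    ```

    A 2D landmark loss computed on the projected points can then be backpropagated through this differentiable projection into the model that predicted the 3D landmarks.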
  • Publication number: 20250118103
    Abstract: One embodiment of the present invention sets forth a technique for performing landmark detection. The technique includes applying, via execution of a first machine learning model, a first transformation to a first image depicting a first face to generate a second image. The technique also includes determining, via execution of a second machine learning model, a first set of landmarks on the first face based on the second image. The technique further includes training the first machine learning model based on one or more losses associated with the first set of landmarks to generate a first trained machine learning model.
    Type: Application
    Filed: October 4, 2024
    Publication date: April 10, 2025
    Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY
  • Publication number: 20250118027
    Abstract: The present invention sets forth a technique for performing face micro detail recovery. The technique includes generating one or more skin texture displacement maps based on images of one or more skin surfaces. The technique also includes transferring, via one or more machine learning models, stylistic elements included in the one or more skin texture displacement maps onto one or more regions included in a modified three-dimensional (3D) facial reconstruction. The technique further includes generating a final 3D facial reconstruction that includes structural elements included in the 3D facial reconstruction and the stylistic elements included in the one or more skin texture displacement maps.
    Type: Application
    Filed: October 4, 2024
    Publication date: April 10, 2025
    Inventors: Derek Edward BRADLEY, Sebastian Klaus WEISS, Prashanth CHANDRAN, Gaspard ZOSS, Jackson Reed STANHOPE
  • Patent number: 12243349
    Abstract: One embodiment of the present invention sets forth techniques for performing face reconstruction. The techniques include generating an identity mesh based on an identity encoding that represents an identity associated with a face in one or more images. The techniques also include generating an expression mesh based on an expression encoding that represents an expression associated with the face in the one or more images. The techniques also include generating, by a machine learning model, an output mesh of the face based on the identity mesh and the expression mesh.
    Type: Grant
    Filed: March 17, 2022
    Date of Patent: March 4, 2025
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Prashanth Chandran, Simone Foti, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
  • Patent number: 12243140
    Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: March 4, 2025
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
  • Patent number: 12236517
    Abstract: Techniques are disclosed for generating photorealistic images of objects, such as heads, from multiple viewpoints. In some embodiments, a morphable radiance field (MoRF) model that generates images of heads includes an identity model that maps an identifier (ID) code associated with a head into two codes: a deformation ID code encoding a geometric deformation from a canonical head geometry, and a canonical ID code encoding a canonical appearance within a shape-normalized space. The MoRF model also includes a deformation field model that maps a world space position to a shape-normalized space position based on the deformation ID code. Further, the MoRF model includes a canonical neural radiance field (NeRF) model that includes a density multi-layer perceptron (MLP) branch, a diffuse MLP branch, and a specular MLP branch that output densities, diffuse colors, and specular colors, respectively. The MoRF model can be used to render images of heads from various viewpoints.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: February 25, 2025
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Daoye Wang, Gaspard Zoss
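    For readers unfamiliar with neural radiance fields, the densities and colors produced by NeRF-style branches are rendered into a pixel by standard volume compositing along a ray. The sketch below shows that generic compositing step only (it is textbook NeRF rendering, not the MoRF model itself):

    ```python
    import numpy as np

    def composite(sigmas, colors, deltas):
        """Alpha-composite per-sample densities (sigmas), RGB colors, and
        ray-segment lengths (deltas) into one pixel color, NeRF-style."""
        alphas = 1.0 - np.exp(-sigmas * deltas)        # per-sample opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
        weights = trans * alphas                       # transmittance * opacity
        return (weights[:, None] * colors).sum(axis=0)

    sigmas = np.array([0.0, 5.0, 50.0])    # empty space, then denser samples
    colors = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
    deltas = np.array([0.1, 0.1, 0.1])
    print(composite(sigmas, colors, deltas))
    ```

    In a model with separate diffuse and specular branches, the two predicted colors are typically summed per sample before this compositing step.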
  • Publication number: 20250037375
    Abstract: One embodiment of the present invention sets forth a technique for fitting a shape model for an object to a set of constraints associated with a target shape. The technique includes determining, based on the set of constraints, one or more ground truth positions of one or more points on the target shape. The technique also includes generating, via execution of a set of neural networks, a set of fitting parameters associated with the point(s) and computing, via the shape model, one or more predicted positions of the point(s) based on the set of fitting parameters. The technique further includes training the set of neural networks based on one or more losses associated with the predicted position(s) and the ground truth position(s) and generating, via execution of the trained set of neural networks, a three-dimensional (3D) model corresponding to the target shape.
    Type: Application
    Filed: July 22, 2024
    Publication date: January 30, 2025
    Inventors: Gaspard ZOSS, Prashanth CHANDRAN
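    By way of a much simpler hypothetical analogue (a linear blendshape model fit with least squares, not the neural-network fitter described in the abstract), fitting parameters so that predicted point positions match ground-truth positions looks like:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear shape model: points = mean + basis @ params
    n_points, n_params = 12, 3
    mean = rng.normal(size=(n_points, 3))
    basis = rng.normal(size=(n_points * 3, n_params))

    true_params = np.array([0.5, -1.0, 0.25])
    target = mean.ravel() + basis @ true_params   # ground-truth positions

    # Recover parameters by minimizing the squared position loss
    params, *_ = np.linalg.lstsq(basis, target - mean.ravel(), rcond=None)
    print(np.round(params, 4))
    ```

    The patented approach replaces the closed-form solve with neural networks trained on losses between predicted and ground-truth positions, which lets it handle nonlinear shape models and sparse constraints.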
  • Publication number: 20250037341
    Abstract: The present invention sets forth a technique for performing facial rig generation. The technique includes generating a blendshape model including a plurality of vertices, a plurality of meshes, and a plurality of patches. The technique also includes modifying one or more blendweight values associated with each of the plurality of patches based on a plurality of facial depictions included in a facial database and one or more sample depictions of a target character and generating an output facial rig model based on the blendshape model and the one or more modified blendweight values. The technique further includes generating one or more expressive depictions of the target character based at least on the output facial rig.
    Type: Application
    Filed: July 26, 2024
    Publication date: January 30, 2025
    Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY, Josefine Estrid KLINTBERG, Paulo Fabiano URNAU GOTARDO
  • Publication number: 20250037366
    Abstract: One embodiment of the present invention sets forth a technique for generating a shape model. The technique includes generating, via execution of a set of neural networks based on a plurality of shapes associated with an object, a set of attributes associated with a set of anatomical constraints for the object. The technique also includes computing, based on the set of attributes, a set of positions of a set of points on the object. The technique further includes generating a three-dimensional (3D) model of the object based on the set of positions of the set of points.
    Type: Application
    Filed: July 22, 2024
    Publication date: January 30, 2025
    Inventors: Gaspard ZOSS, Prashanth CHANDRAN
  • Patent number: 12205213
    Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
    Type: Grant
    Filed: November 15, 2021
    Date of Patent: January 21, 2025
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
  • Patent number: 12198225
    Abstract: A technique for synthesizing a shape includes generating a first plurality of offset tokens based on a first shape code and a first plurality of position tokens, wherein the first shape code represents a variation of a canonical shape, and wherein the first plurality of position tokens represent a first plurality of positions on the canonical shape. The technique also includes generating a first plurality of offsets associated with the first plurality of positions on the canonical shape based on the first plurality of offset tokens. The technique further includes generating the shape based on the first plurality of offsets and the first plurality of positions.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: January 14, 2025
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
  • Publication number: 20240346734
    Abstract: The present invention sets forth a technique for simulating wrinkles under dynamic facial expression. The technique includes receiving a wrinkle graph, including a plurality of nodes associated with a plurality of pores included in a three-dimensional (3D) representation of a facial structure and a plurality of edges associated with a plurality of wrinkles included in the 3D representation of a facial structure. The technique also includes assigning one or more of the plurality of wrinkles associated with edges in the wrinkle graph to one of a plurality of bins and generating, for each of the bins, a plurality of pre-computed displacement texture maps. The technique further includes generating a per-frame displacement texture map and modifying an animation frame based on the per-frame displacement texture map, such that the modified animation frame depicts the plurality of pores and the plurality of wrinkles included in the 3D representation of the facial structure.
    Type: Application
    Filed: April 5, 2024
    Publication date: October 17, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Sebastian Klaus WEISS, Gaspard ZOSS
  • Publication number: 20240346733
    Abstract: The present invention sets forth a technique for simulating wrinkles under dynamic facial expression. This technique includes sampling a plurality of nodes from a three-dimensional (3D) representation of a facial structure, wherein each node represents a pore in the facial structure. The technique also generates one or more edges, with each of the one or more edges connecting a node of the plurality of nodes to a different node selected from the plurality of nodes. The technique further generates a wrinkle graph comprising the plurality of nodes, the one or more edges, and a plurality of edge weights associated with the edges included in the wrinkle graph. The technique may also modify the 3D representation of the facial structure based on the wrinkle graph and one or more dynamic expressions associated with the 3D representation.
    Type: Application
    Filed: April 5, 2024
    Publication date: October 17, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Sebastian Klaus WEISS, Gaspard ZOSS
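    As a toy illustration of the wrinkle-graph data structure only (the pore positions, neighbor rule, and distance weights below are all made up for the sketch), a graph of pore nodes with weighted edges can be assembled as:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    pores = rng.random((6, 3))   # hypothetical pore positions on a surface

    # Connect each pore to its nearest neighbor; weight = Euclidean distance
    dists = np.linalg.norm(pores[:, None] - pores[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    nearest = dists.argmin(axis=1)

    edges = {(i, int(j)): float(dists[i, j]) for i, j in enumerate(nearest)}
    for (a, b), w in edges.items():
        print(f"pore {a} -> pore {b}, weight {w:.3f}")
    ```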
  • Publication number: 20240303983
    Abstract: One embodiment of the present invention sets forth a technique for evaluating three-dimensional (3D) reconstructions. The technique includes generating a 3D reconstruction of an object based on one or more mesh parameters. The technique also includes generating, based on the 3D reconstruction, a 3D rendering of the object. The technique further includes generating, using a machine learning model, a perceptual score associated with the 3D rendering and an input image of the object. The generated score represents how closely the 3D rendering matches the input image.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Christopher Andreas OTTO, Gaspard ZOSS
  • Publication number: 20240249459
    Abstract: One embodiment of the present invention sets forth a technique for retargeting a facial expression to a different facial identity. The technique includes generating, based on an input target facial identity, a facial identity code in an input identity latent space. The technique further includes converting a spatial input point from an input facial identity space of the input target facial identity to a canonical-space point in a canonical space. The technique still further includes generating one or more canonical simulator control values based on the facial identity code, an input source facial expression, and the canonical-space point. The technique still further includes generating a simulated active soft body based on one or more identity-specific control values, wherein each identity-specific control value corresponds to one or more of the canonical simulator control values and is in an output facial identity space associated with an output target facial identity.
    Type: Application
    Filed: January 24, 2024
    Publication date: July 25, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Eftychios Dimitrios SIFAKIS, Barbara SOLENTHALER, Paulo Fabiano URNAU GOTARDO, Lingchen YANG, Gaspard ZOSS
  • Publication number: 20240161391
    Abstract: The present invention sets forth a technique for generating two-dimensional (2D) renderings of a three-dimensional (3D) scene from an arbitrary camera position under arbitrary lighting conditions. This technique includes determining, based on a plurality of 2D representations of a 3D scene, a radiance field function for a neural radiance field (NeRF) model. This technique further includes determining, based on a plurality of 2D representations of a 3D scene, a radiance field function for a “one light at a time” (OLAT) model. The technique further includes rendering a 2D representation of the scene based on a given camera position and illumination data. The technique further includes computing a rendering loss based on the difference between the rendered 2D representation and an associated one of the plurality of 2D representations of the scene. The technique further includes modifying at least one of the NeRF and OLAT models based on the rendering loss.
    Type: Application
    Filed: November 8, 2023
    Publication date: May 16, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Yingyan XU, Gaspard ZOSS
  • Publication number: 20240161540
    Abstract: One or more embodiments comprise a computer-implemented method that includes receiving an input image including one or more facial representations and a set of points on a 3D canonical shape, wherein the set of points are selectable at runtime, extracting a set of features from the input image that represent at least one facial representation included in the one or more facial representations, and determining a set of landmarks on the at least one facial representation based on the set of features and the set of points, wherein each landmark in the set of landmarks is associated with at least one point in the set of points.
    Type: Application
    Filed: November 8, 2023
    Publication date: May 16, 2024
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
  • Patent number: 11836860
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: December 5, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
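    A drastically simplified sketch of the patch-combination idea (patches flattened to vectors and solved with plain least squares; all names and sizes here are illustrative, not from the patent): express a source patch as a weighted combination of source reference patches, then reuse the same weights on the target reference patches.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical reference patches, flattened (9 vertices * 3 coords, 5 refs)
    src_refs = rng.normal(size=(27, 5))
    tgt_refs = rng.normal(size=(27, 5))

    # A source performance patch lying in the source reference basis
    true_w = np.array([0.2, 0.1, 0.4, 0.0, 0.3])
    src_patch = src_refs @ true_w

    # Solve for combination weights, then apply them to the target references
    w, *_ = np.linalg.lstsq(src_refs, src_patch, rcond=None)
    tgt_patch = tgt_refs @ w
    print(np.round(w, 3))
    ```

    The full method additionally stitches the per-patch results with an anatomical local model solver so the combined target patches form a coherent facial shape.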
  • Publication number: 20230260186
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject’s face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: August 17, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley