Patents by Inventor Edward Bradley

Edward Bradley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230154089
    Abstract: A technique for generating a sequence of geometries includes converting, via an encoder neural network, one or more input geometries corresponding to one or more frames within an animation into one or more latent vectors. The technique also includes generating the sequence of geometries corresponding to a sequence of frames within the animation based on the one or more latent vectors. The technique further includes causing output related to the animation to be generated based on the sequence of geometries.
    Type: Application
    Filed: November 15, 2021
    Publication date: May 18, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
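The pipeline this abstract describes (encode keyframe geometries to latent vectors, then generate a frame sequence from those latents) can be sketched in a few lines. This is a minimal numpy illustration only: the linear "encoder" and "decoder", the dimensions, and the latent-space interpolation are all assumptions, not the patented networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: V vertices with (x, y, z) per frame, latent size D.
V, D = 100, 8
W_enc = rng.normal(size=(D, V * 3)) / np.sqrt(V * 3)  # stand-in encoder weights
W_dec = rng.normal(size=(V * 3, D)) / np.sqrt(D)      # stand-in decoder weights

def encode(geometry):
    """Map one frame's vertex positions to a latent vector."""
    return W_enc @ geometry.reshape(-1)

def generate_sequence(latents, num_frames):
    """Generate a sequence of geometries by interpolating between the
    latent codes of the input keyframes and decoding each step."""
    frames = []
    z0, z1 = latents[0], latents[-1]
    for t in np.linspace(0.0, 1.0, num_frames):
        z = (1 - t) * z0 + t * z1        # linear walk through latent space
        frames.append((W_dec @ z).reshape(V, 3))
    return frames

key_a = rng.normal(size=(V, 3))
key_b = rng.normal(size=(V, 3))
seq = generate_sequence([encode(key_a), encode(key_b)], num_frames=24)
print(len(seq), seq[0].shape)  # 24 frames of (V, 3) geometry
```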
  • Publication number: 20230154090
    Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
    Type: Application
    Filed: November 15, 2021
    Publication date: May 18, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
  • Publication number: 20230154101
    Abstract: Techniques are disclosed for generating photorealistic images of objects, such as heads, from multiple viewpoints. In some embodiments, a morphable radiance field (MoRF) model that generates images of heads includes an identity model that maps an identifier (ID) code associated with a head into two codes: a deformation ID code encoding a geometric deformation from a canonical head geometry, and a canonical ID code encoding a canonical appearance within a shape-normalized space. The MoRF model also includes a deformation field model that maps a world space position to a shape-normalized space position based on the deformation ID code. Further, the MoRF model includes a canonical neural radiance field (NeRF) model that includes a density multi-layer perceptron (MLP) branch, a diffuse MLP branch, and a specular MLP branch that output densities, diffuse colors, and specular colors, respectively. The MoRF model can be used to render images of heads from various viewpoints.
    Type: Application
    Filed: November 8, 2022
    Publication date: May 18, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Daoye WANG, Gaspard ZOSS
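The canonical NeRF component described above, with separate density, diffuse, and specular MLP branches, can be sketched as follows. The layer sizes, the shared trunk, and the activation choices here are illustrative assumptions; only the three-branch output structure comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Build a tiny fully connected network as a list of (W, b) layers."""
    return [(rng.normal(size=(o, i)) * 0.1, np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W, b in layers[:-1]:
        x = np.maximum(0.0, W @ x + b)   # ReLU hidden layers
    W, b = layers[-1]
    return W @ x + b

# Hypothetical sizes: a shared trunk on the shape-normalized position plus
# a canonical ID code, then one head each for density, diffuse, specular.
trunk    = mlp([3 + 16, 32, 32])
density  = mlp([32, 16, 1])
diffuse  = mlp([32, 16, 3])
specular = mlp([32, 16, 3])

def canonical_nerf(position, canonical_id):
    feat = np.maximum(0.0, forward(trunk, np.concatenate([position, canonical_id])))
    sigma = np.log1p(np.exp(forward(density, feat)[0]))  # softplus: density >= 0
    rgb_d = 1 / (1 + np.exp(-forward(diffuse, feat)))    # sigmoid: colors in [0, 1]
    rgb_s = 1 / (1 + np.exp(-forward(specular, feat)))
    return sigma, rgb_d, rgb_s

sigma, rgb_d, rgb_s = canonical_nerf(np.zeros(3), rng.normal(size=16))
```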
  • Patent number: 11645813
    Abstract: Techniques are disclosed for creating digital faces. In some examples, an anatomical face model is generated from a data set including captured facial geometries of different individuals and associated bone geometries. A model generator segments each of the captured facial geometries into patches, compresses the segmented geometry associated with each patch to determine local deformation subspaces of the anatomical face model, and determines corresponding compressed anatomical subspaces of the anatomical face model. A sculpting application determines, based on sculpting input from a user, constraints for an optimization to determine parameter values associated with the anatomical face model. The parameter values can be used, along with the anatomical face model, to generate facial geometry that reflects the sculpting input.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 9, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Aurel Gruber, Marco Fratarcangeli, Derek Edward Bradley, Gaspard Zoss, Dominik Thabo Beeler
  • Publication number: 20230124117
    Abstract: One embodiment of the present invention sets forth a technique for performing appearance capture. The technique includes receiving a first sequence of images of an object, wherein the first sequence of images includes a first set of images interleaved with a second set of images, and wherein the first set of images is captured based on illumination of the object using a first lighting pattern and the second set of images is captured based on illumination of the object using one or more lighting patterns that are different from the first lighting pattern. The technique also includes generating a first set of appearance parameters associated with the object based on a first inverse rendering associated with the first sequence of images.
    Type: Application
    Filed: October 20, 2021
    Publication date: April 20, 2023
    Inventors: Paulo Fabiano URNAU GOTARDO, Derek Edward BRADLEY, Jérémy RIVIERE
  • Publication number: 20230104702
    Abstract: A technique for synthesizing a shape includes generating a first plurality of offset tokens based on a first shape code and a first plurality of position tokens, wherein the first shape code represents a variation of a canonical shape, and wherein the first plurality of position tokens represent a first plurality of positions on the canonical shape. The technique also includes generating a first plurality of offsets associated with the first plurality of positions on the canonical shape based on the first plurality of offset tokens. The technique further includes generating the shape based on the first plurality of offsets and the first plurality of positions.
    Type: Application
    Filed: February 18, 2022
    Publication date: April 6, 2023
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
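At its core, the shape-synthesis scheme above predicts a per-position offset from a shape code and adds it to the canonical shape. The sketch below keeps only that skeleton; the token/attention machinery is elided, and the linear "offset network" and all dimensions are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical setup: N positions on a canonical shape, a D_code-dim shape code.
N, D_code = 50, 8
canonical_positions = rng.normal(size=(N, 3))
shape_code = rng.normal(size=D_code)
W_offset = rng.normal(size=(3, 3 + D_code)) * 0.1  # stand-in for the offset network

def synthesize_shape(positions, code):
    """Predict an offset for each canonical position from (position, shape code)
    and add it to the canonical position to produce the synthesized shape."""
    offsets = np.stack([W_offset @ np.concatenate([p, code]) for p in positions])
    return positions + offsets

shape = synthesize_shape(canonical_positions, shape_code)
```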
  • Publication number: 20230080639
    Abstract: Techniques are disclosed for re-aging images of faces and three-dimensional (3D) geometry representing faces. In some embodiments, an image of a face, an input age, and a target age, are input into a re-aging model, which outputs a re-aging delta image that can be combined with the input image to generate a re-aged image of the face. In some embodiments, 3D geometry representing a face is re-aged using local 3D re-aging models that each include a blendshape model for finding a linear combination of sample patches from geometries of different facial identities and generating a new shape for the patch at a target age based on the linear combination. In some embodiments, 3D geometry representing a face is re-aged by performing a shape-from-shading technique using re-aged images of the face captured from different viewpoints, which can optionally be constrained to linear combinations of sample patches from local blendshape models.
    Type: Application
    Filed: September 13, 2021
    Publication date: March 16, 2023
    Inventors: Gaspard ZOSS, Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Eftychios SIFAKIS
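The image-space part of this abstract reduces to combining the model's delta image with the input image. A minimal sketch, assuming a simple additive combination with clipping (the abstract says only "combined with"); the delta here is random noise standing in for the re-aging model's output.

```python
import numpy as np

def reage(image, delta):
    """Combine an input face image with a re-aging delta image.
    Additive combination with clipping to valid pixel range is assumed."""
    return np.clip(image.astype(np.float32) + delta, 0.0, 255.0)

rng = np.random.default_rng(2)
face = rng.uniform(0, 255, size=(4, 4, 3))          # tiny stand-in image
delta = rng.normal(scale=10.0, size=(4, 4, 3))      # would come from the model
older = reage(face, delta)
```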
  • Publication number: 20230065700
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Application
    Filed: October 11, 2022
    Publication date: March 2, 2023
    Inventors: Dominik Thabo BEELER, Derek Edward BRADLEY, Eftychios Dimitrios SIFAKIS, Gaspard ZOSS
  • Patent number: 11587276
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: February 21, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
  • Publication number: 20220374649
    Abstract: Various embodiments set forth systems and techniques for changing a face within an image. The techniques include receiving a first image including a face associated with a first facial identity; generating, via a machine learning model, at least a first texture map and a first position map based on the first image; rendering a second image including a face associated with a second facial identity based on the first texture map and the first position map, wherein the second facial identity is different from the first facial identity.
    Type: Application
    Filed: September 24, 2021
    Publication date: November 24, 2022
    Inventors: Jacek Krzysztof NARUNIEC, Derek Edward BRADLEY, Paulo Fabiano Urnau GOTARDO, Leonhard Markus HELMINGER, Christopher Andreas OTTO, Christopher Richard SCHROERS, Romann Matthew WEBER
  • Patent number: 11501486
    Abstract: A system for characterising surfaces in a real-world scene, the system comprising an object identification unit operable to identify one or more objects within one or more captured images of the real-world scene, a characteristic identification unit operable to identify one or more characteristics of one or more surfaces of the identified objects, and an information generation unit operable to generate information linking an object and one or more surface characteristics associated with that object.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: November 15, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Nigel John Williams, Fabio Cappello, Timothy Edward Bradley, Rajeev Gupta
  • Publication number: 20220327717
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Application
    Filed: June 28, 2022
    Publication date: October 13, 2022
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
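The core idea above, learning a mapping from skin motion to jaw motion from paired training data, can be sketched with a plain least-squares fit. A linear model is an assumption for illustration; the feature and pose dimensions and the synthetic data are likewise made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: skin-motion features (displacements from the
# rest pose) paired with jaw poses (6-DoF vectors), with a little noise.
n_samples, n_skin, n_jaw = 200, 30, 6
true_map = rng.normal(size=(n_skin, n_jaw))
skin_motion = rng.normal(size=(n_samples, n_skin))
jaw_pose = skin_motion @ true_map + 0.01 * rng.normal(size=(n_samples, n_jaw))

# Fit a linear skin-to-jaw mapping by least squares (stand-in for the model).
W, *_ = np.linalg.lstsq(skin_motion, jaw_pose, rcond=None)

def predict_jaw(skin):
    """Predict jaw motion from observed skin motion via the learned mapping."""
    return skin @ W

err = np.abs(predict_jaw(skin_motion) - jaw_pose).mean()
```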
  • Publication number: 20220309740
    Abstract: An image rendering method for rendering a pixel at a viewpoint includes, for a first element of a virtual scene, having a predetermined surface at a position within that scene; providing the position and a direction based on the viewpoint to a machine learning system previously trained to predict a factor that, when combined with a distribution function that characterises an interaction of light with the predetermined surface, generates a pixel value corresponding to the first element of the virtual scene as illuminated at the position, combining the predicted factor from the machine learning system with the distribution function to generate the pixel value corresponding to the illuminated first element of the virtual scene at the position, and incorporating the pixel value into a rendered image for display, where the machine learning system was previously trained with a training set based on images comprising multiple lighting conditions.
    Type: Application
    Filed: March 18, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Matthew Sanders, Marina Villanueva Barreiro, Timothy Edward Bradley, Andrew James Bigos
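The rendering step this abstract describes, combining a network-predicted factor with an analytic distribution function to produce a pixel value, can be sketched as below. The Lambertian BRDF, the constant stand-in for the network output, and all parameter values are illustrative assumptions.

```python
import numpy as np

def lambert_brdf(normal, light_dir, albedo):
    """Distribution function: Lambertian diffuse term for the surface."""
    return albedo * max(0.0, float(np.dot(normal, light_dir))) / np.pi

def predicted_factor(position, view_dir):
    """Stand-in for the trained network's prediction: a factor folding in
    illumination that the BRDF alone does not capture. Value is made up."""
    return 2.5  # a real system would evaluate the ML model here

def shade_pixel(position, view_dir, normal, light_dir, albedo):
    # Combine the learned factor with the analytic distribution function.
    return predicted_factor(position, view_dir) * lambert_brdf(normal, light_dir, albedo)

rgb = shade_pixel(np.zeros(3), np.array([0, 0, 1.0]), np.array([0, 0, 1.0]),
                  np.array([0, 0, 1.0]), np.array([0.8, 0.6, 0.4]))
```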
  • Publication number: 20220309741
    Abstract: An image rendering method for an entertainment device for rendering a pixel at a viewpoint includes: for a first element of a virtual scene, having a predetermined surface at a position within that scene, obtaining a machine learning system previously trained to predict a factor that, when combined with a distribution function that characterises an interaction of light with the predetermined surface, generates a pixel value corresponding to the first element of the virtual scene as illuminated at the position; providing the position and a direction based on the viewpoint to the machine learning system; combining the predicted factor from the machine learning system with the distribution function to generate the pixel value corresponding to the illuminated first element of the virtual scene at the position; and incorporating the pixel value into a rendered image for display; where the obtaining step comprises: identifying a current or anticipated state of an application determining the virtual scene to be rendered […]
    Type: Application
    Filed: March 21, 2022
    Publication date: September 29, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Marina Villanueva Barreiro, Andrew James Bigos, Gilles Christian Rainer, Fabio Cappello, Timothy Edward Bradley
  • Publication number: 20220301348
    Abstract: One embodiment of the present invention sets forth techniques for performing face reconstruction. The techniques include generating an identity mesh based on an identity encoding that represents an identity associated with a face in one or more images. The techniques also include generating an expression mesh based on an expression encoding that represents an expression associated with the face in the one or more images. The techniques further include generating, by a machine learning model, an output mesh of the face based on the identity mesh and the expression mesh.
    Type: Application
    Filed: March 17, 2022
    Publication date: September 22, 2022
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Simone FOTI, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
  • Publication number: 20220237751
    Abstract: Techniques are disclosed for generating photorealistic images of head portraits. A rendering application renders a set of images that include the skin of a face and corresponding masks indicating pixels associated with the skin in the images. An inpainting application performs a neural projection technique to optimize a set of parameters that, when input into a generator model, produces a set of projection images, each of which includes a head portrait in which (1) skin regions resemble the skin regions of the face in a corresponding rendered image; and (2) non-skin regions match the non-skin regions in the other projection images when the rendered images are standalone images, or transition smoothly between consecutive projection images when the rendered images are frames of a video. The rendered images can then be blended with corresponding projection images to generate composite images that are photorealistic.
    Type: Application
    Filed: November 29, 2021
    Publication date: July 28, 2022
    Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Jeremy RIVIERE, Sebastian Valentin WINBERG, Gaspard ZOSS
  • Patent number: 11393107
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: July 19, 2022
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
  • Publication number: 20220156987
    Abstract: A technique for performing style transfer between a content sample and a style sample is disclosed. The technique includes applying one or more neural network layers to a first latent representation of the style sample to generate one or more convolutional kernels. The technique also includes generating convolutional output by convolving a second latent representation of the content sample with the one or more convolutional kernels. The technique further includes applying one or more decoder layers to the convolutional output to produce a style transfer result that comprises one or more content-based attributes of the content sample and one or more style-based attributes of the style sample.
    Type: Application
    Filed: April 6, 2021
    Publication date: May 19, 2022
    Inventors: Prashanth CHANDRAN, Derek Edward BRADLEY, Paulo Fabiano URNAU GOTARDO, Gaspard ZOSS
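The distinctive step in this abstract, predicting convolutional kernels from the style latent and convolving the content latent with them, can be sketched in 1D. The kernel-prediction layer is a single linear map here, the decoder layers are omitted, and all shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical latent shapes: style latent (D,), content latent (C, L).
D, C, L, K = 16, 4, 32, 3
W_kernel = rng.normal(size=(C * K, D)) * 0.1  # stand-in kernel-prediction layer

def kernels_from_style(style_latent):
    """Map the style latent to one length-K 1D kernel per channel."""
    return (W_kernel @ style_latent).reshape(C, K)

def stylize(content_latent, style_latent):
    """Convolve the content latent with style-predicted kernels, channel by
    channel (the decoder layers from the abstract are omitted)."""
    kernels = kernels_from_style(style_latent)
    return np.stack([np.convolve(content_latent[c], kernels[c], mode="same")
                     for c in range(C)])

out = stylize(rng.normal(size=(C, L)), rng.normal(size=D))
```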
  • Publication number: 20220092293
    Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
    Type: Application
    Filed: December 2, 2021
    Publication date: March 24, 2022
    Inventors: Jeremy RIVIERE, Paulo Fabiano URNAU GOTARDO, Abhijeet GHOSH, Derek Edward BRADLEY, Dominik Thabo BEELER
  • Publication number: 20220092846
    Abstract: A data processing apparatus includes input circuitry to receive viewpoint data indicative of respective viewpoints for a plurality of spectators of a virtual environment, detection circuitry to detect a portion of the virtual environment viewed by each of the respective viewpoints in dependence upon the viewpoint data, selection circuitry to select one or more regions of the virtual environment in dependence upon at least some of the detected portions, and output circuitry to output data indicative of one or more of the selected regions.
    Type: Application
    Filed: September 9, 2021
    Publication date: March 24, 2022
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Maria Chiara Monti, Fabio Cappello, Matthew Sanders, Timothy Edward Bradley, Oliver Hume
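The selection step in this last abstract, choosing regions of the virtual environment based on which portions spectators are viewing, can be sketched as a simple popularity count. The region identifiers and the most-viewed selection rule are assumptions for illustration.

```python
from collections import Counter

def select_regions(viewed_portions, top_n=2):
    """Select the regions of the virtual environment viewed by the most
    spectators (region ids here are hypothetical labels)."""
    counts = Counter(viewed_portions)
    return [region for region, _ in counts.most_common(top_n)]

# One entry per spectator viewpoint, derived from the viewpoint data.
regions = select_regions(["plaza", "arena", "plaza", "gate", "arena", "plaza"])
# → ["plaza", "arena"]
```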