Patents by Inventor Edward Bradley

Edward Bradley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11276231
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
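A minimal PyTorch sketch of the encoder/encoder/decoder architecture this abstract describes. All dimensions (vertex count, blendweight count, code size) and layer choices are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the nonlinear face model described above.
# All sizes are made-up placeholders; the patent does not specify them.
import torch
import torch.nn as nn

NUM_VERTICES = 5000      # assumed mesh resolution
NUM_BLENDWEIGHTS = 100   # assumed expression blendweight count
CODE_DIM = 64            # assumed latent code size

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

class NonlinearFaceModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Identity encoder: (neutral mesh minus reference mesh) -> identity code.
        self.identity_encoder = mlp(NUM_VERTICES * 3, CODE_DIM)
        # Expression encoder: blendweight vector -> expression code.
        self.expression_encoder = mlp(NUM_BLENDWEIGHTS, CODE_DIM)
        # Decoder: concatenated codes -> per-vertex displacements from the reference mesh.
        self.decoder = mlp(2 * CODE_DIM, NUM_VERTICES * 3)

    def forward(self, neutral_minus_reference, blendweights, reference_mesh):
        id_code = self.identity_encoder(neutral_minus_reference.flatten(1))
        expr_code = self.expression_encoder(blendweights)
        displacements = self.decoder(torch.cat([id_code, expr_code], dim=-1))
        # Deform the reference mesh by the predicted vertex displacements.
        return reference_mesh + displacements.view(-1, NUM_VERTICES, 3)

model = NonlinearFaceModel()
reference = torch.zeros(1, NUM_VERTICES, 3)
face = model(torch.randn(1, NUM_VERTICES, 3), torch.rand(1, NUM_BLENDWEIGHTS), reference)
print(face.shape)  # torch.Size([1, 5000, 3])
```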
  • Patent number: 11257276
    Abstract: Techniques are disclosed for generating digital faces. In some examples, a style-based generator receives as inputs initial tensor(s) and style vector(s) corresponding to user-selected semantic attribute styles, such as the desired expression, gender, age, identity, and/or ethnicity of a digital face. The style-based generator is trained to process such inputs and output low-resolution appearance map(s) for the digital face, such as a texture map, a normal map, and/or a specular roughness map. The low-resolution appearance map(s) are further processed using a super-resolution generator that is trained to take the low-resolution appearance map(s) and low-resolution 3D geometry of the digital face as inputs and output high-resolution appearance map(s) that align with high-resolution 3D geometry of the digital face. Such high-resolution appearance map(s) and high-resolution 3D geometry can then be used to render standalone images or the frames of a video that include the digital face.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: February 22, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
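A dataflow-only PyTorch sketch of the two-stage pipeline in this abstract: a style-based generator produces low-resolution appearance maps, and a super-resolution generator upsamples them conditioned on low-resolution geometry. Resolutions, channel counts, and layer choices are assumptions.

```python
# Sketch of the style-based generator + super-resolution generator dataflow.
import torch
import torch.nn as nn

class StyleBasedGenerator(nn.Module):
    """Maps an initial tensor plus a style vector to low-resolution appearance maps."""
    def __init__(self, style_dim=64, out_channels=7):  # assume 3 albedo + 3 normal + 1 roughness
        super().__init__()
        self.style_proj = nn.Linear(style_dim, 16)
        self.conv = nn.Sequential(
            nn.Conv2d(16 + 16, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )

    def forward(self, initial_tensor, style_vector):
        # Broadcast the projected style over the spatial grid and fuse it with the initial tensor.
        b, _, h, w = initial_tensor.shape
        style = self.style_proj(style_vector).view(b, 16, 1, 1).expand(b, 16, h, w)
        return self.conv(torch.cat([initial_tensor, style], dim=1))

class SuperResolutionGenerator(nn.Module):
    """Upsamples low-res appearance maps, conditioned on low-res 3D geometry."""
    def __init__(self, map_channels=7, geo_channels=3, scale=4):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False)
        self.refine = nn.Sequential(
            nn.Conv2d(map_channels + geo_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, map_channels, 3, padding=1),
        )

    def forward(self, low_res_maps, low_res_geometry):
        x = torch.cat([low_res_maps, low_res_geometry], dim=1)
        return self.refine(self.up(x))

styles = torch.randn(1, 64)                      # encodes expression/gender/age/identity/ethnicity
low_res = StyleBasedGenerator()(torch.randn(1, 16, 64, 64), styles)
high_res = SuperResolutionGenerator()(low_res, torch.randn(1, 3, 64, 64))
print(low_res.shape, high_res.shape)             # (1, 7, 64, 64) (1, 7, 256, 256)
```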
  • Patent number: 11222466
    Abstract: Techniques are disclosed for changing the identities of faces in video frames and images. In embodiments, three-dimensional (3D) geometry of a face is used to inform the facial identity change produced by an image-to-image translation model, such as a comb network model. In some embodiments, the model can take a two-dimensional (2D) texture map and/or a 3D displacement map associated with one facial identity as inputs and output another 2D texture map and/or 3D displacement map associated with a different facial identity. The other 2D texture map and/or 3D displacement map can then be used to render an image that includes the different facial identity.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: January 11, 2022
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Jacek Krzysztof Naruniec, Derek Edward Bradley, Thomas Etterlin, Paulo Fabiano Urnau Gotardo, Leonhard Markus Helminger, Christopher Richard Schroers, Romann Matthew Weber
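A rough PyTorch sketch of a comb-style image-to-image translation network as this abstract describes it: a shared encoder and one decoder branch per facial identity, operating on a texture map and 3D displacement map stacked as channels. Channel counts and layer choices are assumptions.

```python
# Comb-network sketch: shared encoder, one decoder "tooth" per identity.
import torch
import torch.nn as nn

class CombNetwork(nn.Module):
    def __init__(self, num_identities, in_channels=6):  # assume RGB texture + XYZ displacement
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Selecting a decoder branch selects the output facial identity.
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, in_channels, 4, stride=2, padding=1),
            )
            for _ in range(num_identities)
        ])

    def forward(self, texture_and_displacement, target_identity):
        shared = self.encoder(texture_and_displacement)
        return self.decoders[target_identity](shared)

net = CombNetwork(num_identities=2)
source_maps = torch.randn(1, 6, 128, 128)          # maps associated with the source identity
swapped_maps = net(source_maps, target_identity=1) # maps used to render the other identity
print(swapped_maps.shape)                          # torch.Size([1, 6, 128, 128])
```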
  • Publication number: 20220005268
    Abstract: Techniques are disclosed for creating digital faces. In some examples, an anatomical face model is generated from a data set including captured facial geometries of different individuals and associated bone geometries. A model generator segments each of the captured facial geometries into patches, compresses the segmented geometry associated with each patch to determine local deformation subspaces of the anatomical face model, and determines corresponding compressed anatomical subspaces of the anatomical face model. A sculpting application determines, based on sculpting input from a user, constraints for an optimization to determine parameter values associated with the anatomical face model. The parameter values can be used, along with the anatomical face model, to generate facial geometry that reflects the sculpting input.
    Type: Application
    Filed: July 6, 2020
    Publication date: January 6, 2022
    Inventors: Aurel GRUBER, Marco FRATARCANGELI, Derek Edward BRADLEY, Gaspard ZOSS, Dominik Thabo BEELER
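A simplified, single-patch NumPy sketch of the idea in this abstract: compress example patch geometries into a local deformation subspace (PCA via SVD), then solve a small least-squares problem so the patch satisfies a sculpted vertex constraint. Patch coupling and the anatomical (bone) subspaces are omitted; all numbers are placeholders.

```python
# Single-patch sketch: local deformation subspace + constraint-driven solve.
import numpy as np

rng = np.random.default_rng(0)
num_examples, num_patch_vertices, k = 40, 30, 5

# Example patch geometries from different captured individuals (flattened xyz).
examples = rng.normal(size=(num_examples, num_patch_vertices * 3))
mean = examples.mean(axis=0)
_, _, vt = np.linalg.svd(examples - mean, full_matrices=False)
basis = vt[:k].T                                 # (3V, k) local deformation subspace

# Sculpting input: the artist drags vertex 7 to a new position.
constrained_vertex, target_position = 7, np.array([0.3, -0.1, 0.2])
rows = slice(3 * constrained_vertex, 3 * constrained_vertex + 3)

# Solve for subspace parameters that reproduce the sculpted constraint,
# with a small regularizer pulling the solution toward the mean shape.
A = np.vstack([basis[rows], 0.1 * np.eye(k)])
b = np.concatenate([target_position - mean[rows], np.zeros(k)])
params, *_ = np.linalg.lstsq(A, b, rcond=None)

patch_geometry = (mean + basis @ params).reshape(num_patch_vertices, 3)
print(np.round(patch_geometry[constrained_vertex], 3), target_position)
```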
  • Publication number: 20220004741
    Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 6, 2022
    Inventors: Jeremy RIVIERE, Paulo Urnau GOTARDO, Abhijeet GHOSH, Derek Edward BRADLEY, Dominik Thabo BEELER
  • Patent number: 11216646
    Abstract: Techniques are disclosed for capturing facial appearance properties. In some examples, a facial capture system includes light source(s) that produce linearly polarized light, at least one camera that is cross-polarized with respect to the polarization of light produced by the light source(s), and at least one other camera that is not cross-polarized with respect to the polarization of the light produced by the light source(s). Images captured by the cross-polarized camera(s) are used to determine facial appearance properties other than specular intensity, such as diffuse albedo, while images captured by the camera(s) that are not cross-polarized are used to determine facial appearance properties including specular intensity. In addition, a coarse-to-fine optimization procedure is disclosed for determining appearance and detailed geometry maps based on images captured by the cross-polarized camera(s) and the camera(s) that are not cross-polarized.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 4, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Jeremy Riviere, Paulo Urnau Gotardo, Abhijeet Ghosh, Derek Edward Bradley, Dominik Thabo Beeler
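The two preceding entries share an abstract. A minimal NumPy sketch of the core polarization idea: the cross-polarized camera sees (approximately) only diffuse reflection, so subtracting its image from an aligned non-cross-polarized image isolates the specular component. Calibration, view alignment, and the coarse-to-fine appearance/geometry optimization are omitted.

```python
# Idealized diffuse/specular separation from cross- and parallel-polarized views.
import numpy as np

def separate_diffuse_specular(cross_polarized, non_cross_polarized):
    """Both inputs are aligned HxWx3 float images of the same face."""
    diffuse_albedo_estimate = cross_polarized
    specular_estimate = np.clip(non_cross_polarized - cross_polarized, 0.0, None)
    return diffuse_albedo_estimate, specular_estimate

rng = np.random.default_rng(1)
diffuse_true = rng.uniform(0.2, 0.6, size=(4, 4, 3))
specular_true = rng.uniform(0.0, 0.3, size=(4, 4, 3))
cross = diffuse_true                      # specular is blocked by the crossed polarizer
parallel = diffuse_true + specular_true   # specular passes when not cross-polarized

diffuse, specular = separate_diffuse_specular(cross, parallel)
print(np.allclose(specular, specular_true))  # True for this synthetic example
```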
  • Patent number: 11170550
    Abstract: A retargeting engine automatically performs a retargeting operation. The retargeting engine generates an anatomical local model of a digital character based on performance capture data and/or a 3D model of the digital character. The anatomical local model includes an anatomical model corresponding to internal features of the digital character and a local model corresponding to external features of the digital character. The retargeting engine includes a Machine Learning model that maps a set of locations associated with the face of a performer to a corresponding set of locations associated with the face of the digital character. The retargeting engine includes a solver that modifies a set of parameters associated with the anatomical local model to cause the digital character to exhibit one or more facial expressions enacted by the performer, thereby retargeting those facial expressions onto the digital character.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: November 9, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Derek Edward Bradley, Dominik Thabo Beeler
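A toy NumPy sketch of the two-stage retargeting this abstract outlines: a learned mapping takes performer face locations to character face locations, then a solver finds model parameters that reproduce them. A random linear map and a linear blendshape-style model stand in for the machine learning model and the anatomical local model, which the abstract does not specify.

```python
# Two-stage retargeting sketch: learned point mapping, then a parameter solve.
import numpy as np

rng = np.random.default_rng(2)
num_points, num_params = 20, 8

# "Trained" mapping from performer landmarks to character landmarks (placeholder weights).
mapping = rng.normal(scale=0.1, size=(num_points * 3, num_points * 3))

# Character face model: landmarks = rest + basis @ parameters (placeholder basis).
rest = rng.normal(size=num_points * 3)
basis = rng.normal(size=(num_points * 3, num_params))

def retarget(performer_landmarks):
    # Step 1: map performer locations to target character locations.
    target = mapping @ performer_landmarks.ravel()
    # Step 2: solve for model parameters whose landmarks best match the target.
    params, *_ = np.linalg.lstsq(basis, target - rest, rcond=None)
    return params, (rest + basis @ params).reshape(num_points, 3)

params, character_landmarks = retarget(rng.normal(size=(num_points, 3)))
print(params.shape, character_landmarks.shape)  # (8,) (20, 3)
```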
  • Patent number: 11157350
    Abstract: There is disclosed in an example an interconnect apparatus having: a root circuit; and a downstream circuit comprising at least one receiver; wherein the root circuit is operable to provide a margin test directive to the downstream circuit during a normal operating state; and the downstream circuit is operable to perform a margin test and provide a result report of the margin test to the root circuit. This may be performed in-band, for example in the L0 state. There is also disclosed a system comprising such an interconnect, and a method of performing margin testing.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Daniel S. Froelich, Debendra Das Sharma, Fulvio Spagna, Per E. Fornberg, David Edward Bradley
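An illustrative Python model of the exchange this abstract describes: the root circuit issues a margin test directive to a downstream receiver while the link remains in its normal operating state, and the receiver runs the test and returns a result report. This is a toy simulation of the message flow, not the actual in-band register protocol.

```python
# Toy simulation of a root-issued margin test directive and its result report.
import random
from dataclasses import dataclass

@dataclass
class MarginDirective:
    lane: int
    timing_steps: int      # how far to offset the sampling point

@dataclass
class MarginReport:
    lane: int
    steps_passed: int      # largest offset with error-free reception

class DownstreamReceiver:
    def handle_directive(self, directive: MarginDirective) -> MarginReport:
        steps_passed = 0
        for step in range(1, directive.timing_steps + 1):
            # Pretend to sample with the offset applied; errors grow with the offset.
            error_probability = step / (directive.timing_steps + 2)
            if random.random() < error_probability:
                break
            steps_passed = step
        return MarginReport(lane=directive.lane, steps_passed=steps_passed)

class RootCircuit:
    def __init__(self, receiver: DownstreamReceiver):
        self.receiver = receiver

    def run_margin_test(self, lane: int) -> MarginReport:
        # Issued while the link stays in its normal (e.g. L0) operating state.
        return self.receiver.handle_directive(MarginDirective(lane=lane, timing_steps=16))

random.seed(0)
report = RootCircuit(DownstreamReceiver()).run_margin_test(lane=0)
print(f"lane {report.lane}: margin of {report.steps_passed} timing steps")
```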
  • Patent number: 11151767
    Abstract: A removal model is trained to predict secondary dynamics associated with an individual enacting a performance. For a given sequence of frames that includes an individual enacting a performance and secondary dynamics, a retargeting application identifies a set of rigid points that correspond to skeletal regions of the individual and a set of non-rigid points that correspond to non-skeletal regions of the individual. For each frame in the sequence of frames, the application applies the removal model that takes as inputs a velocity history of a non-rigid point and a velocity history of the rigid points in a temporal window around the frame, and outputs a delta vector for the non-rigid point indicating a displacement for reducing secondary dynamics in the frame. In addition, a trained synthesis model can be applied to determine a delta vector for every non-rigid point indicating displacements for adding new secondary dynamics.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: October 19, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Gaspard Zoss, Eftychios Sifakis, Dominik Thabo Beeler, Derek Edward Bradley
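A NumPy sketch of applying the removal model this abstract describes: for each frame, gather the velocity history of one non-rigid point and of the rigid points in a temporal window, feed the concatenated history to the model, and apply the predicted delta. A random linear layer stands in for the trained removal model; window size and point counts are assumptions.

```python
# Applying a (placeholder) removal model to reduce secondary dynamics per frame.
import numpy as np

rng = np.random.default_rng(3)
frames, num_rigid, window = 60, 4, 3

rigid = rng.normal(size=(frames, num_rigid, 3)).cumsum(axis=0)   # skeletal points
non_rigid = rng.normal(size=(frames, 3)).cumsum(axis=0)          # one soft-tissue point

def velocities(points):
    return np.diff(points, axis=0, prepend=points[:1])

feature_dim = (window * 2 + 1) * 3 * (1 + num_rigid)
removal_model = rng.normal(scale=0.01, size=(3, feature_dim))    # placeholder trained weights

v_non_rigid, v_rigid = velocities(non_rigid), velocities(rigid)
stabilized = non_rigid.copy()
for t in range(window, frames - window):
    win = slice(t - window, t + window + 1)
    feature = np.concatenate([v_non_rigid[win].ravel(), v_rigid[win].ravel()])
    delta = removal_model @ feature        # displacement that reduces secondary dynamics
    stabilized[t] = non_rigid[t] + delta

print(stabilized.shape)  # (60, 3): the point's trajectory with predicted deltas applied
```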
  • Publication number: 20210279938
    Abstract: Techniques are disclosed for generating digital faces. In some examples, a style-based generator receives as inputs initial tensor(s) and style vector(s) corresponding to user-selected semantic attribute styles, such as the desired expression, gender, age, identity, and/or ethnicity of a digital face. The style-based generator is trained to process such inputs and output low-resolution appearance map(s) for the digital face, such as a texture map, a normal map, and/or a specular roughness map. The low-resolution appearance map(s) are further processed using a super-resolution generator that is trained to take the low-resolution appearance map(s) and low-resolution 3D geometry of the digital face as inputs and output high-resolution appearance map(s) that align with high-resolution 3D geometry of the digital face. Such high-resolution appearance map(s) and high-resolution 3D geometry can then be used to render standalone images or the frames of a video that include the digital face.
    Type: Application
    Filed: June 8, 2020
    Publication date: September 9, 2021
    Inventors: Prashanth CHANDRAN, Dominik Thabo BEELER, Derek Edward BRADLEY
  • Publication number: 20210279956
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Application
    Filed: March 4, 2020
    Publication date: September 9, 2021
    Inventors: Prashanth CHANDRAN, Dominik Thabo BEELER, Derek Edward BRADLEY
  • Publication number: 20210182039
    Abstract: A data processing apparatus adapted to output recommendation information for modifying source code, includes: compiler circuitry to compile the source code and to output compiled code for the source code, processing circuitry to execute the compiled code, profile circuitry to monitor the execution of the compiled code by the processing circuitry and to generate profile information for the execution of the compiled code, the profile information including one or more statistical properties for the execution of the compiled code, and recommendation circuitry to output the recommendation information for the source code, the recommendation circuitry including a machine learning model to receive at least a portion of the profile information and trained to output the recommendation information for the source code in dependence upon one or more of the statistical properties, in which the recommendation information is indicative of one or more editing instructions for modifying the source code.
    Type: Application
    Filed: December 8, 2020
    Publication date: June 17, 2021
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Gregory James Bedwell, Daryl Cooper, Timothy Edward Bradley, Guy Moss
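A toy end-to-end Python version of the pipeline this abstract describes: run code under a profiler, reduce the profile to statistical properties, and feed those to a model that emits editing recommendations. A hand-written scoring rule stands in for the trained machine learning model, and profiling Python stands in for the compile-and-execute stage.

```python
# Profile-guided recommendation pipeline (toy version).
import cProfile
import io
import pstats

def source_under_test():
    total = 0
    for i in range(200_000):        # hot loop the recommendation should flag
        total += i * i
    return total

# 1. "Compile and execute" the code while the profiler monitors it.
profiler = cProfile.Profile()
profiler.enable()
source_under_test()
profiler.disable()

# 2. Reduce the profile to statistical properties (here: cumulative time per function).
stats = pstats.Stats(profiler, stream=io.StringIO())
profile_info = {func[2]: cumtime for func, (_, _, _, cumtime, _) in stats.stats.items()}

# 3. Placeholder "model": map statistical properties to editing recommendations.
def recommend(profile_info):
    recommendations = []
    for name, cumtime in profile_info.items():
        if cumtime > 0.01:
            recommendations.append(f"consider caching or restructuring work in '{name}'")
    return recommendations

print(recommend(profile_info))
```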
  • Publication number: 20210166458
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Application
    Filed: December 3, 2019
    Publication date: June 3, 2021
    Inventors: Dominik Thabo BEELER, Derek Edward BRADLEY, Eftychios Dimitrios SIFAKIS, Gaspard ZOSS
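A NumPy sketch of combining per-expression models into a single prediction model, as this abstract describes: each expression-specific model predicts soft-tissue displacement from external forces, and the combined model blends them by the current expression weights. Random linear models are placeholders for the trained models.

```python
# Blending per-expression secondary-dynamics models into one prediction model.
import numpy as np

rng = np.random.default_rng(4)
num_expressions, num_soft_vertices = 3, 50

# One placeholder model per facial expression: forces (3,) -> displacements (V * 3,).
expression_models = [rng.normal(scale=0.02, size=(num_soft_vertices * 3, 3))
                     for _ in range(num_expressions)]

def predict_secondary_dynamics(expression_weights, external_forces):
    blended = sum(w * m for w, m in zip(expression_weights, expression_models))
    return (blended @ external_forces).reshape(num_soft_vertices, 3)

weights = np.array([0.7, 0.2, 0.1])          # current expression mix
forces = np.array([0.0, -9.81, 0.0])         # e.g. acceleration acting on the performer
displacement = predict_secondary_dynamics(weights, forces)

captured_geometry = rng.normal(size=(num_soft_vertices, 3))
stabilized = captured_geometry - displacement   # remove predicted secondary dynamics
augmented = captured_geometry + displacement    # or add them to a clean performance
print(stabilized.shape, augmented.shape)
```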
  • Publication number: 20210158590
    Abstract: A retargeting engine automatically performs a retargeting operation. The retargeting engine generates an anatomical local model of a digital character based on performance capture data and/or a 3D model of the digital character. The anatomical local model includes an anatomical model corresponding to internal features of the digital character and a local model corresponding to external features of the digital character. The retargeting engine includes a Machine Learning model that maps a set of locations associated with the face of a performer to a corresponding set of locations associated with the face of the digital character. The retargeting engine includes a solver that modifies a set of parameters associated with the anatomical local model to cause the digital character to exhibit one or more facial expressions enacted by the performer, thereby retargeting those facial expressions onto the digital character.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Inventors: Derek Edward BRADLEY, Dominik Thabo BEELER
  • Publication number: 20210035354
    Abstract: A system for characterising surfaces in a real-world scene, the system comprising an object identification unit operable to identify one or more objects within one or more captured images of the real-world scene, a characteristic identification unit operable to identify one or more characteristics of one or more surfaces of the identified objects, and an information generation unit operable to generate information linking an object and one or more surface characteristics associated with that object.
    Type: Application
    Filed: July 27, 2020
    Publication date: February 4, 2021
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Nigel John Williams, Fabio Cappello, Timothy Edward Bradley, Rajeev Gupta
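A toy Python version of the three-unit pipeline this abstract outlines: identify objects in a captured image, identify characteristics of their surfaces, and generate information linking each object to those characteristics. The detector and the characteristic lookup are hard-coded placeholders for whatever detection and classification the real system uses.

```python
# Three-unit surface-characterisation pipeline (placeholder detector and lookup).
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    bounding_box: tuple  # (x, y, width, height) in image pixels

def object_identification_unit(image) -> list[DetectedObject]:
    # Placeholder: a real implementation would run an object detector on `image`.
    return [DetectedObject("table", (40, 120, 300, 150)),
            DetectedObject("window", (400, 30, 200, 250))]

def characteristic_identification_unit(obj: DetectedObject) -> dict:
    # Placeholder: map object labels to plausible surface characteristics.
    characteristics = {
        "table": {"material": "wood", "reflectance": "matte"},
        "window": {"material": "glass", "reflectance": "specular", "transparent": True},
    }
    return characteristics.get(obj.label, {"material": "unknown"})

def information_generation_unit(objects: list[DetectedObject]) -> list[dict]:
    # Link each identified object with its surface characteristics.
    return [{"object": o.label, "bounding_box": o.bounding_box,
             "surface": characteristic_identification_unit(o)} for o in objects]

scene_info = information_generation_unit(object_identification_unit(image=None))
print(scene_info)
```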
  • Publication number: 20210031110
    Abstract: A system for generating video game inputs is provided. The system comprises an input unit operable to obtain images of a passive non-luminous object being held by a user as a video games controller. The system also comprises an object detector and object pose detector for detecting the object and its respective pose in the obtained images. The pose detector is configured to detect the pose of the object based on at least one of (i) a contour detection operation and (ii) the output of a machine learning model that has been trained to detect the poses of passive non-luminous objects in images. A user input generator is configured to generate user inputs based on the detected changes in pose of the passive non-luminous object and to transmit these to a video game unit at which a video game is being executed. A corresponding method is also provided.
    Type: Application
    Filed: July 15, 2020
    Publication date: February 4, 2021
    Applicant: Sony Interactive Entertainment Inc.
    Inventors: Timothy Edward Bradley, David Erwan Damien Uberti
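A sketch of the contour-detection branch this abstract mentions, using OpenCV: detect a passive, non-luminous object in a camera frame, estimate a simple pose (centre plus in-plane rotation), and turn frame-to-frame pose changes into game inputs. The thresholds, the input mapping, and the OpenCV 4.x return signature are assumptions.

```python
# Contour-based pose of a held passive object, mapped to video game inputs.
import cv2
import numpy as np

def detect_pose(frame_bgr):
    """Return ((cx, cy), angle_degrees) of the largest dark object, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), _, angle = cv2.minAreaRect(largest)
    return (cx, cy), angle

def pose_change_to_input(previous_pose, current_pose):
    """Map pose deltas to user inputs for the video game unit."""
    (px, py), p_angle = previous_pose
    (cx, cy), c_angle = current_pose
    inputs = []
    if cx - px > 5:
        inputs.append("steer_right")
    elif cx - px < -5:
        inputs.append("steer_left")
    if abs(c_angle - p_angle) > 10:
        inputs.append("twist")
    return inputs

# Synthetic frames: a dark rectangle (the held object) moves to the right.
frame1 = np.full((240, 320, 3), 255, np.uint8)
cv2.rectangle(frame1, (50, 100), (110, 140), (0, 0, 0), -1)
frame2 = np.full((240, 320, 3), 255, np.uint8)
cv2.rectangle(frame2, (70, 100), (130, 140), (0, 0, 0), -1)
print(pose_change_to_input(detect_pose(frame1), detect_pose(frame2)))  # ['steer_right']
```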
  • Publication number: 20210012512
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 14, 2021
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
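A NumPy sketch of the mapping this abstract describes: learn a regressor from skin motion (facial skin geometry relative to the rest pose) to jaw motion, then predict a jaw pose for new skin geometry. A linear least-squares model fit to synthetic data stands in for the captured training data and the trained model; the 6-DOF jaw parameterization is an assumption.

```python
# Skin-motion -> jaw-motion mapping via linear least squares (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
num_frames, num_skin_vertices, jaw_dof = 200, 40, 6   # assume a 6-DOF jaw pose

rest_skin = rng.normal(size=(num_skin_vertices, 3))
true_map = rng.normal(scale=0.05, size=(jaw_dof, num_skin_vertices * 3))

# "Captured" training data: skin motion and the corresponding jaw poses.
skin_motion = rng.normal(size=(num_frames, num_skin_vertices * 3))
jaw_poses = skin_motion @ true_map.T + rng.normal(scale=0.01, size=(num_frames, jaw_dof))

# Train: least-squares fit of the skin-motion -> jaw-motion mapping.
learned_map, *_ = np.linalg.lstsq(skin_motion, jaw_poses, rcond=None)

def predict_jaw_pose(skin_geometry):
    motion = (skin_geometry - rest_skin).ravel()   # skin motion from the rest pose
    return motion @ learned_map                    # predicted jaw motion / pose

new_skin = rest_skin + rng.normal(scale=0.1, size=rest_skin.shape)
print(predict_jaw_pose(new_skin).shape)            # (6,)
```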
  • Patent number: 10871382
    Abstract: Various examples are directed to configuring a configurable hardware module to perform a measurement of a physical quantity. A configuration manager may receive an indication of the physical quantity and performance factor data describing the measurement of the physical quantity. The configuration manager may generate a hardware configuration of the hardware module based at least in part on the indication of the physical quantity and the performance factor data. The hardware configuration may comprise instruction data to configure the hardware module to execute a dynamic measurement of the physical quantity. The configuration manager may also generate configuration data describing the hardware configuration, wherein the configuration data comprises simulation data comprising input parameters for a simulation of the hardware configuration and hardware configuration data for configuring a hardware module to implement at least a portion of the hardware configuration.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: December 22, 2020
    Assignee: Analog Devices International Unlimited Company
    Inventors: Colin G. Lyden, Claire Croke, Mack Roger Lund, Alan Clohessy, Meabh Shine, Rosemary B. Ryan, Aine M. Joyce, Aine McCarthy, Mary McCarthy, Thomas M. MacLeod, Jason Cockrell, Michael C. W. Coln, Gustavo Castro, Sean Kowalik, Colm P. Ronan, Michael Edward Bradley, Michael Mueck, Jonathan Ephraim David Hurwitz, Aileen Ritchie
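A toy Python sketch of the configuration flow this abstract describes: given the physical quantity to measure and performance-factor data, a configuration manager produces instruction data for the configurable hardware module along with configuration data that includes simulation input parameters. The field names and selection rules are illustrative assumptions.

```python
# Configuration manager producing instruction, simulation, and hardware-config data.
from dataclasses import dataclass

@dataclass
class HardwareConfiguration:
    instruction_data: dict             # configures the module's dynamic measurement
    simulation_data: dict              # input parameters for simulating the configuration
    hardware_configuration_data: dict  # settings to program into the hardware module

def configuration_manager(physical_quantity: str, performance_factors: dict) -> HardwareConfiguration:
    # Pick a sample rate and averaging depth from the requested bandwidth and noise target.
    sample_rate_hz = max(2.5 * performance_factors.get("bandwidth_hz", 1_000), 1_000)
    averaging = 16 if performance_factors.get("low_noise", False) else 1
    instruction_data = {
        "quantity": physical_quantity,
        "sample_rate_hz": sample_rate_hz,
        "averaging": averaging,
    }
    return HardwareConfiguration(
        instruction_data=instruction_data,
        simulation_data={"stimulus": physical_quantity, "duration_s": 0.1, **instruction_data},
        hardware_configuration_data={"adc_channel": 0, **instruction_data},
    )

config = configuration_manager("temperature", {"bandwidth_hz": 50, "low_noise": True})
print(config.instruction_data)
```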
  • Publication number: 20200364108
    Abstract: There is disclosed in an example an interconnect apparatus having: a root circuit; and a downstream circuit comprising at least one receiver; wherein the root circuit is operable to provide a margin test directive to the downstream circuit during a normal operating state; and the downstream circuit is operable to perform a margin test and provide a result report of the margin test to the root circuit. This may be performed in-band, for example in the L0 state. There is also disclosed a system comprising such an interconnect, and a method of performing margin testing.
    Type: Application
    Filed: May 29, 2020
    Publication date: November 19, 2020
    Applicant: Intel Corporation
    Inventors: Daniel S. Froelich, Debendra Das Sharma, Fulvio Spagna, Per E. Fornberg, David Edward Bradley
  • Patent number: D891693
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: July 28, 2020
    Assignee: CRED HOLDING COMPANY
    Inventors: Edward Bradley Godwin, III, Edward Willis Harmon, IV, Jason Defrancesco