Patents by Inventor Edward Bradley
Edward Bradley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250235793
Abstract: A method of training an autonomous agent is provided, the method comprising: providing videogame data generated by a human playing a videogame as input data to a training network for training an autonomous agent to play the videogame; generating videogame data of the trained autonomous agent playing the videogame; providing the videogame data of the trained autonomous agent to a discriminator of a generative adversarial network (GAN), the discriminator being trained to distinguish videogame data of a human playing the videogame from videogame data of an autonomous agent playing the videogame; generating a classification, by the discriminator, of the videogame data of the trained autonomous agent as human or agent; and updating at least one of the training network and the discriminator based on the classification generated by the discriminator.
Type: Application
Filed: January 2, 2025
Publication date: July 24, 2025
Applicant: Sony Interactive Entertainment Inc.
Inventors: Timothy Edward Bradley, Ayush Raina, Ryan John Spick, Pierluigi Vito Amadori, Guy David Moss
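The adversarial loop the abstract describes can be sketched with a toy discriminator. Everything below is illustrative: the per-session gameplay "feature vectors", the logistic classifier, and the learning rate are stand-ins assumed for this sketch, not the patent's actual networks, and the agent-update step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(feats, w, b):
    """Logistic classifier over per-session gameplay feature vectors:
    output near 1.0 means 'human', near 0.0 means 'agent'."""
    return 1.0 / (1.0 + np.exp(-(feats @ w + b)))

def discriminator_step(w, b, human_feats, agent_feats, lr=0.5):
    """One cross-entropy gradient step: push human data toward 1, agent toward 0."""
    for feats, label in ((human_feats, 1.0), (agent_feats, 0.0)):
        err = discriminator(feats, w, b) - label      # dLoss/dlogit
        w = w - lr * feats.T @ err / len(feats)
        b = b - lr * err.mean()
    return w, b

# Toy stand-ins for features extracted from human vs. agent play sessions.
human = rng.normal(+1.0, 0.3, size=(64, 2))
agent = rng.normal(-1.0, 0.3, size=(64, 2))

w, b = np.zeros(2), 0.0
for _ in range(200):
    w, b = discriminator_step(w, b, human, agent)

# The classification below is what would drive updates to the agent's
# training network in the patent's loop (that update is omitted here).
human_score = discriminator(human, w, b).mean()
agent_score = discriminator(agent, w, b).mean()
```

After training, `human_score` approaches 1 and `agent_score` approaches 0 on these separable toy clusters; an agent trained against such a discriminator would be rewarded for driving its own score upward.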
-
Patent number: 12367649
Abstract: Methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
Type: Grant
Filed: January 27, 2023
Date of Patent: July 22, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley
-
Patent number: 12361663
Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
Type: Grant
Filed: January 27, 2023
Date of Patent: July 15, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Prashanth Chandran, Sebastian Winberg
-
Patent number: 12361634
Abstract: Various embodiments include a system for rendering an object, such as human skin or a human head, from captured appearance data. The system includes a processor executing a near field lighting reconstruction module. The system determines at least one of a three-dimensional (3D) position or a 3D orientation of a lighting unit based on a plurality of captured images of a mirror sphere. For each point light source in a plurality of point light sources included in the lighting unit, the system determines an intensity associated with the point light source. The system captures appearance data of the object, where the object is illuminated by the lighting unit. The system renders an image of the object based on the appearance data and the intensities associated with each point light source in the plurality of point light sources.
Type: Grant
Filed: December 14, 2022
Date of Patent: July 15, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Paulo Fabiano Urnau Gotardo, Derek Edward Bradley, Gaspard Zoss, Jeremy Riviere, Prashanth Chandran, Yingyan Xu
-
Patent number: 12340440
Abstract: A technique for performing style transfer between a content sample and a style sample is disclosed. The technique includes applying one or more neural network layers to a first latent representation of the style sample to generate one or more convolutional kernels. The technique also includes generating convolutional output by convolving a second latent representation of the content sample with the one or more convolutional kernels. The technique further includes applying one or more decoder layers to the convolutional output to produce a style transfer result that comprises one or more content-based attributes of the content sample and one or more style-based attributes of the style sample.
Type: Grant
Filed: April 6, 2021
Date of Patent: June 24, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Prashanth Chandran, Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
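The core operation here — predicting convolutional kernels from the style sample's latent code and convolving them with the content sample's latent — can be approximated in a few lines. The linear kernel predictor `W_pred`, the 1-D latents, and all dimensions below are hypothetical stand-ins for the patent's neural-network layers and feature maps; the decoder stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, KERNEL_SIZE = 8, 3

# Hypothetical kernel-predictor weights: a stand-in for the neural-network
# layers that map a style latent to convolutional kernel weights.
W_pred = rng.standard_normal((LATENT_DIM, KERNEL_SIZE)) * 0.3

def predict_kernel(style_latent):
    """Derive a 1-D convolutional kernel from the style sample's latent code."""
    return style_latent @ W_pred

def stylize(content_latent, style_latent):
    """Convolve the content's latent signal with the style-predicted kernel
    (decoder layers would then map this output back to image space)."""
    return np.convolve(content_latent, predict_kernel(style_latent), mode="same")

content = rng.standard_normal(32)        # 1-D stand-in for a content latent map
style_a = rng.standard_normal(LATENT_DIM)
style_b = rng.standard_normal(LATENT_DIM)

out_a = stylize(content, style_a)
out_b = stylize(content, style_b)
```

Because the kernel is a function of the style code, the same content latent yields different convolutional outputs for different styles, which is the mechanism the abstract describes.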
-
Publication number: 20250177858
Abstract: A data processing apparatus comprises a captioning model to receive gameplay telemetry data indicative of one or more in-game properties for a session of a video game, the captioning model comprising an artificial neural network (ANN) trained to output caption data comprising one or more captions in dependence upon a learned mapping between gameplay telemetry data and caption data, one or more of the captions comprising one or more words for providing a visual description for the session of the video game, and output circuitry to output one or more of the captions.
Type: Application
Filed: November 27, 2024
Publication date: June 5, 2025
Inventors: Ryan John Spick, Timothy Edward Bradley, Guy David Moss, Pierluigi Vito Amadori, Ayush Raina
-
Patent number: 12322039
Abstract: Various embodiments include a system for rendering an object, such as human skin or a human head, from captured appearance data comprising a plurality of texels. The system includes a processor executing a texture space indirect illumination module. The system determines texture coordinates of a vector originating from a first texel where the vector intersects a second texel. The system renders the second texel from the viewpoint of the first texel based on appearance data at the second texel. Based on the rendering of the second texel, the system determines an indirect lighting intensity incident to the first texel from the second texel. The system updates appearance data at the first texel based on a direct lighting intensity and the indirect lighting intensity. The system renders the first texel based on the updated appearance data at the first texel.
Type: Grant
Filed: December 14, 2022
Date of Patent: June 3, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Paulo Fabiano Urnau Gotardo, Derek Edward Bradley, Gaspard Zoss, Jeremy Riviere, Prashanth Chandran, Yingyan Xu
-
Publication number: 20250173911
Abstract: A decoder apparatus comprises receiving circuitry to receive caption data indicative of a language-based description for a first image and encoded data representative of the first image; and decoder circuitry comprising one or more trained machine learning models operable to generate a reconstructed image in dependence on the caption data and the encoded data, the reconstructed image having a higher image quality than an image quality associated with the encoded data representative of the first image.
Type: Application
Filed: November 20, 2024
Publication date: May 29, 2025
Inventors: Pierluigi Vito Amadori, Timothy Edward Bradley, Ayush Raina, Guy David Moss, Ryan John Spick
-
Patent number: 12243349
Abstract: Embodiments of the present invention set forth techniques for performing face reconstruction. The techniques include generating an identity mesh based on an identity encoding that represents an identity associated with a face in one or more images. The techniques also include generating an expression mesh based on an expression encoding that represents an expression associated with the face in the one or more images. The techniques also include generating, by a machine learning model, an output mesh of the face based on the identity mesh and the expression mesh.
Type: Grant
Filed: March 17, 2022
Date of Patent: March 4, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Simone Foti, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
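The identity-mesh/expression-mesh decomposition can be illustrated with a linear blendshape stand-in. The bases, dimensions, and additive combination below are assumptions made for this sketch; in the patent the decoders and the final combination are learned models, not fixed linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VERTS, ID_DIM, EXPR_DIM = 5, 4, 3

# Hypothetical linear bases standing in for learned identity and
# expression decoders.
mean_face = rng.standard_normal((N_VERTS, 3))
id_basis = rng.standard_normal((N_VERTS, 3, ID_DIM)) * 0.1
expr_basis = rng.standard_normal((N_VERTS, 3, EXPR_DIM)) * 0.1

def identity_mesh(id_code):
    """Neutral geometry for the identity represented by id_code."""
    return mean_face + id_basis @ id_code

def expression_mesh(expr_code):
    """Per-vertex offsets for the expression represented by expr_code."""
    return expr_basis @ expr_code

def output_mesh(id_code, expr_code):
    # Stand-in for the learned combination model: simple additive blending.
    return identity_mesh(id_code) + expression_mesh(expr_code)

neutral = output_mesh(rng.standard_normal(ID_DIM), np.zeros(EXPR_DIM))
```

With a zero expression code the output reduces to the identity mesh alone, which is the separation of identity and expression the abstract relies on.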
-
Patent number: 12243140
Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
Type: Grant
Filed: November 15, 2021
Date of Patent: March 4, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12236517
Abstract: Techniques are disclosed for generating photorealistic images of objects, such as heads, from multiple viewpoints. In some embodiments, a morphable radiance field (MoRF) model that generates images of heads includes an identity model that maps an identifier (ID) code associated with a head into two codes: a deformation ID code encoding a geometric deformation from a canonical head geometry, and a canonical ID code encoding a canonical appearance within a shape-normalized space. The MoRF model also includes a deformation field model that maps a world space position to a shape-normalized space position based on the deformation ID code. Further, the MoRF model includes a canonical neural radiance field (NeRF) model that includes a density multi-layer perceptron (MLP) branch, a diffuse MLP branch, and a specular MLP branch that output densities, diffuse colors, and specular colors, respectively. The MoRF model can be used to render images of heads from various viewpoints.
Type: Grant
Filed: November 8, 2022
Date of Patent: February 25, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Daoye Wang, Gaspard Zoss
-
Patent number: 12212078
Abstract: Multi-band phased array antennas include a backplane, a vertical array of low-band radiating elements that form a first antenna beam, first and second vertical arrays of high-band radiating elements that form respective second and third antenna beams, and a vertical array of RF lenses. The first, second and third antenna beams point in different directions. A respective high-band radiating element from each of the first and second vertical arrays is positioned between the backplane and each RF lens, and at least some of the low-band radiating elements are positioned between the RF lenses.
Type: Grant
Filed: February 6, 2024
Date of Patent: January 28, 2025
Assignee: Outdoor Wireless Networks LLC
Inventors: Scott Michaelis, Igor Timofeev, Edward Bradley
-
Patent number: 12205213
Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
Type: Grant
Filed: November 15, 2021
Date of Patent: January 21, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12198225
Abstract: A technique for synthesizing a shape includes generating a first plurality of offset tokens based on a first shape code and a first plurality of position tokens, wherein the first shape code represents a variation of a canonical shape, and wherein the first plurality of position tokens represent a first plurality of positions on the canonical shape. The technique also includes generating a first plurality of offsets associated with the first plurality of positions on the canonical shape based on the first plurality of offset tokens. The technique further includes generating the shape based on the first plurality of offsets and the first plurality of positions.
Type: Grant
Filed: February 18, 2022
Date of Patent: January 14, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
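The position-token/offset pipeline can be sketched end to end: encode canonical positions as tokens, condition on a shape code, predict per-position offsets, and add them back to the positions. The sinusoidal encoding and the single linear decoder `W_dec` are assumptions for this sketch; the patent's token-to-offset mapping is a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FREQ, CODE_DIM = 4, 6

def position_tokens(positions):
    """One sinusoidal token per sampled position on the canonical shape."""
    freqs = 2.0 ** np.arange(N_FREQ)            # (F,)
    ang = positions[:, :, None] * freqs         # (N, 3, F)
    enc = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return enc.reshape(len(positions), -1)      # (N, 3 * 2F)

# Hypothetical decoder weights: a stand-in for the network that turns
# (position token, shape code) pairs into offset tokens and then offsets.
TOKEN_DIM = 3 * 2 * N_FREQ
W_dec = rng.standard_normal((TOKEN_DIM + CODE_DIM, 3)) * 0.05

def synthesize_shape(positions, shape_code):
    """Final shape = canonical positions + per-position predicted offsets."""
    tok = position_tokens(positions)
    code = np.broadcast_to(shape_code, (len(tok), CODE_DIM))
    offsets = np.concatenate([tok, code], axis=1) @ W_dec
    return positions + offsets

canonical = rng.standard_normal((10, 3))
shape = synthesize_shape(canonical, rng.standard_normal(CODE_DIM))
```

Different shape codes perturb the same canonical positions differently, which is how one canonical shape yields a family of variations.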
-
Patent number: 12118734
Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
Type: Grant
Filed: June 28, 2022
Date of Patent: October 15, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
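The skin-motion-to-jaw-motion mapping can be illustrated with the simplest possible model: a least-squares linear map fitted on tracked training pairs. The feature dimensions and the synthetic data below are assumptions for this sketch; the patent covers learned mappings generally, not specifically a linear one.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_skin_to_jaw(skin_motion, jaw_motion):
    """Least-squares linear map from skin-motion features to jaw motion,
    fitted on tracked (skin geometry, jaw pose) training pairs."""
    M, *_ = np.linalg.lstsq(skin_motion, jaw_motion, rcond=None)
    return M

# Synthetic training data: jaw motion generated by a known ground-truth map.
M_true = rng.standard_normal((12, 6))    # 12 skin features -> 6-DoF jaw motion
skin = rng.standard_normal((200, 12))    # per-frame skin motion from rest pose
jaw = skin @ M_true

M_fit = fit_skin_to_jaw(skin, jaw)
predicted_jaw = skin @ M_fit             # predicted jaw motion per frame
```

On this noiseless synthetic data the fitted map recovers the ground-truth map, so predicted jaw motion matches the true jaw motion frame for frame.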
-
Patent number: 12111880
Abstract: Various embodiments set forth systems and techniques for changing a face within an image. The techniques include receiving a first image including a face associated with a first facial identity; generating, via a machine learning model, at least a first texture map and a first position map based on the first image; and rendering a second image including a face associated with a second facial identity based on the first texture map and the first position map, wherein the second facial identity is different from the first facial identity.
Type: Grant
Filed: September 24, 2021
Date of Patent: October 8, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Jacek Krzysztof Naruniec, Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Leonhard Markus Helminger, Christopher Andreas Otto, Christopher Richard Schroers, Romann Matthew Weber
-
Publication number: 20240333263
Abstract: Methods and apparatus are disclosed to improve flip-flop toggle efficiency.
Type: Application
Filed: March 28, 2023
Publication date: October 3, 2024
Inventors: Chinmay Pradeep Joshi, Dinesh Somasekhar, David Edward Bradley, Radhika Kudva
-
Patent number: 12086927
Abstract: One embodiment of the present invention sets forth a technique for performing appearance capture. The technique includes receiving a first sequence of images of an object, wherein the first sequence of images includes a first set of images interleaved with a second set of images, and wherein the first set of images is captured based on illumination of the object using a first lighting pattern and the second set of images is captured based on illumination of the object using one or more lighting patterns that are different from the first lighting pattern. The technique also includes generating a first set of appearance parameters associated with the object based on a first inverse rendering associated with the first sequence of images.
Type: Grant
Filed: October 20, 2021
Date of Patent: September 10, 2024
Assignee: Disney Enterprises, Inc.
Inventors: Paulo Fabiano Urnau Gotardo, Derek Edward Bradley, Jérémy Riviere
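The interleaving the abstract describes — alternating frames lit by different patterns within one capture sequence — amounts to deinterleaving the stream before inverse rendering. A minimal sketch, assuming a strict even/odd alternation between two lighting sets (the patent allows more general interleavings, and the inverse-rendering stage is omitted):

```python
import numpy as np

def split_lighting_sets(frames):
    """Separate a captured sequence into frames lit by the first lighting
    pattern (even indices) and frames lit by the other patterns (odd indices)."""
    return frames[0::2], frames[1::2]

# Toy sequence: 8 "frames", each a 2x2 image tagged with its frame index.
frames = np.stack([np.full((2, 2), i, dtype=float) for i in range(8)])
set_a, set_b = split_lighting_sets(frames)
```

Each resulting set is a temporally aligned sub-sequence under one lighting condition, which is what per-pattern inverse rendering of appearance parameters would consume.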
-
Publication number: 20240284011
Abstract: A data processing apparatus for determining description data for describing content includes: a video captioning model to receive an input comprising at least video images associated with the content, wherein the video captioning model is trained to detect one or more predetermined motions of one or more animated objects in the video images and determine one or more captions in dependence on one or more of the predetermined motions, one or more of the captions comprising respective caption data comprising one or more words for describing one or more of the predetermined motions, the respective caption data comprising one or more of audio data, text data and image data; and output circuitry to output description data in dependence on one or more of the captions.
Type: Application
Filed: February 13, 2024
Publication date: August 22, 2024
Applicant: Sony Interactive Entertainment Inc.
Inventors: Ryan Spick, Timothy Edward Bradley, Guy David Moss, Ayush Raina, Pierluigi Amadori
-
Patent number: 12056807
Abstract: An image rendering method for rendering a pixel at a viewpoint includes, for a first element of a virtual scene having a predetermined surface at a position within that scene: providing the position and a direction based on the viewpoint to a machine learning system previously trained to predict a factor that, when combined with a distribution function that characterises the interaction of light with the predetermined surface, generates a pixel value corresponding to the first element of the virtual scene as illuminated at the position; combining the predicted factor from the machine learning system with the distribution function to generate the pixel value corresponding to the illuminated first element of the virtual scene at the position; and incorporating the pixel value into a rendered image for display, where the machine learning system was previously trained with a training set based on images comprising multiple lighting conditions.
Type: Grant
Filed: March 18, 2022
Date of Patent: August 6, 2024
Assignee: Sony Interactive Entertainment Inc.
Inventors: Fabio Cappello, Matthew Sanders, Marina Villanueva Barreiro, Timothy Edward Bradley, Andrew James Bigos
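The "predicted factor combined with a distribution function" structure can be sketched with a Lambertian surface. The diffuse BRDF, the hard-coded normal and light direction, and the factor value 2.0 below are all assumptions for this sketch; in the patent the factor comes from the trained machine learning system and the distribution function characterises whatever surface is being rendered.

```python
import numpy as np

def diffuse_brdf(normal, light_dir, albedo):
    """Lambertian distribution function standing in for the surface's
    light-interaction model."""
    return albedo * max(0.0, float(normal @ light_dir)) / np.pi

def shade_pixel(predicted_factor, normal, light_dir, albedo):
    """Pixel value = network-predicted factor combined (here: multiplied)
    with the analytic distribution function."""
    return predicted_factor * diffuse_brdf(normal, light_dir, albedo)

n = np.array([0.0, 0.0, 1.0])              # surface normal at the queried position
l = np.array([0.0, 0.0, 1.0])              # light direction
pixel = shade_pixel(2.0, n, l, albedo=0.5) # factor 2.0 stands in for the network output
```

Splitting shading this way lets the network learn only the lighting-dependent factor while the analytic distribution function carries the known surface behaviour.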