Patents by Inventor Derek Edwards
Derek Edwards has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250252698
Abstract: Techniques are disclosed for re-aging images of faces and three-dimensional (3D) geometry representing faces. In some embodiments, an image of a face, an input age, and a target age are input into a re-aging model, which outputs a re-aging delta image that can be combined with the input image to generate a re-aged image of the face. In some embodiments, 3D geometry representing a face is re-aged using local 3D re-aging models that each include a blendshape model for finding a linear combination of sample patches from geometries of different facial identities and generating a new shape for the patch at a target age based on the linear combination. In some embodiments, 3D geometry representing a face is re-aged by performing a shape-from-shading technique using re-aged images of the face captured from different viewpoints, which can optionally be constrained to linear combinations of sample patches from local blendshape models.
Type: Application
Filed: April 22, 2025
Publication date: August 7, 2025
Inventors: Gaspard ZOSS, Derek Edward BRADLEY, Prashanth CHANDRAN, Paulo Fabiano URNAU GOTARDO, Eftychios Dimitrios SIFAKIS
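The delta-image formulation in this abstract reduces to adding a predicted per-pixel delta to the input and clipping back to the valid range. The sketch below illustrates only that combination step; `fake_reaging_model` is a hypothetical stand-in for the re-aging network, which the abstract does not specify.

```python
import numpy as np

def apply_reaging_delta(image, delta):
    """Combine an input face image with a predicted re-aging delta image.

    Per the abstract, the model outputs a delta that is combined with the
    input image; here "combined" is assumed to mean pixelwise addition,
    clipped to the valid [0, 1] range.
    """
    return np.clip(image.astype(np.float32) + delta, 0.0, 1.0)

def fake_reaging_model(image, input_age, target_age):
    """Hypothetical placeholder for the re-aging network: darkens the
    image proportionally to the age gap, just to produce a delta."""
    return -0.001 * (target_age - input_age) * np.ones_like(image)

img = np.full((4, 4, 3), 0.5, dtype=np.float32)
delta = fake_reaging_model(img, input_age=30, target_age=60)
aged = apply_reaging_delta(img, delta)
print(aged[0, 0, 0])  # 0.47
```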
-
Publication number: 20250252533
Abstract: The computational requirements of an encoder of an autoencoder can be reduced by pre-processing the images using a discrete wavelet transform (DWT). In one embodiment, the encoder uses a multi-level DWT to extract multiscale information from the input images. If using a learned encoder, performing the multi-level DWT enables the encoder to have less complex feature extraction and aggregation networks (e.g., convolutional neural networks (CNNs)) than a standard encoder for an autoencoder. This means the variational autoencoder (VAE) can execute faster, use less computational resources (such as GPU memory), and use less power than traditional VAEs. If using a non-learned encoder, the result of the multi-level DWT can be used as the latent code without using feature extraction and aggregation networks.
Type: Application
Filed: January 31, 2025
Publication date: August 7, 2025
Inventors: Seyedmorteza SADAT, Jakob Joachim BUHMANN, Romann Matthew WEBER, Derek Edward BRADLEY
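A multi-level 2D DWT of the kind the abstract describes can be sketched with the Haar wavelet: each level splits the image into an approximation (LL) band and three detail bands, and the transform recurses on LL. This is a minimal illustration of the multiscale decomposition only, not the patent's encoder; a real system would likely use a library such as PyWavelets.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2D Haar DWT (side lengths must be even).

    Returns the approximation band (LL) and the three detail bands
    (LH, HL, HH), each half the resolution of the input.
    """
    a = (x[0::2] + x[1::2]) / 2.0  # row averages
    d = (x[0::2] - x[1::2]) / 2.0  # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def multilevel_dwt(x, levels):
    """Recursively transform the LL band, as in a multi-level DWT.

    The final LL band plus the per-level detail bands together carry the
    multiscale information the abstract refers to."""
    bands = []
    for _ in range(levels):
        x, details = haar_dwt2(x)
        bands.append(details)
    return x, bands

img = np.arange(64, dtype=np.float64).reshape(8, 8)
ll, bands = multilevel_dwt(img, levels=2)
print(ll.shape)  # (2, 2): an 8x8 image after two DWT levels
```

With a non-learned encoder, as the abstract notes, the concatenated bands themselves could serve as the latent code.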
-
Patent number: 12375380
Abstract: Embodiments herein describe a host that polls a network adapter to receive data from a network. That is, the host/CPU/application thread polls the network adapter (e.g., the network card, NIC, or SmartNIC) to determine whether a packet has been received. If so, the host informs the network adapter to store the packet (or a portion of the packet) in a CPU register. If the requested data has not yet been received by the network adapter from the network, the network adapter can delay responding to the request to provide extra time for the adapter to receive the data from the network.
Type: Grant
Filed: July 13, 2023
Date of Patent: July 29, 2025
Assignee: Xilinx, Inc.
Inventors: David James Riddoch, Derek Edward Roberts, Kieran Mansley, Steven Leslie Pope, Sebastian Turullols
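The delayed-response behavior described above can be modeled in a few lines: when a poll arrives and no packet is queued, the adapter holds the response for a short grace period before reporting empty. This is a toy simulation with a hypothetical `FakeAdapter` class, not the hardware mechanism in the patent, which delivers data into a CPU register rather than a Python object.

```python
import collections
import time

class FakeAdapter:
    """Toy model of a polled network adapter that delays empty responses.

    If no packet has arrived when the host polls, the adapter waits up to
    a grace period before answering, giving the network extra time to
    deliver data before the poll returns empty-handed.
    """
    def __init__(self, grace_seconds=0.01):
        self.rx_queue = collections.deque()
        self.grace_seconds = grace_seconds

    def deliver(self, packet):
        """Called by the simulated network when a packet arrives."""
        self.rx_queue.append(packet)

    def poll(self):
        """Called by the host/CPU thread; returns a packet or None."""
        deadline = time.monotonic() + self.grace_seconds
        while not self.rx_queue:
            if time.monotonic() >= deadline:
                return None  # still nothing after the grace period
        return self.rx_queue.popleft()  # would land in a CPU register

adapter = FakeAdapter()
adapter.deliver(b"hello")
print(adapter.poll())  # b'hello'
print(adapter.poll())  # None (queue drained, grace period expires)
```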
-
Publication number: 20250238992
Abstract: The present invention sets forth techniques for generating a facial animation. The techniques include receiving a latent identity code including a first set of features describing a neutral facial depiction associated with an identity and receiving a latent expression code including a second set of features describing a facial expression associated with the identity. The techniques also include generating, via a first machine learning model, an identity-specific facial representation based on a canonical facial representation and the latent identity code and generating, via a second machine learning model and based on the latent identity code, the latent expression code, and the identity-specific facial representation, a muscle actuation field tensor and one or more bone transformations associated with the deformed canonical facial representation.
Type: Application
Filed: January 21, 2025
Publication date: July 24, 2025
Inventors: Derek Edward BRADLEY, Lingchen YANG, Gaspard ZOSS, Prashanth CHANDRAN, Barbara SOLENTHALER, Eftychios Dimitrios SIFAKIS
-
Patent number: 12367649
Abstract: Methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
Type: Grant
Filed: January 27, 2023
Date of Patent: July 22, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley
-
Patent number: 12361663
Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
Type: Grant
Filed: January 27, 2023
Date of Patent: July 15, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Prashanth Chandran, Sebastian Winberg
-
Patent number: 12361634
Abstract: Various embodiments include a system for rendering an object, such as human skin or a human head, from captured appearance data. The system includes a processor executing a near field lighting reconstruction module. The system determines at least one of a three-dimensional (3D) position or a 3D orientation of a lighting unit based on a plurality of captured images of a mirror sphere. For each point light source in a plurality of point light sources included in the lighting unit, the system determines an intensity associated with the point light source. The system captures appearance data of the object, where the object is illuminated by the lighting unit. The system renders an image of the object based on the appearance data and the intensities associated with each point light source in the plurality of point light sources.
Type: Grant
Filed: December 14, 2022
Date of Patent: July 15, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Paulo Fabiano Urnau Gotardo, Derek Edward Bradley, Gaspard Zoss, Jeremy Riviere, Prashanth Chandran, Yingyan Xu
-
Patent number: 12340440
Abstract: A technique for performing style transfer between a content sample and a style sample is disclosed. The technique includes applying one or more neural network layers to a first latent representation of the style sample to generate one or more convolutional kernels. The technique also includes generating convolutional output by convolving a second latent representation of the content sample with the one or more convolutional kernels. The technique further includes applying one or more decoder layers to the convolutional output to produce a style transfer result that comprises one or more content-based attributes of the content sample and one or more style-based attributes of the style sample.
Type: Grant
Filed: April 6, 2021
Date of Patent: June 24, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Prashanth Chandran, Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
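The core idea above is that the kernels applied to the content representation are themselves predicted from the style representation. The sketch below shows that data flow with NumPy; `predict_kernels` is a hypothetical stand-in for the patent's kernel-generating layers (here just a seeded random projection), and the "conv" is the sliding-window cross-correlation used by most deep-learning frameworks.

```python
import numpy as np

def predict_kernels(style_latent, n_kernels=2, ksize=3):
    """Hypothetical stand-in for the layers that map a style latent to
    convolutional kernels: deterministic in the style code, normalized
    so each kernel's absolute weights sum to one."""
    rng = np.random.default_rng(int(style_latent.sum() * 1000) % (2**32))
    k = rng.standard_normal((n_kernels, ksize, ksize))
    return k / np.abs(k).sum(axis=(1, 2), keepdims=True)

def conv2d_same(x, kernel):
    """'Same'-padded 2D cross-correlation (the usual deep-learning
    'convolution'), written out explicitly with no ML framework."""
    p = kernel.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + 2 * p + 1, j:j + 2 * p + 1] * kernel).sum()
    return out

style_latent = np.array([0.3, 0.7])
content_latent = np.ones((5, 5))  # toy content representation
kernels = predict_kernels(style_latent)
stylized = np.stack([conv2d_same(content_latent, k) for k in kernels])
print(stylized.shape)  # (2, 5, 5)
```

A decoder, as in the abstract, would then map `stylized` back to image space.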
-
Patent number: 12322039
Abstract: Various embodiments include a system for rendering an object, such as human skin or a human head, from captured appearance data comprising a plurality of texels. The system includes a processor executing a texture space indirect illumination module. The system determines texture coordinates of a vector originating from a first texel where the vector intersects a second texel. The system renders the second texel from the viewpoint of the first texel based on appearance data at the second texel. Based on the rendering of the second texel, the system determines an indirect lighting intensity incident to the first texel from the second texel. The system updates appearance data at the first texel based on a direct lighting intensity and the indirect lighting intensity. The system renders the first texel based on the updated appearance data at the first texel.
Type: Grant
Filed: December 14, 2022
Date of Patent: June 3, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Paulo Fabiano Urnau Gotardo, Derek Edward Bradley, Gaspard Zoss, Jeremy Riviere, Prashanth Chandran, Yingyan Xu
-
Publication number: 20250166273
Abstract: The present invention sets forth techniques for generating an animation sequence. The techniques include receiving one or more three-dimensional (3D) input meshes, wherein each input mesh includes a representation of an object included in a 3D scene. The techniques also include receiving, for each of the 3D input meshes, a virtual camera position associated with the 3D input mesh and one or more virtual lighting positions associated with the 3D input mesh. The techniques further include generating, for each of the 3D input meshes and via a trained machine learning model, one or more rendered frames associated with the 3D input mesh, wherein each rendered frame includes a two-dimensional (2D) representation of the object as viewed from the virtual camera position and illuminated by one or more virtual lights located at the one or more virtual lighting positions, and generating an output animation sequence based on the rendered frames.
Type: Application
Filed: November 18, 2024
Publication date: May 22, 2025
Inventors: Derek Edward BRADLEY, Prashanth CHANDRAN, Sebastian Klaus WEISS, Yingyan XU, Gaspard ZOSS
-
Publication number: 20250141896
Abstract: Disclosed are systems and methods for identifying threat events in an enterprise network and managing detection rules and responses to the events. A threat intelligence computer system can receive information about a detected threat event including a phase of attack and a detected domain of the threat event, apply at least one tag to the detected event that associates the event with at least one of the rules triggered in response to detecting the event, evaluate the tagged rules against the information, flag the event as having an improvement opportunity, determine whether the rule tagged to the event is a candidate for improvement, generate, based on the determination, instructions for improving the rule, generate a prioritization scheme indicating an order to address the instructions to improve the rule amongst instructions to improve various threat detection rules, and generate and return output indicating the prioritization scheme for presentation at user devices.
Type: Application
Filed: October 27, 2023
Publication date: May 1, 2025
Inventors: Derek Edward Thomas, Kelsey Helms
-
Publication number: 20250118102
Abstract: One embodiment of the present invention sets forth a technique for performing landmark detection. The technique includes generating, via execution of a first machine learning model, a first set of displacements associated with a first set of query points on a canonical shape based on a first annotation style associated with the first set of query points. The technique also includes determining, via execution of a second machine learning model, a first set of landmarks on a first face depicted in a first image based on the first set of displacements. The technique further includes training the first machine learning model based on one or more losses associated with the first set of landmarks to generate a first trained machine learning model.
Type: Application
Filed: October 4, 2024
Publication date: April 10, 2025
Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY
-
Publication number: 20250118103
Abstract: One embodiment of the present invention sets forth a technique for performing landmark detection. The technique includes applying, via execution of a first machine learning model, a first transformation to a first image depicting a first face to generate a second image. The technique also includes determining, via execution of a second machine learning model, a first set of landmarks on the first face based on the second image. The technique further includes training the first machine learning model based on one or more losses associated with the first set of landmarks to generate a first trained machine learning model.
Type: Application
Filed: October 4, 2024
Publication date: April 10, 2025
Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY
-
Publication number: 20250118027
Abstract: The present invention sets forth a technique for performing face micro detail recovery. The technique includes generating one or more skin texture displacement maps based on images of one or more skin surfaces. The technique also includes transferring, via one or more machine learning models, stylistic elements included in the one or more skin texture displacement maps onto one or more regions included in a modified three-dimensional (3D) facial reconstruction. The technique further includes generating a final 3D facial reconstruction that includes structural elements included in the 3D facial reconstruction and the stylistic elements included in the one or more skin texture displacement maps.
Type: Application
Filed: October 4, 2024
Publication date: April 10, 2025
Inventors: Derek Edward BRADLEY, Sebastian Klaus WEISS, Prashanth CHANDRAN, Gaspard ZOSS, Jackson Reed STANHOPE
-
Publication number: 20250117626
Abstract: A computing device is provided, including a processor and a storage device holding instructions that are executable by the processor to implement a base artificial intelligence (AI) model and two or more delta AI models, each delta AI model having lower dimensionality than the base AI model. An inference request including an input prompt is received, the inference request specifying a selected delta AI model of the two or more delta AI models. The input prompt is input to the base AI model to thereby generate a base model result vector. The input prompt is input to the selected delta AI model to thereby generate a delta model result vector. An output vector is generated by combining the base model result vector and the delta model result vector via a combination operation. The output vector is output.
Type: Application
Filed: October 9, 2023
Publication date: April 10, 2025
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sanjay RAMANUJAN, Ciprian CHISALITA, Pei-Hsuan HSIEH, Derek Edward HYATT, Rakesh KELKAR, Karthik RAMAN
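The base-plus-delta pattern above can be sketched directly: run the prompt through both models and combine the two result vectors. The abstract does not define the delta model's internals or the combination operation, so this sketch assumes a low-rank factorization (similar in spirit to LoRA-style adapters, which matches "lower dimensionality") and vector addition; both are assumptions.

```python
import numpy as np

class LowRankDelta:
    """Hypothetical delta AI model with lower dimensionality than the
    base: a rank-r factorization, so it stores d*r + r*d weights rather
    than the base model's d*d."""
    def __init__(self, d_in, d_out, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((d_in, rank)) * 0.01
        self.B = rng.standard_normal((rank, d_out)) * 0.01

    def __call__(self, x):
        return x @ self.A @ self.B  # delta model result vector

def base_model(x, W):
    """Stand-in for the full base AI model (a single linear map here)."""
    return x @ W  # base model result vector

d = 8
W = np.eye(d)                  # toy base weights
delta = LowRankDelta(d, d, rank=2)
x = np.ones(d)                 # embedding of the "input prompt"
# Combination operation from the abstract, assumed to be addition:
output = base_model(x, W) + delta(x)
print(output.shape)  # (8,)
```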
-
Publication number: 20250118025
Abstract: One embodiment of the present invention sets forth a technique for performing landmark detection. The technique includes determining a first set of parameters associated with a depiction of a first face in a first image. The technique also includes generating, via execution of a first machine learning model, a first set of three-dimensional (3D) landmarks on the first face based on the first set of parameters, and projecting, based on the first set of parameters, the first set of 3D landmarks onto the first image to generate a first set of two-dimensional (2D) landmarks. The technique further includes training the first machine learning model based on one or more losses associated with the first set of 2D landmarks to generate a first trained machine learning model.
Type: Application
Filed: October 4, 2024
Publication date: April 10, 2025
Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY
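The projection step in this abstract, mapping predicted 3D landmarks onto the image to get 2D landmarks, is standard pinhole-camera projection. The sketch below shows that step alone, assuming the "first set of parameters" reduces to a focal length and principal point; the patent's actual parameterization is not specified.

```python
import numpy as np

def project_landmarks(points_3d, focal, cx, cy):
    """Project 3D landmarks to 2D image coordinates with a pinhole
    camera: u = f*x/z + cx, v = f*y/z + cy."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * x / z + cx
    v = focal * y / z + cy
    return np.stack([u, v], axis=1)

landmarks_3d = np.array([[0.0, 0.0, 2.0],     # on the optical axis
                         [0.1, -0.05, 2.0]])  # slightly off-axis
landmarks_2d = project_landmarks(landmarks_3d, focal=500.0, cx=320.0, cy=240.0)
print(landmarks_2d[0])  # [320. 240.]
```

A training loss on `landmarks_2d`, as in the abstract, then back-propagates through this projection into the 3D-landmark model.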
-
Patent number: 12243349
Abstract: Embodiments of the present invention set forth techniques for performing face reconstruction. The techniques include generating an identity mesh based on an identity encoding that represents an identity associated with a face in one or more images. The techniques also include generating an expression mesh based on an expression encoding that represents an expression associated with the face in the one or more images. The techniques further include generating, by a machine learning model, an output mesh of the face based on the identity mesh and the expression mesh.
Type: Grant
Filed: March 17, 2022
Date of Patent: March 4, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Simone Foti, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12243140
Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
Type: Grant
Filed: November 15, 2021
Date of Patent: March 4, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12236517
Abstract: Techniques are disclosed for generating photorealistic images of objects, such as heads, from multiple viewpoints. In some embodiments, a morphable radiance field (MoRF) model that generates images of heads includes an identity model that maps an identifier (ID) code associated with a head into two codes: a deformation ID code encoding a geometric deformation from a canonical head geometry, and a canonical ID code encoding a canonical appearance within a shape-normalized space. The MoRF model also includes a deformation field model that maps a world space position to a shape-normalized space position based on the deformation ID code. Further, the MoRF model includes a canonical neural radiance field (NeRF) model that includes a density multi-layer perceptron (MLP) branch, a diffuse MLP branch, and a specular MLP branch that output densities, diffuse colors, and specular colors, respectively. The MoRF model can be used to render images of heads from various viewpoints.
Type: Grant
Filed: November 8, 2022
Date of Patent: February 25, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Daoye Wang, Gaspard Zoss
-
Publication number: 20250037341
Abstract: The present invention sets forth a technique for performing facial rig generation. The technique includes generating a blendshape model including a plurality of vertices, a plurality of meshes, and a plurality of patches. The technique also includes modifying one or more blendweight values associated with each of the plurality of patches based on a plurality of facial depictions included in a facial database and one or more sample depictions of a target character and generating an output facial rig model based on the blendshape model and the one or more modified blendweight values. The technique further includes generating one or more expressive depictions of the target character based at least on the output facial rig.
Type: Application
Filed: July 26, 2024
Publication date: January 30, 2025
Inventors: Prashanth CHANDRAN, Gaspard ZOSS, Derek Edward BRADLEY, Josefine Estrid KLINTBERG, Paulo Fabiano URNAU GOTARDO
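Evaluating a blendshape model of the kind this abstract builds on is a weighted sum: neutral vertex positions plus blendweight-scaled per-shape vertex offsets. The sketch below shows that evaluation on toy data; the patent's patch-wise modification of blendweights from a facial database is not reproduced here.

```python
import numpy as np

def blendshape_mesh(neutral, deltas, weights):
    """Evaluate a blendshape model: neutral vertices plus a weighted sum
    of per-shape vertex offsets (the blendweight values scale each shape).

    neutral: (V, 3) vertex positions; deltas: (S, V, 3) offsets per
    shape; weights: (S,) blendweights."""
    return neutral + np.tensordot(weights, deltas, axes=1)

neutral = np.zeros((4, 3))                    # 4 toy vertices at the origin
deltas = np.array([np.full((4, 3), 1.0),      # shape 0: move +1 everywhere
                   np.full((4, 3), -1.0)])    # shape 1: move -1 everywhere
weights = np.array([0.25, 0.5])
mesh = blendshape_mesh(neutral, deltas, weights)
print(mesh[0, 0])  # 0.25*1 + 0.5*(-1) = -0.25
```

In a rig of the kind described above, fitting such weights per patch against depictions of the target character would yield the character-specific model.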