Patents Assigned to Eidgenoessische Technische Hochschule Zuerich
-
Patent number: 12243140
Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
Type: Grant
Filed: November 15, 2021
Date of Patent: March 4, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12243349
Abstract: Embodiments of the present invention set forth techniques for performing face reconstruction. The techniques include generating an identity mesh based on an identity encoding that represents an identity associated with a face in one or more images. The techniques also include generating an expression mesh based on an expression encoding that represents an expression associated with the face in the one or more images. The techniques also include generating, by a machine learning model, an output mesh of the face based on the identity mesh and the expression mesh.
Type: Grant
Filed: March 17, 2022
Date of Patent: March 4, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Simone Foti, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12236517
Abstract: Techniques are disclosed for generating photorealistic images of objects, such as heads, from multiple viewpoints. In some embodiments, a morphable radiance field (MoRF) model that generates images of heads includes an identity model that maps an identifier (ID) code associated with a head into two codes: a deformation ID code encoding a geometric deformation from a canonical head geometry, and a canonical ID code encoding a canonical appearance within a shape-normalized space. The MoRF model also includes a deformation field model that maps a world space position to a shape-normalized space position based on the deformation ID code. Further, the MoRF model includes a canonical neural radiance field (NeRF) model that includes a density multi-layer perceptron (MLP) branch, a diffuse MLP branch, and a specular MLP branch that output densities, diffuse colors, and specular colors, respectively. The MoRF model can be used to render images of heads from various viewpoints.
Type: Grant
Filed: November 8, 2022
Date of Patent: February 25, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Daoye Wang, Gaspard Zoss
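The MoRF decomposition described in this abstract can be sketched with plain linear maps standing in for the trained networks: an identity model splits an ID code into a deformation code and a canonical code, and a deformation field moves a world-space point into the shape-normalized space. All dimensions and the linear stand-ins here are illustrative assumptions, not the patent's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical code sizes; the patent does not specify dimensions.
ID_DIM, DEF_DIM, CAN_DIM = 16, 8, 8

# Identity model: one linear map per output code (stand-ins for trained MLPs).
W_def = rng.standard_normal((DEF_DIM, ID_DIM))
W_can = rng.standard_normal((CAN_DIM, ID_DIM))

def identity_model(id_code):
    """Split an ID code into a deformation ID code and a canonical ID code."""
    return W_def @ id_code, W_can @ id_code

W_field = rng.standard_normal((3, 3 + DEF_DIM))

def deformation_field(x_world, def_code):
    """Map a world-space point to a shape-normalized-space point,
    conditioned on the deformation ID code."""
    inp = np.concatenate([x_world, def_code])
    return x_world + 0.01 * (W_field @ inp)  # small learned offset

id_code = rng.standard_normal(ID_DIM)
def_code, can_code = identity_model(id_code)
x_canonical = deformation_field(np.zeros(3), def_code)
```

In the full model, `x_canonical` together with `can_code` would be fed to the canonical NeRF's density, diffuse, and specular MLP branches; those branches are omitted here.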
-
Patent number: 12223577
Abstract: One embodiment of the present invention sets forth a technique for generating actuation values based on a target shape such that the actuation values cause a simulator to output a simulated soft body that matches the target shape. The technique includes inputting a latent code that represents a target shape and a point on a geometric mesh into a first machine learning model. The technique further includes generating, via execution of the first machine learning model, one or more simulator control values that specify a deformation of the geometric mesh, where each of the simulator control values is based on the latent code and corresponds to the input point, and generating, via execution of the simulator, a simulated soft body based on the one or more simulator control values and the geometric mesh. The technique further includes causing the simulated soft body to be outputted to a computing device.
Type: Grant
Filed: January 25, 2023
Date of Patent: February 11, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Gaspard Zoss, Baran Gözcü, Barbara Solenthaler, Lingchen Yang, Byungsoo Kim
-
Patent number: 12205213
Abstract: A technique for rendering an input geometry includes generating a first segmentation mask for a first input geometry and a first set of texture maps associated with one or more portions of the first input geometry. The technique also includes generating, via one or more neural networks, a first set of neural textures for the one or more portions of the first input geometry. The technique further includes rendering a first image corresponding to the first input geometry based on the first segmentation mask, the first set of texture maps, and the first set of neural textures.
Type: Grant
Filed: November 15, 2021
Date of Patent: January 21, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12198225
Abstract: A technique for synthesizing a shape includes generating a first plurality of offset tokens based on a first shape code and a first plurality of position tokens, wherein the first shape code represents a variation of a canonical shape, and wherein the first plurality of position tokens represent a first plurality of positions on the canonical shape. The technique also includes generating a first plurality of offsets associated with the first plurality of positions on the canonical shape based on the first plurality of offset tokens. The technique further includes generating the shape based on the first plurality of offsets and the first plurality of positions.
Type: Grant
Filed: February 18, 2022
Date of Patent: January 14, 2025
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Derek Edward Bradley, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss
-
Patent number: 12180196
Abstract: The invention relates to a compound of formula (I) wherein A1 and R1-R4 are as defined in the description and in the claims. The compound of formula (I) can be used as a medicament.
Type: Grant
Filed: December 17, 2020
Date of Patent: December 31, 2024
Assignees: Hoffmann-La Roche Inc., Eidgenoessische Technische Hochschule Zuerich
Inventors: Simon M. Ametamey, Luca Gobbi, Uwe Grether, Julian Kretz
-
Publication number: 20240430440
Abstract: In some embodiments, a method trains a first parameter of a differentiable proxy codec to encode source content based on a first loss between first compressed source content and second compressed source content that is output by a target codec. A pre-processor pre-processes a source image to output a pre-processed source image, the pre-processing being based on a second parameter. The differentiable proxy codec encodes the pre-processed source image into a compressed pre-processed source image based on the first parameter. The method determines a second loss between the source image and the compressed pre-processed source image and determines an adjustment to the first parameter based on the second loss. The adjustment is used to adjust the second parameter of the pre-processor based on the second loss.
Type: Application
Filed: October 19, 2023
Publication date: December 26, 2024
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Yang Zhang, Mingyang Song, Christopher Richard Schroers, Tunc Ozan Aydin, Yuanyi Xue, Scott Labrozzi
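The two-stage idea in this abstract (fit a differentiable proxy to a non-differentiable target codec, then backpropagate through the proxy to tune the pre-processor) can be illustrated in one dimension. The hard-quantization "codec", the linear proxy, and the single-gain pre-processor are all simplifying assumptions for the sketch, not the published method.

```python
import numpy as np

def target_codec(x, step=0.25):
    """Non-differentiable stand-in for a real codec: hard quantization."""
    return np.round(x / step) * step

def proxy_codec(x, a):
    """Differentiable proxy; `a` is its trainable (first) parameter."""
    return a * x

# Stage 1: train the proxy so its output matches the target codec's output
# (closed-form least-squares fit of the scalar gain `a`).
src = np.linspace(0.0, 1.0, 101)
compressed = target_codec(src)
a = (src * compressed).sum() / (src * src).sum()

# Stage 2: use gradients through the frozen proxy to adjust a pre-processor
# gain `g` (the second parameter) so the decoded result matches the source.
g = 1.5
for _ in range(200):
    out = proxy_codec(g * src, a)
    grad_g = 2 * ((out - src) * a * src).mean()  # d/dg of MSE(out, src)
    g -= 0.5 * grad_g
```

After the loop, `g` converges to `1/a`, i.e. the pre-processor learns to undo the proxy's systematic gain, which is the kind of compensation the pre-processor is trained to provide for the real codec.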
-
Patent number: 12169778
Abstract: A system includes a computing platform having a hardware processor and a memory storing a software code and a neural network (NN) having multiple layers including a last activation layer and a loss layer. The hardware processor executes the software code to identify different combinations of layers for testing the NN, each combination including candidate function(s) for the last activation layer and candidate function(s) for the loss layer. For each different combination, the software code configures the NN based on the combination, inputs, into the configured NN, a training dataset including multiple data objects, receives, from the configured NN, a classification of the data objects, and generates a performance assessment for the combination based on the classification. The software code determines a preferred combination of layers for the NN including selected candidate functions for the last activation layer and the loss layer, based on a comparison of the performance assessments.
Type: Grant
Filed: May 4, 2023
Date of Patent: December 17, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Hayko Jochen Wilhelm Riemenschneider, Leonhard Markus Helminger, Christopher Richard Schroers, Abdelaziz Djelouah
-
Patent number: 12141945
Abstract: Techniques are disclosed for training and applying a denoising model. The denoising model includes multiple specialized denoisers and a generalizer, each of which is a machine learning model. The specialized denoisers are trained to denoise images associated with specific ranges of noise parameters. The generalizer is trained to generate per-pixel denoising kernels for denoising images associated with arbitrary noise parameters using outputs of the specialized denoisers. Subsequent to training, a noisy image, such as a live-action image or a rendered image, can be denoised by inputting the noisy image into the specialized denoisers to obtain intermediate denoised images that are then input, along with the noisy image, into the generalizer to obtain per-pixel denoising kernels, which can be normalized and applied to denoise the noisy image.
Type: Grant
Filed: February 19, 2020
Date of Patent: November 12, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Zhilin Cai, Tunc Ozan Aydin, Marco Manzi, Ahmet Cengiz Oztireli
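The final step this abstract describes (normalizing the per-pixel kernels and applying them to the noisy image) can be sketched as follows. In the patented pipeline the kernels come from the generalizer network; here they are simply passed in as an array, and grayscale images are assumed for brevity.

```python
import numpy as np

def apply_per_pixel_kernels(noisy, kernels):
    """Denoise `noisy` by applying one normalized kernel per pixel.

    noisy:   (H, W) grayscale image
    kernels: (H, W, k, k) one unnormalized k-by-k kernel per pixel
    """
    h, w, k, _ = kernels.shape
    pad = k // 2
    padded = np.pad(noisy, pad, mode="edge")  # replicate borders
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            kern = kernels[y, x]
            kern = kern / kern.sum()           # normalize weights to sum to 1
            patch = padded[y:y + k, x:x + k]   # neighborhood centered on (y, x)
            out[y, x] = (kern * patch).sum()   # weighted average
    return out

# Usage: uniform kernels reduce to plain box filtering, so a constant
# image passes through unchanged.
flat = apply_per_pixel_kernels(np.full((4, 4), 5.0), np.ones((4, 4, 3, 3)))
```

Because each kernel is normalized before use, the filter preserves overall brightness regardless of the scale of the generalizer's raw outputs.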
-
Patent number: 12118734
Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
Type: Grant
Filed: June 28, 2022
Date of Patent: October 15, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
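A minimal sketch of such a skin-to-jaw mapping, assuming synthetic training data and a plain linear least-squares model; the patent's learned model is more general, and the feature and pose dimensions here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: per-frame skin-motion features and jaw-pose
# parameters (in the patent these come from tracked skin geometry and jaw poses).
n_frames, skin_dim, jaw_dim = 200, 12, 6
skin = rng.standard_normal((n_frames, skin_dim))
true_map = rng.standard_normal((skin_dim, jaw_dim))
jaw = skin @ true_map + 0.01 * rng.standard_normal((n_frames, jaw_dim))

# Fit the skin-to-jaw mapping by least squares.
learned_map, *_ = np.linalg.lstsq(skin, jaw, rcond=None)

# Predict jaw motion for new skin configurations.
pred = skin[:5] @ learned_map
```

With enough frames, the least-squares fit recovers the underlying mapping despite the measurement noise, which is the property that makes jaw poses predictable from skin motion alone at capture time.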
-
Patent number: 12120359
Abstract: A system processing hardware executes a machine learning (ML) model-based video compression encoder to receive uncompressed video content and corresponding motion compensated video content, compare the uncompressed and motion compensated video content to identify an image space residual, transform the image space residual to a latent space representation of the uncompressed video content, and transform, using a trained image compression ML model, the motion compensated video content to a latent space representation of the motion compensated video content.
Type: Grant
Filed: March 25, 2022
Date of Patent: October 15, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Abdelaziz Djelouah, Leonhard Markus Helminger, Roberto Gerson De Albuquerque Azevedo, Scott Labrozzi, Christopher Richard Schroers, Yuanyi Xue
-
Publication number: 20240305801
Abstract: In some embodiments, a system includes a first component to extract temporal features from a current frame being coded and a previous frame of a video. A second component uses a first transformer to fuse spatial features from the current frame with the temporal features to generate spatio-temporal features as first output. A third component uses a second transformer to perform entropy coding using the first output and at least a portion of the temporal features to generate a second output. A fourth component uses a third transformer to reconstruct the current frame based on the first output that is processed using the second output and the temporal features.
Type: Application
Filed: July 7, 2023
Publication date: September 12, 2024
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Zhenghao Chen, Roberto Gerson De Albuquerque Azevedo, Christopher Richard Schroers, Yang Zhang, Lucas Relic
-
Publication number: 20240270725
Abstract: The invention provides new reversible monoacylglycerol lipase (MAGL) inhibitors that are useful for the treatment or prophylaxis of diseases or conditions associated with MAGL. The reversible MAGL inhibitors according to the present invention may also be labeled with radioisotopes and are thus useful for medical imaging, such as positron-emission tomography (PET) and/or autoradiography.
Type: Application
Filed: March 1, 2024
Publication date: August 15, 2024
Applicants: Hoffmann-La Roche Inc., Eidgenoessische Technische Hochschule Zuerich
Inventors: Luca Claudio Gobbi, Uwe Michael Grether, Yingfang He, Bernd Kuhn, Linjing Mu
-
Publication number: 20240216599
Abstract: An extracorporeal circuit support with a main liquid pump, the inlet of which can be connected to the blood circuit of a patient via at least one first liquid line and the outlet of which can be connected to the blood circuit via at least one second liquid line, an oxygenator for enriching the blood being conducted in the at least one second liquid line with oxygen, and a pump drive which drives the main liquid pump. In addition to the main liquid pump and the oxygenator, the extracorporeal circuit support has a pump drive that is designed to be MR-conditional, i.e., MR-compliant under specific conditions, and takes the form of a gas expansion motor.
Type: Application
Filed: October 29, 2022
Publication date: July 4, 2024
Applicants: ETH-Eidgenössische Technische Hochschule Zürich
Inventors: Michael Hofmann, Samuel Sollberger, Martin Oliver Schmiady, Marianne Schmid, Mirko Meboldt
-
Patent number: 12014143
Abstract: In various embodiments, a phrase grounding model automatically performs phrase grounding for a source sentence and a source image. The phrase grounding model determines that a first phrase included in the source sentence matches a first region of the source image based on the first phrase and at least a second phrase included in the source sentence. The phrase grounding model then generates a matched pair that specifies the first phrase and the first region. Subsequently, one or more annotation operations are performed on the source image based on the matched pair. Advantageously, the accuracy of the phrase grounding model is increased relative to prior art solutions where the interrelationships between phrases are typically disregarded.
Type: Grant
Filed: February 25, 2019
Date of Patent: June 18, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Pelin Dogan, Leonid Sigal, Markus Gross
-
Patent number: 11995749
Abstract: Various embodiments disclosed herein provide techniques for generating image data of a three-dimensional (3D) animatable asset. A rendering module executing on a computer system accesses a machine learning model that has been trained via first image data of the 3D animatable asset generated from first rig vector data. The rendering module receives second rig vector data. The rendering module generates, via the machine learning model, a second image data of the 3D animatable asset based on the second rig vector data.
Type: Grant
Filed: March 5, 2020
Date of Patent: May 28, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Dominik Borer, Jakob Buhmann, Martin Guay
-
Patent number: 11875441
Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a machine learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
Type: Grant
Filed: October 11, 2022
Date of Patent: January 16, 2024
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
-
Patent number: 11836860
Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
Type: Grant
Filed: January 27, 2022
Date of Patent: December 5, 2023
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
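The patch-combination step above can be sketched as a least-squares solve: express a source patch as a weighted combination of source reference patches, then apply the same weights to the target reference patches. The reference patches here are synthetic stand-ins, and the anatomical local model solver that stitches patches back together is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reference patches: 5 reference shapes, each a flattened
# 9-vertex 3D patch (27 coordinates).
n_refs, n_verts = 5, 9
source_refs = rng.standard_normal((n_refs, n_verts * 3))
target_refs = rng.standard_normal((n_refs, n_verts * 3))

# A source patch observed during the performance (built here from known
# weights so the recovery can be checked).
weights_true = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
source_patch = weights_true @ source_refs

# Solve for the patch combination: which blend of source references best
# reproduces the observed source patch (least squares).
weights, *_ = np.linalg.lstsq(source_refs.T, source_patch, rcond=None)

# Apply the same combination to the target references to get the
# corresponding target patch.
target_patch = weights @ target_refs
```

Because the combination weights, not the raw geometry, are transferred, the target patch inherits the source performance while keeping the target character's own reference shapes.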
-
Publication number: 20230260186
Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
Type: Application
Filed: January 27, 2023
Publication date: August 17, 2023
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley