Patents Assigned to Eidgenoessische Technische Hochschule Zuerich
  • Patent number: 12120359
    Abstract: A system processing hardware executes a machine learning (ML) model-based video compression encoder to receive uncompressed video content and corresponding motion compensated video content, compare the uncompressed and motion compensated video content to identify an image space residual, transform the image space residual to a latent space representation of the uncompressed video content, and transform, using a trained image compression ML model, the motion compensated video content to a latent space representation of the motion compensated video content.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: October 15, 2024
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Abdelaziz Djelouah, Leonhard Markus Helminger, Roberto Gerson De Albuquerque Azevedo, Scott Labrozzi, Christopher Richard Schroers, Yuanyi Xue
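The encoder flow this abstract describes (compute an image-space residual, then transform both the residual and the motion-compensated frame into latent space) can be sketched in plain Python. This is a toy illustration only: frames are flat pixel lists, and `encode_latent` is a hypothetical stand-in for the trained image-compression ML model.

```python
# Toy sketch of the latent-residual video-compression flow described above.
# Frames are flat lists of pixel values; encode_latent is a stand-in for the
# trained image-compression model mentioned in the abstract (hypothetical).

def image_space_residual(uncompressed, motion_compensated):
    """Per-pixel difference between the raw frame and its motion-compensated prediction."""
    return [u - m for u, m in zip(uncompressed, motion_compensated)]

def encode_latent(frame, scale=0.5):
    """Stand-in 'transform to latent space': a fixed down-scaling, not a real model."""
    return [round(p * scale, 4) for p in frame]

def encode_step(uncompressed, motion_compensated):
    residual = image_space_residual(uncompressed, motion_compensated)
    return {
        "latent_residual": encode_latent(residual),
        "latent_motion_compensated": encode_latent(motion_compensated),
    }

frame = [10, 12, 14, 16]
predicted = [9, 12, 15, 16]
out = encode_step(frame, predicted)
print(out["latent_residual"])  # residual [1, 0, -1, 0] scaled by 0.5
```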
  • Patent number: 12118734
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: October 15, 2024
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
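The core idea above, learning a mapping from skin motion to jaw motion from paired training captures, can be illustrated with a one-dimensional least-squares fit. A real system maps full 3D skin geometry to jaw poses; the scalar motion magnitudes and synthetic numbers below are purely illustrative.

```python
# Toy sketch of the skin-to-jaw mapping idea: fit a linear model on paired
# (skin motion, jaw motion) training samples, then predict jaw motion for a
# new skin measurement. Real data is 3D geometry; these scalars are synthetic.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on scalar pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Captured training data: skin-motion magnitude vs. tracked jaw-motion magnitude.
skin = [0.0, 1.0, 2.0, 3.0]
jaw = [0.0, 2.0, 4.0, 6.0]  # the jaw moves twice as far as the skin here

a, b = fit_linear(skin, jaw)
predicted_jaw = a * 1.5 + b  # predict jaw motion for an unseen skin motion
print(predicted_jaw)  # 3.0
```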
  • Publication number: 20240305801
    Abstract: In some embodiments, a system includes a first component to extract temporal features from a current frame being coded and a previous frame of a video. A second component uses a first transformer to fuse spatial features from the current frame with the temporal features to generate spatio-temporal features as first output. A third component uses a second transformer to perform entropy coding using the first output and at least a portion of the temporal features to generate a second output. A fourth component uses a third transformer to reconstruct the current frame based on the first output that is processed using the second output and the temporal features.
    Type: Application
    Filed: July 7, 2023
    Publication date: September 12, 2024
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Zhenghao Chen, Roberto Gerson De Albuquerque Azevedo, Christopher Richard Schroers, Yang Zhang, Lucas Relic
  • Publication number: 20240270725
    Abstract: The invention provides new reversible monoacylglycerol lipase (MAGL) inhibitors that are useful for the treatment or prophylaxis of diseases or conditions associated with MAGL. The reversible MAGL inhibitors according to the present invention may also be labeled with radioisotopes and are thus useful for medical imaging, such as positron-emission tomography (PET) and/or autoradiography.
    Type: Application
    Filed: March 1, 2024
    Publication date: August 15, 2024
    Applicants: Hoffmann-La Roche Inc., Eidgenoessische Technische Hochschule Zuerich
    Inventors: Luca Claudio GOBBI, Uwe Michael GRETHER, Yingfang HE, Bernd KUHN, Linjing MU
  • Publication number: 20240216599
Abstract: An extracorporeal circuit support with a main liquid pump, the inlet of which can be connected to the blood circuit of a patient via at least one first liquid line and the outlet of which can be connected to the blood circuit via at least one second liquid line, an oxygenator for enriching the blood being conducted in the at least one second liquid line with oxygen, and a pump drive which drives the main liquid pump. In addition to the main liquid pump and the oxygenator, the extracorporeal circuit support has a pump drive that is MR-conditional, i.e., MR-compliant under specific conditions, and is designed in the form of a gas expansion motor.
    Type: Application
    Filed: October 29, 2022
    Publication date: July 4, 2024
    Applicant: ETH-Eidgenössische Technische Hochschule Zürich
    Inventors: Michael Hofmann, Samuel SOLLBERGER, Martin Oliver SCHMIADY, Marianne SCHMID, Mirko MEBOLDT
  • Patent number: 12014143
    Abstract: In various embodiments, a phrase grounding model automatically performs phrase grounding for a source sentence and a source image. The phrase grounding model determines that a first phrase included in the source sentence matches a first region of the source image based on the first phrase and at least a second phrase included in the source sentence. The phrase grounding model then generates a matched pair that specifies the first phrase and the first region. Subsequently, one or more annotation operations are performed on the source image based on the matched pair. Advantageously, the accuracy of the phrase grounding model is increased relative to prior art solutions where the interrelationships between phrases are typically disregarded.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: June 18, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Pelin Dogan, Leonid Sigal, Markus Gross
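Phrase grounding as described above pairs each phrase in a sentence with an image region. A minimal sketch: score every (phrase, region) pair and keep the best region per phrase. The keyword-overlap score below is purely illustrative; the patented model scores phrases jointly with learned features rather than independently.

```python
# Toy sketch of phrase grounding: score each (phrase, region) pair and keep
# the best-matching region per phrase. The keyword-overlap score is a
# stand-in for learned matching features (hypothetical).

def score(phrase, region_labels):
    """Fraction of phrase words that appear among a region's detected labels."""
    words = phrase.lower().split()
    return sum(w in region_labels for w in words) / len(words)

def ground(phrases, regions):
    """regions: dict region_id -> set of label words. Returns matched pairs."""
    pairs = []
    for phrase in phrases:
        best = max(regions, key=lambda r: score(phrase, regions[r]))
        pairs.append((phrase, best))
    return pairs

regions = {"r1": {"brown", "dog"}, "r2": {"red", "ball"}}
matched = ground(["a brown dog", "the red ball"], regions)
print(matched)  # [('a brown dog', 'r1'), ('the red ball', 'r2')]
```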
  • Patent number: 11995749
    Abstract: Various embodiments disclosed herein provide techniques for generating image data of a three-dimensional (3D) animatable asset. A rendering module executing on a computer system accesses a machine learning model that has been trained via first image data of the 3D animatable asset generated from first rig vector data. The rendering module receives second rig vector data. The rendering module generates, via the machine learning model, a second image data of the 3D animatable asset based on the second rig vector data.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: May 28, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Borer, Jakob Buhmann, Martin Guay
  • Patent number: 11875441
    Abstract: A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a Machine Learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance or to generate and add secondary dynamics to a given geometric representation of a performance.
    Type: Grant
    Filed: October 11, 2022
    Date of Patent: January 16, 2024
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Eftychios Dimitrios Sifakis, Gaspard Zoss
  • Patent number: 11836860
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: December 5, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
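The patch-based transfer above can be sketched with scalar "patches": express each source patch as a blend of source reference patches, then reuse the same blend weights on the corresponding target reference patches. Real patches are 3D vertex sets and the final assembly uses an anatomical local model solver; the two-reference blend here is a deliberate simplification.

```python
# Toy sketch of patch-based retargeting: solve for blend weights that express
# a source patch as a combination of source references, then apply the same
# weights to the target references. Patches are single numbers here.

def blend_weight(patch, ref_a, ref_b):
    """Solve patch = w*ref_a + (1-w)*ref_b for the single weight w."""
    return (patch - ref_b) / (ref_a - ref_b)

def retarget(source_patches, src_refs, tgt_refs):
    out = []
    for p, (sa, sb), (ta, tb) in zip(source_patches, src_refs, tgt_refs):
        w = blend_weight(p, sa, sb)
        out.append(w * ta + (1 - w) * tb)  # same combination, target references
    return out

src_refs = [(0.0, 1.0), (0.0, 2.0)]
tgt_refs = [(0.0, 10.0), (0.0, 20.0)]
result = retarget([0.5, 1.0], src_refs, tgt_refs)
print(result)  # [5.0, 10.0]
```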
  • Publication number: 20230260186
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: August 17, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Sebastian Winberg, Prashanth Chandran, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Derek Edward Bradley
  • Patent number: 11716376
    Abstract: A system for managing non-linear transmedia content data is provided. Memory stores a plurality of transmedia content data items and associated linking data which define time-ordered content links between the plurality of transmedia content data items. The plurality of transmedia content data items are arranged into linked transmedia content subsets comprising different groups of the transmedia content data items and different content links therebetween. A control engine receives one or more instructions to create a new time-ordered content link between at least two of the plurality of transmedia content data items. The control engine modifies the linking data stored in the memory to include the new time-ordered content link.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: August 1, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Max Grosse, Barbara Solenthaler, Peter Kaufmann, Markus Gross, Sasha Schriber
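The data structure above, content items plus time-ordered links that a control engine can extend, can be sketched as a small store. All names and fields below are illustrative, not taken from the patent.

```python
# Toy sketch of the transmedia linking data described above: stored content
# items, time-ordered links between them, and an operation that adds a new
# link (as the control engine would). Field names are illustrative.

class TransmediaStore:
    def __init__(self):
        self.items = {}   # item_id -> content payload
        self.links = []   # (from_id, to_id, order): time-ordered content links

    def add_item(self, item_id, payload):
        self.items[item_id] = payload

    def add_link(self, src, dst, order):
        """Create a new time-ordered content link between two stored items."""
        if src not in self.items or dst not in self.items:
            raise KeyError("both endpoints must be stored items")
        self.links.append((src, dst, order))

    def successors(self, item_id):
        """Items reachable from item_id, in link (time) order."""
        return [d for s, d, _ in sorted(self.links, key=lambda l: l[2]) if s == item_id]

store = TransmediaStore()
store.add_item("ep1", "episode one")
store.add_item("comic", "tie-in comic")
store.add_item("ep2", "episode two")
store.add_link("ep1", "comic", order=1)
store.add_link("ep1", "ep2", order=2)
print(store.successors("ep1"))  # ['comic', 'ep2']
```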
  • Publication number: 20230237739
    Abstract: Methods and systems for performing facial retargeting using a patch-based technique are disclosed. One or more three-dimensional (3D) representations of a source character's (e.g., a human actor's) face can be transferred onto one or more corresponding representations of a target character's (e.g., a cartoon character's) face, enabling filmmakers to transfer a performance by a source character to a target character. The source character's 3D facial shape can be separated into patches. For each patch, a patch combination (representing that patch as a combination of source reference patches) can be determined. The patch combinations and target reference patches can then be used to create target patches corresponding to the target character. The target patches can be combined using an anatomical local model solver to produce a 3D facial shape corresponding to the target character, effectively transferring a facial performance by the source character to the target character.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Prashanth Chandran, Loïc Florian Ciccone, Derek Edward Bradley
  • Publication number: 20230237753
    Abstract: Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
    Type: Application
    Filed: January 27, 2023
    Publication date: July 27, 2023
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Derek Edward Bradley, Paulo Fabiano Urnau Gotardo, Gaspard Zoss, Prashanth Chandran, Sebastian Winberg
  • Patent number: 11704853
    Abstract: Techniques are disclosed for learning a machine learning model that maps control data, such as renderings of skeletons, and associated three-dimensional (3D) information to two-dimensional (2D) renderings of a character. The machine learning model may be an adaptation of the U-Net architecture that accounts for 3D information and is trained using a perceptual loss between images generated by the machine learning model and ground truth images. Once trained, the machine learning model may be used to animate a character, such as in the context of previsualization or a video game, based on control of associated control points.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: July 18, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Dominik Tobias Borer, Martin Guay, Jakob Joachim Buhmann, Robert Walker Sumner
  • Patent number: 11669999
    Abstract: In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: June 6, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Martin Guay, Maurizio Nitti, Jakob Joachim Buhmann, Dominik Tobias Borer
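The dataset-generation loop above (pose a 3D model, render synthetic images, pair each image with the pose it came from) can be sketched as follows. `render` is a hypothetical stand-in for the real rendering operations, returning a string tag instead of pixels.

```python
# Toy sketch of building the synthetic pose-estimation training set described
# above: for each 3D pose, pose the model, render one or more synthetic
# images, and label each image with its source pose. `render` is hypothetical.

def render(posed_model, view):
    """Stand-in renderer: returns a string tag instead of an image."""
    return f"img({posed_model}/{view})"

def build_training_set(poses, model="costume_model", views=("front", "side")):
    items = []
    for pose in poses:
        posed = f"{model}@{pose}"        # pose the 3D model
        for view in views:
            image = render(posed, view)  # one synthetic image per view
            items.append({"image": image, "pose_label": pose})
    return items

dataset = build_training_set(["t-pose", "walk"])
print(len(dataset))              # 4 items: 2 poses x 2 views
print(dataset[0]["pose_label"])  # 't-pose'
```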
  • Patent number: 11669723
    Abstract: A system includes a computing platform having a hardware processor and a memory storing a software code and a neural network (NN) having multiple layers including a last activation layer and a loss layer. The hardware processor executes the software code to identify different combinations of layers for testing the NN, each combination including candidate function(s) for the last activation layer and candidate function(s) for the loss layer. For each different combination, the software code configures the NN based on the combination, inputs, into the configured NN, a training dataset including multiple data objects, receives, from the configured NN, a classification of the data objects, and generates a performance assessment for the combination based on the classification. The software code determines a preferred combination of layers for the NN including selected candidate functions for the last activation layer and the loss layer, based on a comparison of the performance assessments.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: June 6, 2023
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Hayko Jochen Wilhelm Riemenschneider, Leonhard Markus Helminger, Christopher Richard Schroers, Abdelaziz Djelouah
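The search procedure above, configuring the network with each candidate (last-activation, loss) combination, assessing each configuration on a training set, and selecting the best, can be sketched with a trivial "network". The activation and loss functions below are real, but the threshold model and scoring data are purely illustrative.

```python
# Toy sketch of the layer-combination search described above: try every
# (last-activation, loss) pair, compute a mean-loss performance assessment,
# and keep the best-scoring combination. The "network" is just a logit list.

import math

activations = {"sigmoid": lambda z: 1 / (1 + math.exp(-z)), "identity": lambda z: z}

def zero_one_loss(p, y):
    return 0.0 if (p >= 0.5) == (y == 1) else 1.0

def abs_loss(p, y):
    return abs(p - y)

losses = {"zero_one": zero_one_loss, "abs": abs_loss}

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]  # (logit, label) pairs

def assess(act, loss):
    """Mean loss of the configured model over the data (lower is better)."""
    return sum(loss(act(z), y) for z, y in data) / len(data)

scores = {(a, l): assess(activations[a], losses[l])
          for a in activations for l in losses}
best = min(scores, key=scores.get)  # preferred combination
print(best)
```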
  • Patent number: 11655243
    Abstract: The invention relates to a compound of formula (I) wherein A1, A2 and R1-R5 are as defined in the description and in the claims. The compound of formula (I) can be used as a medicament.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: May 23, 2023
    Assignees: HOFFMANN-LA ROCHE INC., EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH
    Inventors: Simon M. Ametamey, Kenneth Atz, Luca Gobbi, Uwe Grether, Wolfgang Guba, Julian Kretz
  • Patent number: 11645813
    Abstract: Techniques are disclosed for creating digital faces. In some examples, an anatomical face model is generated from a data set including captured facial geometries of different individuals and associated bone geometries. A model generator segments each of the captured facial geometries into patches, compresses the segmented geometry associated with each patch to determine local deformation subspaces of the anatomical face model, and determines corresponding compressed anatomical subspaces of the anatomical face model. A sculpting application determines, based on sculpting input from a user, constraints for an optimization to determine parameter values associated with the anatomical face model. The parameter values can be used, along with the anatomical face model, to generate facial geometry that reflects the sculpting input.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 9, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Aurel Gruber, Marco Fratarcangeli, Derek Edward Bradley, Gaspard Zoss, Dominik Thabo Beeler
  • Publication number: 20230133209
    Abstract: Disclosed herein are contiguous DNA sequences encoding highly compact multi-input genetic logic gates for precise in vivo cell targeting, and methods of treating disease using a combination of in vivo delivery and such contiguous DNA sequences.
    Type: Application
    Filed: April 14, 2021
    Publication date: May 4, 2023
    Applicant: Eidgenössische Technische Hochschule Zürich
    Inventors: Yaakov Benenson, Bartolomeo Angelici
  • Patent number: 11615555
    Abstract: A method of generating a training data set for training an image matting machine learning model includes receiving a plurality of foreground images, generating a plurality of composited foreground images by compositing randomly selected foreground images from the plurality of foreground images, and generating a plurality of training images by compositing each composited foreground image with a randomly selected background image. The training data set includes the plurality of training images.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: March 28, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Tunc Ozan Aydin, Ahmet Cengiz Öztireli, Jingwei Tang, Yagiz Aksoy
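The two compositing stages above (blend randomly selected foregrounds together, then composite each blended foreground over a random background) can be sketched directly. Images are flat pixel lists and the uniform alpha is a simplification; real matting data composites with per-pixel alpha mattes.

```python
# Toy sketch of the matting training-set construction described above:
# randomly composite foregrounds with each other, then composite the result
# over a randomly chosen background. Uniform alpha is a simplification.

import random

def composite(fg, bg, alpha=0.5):
    """Alpha-blend two equal-size images (uniform alpha for simplicity)."""
    return [alpha * f + (1 - alpha) * b for f, b in zip(fg, bg)]

def build_matting_set(foregrounds, backgrounds, n, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    training = []
    for _ in range(n):
        fg = composite(rng.choice(foregrounds), rng.choice(foregrounds))
        training.append(composite(fg, rng.choice(backgrounds)))
    return training

fgs = [[100, 100], [200, 200]]
bgs = [[0, 0], [50, 50]]
dataset = build_matting_set(fgs, bgs, n=3)
print(len(dataset))  # 3
```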