Patents Assigned to ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
  • Patent number: 11276231
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
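    A minimal PyTorch sketch of the encoder/decoder arrangement described in the abstract above: two encoders produce an identity code and an expression code, the codes are concatenated, and a decoder maps them to per-vertex displacements. The layer sizes, mesh resolution, and blendweight count are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class NonlinearFaceModel(nn.Module):
    """Identity encoder + expression encoder + shared decoder (illustrative sizes)."""
    def __init__(self, n_vertices=5000, n_blendweights=50, id_dim=64, expr_dim=32):
        super().__init__()
        # Identity input: neutral face mesh minus reference mesh, flattened (V * 3).
        self.identity_encoder = nn.Sequential(
            nn.Linear(n_vertices * 3, 256), nn.ReLU(), nn.Linear(256, id_dim))
        # Expression input: a vector of blendweight values.
        self.expression_encoder = nn.Sequential(
            nn.Linear(n_blendweights, 128), nn.ReLU(), nn.Linear(128, expr_dim))
        # Decoder maps the concatenated codes to per-vertex displacements.
        self.decoder = nn.Sequential(
            nn.Linear(id_dim + expr_dim, 256), nn.ReLU(), nn.Linear(256, n_vertices * 3))

    def forward(self, neutral_minus_reference, blendweights):
        id_code = self.identity_encoder(neutral_minus_reference)
        expr_code = self.expression_encoder(blendweights)
        code = torch.cat([id_code, expr_code], dim=-1)
        return self.decoder(code)  # vertex displacements to add to the reference mesh

model = NonlinearFaceModel()
displacements = model(torch.randn(1, 5000 * 3), torch.rand(1, 50))
```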
  • Patent number: 11257276
    Abstract: Techniques are disclosed for generating digital faces. In some examples, a style-based generator receives as inputs initial tensor(s) and style vector(s) corresponding to user-selected semantic attribute styles, such as the desired expression, gender, age, identity, and/or ethnicity of a digital face. The style-based generator is trained to process such inputs and output low-resolution appearance map(s) for the digital face, such as a texture map, a normal map, and/or a specular roughness map. The low-resolution appearance map(s) are further processed using a super-resolution generator that is trained to take the low-resolution appearance map(s) and low-resolution 3D geometry of the digital face as inputs and output high-resolution appearance map(s) that align with high-resolution 3D geometry of the digital face. Such high-resolution appearance map(s) and high-resolution 3D geometry can then be used to render standalone images or the frames of a video that include the digital face.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: February 22, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZURICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
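    A hedged sketch of the two-stage pipeline in the abstract above: a style-conditioned generator produces low-resolution appearance maps, and a super-resolution generator upsamples them conditioned on the face geometry (passed here as a position map). The toy layer stacks, channel counts, and resolutions are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class StyleBasedAppearanceGenerator(nn.Module):
    """Maps an initial tensor plus a style vector to low-resolution appearance maps."""
    def __init__(self, style_dim=64, channels=9):  # e.g. texture(3) + normal(3) + roughness(3)
        super().__init__()
        self.to_feat = nn.Linear(style_dim, 16 * 16 * 32)
        self.conv = nn.Sequential(
            nn.Conv2d(32 + 8, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, initial_tensor, style):  # initial_tensor: (B, 8, 16, 16)
        feat = self.to_feat(style).view(-1, 32, 16, 16)
        return self.conv(torch.cat([feat, initial_tensor], dim=1))  # low-res appearance maps

class SuperResolutionGenerator(nn.Module):
    """Upsamples low-res appearance maps, conditioned on low-res 3D geometry (position map)."""
    def __init__(self, channels=9, geom_channels=3, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(channels + geom_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, low_res_maps, low_res_geometry):
        return self.net(torch.cat([low_res_maps, low_res_geometry], dim=1))

style = torch.randn(1, 64)               # encodes chosen expression/age/identity styles
low = StyleBasedAppearanceGenerator()(torch.randn(1, 8, 16, 16), style)
high = SuperResolutionGenerator()(low, torch.randn(1, 3, 16, 16))
```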
  • Patent number: 11226763
    Abstract: The invention is notably directed at a device for high-dimensional computing comprising an associative memory module. The associative memory module comprises one or more planar crossbar arrays. The one or more planar crossbar arrays comprise a plurality of resistive memory elements. The device is configured to program profile vector elements of profile hypervectors as conductance states of the resistive memory elements and to apply query vector elements of query hypervectors as read voltages to the one or more crossbar arrays. The device is further configured to perform a distance computation between the profile hypervectors and the query hypervectors by measuring output current signals of the one or more crossbar arrays. The invention further concerns a related method and a related computer program product.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: January 18, 2022
    Assignees: International Business Machines Corporation, ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
    Inventors: Manuel Le Gallo-Bourdeau, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini
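    A NumPy simulation of the in-memory distance computation described above: profile hypervectors are programmed as column conductances, query elements are applied as read voltages, and each bit-line current approximates the similarity to one stored profile. The conductance values and bipolar encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 10_000, 4                      # hypervector dimension, number of stored profiles

# Bipolar {-1, +1} profile hypervectors, programmed as conductance states (one per column).
profiles = rng.choice([-1, 1], size=(D, N))
G = np.where(profiles > 0, 1.0, 0.1)  # high/low conductance encodes +1/-1 (illustrative values)

def query_similarity(query):
    """Apply query elements as read voltages on the word-lines and read the column currents."""
    v = np.where(query > 0, 1.0, -1.0)      # read voltages
    currents = v @ G                         # Kirchhoff summation along each bit-line
    return currents                          # larger current ~ smaller distance to that profile

query = profiles[:, 2] * rng.choice([1, 1, 1, -1], size=D)   # noisy copy of profile 2
print(np.argmax(query_similarity(query)))                    # expected: 2
```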
  • Patent number: 11222466
    Abstract: Techniques are disclosed for changing the identities of faces in video frames and images. In embodiments, three-dimensional (3D) geometry of a face is used to inform the facial identity change produced by an image-to-image translation model, such as a comb network model. In some embodiments, the model can take a two-dimensional (2D) texture map and/or a 3D displacement map associated with one facial identity as inputs and output another 2D texture map and/or 3D displacement map associated with a different facial identity. The other 2D texture map and/or 3D displacement map can then be used to render an image that includes the different facial identity.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: January 11, 2022
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Jacek Krzysztof Naruniec, Derek Edward Bradley, Thomas Etterlin, Paulo Fabiano Urnau Gotardo, Leonhard Markus Helminger, Christopher Richard Schroers, Romann Matthew Weber
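    A sketch of a comb-style translator consistent with the abstract above: a shared encoder and one decoder branch per facial identity, operating on concatenated texture and displacement maps. Channel counts and layer sizes are assumptions; the patented model and training procedure are not reproduced here.

```python
import torch
import torch.nn as nn

class CombTranslator(nn.Module):
    """Shared encoder with per-identity decoder branches ("comb" layout), illustrative sizes."""
    def __init__(self, identities=("id_a", "id_b"), in_channels=6):  # 3 texture + 3 displacement
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1))
            for name in identities})

    def forward(self, texture_and_displacement, target_identity):
        latent = self.encoder(texture_and_displacement)
        return self.decoders[target_identity](latent)   # maps for the target identity

maps_a = torch.randn(1, 6, 128, 128)            # source identity's texture + displacement maps
maps_b = CombTranslator()(maps_a, "id_b")       # rendered afterwards with the target identity
```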
  • Patent number: 11210774
    Abstract: According to one implementation, a pixel error detection system includes a hardware processor and a system memory storing a software code. The hardware processor is configured to execute the software code to receive an input image, to mask, using an inpainting neural network (NN), one or more patch(es) of the input image, and to inpaint, using the inpainting NN, the masked patch(es) based on input image pixels neighboring each of the masked patch(es). The hardware processor is configured to further execute the software code to generate, using the inpainting NN, a residual image based on differences between the inpainted masked patch(es) and the patch(es) in the input image and to identify one or more anomalous pixel(s) in the input image using the residual image.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: December 28, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Christopher Richard Schroers, Abdelaziz Djelouah, Sutao Wang, Erika Varis Doggett
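    A NumPy sketch of the mask-inpaint-residual-threshold flow described above. The inpainting neural network is replaced by a hypothetical neighborhood-mean fill purely so the surrounding logic is runnable; the patch size and threshold are arbitrary.

```python
import numpy as np

def detect_anomalous_pixels(image, patch=8, threshold=0.2):
    """Mask each patch, 'inpaint' it from its neighbors, and flag pixels with large residuals.

    The real system uses an inpainting neural network; the neighborhood-mean fill below is a
    stand-in so the masking/residual logic is runnable.
    """
    h, w = image.shape
    residual = np.zeros_like(image)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y+patch, x:x+patch] = np.nan              # mask the patch
            y0, y1 = max(0, y - patch), min(h, y + 2 * patch)  # surrounding context
            x0, x1 = max(0, x - patch), min(w, x + 2 * patch)
            fill = np.nanmean(masked[y0:y1, x0:x1])            # hypothetical "inpainting"
            residual[y:y+patch, x:x+patch] = np.abs(image[y:y+patch, x:x+patch] - fill)
    return residual > threshold                                # anomalous-pixel mask

img = np.full((64, 64), 0.5)
img[10, 20] = 1.0                                              # a single dead/hot pixel
print(np.argwhere(detect_anomalous_pixels(img)))               # -> [[10 20]]
```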
  • Patent number: 11132051
    Abstract: This disclosure presents systems and methods to provide an interactive environment in response to touch-based inputs. A first body channel communication device coupled to a user may transmit and/or receive signals configured to be propagated along skin of the user such that the skin of the user comprises a signal transmission path. A second body channel communication device coupled to an interaction entity may be configured to transmit and/or receive signals configured to be propagated along the skin of the user along the signal transmission path. A presentation device may present images of virtual content to the user. Information may be communicated between the first body channel communication device, the second body channel communication device, and the presentation device so that virtual content specific to the interaction entity may be presented to augment an appearance of the interaction entity.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: September 28, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Robert Sumner, Benjamin Buergisser, Fabio Zünd, Gergely Vakulya, Virag Varga, Thomas Gross, Alanson Sample
  • Patent number: 11087517
    Abstract: In particular embodiments, a 2D representation of an object may be provided. A first method may comprise: receiving sketch input identifying a target position for a specified portion of the object; computing a deformation for the object within the context of a character rig specification for the object; and displaying an updated version of the object. A second method may comprise detecting sketch input; classifying the sketch input, based on the 2D representation, as an instantiation of the object; instantiating the object using a 3D model of the object; and displaying a 3D visual representation of the object.
    Type: Grant
    Filed: June 2, 2016
    Date of Patent: August 10, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Robert Walker Sumner, Maurizio Nitti, Stelian Coros, Bernhard Thomaszewski, Fabian Andreas Hahn, Markus Gross, Frederik Rudolf Mutzel
  • Patent number: 10984558
    Abstract: Techniques are disclosed for image matting. In particular, embodiments decompose the matting problem of estimating foreground opacity into the targeted subproblems of estimating a background using a first trained neural network, estimating a foreground using a second neural network and the estimated background as one of the inputs into the second neural network, and estimating an alpha matte using a third neural network and the estimated background and foreground as two of the inputs into the third neural network. Such a decomposition is in contrast to traditional sampling-based matting approaches that estimated foreground and background color pairs together directly for each pixel. By decomposing the matting problem into subproblems that are easier for a neural network to learn compared to traditional data-driven techniques for image matting, embodiments disclosed herein can produce better opacity estimates than such data-driven techniques as well as sampling-based and affinity-based matting approaches.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: April 20, 2021
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Ahmet Cengiz Öztireli, Jingwei Tang, Yagiz Aksoy
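    A sketch of the three-stage decomposition described above: a background estimator, a foreground estimator that receives the estimated background, and an alpha estimator that receives both. The tiny convolutional stages and the trimap input are illustrative assumptions.

```python
import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

class CascadedMatting(nn.Module):
    """Background -> foreground -> alpha, each stage fed the previous estimates (illustrative)."""
    def __init__(self):
        super().__init__()
        self.bg_net = small_cnn(4, 3)             # image + trimap -> background estimate
        self.fg_net = small_cnn(4 + 3, 3)         # image + trimap + background -> foreground
        self.alpha_net = small_cnn(4 + 3 + 3, 1)  # ... + foreground -> alpha matte

    def forward(self, image, trimap):
        x = torch.cat([image, trimap], dim=1)
        bg = self.bg_net(x)
        fg = self.fg_net(torch.cat([x, bg], dim=1))
        alpha = torch.sigmoid(self.alpha_net(torch.cat([x, bg, fg], dim=1)))
        return bg, fg, alpha

bg, fg, alpha = CascadedMatting()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```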
  • Patent number: 10970849
    Abstract: According to one implementation, a pose estimation and body tracking system includes a computing platform having a hardware processor and a system memory storing a software code including a tracking module trained to track motions. The software code receives a series of images of motion by a subject, and for each image, uses the tracking module to determine locations corresponding respectively to two-dimensional (2D) skeletal landmarks of the subject based on constraints imposed by features of a hierarchical skeleton model intersecting at each 2D skeletal landmark. The software code further uses the tracking module to infer joint angles of the subject based on the locations and determine a three-dimensional (3D) pose of the subject based on the locations and the joint angles, resulting in a series of 3D poses. The software code outputs a tracking image corresponding to the motion by the subject based on the series of 3D poses.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: April 6, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Ahmet Cengiz Öztireli, Prashanth Chandran, Markus Gross
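    A NumPy sketch of the final step described above: given inferred joint angles and a hierarchical skeleton model, forward kinematics accumulates rotations down the hierarchy to produce the 3D pose. The toy skeleton, planar rotations, and angle values are assumptions; the 2D landmark tracking and angle inference are out of scope here.

```python
import numpy as np

# A toy hierarchical skeleton: each joint has a parent and a fixed offset (bone) expressed in
# its parent's frame; the inferred joint angles rotate each bone about the z-axis (toy case).
PARENTS = {"root": None, "spine": "root", "head": "spine", "l_arm": "spine", "r_arm": "spine"}
OFFSETS = {"root": np.zeros(3), "spine": np.array([0, 1.0, 0]), "head": np.array([0, 0.5, 0]),
           "l_arm": np.array([-0.6, 0, 0]), "r_arm": np.array([0.6, 0, 0])}

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def forward_kinematics(joint_angles):
    """Accumulate rotations down the hierarchy to get 3D joint positions (the 3D pose)."""
    positions, rotations = {}, {}
    for joint in PARENTS:                          # dict preserves order: parents come first
        R = rot_z(joint_angles.get(joint, 0.0))
        parent = PARENTS[joint]
        if parent is None:
            rotations[joint], positions[joint] = R, OFFSETS[joint].astype(float)
        else:
            rotations[joint] = rotations[parent] @ R
            positions[joint] = positions[parent] + rotations[parent] @ OFFSETS[joint]
    return positions

pose = forward_kinematics({"spine": 0.1, "l_arm": 0.7, "r_arm": -0.7})
print(pose["head"])
```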
  • Patent number: 10971226
    Abstract: A resistive memory device is provided for storing elements of hyper-dimensional vectors, in particular digital hyper-dimensional vectors, as conductance states of components, in particular 2D memristors, of the resistive memory device. The resistive memory device provides a first crossbar array of the components, wherein the components are memristive 2D components addressable by word-lines and bit-lines, and a peripheral circuit connected to the word-lines and bit-lines and adapted to perform encoding operations by activating the word-lines and bit-lines sequentially in a predefined manner.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: April 6, 2021
    Assignees: International Business Machines Corporation, ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
    Inventors: Manuel Le Gallo-Bourdeau, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini
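    A NumPy sketch of the hyper-dimensional encoding operations such a device accelerates in memory: binding key and value hypervectors with XOR and bundling the results with a majority vote, so that unbinding with a key recovers something close to its value. The record contents and dimensionality are illustrative; the sequential word-line/bit-line activation itself is abstracted away here.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000                                                 # hypervector dimensionality

def random_hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)      # dense binary hypervector

def bind(a, b):
    return a ^ b                                           # XOR binding

def bundle(hvs):
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)   # majority vote

# Encode a tiny record {key: value} as one hypervector: bind each pair, then bundle.
keys = {name: random_hv() for name in ("language", "currency", "capital")}
values = {"german": random_hv(), "franc": random_hv(), "bern": random_hv()}
record = bundle([bind(keys["language"], values["german"]),
                 bind(keys["currency"], values["franc"]),
                 bind(keys["capital"], values["bern"])])

# Query: unbinding with a key yields something close (small Hamming distance) to its value.
probe = bind(record, keys["language"])
print(np.count_nonzero(probe ^ values["german"]) / D)      # well below 0.5 for the bound value
```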
  • Patent number: 10916046
    Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: February 9, 2021
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
  • Publication number: 20210012512
    Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subject's facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 14, 2021
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
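    A NumPy sketch of the skin-motion-to-jaw-motion mapping idea: pairs of skin displacement features and jaw poses are used to fit a regularized linear regressor, which then predicts jaw motion for a new skin observation. The linear model class, feature dimensions, and synthetic data are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: per frame, skin motion features (e.g. displacements of tracked skin points
# from the rest pose, flattened) and the corresponding jaw pose (e.g. 3 rotation + 3 translation).
n_frames, n_skin_features, n_jaw_params = 500, 3 * 40, 6
skin_motion = rng.normal(size=(n_frames, n_skin_features))
true_map = rng.normal(size=(n_skin_features, n_jaw_params)) * 0.1
jaw_pose = skin_motion @ true_map + 0.01 * rng.normal(size=(n_frames, n_jaw_params))

# Fit the skin-motion -> jaw-motion mapping (ridge-regularized least squares).
lam = 1e-3
A = skin_motion.T @ skin_motion + lam * np.eye(n_skin_features)
W = np.linalg.solve(A, skin_motion.T @ jaw_pose)

# Prediction: given a new frame's skin geometry relative to rest, predict the jaw motion.
new_skin_motion = rng.normal(size=(1, n_skin_features))
predicted_jaw_pose = new_skin_motion @ W
print(predicted_jaw_pose.shape)        # (1, 6)
```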
  • Patent number: 10887581
    Abstract: The present disclosure relates to techniques for reconstructing an object in three dimensions that is captured in a set of two-dimensional images. The object is reconstructed in three dimensions by computing depth values for edges of the object in the set of two-dimensional images. The set of two-dimensional images may be samples of a light field surrounding the object. The depth values may be computed by exploiting local gradient information in the set of two-dimensional images. After computing the depth values for the edges, depth values between the edges may be determined by identifying types of the edges (e.g., a texture edge, a silhouette edge, or other type of edge). Then, the depth values from the set of two-dimensional images may be aggregated in a three-dimensional space using a voting scheme, allowing the reconstruction of the object in three dimensions.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: January 5, 2021
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Kaan Yücer, Changil Kim, Alexander Sorkine-Hornung, Olga Sorkine-Hornung
  • Patent number: 10818080
    Abstract: According to one implementation, a system includes a computing platform having a hardware processor and a system memory storing a software code including multiple artificial neural networks (ANNs). The hardware processor executes the software code to partition a multi-dimensional input vector into a first vector data and a second vector data, and to transform the second vector data using a first piecewise-polynomial transformation parameterized by one of the ANNs, based on the first vector data, to produce a transformed second vector data. The hardware processor further executes the software code to transform the first vector data using a second piecewise-polynomial transformation parameterized by another of the ANNs, based on the transformed second vector data, to produce a transformed first vector data, and to determine a multi-dimensional output vector based on an output from the plurality of ANNs.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: October 27, 2020
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Thomas Müller, Brian McWilliams, Fabrice Pierre Armand Rousselle, Jan Novák
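    A sketch of one coupling step of the kind described above: a small network, conditioned on the first part of the vector, parameterizes a piecewise-linear (degree-one piecewise-polynomial) CDF that warps the second part; a following step would warp the first part conditioned on the transformed second part. The bin count, network size, and the [0, 1) input range are assumptions.

```python
import torch
import torch.nn as nn

class PiecewiseLinearCoupling(nn.Module):
    """One coupling step: x_b is warped through a piecewise-linear CDF whose bin probabilities
    are predicted by a small network from x_a (sizes and bin count are illustrative)."""
    def __init__(self, dim_a=2, dim_b=2, bins=8):
        super().__init__()
        self.bins = bins
        self.net = nn.Sequential(nn.Linear(dim_a, 32), nn.ReLU(), nn.Linear(32, dim_b * bins))

    def forward(self, x_a, x_b):                     # x_b entries assumed to lie in [0, 1)
        B, dim_b = x_b.shape
        q = torch.softmax(self.net(x_a).view(B, dim_b, self.bins), dim=-1)  # bin probabilities
        cdf = torch.cumsum(q, dim=-1)
        cdf = torch.cat([torch.zeros(B, dim_b, 1), cdf], dim=-1)            # prepend 0
        idx = torch.clamp((x_b * self.bins).long(), max=self.bins - 1)      # bin containing x_b
        frac = x_b * self.bins - idx.float()                                # position inside bin
        lo = torch.gather(cdf, -1, idx.unsqueeze(-1)).squeeze(-1)
        p = torch.gather(q, -1, idx.unsqueeze(-1)).squeeze(-1)
        return lo + frac * p                          # piecewise-linear CDF evaluated at x_b

x = torch.rand(4, 4)
x_a, x_b = x[:, :2], x[:, 2:]
y_b = PiecewiseLinearCoupling()(x_a, x_b)             # a second coupling would then warp x_a
```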
  • Patent number: 10796414
    Abstract: Supervised machine learning using a convolutional neural network (CNN) is applied to denoising images rendered by MC path tracing. The input image data may include pixel color and its variance, as well as a set of auxiliary buffers that encode scene information (e.g., surface normal, albedo, depth, and their corresponding variances). In some embodiments, a CNN directly predicts the final denoised pixel value as a highly non-linear combination of the input features. In some other embodiments, a kernel-prediction neural network uses a CNN to estimate the local weighting kernels, which are used to compute each denoised pixel from its neighbors. In some embodiments, the input image can be decomposed into diffuse and specular components. The diffuse and specular components are then independently preprocessed, filtered, and postprocessed before being recombined to obtain a final denoised image.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: October 6, 2020
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Thijs Vogels, Jan Novák, Fabrice Rousselle, Brian McWilliams
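    A NumPy sketch of the kernel-prediction variant mentioned above: per-pixel weighting kernels (here random, softmax-normalized placeholders standing in for CNN outputs) are applied to each pixel's neighborhood to form the denoised value. The diffuse/specular decomposition and the auxiliary feature buffers are omitted.

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Reconstruct each pixel as a weighted sum of its k x k neighborhood, using the per-pixel
    weighting kernels a kernel-prediction network would output."""
    h, w = noisy.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(noisy, r, mode="edge")
    out = np.empty_like(noisy)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out

h, w, k = 32, 32, 5
noisy = np.random.rand(h, w)
logits = np.random.rand(h, w, k, k)                      # placeholder for CNN kernel predictions
kernels = np.exp(logits)
kernels /= kernels.sum(axis=(2, 3), keepdims=True)       # softmax-style normalization per pixel
denoised = apply_predicted_kernels(noisy, kernels)
```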
  • Patent number: 10580194
    Abstract: Systems, methods and articles of manufacture for rendering three-dimensional virtual environments using reversible jumps are disclosed herein. In one embodiment, mappings from random numbers to light paths are modeled as an explicit iterative random walk. Inverses of path construction techniques are employed to turn light transport paths back into the random numbers that produced them. In particular, such inverses may be used to extend the Multiplexed Metropolis Light Transport (MMLT) technique to perform path-invariant perturbations that produce a new path sample using a different path construction technique but preserve the path's geometry.
    Type: Grant
    Filed: November 9, 2017
    Date of Patent: March 3, 2020
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Jan Novák, Wenzel A. Jakob, Wojciech Jarosz, Benedikt Martin Bitterli
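    A small illustration of the core idea of inverting a path construction technique: a sampling routine maps random numbers to a direction, and its inverse recovers the random numbers that would reproduce that direction, which is what allows a perturbation to switch construction techniques while preserving the path geometry. Cosine-weighted hemisphere sampling is an assumed stand-in, not the patent's set of techniques.

```python
import numpy as np

def sample_cosine_hemisphere(u1, u2):
    """One path-construction step: map two uniform random numbers to a direction."""
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

def invert_cosine_hemisphere(d):
    """Inverse mapping: recover the random numbers that would produce this direction."""
    u1 = d[0] ** 2 + d[1] ** 2
    u2 = (np.arctan2(d[1], d[0]) % (2.0 * np.pi)) / (2.0 * np.pi)
    return u1, u2

d = sample_cosine_hemisphere(0.3, 0.8)
print(invert_cosine_hemisphere(d))     # ~(0.3, 0.8): same geometry, recovered random numbers
```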
  • Patent number: 10580165
    Abstract: The present disclosure relates to an apparatus, system and method for processing transmedia content data. More specifically, the disclosure provides for identifying and inserting one item of media content within another item of media content, e.g. inserting a video within a video, such that the first item of media content appears as part of the second item. The invention involves analysing a first visual media item to identify one or more spatial locations to insert the second visual media item within the image data of the first visual media item, detecting characteristics of the one or more identified spatial locations, transforming the second visual media item according to the detected characteristics and combining the first visual media item and second visual media item by inserting the transformed second visual media item into the first visual media item at the one or more identified spatial locations.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: March 3, 2020
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Alex Sorkine-Hornung, Simone Meier, Jean-Charles Bazin, Sasha Schriber, Markus Gross, Oliver Wang
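    An OpenCV sketch of the final combination step described above: the second visual media item is warped onto an identified quadrilateral region of the first item and composited in place. Placement detection and appearance adaptation (lighting, blur, and so on) are assumed to have happened already; the quad coordinates are made up.

```python
import numpy as np
import cv2

def insert_media(host, insert, quad):
    """Warp `insert` onto the quadrilateral `quad` (4 x 2, pixel coords) in `host` and composite."""
    h, w = insert.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(quad))
    warped = cv2.warpPerspective(insert, H, (host.shape[1], host.shape[0]))
    mask = np.zeros(host.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(quad), 255)
    out = host.copy()
    out[mask > 0] = warped[mask > 0]
    return out

host = np.full((480, 640, 3), 80, dtype=np.uint8)            # stand-in video frame
insert = np.full((120, 160, 3), 200, dtype=np.uint8)         # stand-in media item to insert
quad = [(100, 100), (300, 120), (290, 260), (110, 240)]      # identified spatial location
frame = insert_media(host, insert, quad)
```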
  • Patent number: 10547871
    Abstract: The disclosure provides an approach for edge-aware spatio-temporal filtering. In one embodiment, a filtering application receives as input a guiding video sequence and video sequence(s) from additional channel(s). The filtering application estimates a sparse optical flow from the guiding video sequence using a novel binary feature descriptor integrated into the Coarse-to-fine PatchMatch method to compute a quasi-dense nearest neighbor field. The filtering application then performs spatial edge-aware filtering of the sparse optical flow (to obtain a dense flow) and the additional channel(s), using an efficient evaluation of the permeability filter with only two scan-line passes per iteration. Further, the filtering application performs temporal filtering of the optical flow using an infinite impulse response filter that requires only one filter state, which is updated as new frames of the guiding video sequence arrive.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: January 28, 2020
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Florian Michael Scheidegger, Michael Stefano Fritz Schaffner, Lukas Cavigelli, Luca Benini, Aljosa Aleksej Andrej Smolic
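    A NumPy sketch of a single horizontal pass of a permeability-style edge-aware filter, simplified to one channel and one iteration: data values diffuse along a scan line through left-to-right and right-to-left accumulations, with a permeability weight that drops near edges in the guiding image. The permeability function parameters and the example signal are assumptions; the temporal IIR stage is omitted.

```python
import numpy as np

def permeability_filter_row(guide_row, data_row, sigma=0.05, alpha=2.0):
    """One horizontal pass of an edge-aware permeability filter (simplified, single channel):
    data diffuses along the scan line except across strong edges in the guide image."""
    n = len(data_row)
    # Permeability between pixel x-1 and x: near 1 in flat regions, near 0 across edges.
    diff = np.abs(np.diff(guide_row))
    perm = 1.0 / (1.0 + (diff / sigma) ** alpha)

    l_val, l_w = np.zeros(n), np.zeros(n)            # left-to-right accumulators
    r_val, r_w = np.zeros(n), np.zeros(n)            # right-to-left accumulators
    for x in range(1, n):
        l_val[x] = perm[x - 1] * (l_val[x - 1] + data_row[x - 1])
        l_w[x] = perm[x - 1] * (l_w[x - 1] + 1.0)
    for x in range(n - 2, -1, -1):
        r_val[x] = perm[x] * (r_val[x + 1] + data_row[x + 1])
        r_w[x] = perm[x] * (r_w[x + 1] + 1.0)
    return (l_val + data_row + r_val) / (l_w + 1.0 + r_w)

guide = np.concatenate([np.zeros(50), np.ones(50)])            # a hard edge in the guiding image
flow = np.concatenate([np.full(50, 2.0), np.full(50, 7.0)]) + np.random.normal(0, 0.3, 100)
filtered = permeability_filter_row(guide, flow)                # smoothed within, not across, the edge
```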
  • Publication number: 20200027198
    Abstract: Supervised machine learning using a convolutional neural network (CNN) is applied to denoising images rendered by MC path tracing. The input image data may include pixel color and its variance, as well as a set of auxiliary buffers that encode scene information (e.g., surface normal, albedo, depth, and their corresponding variances). In some embodiments, a CNN directly predicts the final denoised pixel value as a highly non-linear combination of the input features. In some other embodiments, a kernel-prediction neural network uses a CNN to estimate the local weighting kernels, which are used to compute each denoised pixel from its neighbors. In some embodiments, the input image can be decomposed into diffuse and specular components. The diffuse and specular components are then independently preprocessed, filtered, and postprocessed before being recombined to obtain a final denoised image.
    Type: Application
    Filed: September 26, 2019
    Publication date: January 23, 2020
    Applicants: Disney Enterprises, Inc., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Thijs Vogels, Jan Novák, Fabrice Rousselle, Brian McWilliams
  • Patent number: 10483004
    Abstract: A system and method for non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient) are provided. A teeth statistical model defining individual teeth in a teeth row can be developed. The teeth statistical model can jointly describe shape and pose variations per tooth, as well as placement of the individual teeth in the teeth row. In some embodiments, the teeth statistical model can be trained using teeth information from 3D scan data of different sample subjects. The 3D scan data can be used to establish a database of teeth of various shapes and poses. Geometry information regarding the individual teeth can be extracted from the 3D scan data. The teeth statistical model can be trained using the geometry information regarding the individual teeth.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: November 19, 2019
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Chenglei Wu, Derek Bradley, Thabo Beeler, Markus Gross
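    A NumPy sketch of the per-tooth shape-statistics idea: corresponding tooth geometries from a (here synthetic) scan database are reduced with PCA to a mean shape plus a few modes of variation, from which new tooth shapes can be synthesized. The joint modeling of pose variation and of tooth placement along the teeth row described in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: S scanned examples of one tooth, each with V corresponding 3D vertices.
S, V = 40, 200
scans = rng.normal(size=(S, V * 3))                    # flattened tooth geometries in correspondence

# Fit a PCA shape model for this tooth: mean shape + principal shape variations.
mean = scans.mean(axis=0)
U, sing, Vt = np.linalg.svd(scans - mean, full_matrices=False)
k = 5                                                  # keep a few modes of shape variation
basis = Vt[:k]                                         # (k, V*3) shape basis
stddev = sing[:k] / np.sqrt(S - 1)                     # per-mode standard deviations

def synthesize_tooth(coeffs):
    """New tooth shape from k statistical shape coefficients (in units of standard deviations)."""
    return (mean + (np.asarray(coeffs) * stddev) @ basis).reshape(V, 3)

tooth = synthesize_tooth([1.0, -0.5, 0.0, 0.0, 0.2])
print(tooth.shape)                                     # (200, 3)
```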