Patents by Inventor Markus Gross

Markus Gross has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10623709
    Abstract: A video processing system includes a computing platform having a hardware processor and a memory storing a software code including a convolutional neural network (CNN). The hardware processor executes the software code to receive video data including a key video frame in color and a video sequence in gray scale, determine a first estimated colorization for each frame of the video sequence except the key video frame based on a colorization of a previous frame, and determine a second estimated colorization for each frame of the video sequence except the key video frame based on the key video frame in color. For each frame of the video sequence except the key video frame, the software code further blends the first estimated colorization with the second estimated colorization using a color fusion stage of the CNN to produce a colorized video sequence corresponding to the video sequence in gray scale.
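    (A minimal illustrative code sketch of this color fusion blending appears after this listing.)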
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: April 14, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Christopher Schroers, Simone Meyer, Victor Cornillere, Markus Gross, Abdelaziz Djelouah
  • Patent number: 10586399
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in a script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be commensurately represented. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: March 10, 2020
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
  • Publication number: 20200077065
    Abstract: A video processing system includes a computing platform having a hardware processor and a memory storing a software code including a convolutional neural network (CNN). The hardware processor executes the software code to receive video data including a key video frame in color and a video sequence in gray scale, determine a first estimated colorization for each frame of the video sequence except the key video frame based on a colorization of a previous frame, and determine a second estimated colorization for each frame of the video sequence except the key video frame based on the key video frame in color. For each frame of the video sequence except the key video frame, the software code further blends the first estimated colorization with the second estimated colorization using a color fusion stage of the CNN to produce a colorized video sequence corresponding to the video sequence in gray scale.
    Type: Application
    Filed: August 31, 2018
    Publication date: March 5, 2020
    Inventors: Christopher Schroers, Simone Meyer, Victor Cornillere, Markus Gross, Abdelaziz Djelouah
  • Patent number: 10580165
    Abstract: The present disclosure relates to an apparatus, system and method for processing transmedia content data. More specifically, the disclosure provides for identifying and inserting one item of media content within another item of media content, e.g. inserting a video within a video, such that the first item of media content appears as part of the second item. The invention involves analysing a first visual media item to identify one or more spatial locations to insert the second visual media item within the image data of the first visual media item, detecting characteristics of the one or more identified spatial locations, transforming the second visual media item according to the detected characteristics and combining the first visual media item and second visual media item by inserting the transformed second visual media item into the first visual media item at the one or more identified spatial locations.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: March 3, 2020
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Alex Sorkine-Hornung, Simone Meier, Jean-Charles Bazin, Sasha Schriber, Markus Gross, Oliver Wang
  • Patent number: 10491856
    Abstract: According to one implementation, a video processing system includes a computing platform having a hardware processor and a system memory storing a frame interpolation software code, the frame interpolation software code including a convolutional neural network (CNN) trained using a loss function having an image loss term summed with a phase loss term. The hardware processor executes the frame interpolation software code to receive first and second consecutive video frames including respective first and second images, and to decompose the first and second images to produce respective first and second image decompositions. The hardware processor further executes the frame interpolation software code to use the CNN to determine an intermediate image decomposition corresponding to an interpolated video frame for insertion between the first and second video frames based on the first and second image decompositions, and to synthesize the interpolated video frame based on the intermediate image decomposition.
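    (A minimal illustrative sketch of the combined image-plus-phase loss appears after this listing.)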
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: November 26, 2019
    Assignees: Disney Enterprises, Inc., ETH Zurich
    Inventors: Christopher Schroers, Simone Meyer, Abdelaziz Djelouah, Alexander Sorkine Hornung, Brian McWilliams, Markus Gross
  • Patent number: 10483004
    Abstract: A system and method for non-invasive reconstruction of an entire object-specific or person-specific teeth row from just a set of photographs of the mouth region of an object (e.g., an animal) or a person (e.g., an actor or a patient) are provided. A teeth statistical model defining individual teeth in a teeth row can be developed. The teeth statistical model can jointly describe shape and pose variations per tooth, as well as placement of the individual teeth in the teeth row. In some embodiments, the teeth statistical model can be trained using teeth information from 3D scan data of different sample subjects. The 3D scan data can be used to establish a database of teeth of various shapes and poses. Geometry information regarding the individual teeth can be extracted from the 3D scan data. The teeth statistical model can be trained using the geometry information regarding the individual teeth.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: November 19, 2019
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Chenglei Wu, Derek Bradley, Thabo Beeler, Markus Gross
  • Patent number: 10466669
    Abstract: An automated machine is disclosed wherein messages exchanged between components of the machine are forwarded to at least one component that temporarily stores them at a first time point, and from that component to a central unit at a second time point. The first time point precedes the second time point within the cycle time of the automated machine. The time points are determined relative to that cycle time and provided to the respective components in conjunction with a startup of the automated machine. The component temporarily storing messages forwards them in the manner of a layer-3 switch. This message forwarding is driven by the configured time points rather than being event-driven, can operate seamlessly in existing systems in which communications are time-slot controlled, and decouples the components so that real-time communications within the machine can be optimized.
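    (A minimal illustrative sketch of this time-triggered store-and-forward scheme appears after this listing.)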
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: November 5, 2019
    Assignee: Siemens Aktiengesellschaft
    Inventors: Bernd Dotterweich, Markus Gross, Oswald Käsdorf, Michael Von Der Bank
  • Publication number: 20190289257
    Abstract: According to one implementation, a video processing system includes a computing platform having a hardware processor and a system memory storing a frame interpolation software code, the frame interpolation software code including a convolutional neural network (CNN) trained using a loss function having an image loss term summed with a phase loss term. The hardware processor executes the frame interpolation software code to receive first and second consecutive video frames including respective first and second images, and to decompose the first and second images to produce respective first and second image decompositions. The hardware processor further executes the frame interpolation software code to use the CNN to determine an intermediate image decomposition corresponding to an interpolated video frame for insertion between the first and second video frames based on the first and second image decompositions, and to synthesize the interpolated video frame based on the intermediate image decomposition.
    Type: Application
    Filed: May 8, 2018
    Publication date: September 19, 2019
    Inventors: Christopher Schroers, Simone Meyer, Abdelaziz Djelouah, Alexander Sorkine Hornung, Brian McWilliams, Markus Gross
  • Patent number: 10403404
    Abstract: A computer-implemented method is provided for physical face cloning to generate a synthetic skin. Rather than attempt to reproduce the mechanical properties of biological tissue, an output-oriented approach is utilized that models the synthetic skin as an elastic material with isotropic and homogeneous properties (e.g., silicone rubber). The method includes capturing a plurality of expressive poses from a human subject and generating a computational model based on one or more material parameters of a material. In one embodiment, the computational model is a compressible neo-Hookean material model configured to simulate deformation behavior of the synthetic skin. The method further includes optimizing a shape geometry of the synthetic skin based on the computational model and the captured expressive poses. An optimization process is provided that varies the thickness of the synthetic skin based on a minimization of an elastic energy with respect to rest state positions of the synthetic skin.
    Type: Grant
    Filed: July 13, 2015
    Date of Patent: September 3, 2019
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Bernd Bickel, Peter Kaufmann, Bernhard Thomaszewski, Derek Edward Bradley, Philip John Jackson, Stephen R. Marschner, Wojciech Matusik, Markus Gross, Thabo Dominik Beeler
  • Publication number: 20190261049
    Abstract: Novel systems and methods are described for creating, compressing, and distributing video or image content graded for a plurality of displays with different dynamic ranges. In implementations, the created content is “continuous dynamic range” (CDR) content—a novel representation of pixel-luminance as a function of display dynamic range. The creation of the CDR content includes grading a source content for a minimum dynamic range and a maximum dynamic range, and defining a luminance of each pixel of an image or video frame of the source content as a continuous function between the minimum and the maximum dynamic ranges. In additional implementations, a novel graphical user interface for creating and editing the CDR content is described.
    Type: Application
    Filed: May 2, 2019
    Publication date: August 22, 2019
    Inventors: Aljoscha Smolic, Alexandre Chapiro, Simone Croci, Tunc Ozan Aydin, Nikolce Stefanoski, Markus Gross
  • Patent number: 10375200
    Abstract: A recommender engine is configured to access memory and surface transmedia content items, linked transmedia content subsets, identifications of identified users, and/or content items of the plurality of transmedia content items associated with at least one identified user. The surfaced items are presented to a given user for selection, via the transmedia content linking engine, as one or more user-selected transmedia content items.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: August 6, 2019
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Barbara Solenthaler, Tanja Kaeser, Severin Klingler, Adriano Galati, Markus Gross
  • Patent number: 10349127
    Abstract: Novel systems and methods are described for creating, compressing, and distributing video or image content graded for a plurality of displays with different dynamic ranges. In implementations, the created content is “continuous dynamic range” (CDR) content—a novel representation of pixel-luminance as a function of display dynamic range. The creation of the CDR content includes grading a source content for a minimum dynamic range and a maximum dynamic range, and defining a luminance of each pixel of an image or video frame of the source content as a continuous function between the minimum and the maximum dynamic ranges. In additional implementations, a novel graphical user interface for creating and editing the CDR content is described.
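    (A minimal illustrative sketch of the continuous dynamic range luminance function appears after this listing.)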
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: July 9, 2019
    Assignees: Disney Enterprises, Inc., Eidgenoessische Technische Hochschule Zurich (ETH Zurich)
    Inventors: Aljoscha Smolic, Alexandre Chapiro, Simone Croci, Tunc Ozan Aydin, Nikolce Stefanoski, Markus Gross
  • Patent number: 10331726
    Abstract: A method is provided for rendering a representation of and interacting with transmedia content on an electronic device. Transmedia content data is received at the electronic device. The transmedia content data comprises: a plurality of transmedia content data items; linking data which define time-ordered content links between the plurality of transmedia content data items, whereby the plurality of transmedia content data items are arranged into linked transmedia content subsets comprising different groups of the transmedia content data items and different content links therebetween; a visualization model of the transmedia content data; and a hierarchical structure of the linked transmedia content subsets and clusters of linked transmedia content subsets.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: June 25, 2019
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Rebekkah Laeuchli, Max Grosse, Maria Cabral, Markus Gross, Sasha Schriber, Isa Simo
  • Patent number: 10325346
    Abstract: An image processor inputs a first image and outputs a downscaled second image by: upscaling the second image to a third image, the third image being substantially the same size as the first image and having a third resolution; associating pixels in the second image with a corresponding group of pixels from the third set of pixels; sampling a first image area at a first location of the first set of pixels to generate a first image sample; sampling a second image area of the third set of pixels to generate a second image sample; measuring similarity between the image areas; generating a perceptual image value; recursively adjusting values of the third set of pixels until the perceptual image value matches a perceptual standard value; and adjusting pixel values in the second image to a representative pixel value of each corresponding group of pixels.
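    (A minimal illustrative sketch of this perceptual downscaling loop appears after this listing.)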
    Type: Grant
    Filed: July 25, 2016
    Date of Patent: June 18, 2019
    Assignee: ETH-Zurich
    Inventors: Ahmet Cengiz Öztireli, Markus Gross
  • Publication number: 20190155829
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing scripts. A script can be parsed to identify one or more elements in the script, and various visual representations of the one or more elements and/or a scene characterized in the script can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner. Information parsed from the script can be stored in basic information elements and used to create a knowledge base.
    Type: Application
    Filed: October 9, 2018
    Publication date: May 23, 2019
    Applicant: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Isabel Simo, Justine Fung, Daniel Inversini, Carolina Ferrari, Max Grosse, Markus Gross
  • Patent number: 10297065
    Abstract: Methods, systems, and computer-readable memory are provided for determining time-varying anatomical and physiological tissue characteristics of an animation rig. For example, shape and material properties are defined for a plurality of sample configurations of the animation rig. The shape and material properties are associated with the plurality of sample configurations. An animation of the animation rig is obtained, and one or more configurations of the animation rig are determined for one or more frames of the animation. The determined one or more configurations include shape and material properties, and are determined using one or more sample configurations of the animation rig. A simulation of the animation rig is performed using the determined one or more configurations. Performing the simulation includes computing physical effects for addition to the animation of the animation rig.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: May 21, 2019
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Yeara Kozlov, Bernhard Thomaszewski, Thabo Beeler, Derek Bradley, Moritz Bächer, Markus Gross
  • Patent number: 10270945
    Abstract: There are provided systems and methods for an interactive synchronization of multiple videos. An example system includes a memory storing a first video and a second video, the first video including first video clips and the second video including second video clips. The system further includes a processor configured to calculate a histogram based on a number of features that are similar between the first video clips and the second video clips, generate a cost matrix based on the histogram, generate a first graph that includes first nodes based on the cost matrix, compute a path through the graph using the nodes, and align the first video with the second video using the path, where the path corresponds to playback speeds for the first video and the second video.
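    (A minimal illustrative sketch of the cost-matrix alignment step appears after this listing.)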
    Type: Grant
    Filed: June 19, 2014
    Date of Patent: April 23, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Oliver Wang, Christopher Schroers, Henning Zimmer, Alexander Sorkine Hornung, Markus Gross
  • Publication number: 20190107927
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing scripts. A script can be parsed to identify one or more elements in the script, and various visual representations of the one or more elements and/or a scene characterized in the script can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner. Information parsed from the script can be stored in basic information elements and used to create a knowledge base.
    Type: Application
    Filed: October 9, 2018
    Publication date: April 11, 2019
    Applicant: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Rushit Sanghrajka, Wojciech Witon, Isabel Simo, Mubbasir Kapadia, Markus Gross, Daniel Inversini, Max Grosse, Eleftheria Tsipidi
  • Publication number: 20190096094
    Abstract: The present disclosure relates to an apparatus, system and method for processing transmedia content data. More specifically, the disclosure provides for identifying and inserting one item of media content within another item of media content, e.g. inserting a video within a video, such that the first item of media content appears as part of the second item. The invention involves analysing a first visual media item to identify one or more spatial locations to insert the second visual media item within the image data of the first visual media item, detecting characteristics of the one or more identified spatial locations, transforming the second visual media item according to the detected characteristics and combining the first visual media item and second visual media item by inserting the transformed second visual media item into the first visual media item at the one or more identified spatial locations.
    Type: Application
    Filed: September 26, 2017
    Publication date: March 28, 2019
    Applicants: DISNEY ENTERPRISES, INC., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Alex Sorkine-Hornung, Simone Meier, Jean-Charles Bazin, Sasha Schriber, Markus Gross, Oliver Wang
  • Publication number: 20190098370
    Abstract: The invention relates to systems and methods for manipulating non-linearly connected transmedia content, in particular for creating, processing and/or managing non-linearly connected transmedia content and for tracking content creation and attributing transmedia content to one or more creators. Specifically, the invention involves creating a transmedia content data item by a first user and storing the transmedia content data item in a data store, along with a record indicating an association between the first user and the transmedia content data item; creating an ordered group of transmedia content data items by a second user, the ordered group comprising a pointer to the transmedia content data item of the first user; and storing the ordered group and a record associating both the first user and the second user with the ordered group in the data store.
    Type: Application
    Filed: September 26, 2017
    Publication date: March 28, 2019
    Applicants: DISNEY ENTERPRISES, INC., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Rebekkah Laeuchli, Sasha Schriber, Stephan Veen, Markus Gross, Isabel Simo, Max Grosse
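
The sketches below are editorial illustrations of a few of the techniques summarized above; none of them reproduce the claimed implementations. First, patent 10623709 (and application 20200077065) blends a colorization propagated from the previous frame with one estimated from the color key frame using a learned color fusion stage of a CNN. The sketch below stands in for that learned fusion with a simple per-pixel confidence-weighted average in a luma/chroma space; the function name, array layout, and confidence inputs are all assumptions.

# Minimal stand-in for the color-fusion idea in US 10,623,709: blend a
# colorization propagated from the previous frame with one estimated from the
# color key frame. The patent uses a learned fusion stage inside a CNN; here
# the fusion is a plain per-pixel confidence-weighted average, and all names
# (blend_colorizations, conf_prev, conf_key) are hypothetical.
import numpy as np

def blend_colorizations(luma, chroma_prev, chroma_key, conf_prev, conf_key):
    """Combine two chroma estimates for one gray-scale frame.

    luma        : (H, W)    gray-scale frame (kept unchanged).
    chroma_prev : (H, W, 2) chroma propagated from the previous frame.
    chroma_key  : (H, W, 2) chroma estimated from the color key frame.
    conf_prev, conf_key : (H, W) non-negative per-pixel confidences.
    Returns an (H, W, 3) luma-plus-chroma frame.
    """
    w = conf_prev / np.maximum(conf_prev + conf_key, 1e-6)
    chroma = w[..., None] * chroma_prev + (1.0 - w[..., None]) * chroma_key
    return np.concatenate([luma[..., None], chroma], axis=-1)

if __name__ == "__main__":
    H, W = 4, 6
    frame = blend_colorizations(
        luma=np.random.rand(H, W),
        chroma_prev=np.random.rand(H, W, 2),
        chroma_key=np.random.rand(H, W, 2),
        conf_prev=np.ones((H, W)),
        conf_key=0.5 * np.ones((H, W)),
    )
    print(frame.shape)  # (4, 6, 3)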
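
Patent 10491856 (and application 20190289257) trains its frame-interpolation CNN with a loss function having an image loss term summed with a phase loss term. The abstract does not specify either term, so the sketch below assumes plain L1 penalties over pixels and over phase coefficients of an image decomposition; the phase_weight factor and all names are hypothetical.

# Hedged sketch of the combined training objective described in US 10,491,856:
# an image loss term summed with a phase loss term. The concrete terms and the
# decomposition used by the patent are not given in the abstract, so both
# losses are assumed to be L1 here and phase_weight is a hypothetical factor.
import numpy as np

def interpolation_loss(pred_img, true_img, pred_phase, true_phase, phase_weight=1.0):
    image_loss = np.mean(np.abs(pred_img - true_img))      # pixel-space term
    phase_loss = np.mean(np.abs(pred_phase - true_phase))  # phase-coefficient term
    return image_loss + phase_weight * phase_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(interpolation_loss(rng.random((8, 8)), rng.random((8, 8)),
                             rng.random((4, 4)), rng.random((4, 4))))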
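
Patent 10466669 forwards messages to an intermediate component at a first time point within the machine cycle, temporarily stores them there, and forwards them to the central unit at a later second time point, with both time points configured at startup rather than being event-driven. The sketch below only mirrors that time-triggered store-and-forward structure; the cycle length, time points, and message format are illustrative assumptions.

# Hedged sketch of the time-triggered store-and-forward idea in US 10,466,669:
# messages are collected at an intermediate component at a first time point
# within the machine cycle and forwarded to the central unit at a later second
# time point, with both time points configured in advance rather than being
# event-driven. Cycle length, time points, and message format are assumptions.
from dataclasses import dataclass, field

@dataclass
class ForwardingComponent:
    collect_at: float              # first time point (seconds into the cycle)
    forward_at: float              # second time point; later in the same cycle
    buffer: list = field(default_factory=list)

    def tick(self, t, incoming, central_unit):
        if abs(t - self.collect_at) < 1e-9:
            self.buffer.extend(incoming)       # temporarily store messages
        if abs(t - self.forward_at) < 1e-9:
            central_unit.extend(self.buffer)   # forward to the central unit
            self.buffer.clear()

if __name__ == "__main__":
    comp, central = ForwardingComponent(collect_at=0.2, forward_at=0.8), []
    for step in range(11):                     # one 1.0 s cycle, 0.1 s resolution
        t = round(step * 0.1, 1)
        comp.tick(t, incoming=[f"msg@{t}"] if t == 0.2 else [], central_unit=central)
    print(central)  # ['msg@0.2'], delivered at the configured second time point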
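
Patent 10349127 (and application 20190261049) defines each pixel's luminance as a continuous function of the target display's dynamic range, anchored by grades for a minimum and a maximum dynamic range. The abstract does not state the form of that function, so the sketch below assumes a linear blend in log peak luminance between the two grades; the parameter names and nit values are hypothetical.

# Hedged sketch of "continuous dynamic range" (CDR) content from US 10,349,127:
# each pixel's luminance is a function of the display's peak luminance, anchored
# by grades for a minimum and a maximum dynamic range. The interpolation below
# (linear in log peak luminance) is an assumption; the patent does not fix it.
import numpy as np

def cdr_luminance(lum_min_grade, lum_max_grade, peak_nits,
                  min_peak_nits=100.0, max_peak_nits=4000.0):
    """Evaluate the continuous luminance function for a target display."""
    t = (np.log(peak_nits) - np.log(min_peak_nits)) / (
        np.log(max_peak_nits) - np.log(min_peak_nits))
    t = np.clip(t, 0.0, 1.0)
    return (1.0 - t) * lum_min_grade + t * lum_max_grade

if __name__ == "__main__":
    low = np.array([[10.0, 40.0]])    # grade for a 100-nit display
    high = np.array([[20.0, 900.0]])  # grade for a 4000-nit display
    print(cdr_luminance(low, high, peak_nits=1000.0))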
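
Patent 10325346 downscales an image by upscaling the candidate result back to the original size, comparing image areas with a perceptual measure, and recursively adjusting the downscaled pixels until the perceptual image value matches a standard value. The sketch below only mirrors that loop's structure, using a per-block mean difference as a stand-in for the patent's similarity and perceptual measures; it assumes the image dimensions are multiples of the downscaling factor.

# Hedged sketch of the optimization loop in US 10,325,346: downscale an image,
# upscale the candidate back to the original size, compare it to the original,
# and keep adjusting the downscaled pixels until the measure meets a target.
# The per-block mean-difference measure and fixed step size are stand-ins.
import numpy as np

def perceptual_downscale(image, factor=2, tol=1e-4, max_iters=200, step=0.5):
    """Downscale `image` by `factor`; assumes H and W are multiples of factor."""
    H, W = image.shape
    small = image[::factor, ::factor].copy()     # initial downscaled estimate
    for _ in range(max_iters):
        # Upscale the candidate back to the original size (nearest neighbor).
        upscaled = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
        # Stand-in perceptual measure: per-block mean difference between the
        # original image and the upscaled candidate.
        diff = (image - upscaled).reshape(H // factor, factor,
                                          W // factor, factor).mean(axis=(1, 3))
        if np.abs(diff).max() < tol:             # measure meets the target
            break
        small += step * diff                     # adjust downscaled pixels
    return small

if __name__ == "__main__":
    img = np.random.rand(8, 8)
    print(perceptual_downscale(img).shape)  # (4, 4)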
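
Patent 10270945 builds a cost matrix from feature-similarity histograms between clips of two videos, builds a node graph from that matrix, and computes a path through it that aligns the videos and yields their relative playback speeds. The sketch below substitutes a plain dynamic-programming path (dynamic-time-warping style) for the patent's graph construction and treats the cost matrix as given; the path's local slope stands in for the playback speeds mentioned in the abstract.

# Hedged sketch of the alignment step in US 10,270,945: given a cost matrix
# whose entry (i, j) scores how well clip i of the first video matches clip j
# of the second, compute a monotonic low-cost path through it. A plain
# dynamic-programming path is used in place of the patent's graph construction;
# the cost matrix is assumed to be given.
import numpy as np

def align_videos(cost):
    n, m = cost.shape
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                acc[i - 1, j] if i > 0 else np.inf,                 # advance first video
                acc[i, j - 1] if j > 0 else np.inf,                 # advance second video
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,   # advance both
            )
            acc[i, j] = cost[i, j] + best_prev
    # Backtrack the minimum-cost path from (n-1, m-1) to (0, 0).
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while (i, j) != (0, 0):
        candidates = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        candidates = [(a, b) for a, b in candidates if a >= 0 and b >= 0]
        i, j = min(candidates, key=lambda ij: acc[ij])
        path.append((i, j))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(align_videos(rng.random((5, 6))))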