Patents by Inventor Stephan Marcel MANDT

Stephan Marcel MANDT has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11544606
    Abstract: Systems and methods for compressing target content are disclosed. In one embodiment, a system may include non-transient electronic storage and one or more physical computer processors. The one or more physical computer processors may be configured by machine-readable instructions to obtain the target content comprising one or more frames, wherein a given frame comprises one or more features. The one or more physical computer processors may be configured by machine-readable instructions to obtain a conditioned network. The one or more physical computer processors may be configured by machine-readable instructions to generate decoded target content by applying the conditioned network to the target content.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: January 3, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Stephan Marcel Mandt, Christopher Schroers, Jun Han, Salvator D. Lombardo
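The claimed pipeline (obtain target content as frames of features, obtain a conditioned network, apply it to generate decoded content) can be sketched roughly as below. Everything here, including the simple linear "network," is a hypothetical stand-in for illustration, not the patented implementation:

```python
import numpy as np

def obtain_target_content(num_frames=4, num_features=8, seed=0):
    # Stand-in for target content: frames, each with a set of features.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num_frames, num_features))

def obtain_conditioned_network(num_features=8, seed=1):
    # Stand-in for a trained, conditioned network: a near-identity linear map.
    rng = np.random.default_rng(seed)
    return np.eye(num_features) + 0.1 * rng.standard_normal((num_features, num_features))

def decode(content, network):
    # "Generate decoded target content by applying the conditioned network."
    return content @ network

frames = obtain_target_content()
net = obtain_conditioned_network()
decoded = decode(frames, net)
print(decoded.shape)  # (4, 8)
```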
  • Patent number: 11403531
    Abstract: The disclosure provides an approach for learning latent representations of data using factorized variational autoencoders (FVAEs). The FVAE framework builds a hierarchical Bayesian matrix factorization model on top of a variational autoencoder (VAE) by learning a VAE that has a factorized representation so as to compress the embedding space and enhance generalization and interpretability. In one embodiment, an FVAE application takes as input training data comprising observations of objects, and the FVAE application learns a latent representation of such data. In order to learn the latent representation, the FVAE application is configured to use a probabilistic VAE to jointly learn a latent representation of each of the objects and a corresponding factorization across time and identity.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: August 2, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: G. Peter K. Carr, Zhiwei Deng, Rajitha D. B. Navarathna, Yisong Yue, Stephan Marcel Mandt
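The core idea of the FVAE, a latent code factorized across time and identity so that T×N embeddings are compressed into T·K + N·K factor parameters, can be sketched as follows. The random factors and decoder weights are illustrative stand-ins, not the learned model:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K, D = 5, 3, 2, 4   # time steps, identities, factor dim, observation dim

# Factorized latent: the code for identity i at time t is the elementwise
# product of a time factor and an identity factor (the "factorization
# across time and identity" in the abstract).
U = rng.standard_normal((T, K))          # factors across time
V = rng.standard_normal((N, K))          # factors across identity
Z = U[:, None, :] * V[None, :, :]        # (T, N, K) latent representation

# Decoder sketch: map each latent code back to an observation.
dec = rng.standard_normal((K, D))
X = np.tanh(Z @ dec)                     # (T, N, D) reconstructed observations
print(Z.shape, X.shape)
```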
  • Patent number: 11335034
    Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: May 17, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
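The three claimed steps (convolve the target pixels into an estimate, match a second set of pixels within a search distance, refine the estimate using the match) can be sketched with numpy. The box blur, search radius, and averaging refinement are hypothetical simplifications of the patented method:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((16, 16))
y, x, s = 6, 6, 4                      # target patch location and size
target = img[y:y + s, x:x + s]

# Step 1: convolve the target set of pixels to generate an estimate
# (a 3x3 box blur standing in for the actual convolution).
pad = np.pad(target, 1, mode="edge")
estimate = sum(pad[dy:dy + s, dx:dx + s]
               for dy in range(3) for dx in range(3)) / 9.0

# Step 2: match a second set of pixels within a distance of the target.
best, best_err = None, np.inf
for oy in range(max(0, y - 4), min(16 - s, y + 4) + 1):
    for ox in range(max(0, x - 4), min(16 - s, x + 4) + 1):
        if (oy, ox) == (y, x):
            continue
        cand = img[oy:oy + s, ox:ox + s]
        err = np.sum((cand - estimate) ** 2)
        if err < best_err:
            best, best_err = cand, err

# Step 3: refine the estimate using the matched second set of pixels.
refined = 0.5 * (estimate + best)
print(refined.shape)  # (4, 4)
```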
  • Patent number: 11238341
    Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model that approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: February 1, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Stephan Marcel Mandt, Yingzhen Li
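The static/dynamic split described in the abstract, a single variable f for sequence-wide aspects and per-step variables z1:T for the dynamics, can be sketched as below. The random weights stand in for parameters tuned against the common objective; this is an illustration, not the patented encoder/decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, Fd, Zd = 6, 8, 3, 2   # sequence length, obs dim, static and dynamic latent dims

x = rng.standard_normal((T, D))          # a runtime sequence

# Encoder sketch: f captures static aspects shared across the sequence,
# z[1:T] captures the per-step dynamics.
Wf = rng.standard_normal((D, Fd))
Wz = rng.standard_normal((D, Zd))
f = np.tanh(x.mean(axis=0) @ Wf)         # one static code per sequence
z = np.tanh(x @ Wz)                      # one dynamic code per time step

# Decoder sketch: reconstruct each step from (f, z_t).
Wd = rng.standard_normal((Fd + Zd, D))
x_hat = np.concatenate([np.tile(f, (T, 1)), z], axis=1) @ Wd
print(f.shape, z.shape, x_hat.shape)
```

Because f is stored once per sequence rather than once per step, only the small z_t codes scale with sequence length, which is the intuition behind the higher compression rate the abstract claims.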
  • Patent number: 11205121
    Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model that approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: December 21, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Stephan Marcel Mandt, Yingzhen Li
  • Patent number: 11068658
    Abstract: Systems, methods, and articles of manufacture to perform an operation comprising deriving, based on a corpus of electronic text, a machine learning data model that associates words with corresponding usage contexts over a window of time, according to a diffusion process, wherein the machine learning data model comprises a plurality of skip-gram models, wherein each skip-gram model comprises a word embedding vector and a context embedding vector for a respective time step associated with the respective skip-gram model, generating a smoothed model by applying a variational inference operation over the machine learning data model, and identifying, based on the smoothed model and the corpus of electronic text, a change in a semantic use of a word over at least a portion of the window of time.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: July 20, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Stephan Marcel Mandt, Robert Bamler
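The diffusion process linking skip-gram embeddings across time steps, and the detection of semantic change from the smoothed trajectories, can be sketched as follows. The random-walk prior matches the abstract's description, but the moving-average "smoothing" is a crude stand-in for the variational inference operation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, V, D = 10, 5, 3   # time steps, vocabulary size, embedding dim

# Diffusion (random-walk) prior: each time step's word embedding matrix
# drifts from the previous one, one skip-gram model per time step.
sigma = 0.1
word_emb = np.zeros((T, V, D))
word_emb[0] = rng.standard_normal((V, D))
for t in range(1, T):
    word_emb[t] = word_emb[t - 1] + sigma * rng.standard_normal((V, D))

# Crude stand-in for variational smoothing: a moving average over time.
kernel = np.ones(3) / 3.0
smoothed = np.stack([
    np.convolve(word_emb[:, v, d], kernel, mode="same")
    for v in range(V) for d in range(D)
], axis=1).reshape(T, V, D)

# Semantic change of each word = displacement of its smoothed embedding
# over the window of time.
change = np.linalg.norm(smoothed[-1] - smoothed[0], axis=1)
print(change.shape)  # (5,)
```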
  • Patent number: 10997476
    Abstract: There are provided systems and methods for performing automated content evaluation. In one implementation, the system includes a hardware processor and a system memory storing a software code including a predictive model trained based on an audience response to training content. The hardware processor executes the software code to receive images, each image including facial landmarks of an audience member viewing the content during its duration, and for each image, transforms the facial landmarks to a lower dimensional facial representation, resulting in multiple lower dimensional facial representations of each audience member. For each of a subset of the lower dimensional facial representations of each audience member, the software code utilizes the predictive model to predict one or more responses to the content, resulting in multiple predictions for each audience member, and classifies one or more time segment(s) in the duration of the content based on an aggregate of the predictions.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: May 4, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Salvator D. Lombardo, Cristina Segalin, Lei Chen, Rajitha D. Navarathna, Stephan Marcel Mandt
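The claimed flow (facial landmarks per image, a lower-dimensional facial representation, per-frame predictions from a predictive model, and segment classification from the aggregate) can be sketched as below. The random projection and logistic scorer are hypothetical stand-ins for the trained transform and predictive model:

```python
import numpy as np

rng = np.random.default_rng(0)
members, frames, landmarks = 4, 20, 68 * 2   # audience size, images per member, 68 (x, y) landmarks

# Facial landmarks extracted from each audience member's images.
X = rng.standard_normal((members, frames, landmarks))

# Transform landmarks to a lower-dimensional facial representation
# (a random projection standing in for the learned transform).
P = rng.standard_normal((landmarks, 8))
low_dim = X @ P                                # (members, frames, 8)

# Hypothetical predictive model: per-frame response score in [0, 1].
w = rng.standard_normal(8)
scores = 1.0 / (1.0 + np.exp(-(low_dim @ w)))  # (members, frames)

# Classify each time segment from the aggregate of all members' predictions.
seg = scores.mean(axis=0).reshape(4, 5).mean(axis=1)  # 4 segments of 5 frames
labels = (seg > 0.5).astype(int)
print(labels.shape)  # (4,)
```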
  • Publication number: 20200226797
    Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content.
    Type: Application
    Filed: January 16, 2019
    Publication date: July 16, 2020
    Applicant: Disney Enterprises, Inc.
    Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
  • Publication number: 20200151524
    Abstract: There are provided systems and methods for performing automated content evaluation. In one implementation, the system includes a hardware processor and a system memory storing a software code including a predictive model trained based on an audience response to training content. The hardware processor executes the software code to receive images, each image including facial landmarks of an audience member viewing the content during its duration, and for each image, transforms the facial landmarks to a lower dimensional facial representation, resulting in multiple lower dimensional facial representations of each audience member. For each of a subset of the lower dimensional facial representations of each audience member, the software code utilizes the predictive model to predict one or more responses to the content, resulting in multiple predictions for each audience member, and classifies one or more time segment(s) in the duration of the content based on an aggregate of the predictions.
    Type: Application
    Filed: May 8, 2019
    Publication date: May 14, 2020
    Inventors: Salvator D. Lombardo, Cristina Segalin, Lei Chen, Rajitha D. Navarathna, Stephan Marcel Mandt
  • Publication number: 20200090069
    Abstract: Systems and methods for compressing target content are disclosed. In one embodiment, a system may include non-transient electronic storage and one or more physical computer processors. The one or more physical computer processors may be configured by machine-readable instructions to obtain the target content comprising one or more frames, wherein a given frame comprises one or more features. The one or more physical computer processors may be configured by machine-readable instructions to obtain a conditioned network. The one or more physical computer processors may be configured by machine-readable instructions to generate decoded target content by applying the conditioned network to the target content.
    Type: Application
    Filed: January 22, 2019
    Publication date: March 19, 2020
    Applicant: Disney Enterprises, Inc.
    Inventors: Stephan Marcel Mandt, Christopher Schroers, Jun Han, Salvator D. Lombardo
  • Publication number: 20190393903
    Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model that approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Application
    Filed: June 29, 2018
    Publication date: December 26, 2019
    Inventors: Stephan Marcel MANDT, Yingzhen LI
  • Publication number: 20190392302
    Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model that approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Application
    Filed: June 20, 2018
    Publication date: December 26, 2019
    Inventors: Stephan Marcel MANDT, Yingzhen LI
  • Publication number: 20190026631
    Abstract: The disclosure provides an approach for learning latent representations of data using factorized variational autoencoders (FVAEs). The FVAE framework builds a hierarchical Bayesian matrix factorization model on top of a variational autoencoder (VAE) by learning a VAE that has a factorized representation so as to compress the embedding space and enhance generalization and interpretability. In one embodiment, an FVAE application takes as input training data comprising observations of objects, and the FVAE application learns a latent representation of such data. In order to learn the latent representation, the FVAE application is configured to use a probabilistic VAE to jointly learn a latent representation of each of the objects and a corresponding factorization across time and identity.
    Type: Application
    Filed: July 19, 2017
    Publication date: January 24, 2019
    Inventors: G. Peter K. CARR, Zhiwei DENG, Rajitha D. B. NAVARATHNA, Yisong YUE, Stephan Marcel MANDT
  • Publication number: 20180157644
    Abstract: Systems, methods, and articles of manufacture to perform an operation comprising deriving, based on a corpus of electronic text, a machine learning data model that associates words with corresponding usage contexts over a window of time, according to a diffusion process, wherein the machine learning data model comprises a plurality of skip-gram models, wherein each skip-gram model comprises a word embedding vector and a context embedding vector for a respective time step associated with the respective skip-gram model, generating a smoothed model by applying a variational inference operation over the machine learning data model, and identifying, based on the smoothed model and the corpus of electronic text, a change in a semantic use of a word over at least a portion of the window of time.
    Type: Application
    Filed: December 1, 2017
    Publication date: June 7, 2018
    Inventors: Stephan Marcel MANDT, Robert BAMLER