Patents by Inventor Stephan Marcel MANDT
Stephan Marcel MANDT has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11544606
Abstract: Systems and methods for compressing target content are disclosed. In one embodiment, a system may include non-transient electronic storage and one or more physical computer processors. The one or more physical computer processors may be configured by machine-readable instructions to obtain the target content comprising one or more frames, wherein a given frame comprises one or more features. The one or more physical computer processors may be configured by machine-readable instructions to obtain a conditioned network. The one or more physical computer processors may be configured by machine-readable instructions to generate decoded target content by applying the conditioned network to the target content.
Type: Grant
Filed: January 22, 2019
Date of Patent: January 3, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Stephan Marcel Mandt, Christopher Schroers, Jun Han, Salvator D. Lombardo
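The abstract above describes encoding frame features and decoding them with a network conditioned on the target content. A minimal toy sketch of that pattern, assuming illustrative names and a trivial affine map standing in for the conditioned network (this is not the patented method):

```python
# Toy sketch: lossy-encode per-frame features by quantization, then
# reconstruct with a "conditioned" decoder whose parameters were fit
# to the target content. All names and values are illustrative.

def quantize(features, step=0.5):
    """Map each feature to the nearest multiple of `step` (lossy encode)."""
    return [round(f / step) * step for f in features]

def conditioned_decode(quantized, scale, bias):
    """Stand-in for a conditioned network: an affine map whose parameters
    (scale, bias) were tuned on the target content itself."""
    return [scale * q + bias for q in quantized]

frames = [[0.12, 0.49, 0.91], [0.33, 0.58, 0.77]]   # per-frame features
encoded = [quantize(f) for f in frames]
decoded = [conditioned_decode(q, scale=1.0, bias=0.0) for q in encoded]
```

Conditioning the decoder on the content being compressed is what distinguishes this setup from a generic, content-agnostic codec.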
-
Patent number: 11403531
Abstract: The disclosure provides an approach for learning latent representations of data using factorized variational autoencoders (FVAEs). The FVAE framework builds a hierarchical Bayesian matrix factorization model on top of a variational autoencoder (VAE) by learning a VAE that has a factorized representation so as to compress the embedding space and enhance generalization and interpretability. In one embodiment, an FVAE application takes as input training data comprising observations of objects, and the FVAE application learns a latent representation of such data. In order to learn the latent representation, the FVAE application is configured to use a probabilistic VAE to jointly learn a latent representation of each of the objects and a corresponding factorization across time and identity.
Type: Grant
Filed: July 19, 2017
Date of Patent: August 2, 2022
Assignee: Disney Enterprises, Inc.
Inventors: G. Peter K. Carr, Zhiwei Deng, Rajitha D. B Navarathna, Yisong Yue, Stephan Marcel Mandt
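The core idea above is a matrix-factorization-style split of each latent code across time and identity. A toy sketch of just that factorization, assuming illustrative embeddings and an elementwise product as the combination rule (the actual FVAE learns these factors jointly with a VAE):

```python
# Toy sketch of a time/identity factorization (not the patented FVAE):
# each object's latent code at time t is the elementwise product of a
# per-identity factor U[i] and a per-time factor V[t].

U = {"obj_a": [1.0, 0.5], "obj_b": [0.2, 2.0]}   # per-identity factors
V = {0: [1.0, 1.0], 1: [0.5, 2.0]}               # per-time factors

def latent(obj, t):
    """Factorized latent: identity factor times time factor."""
    return [u * v for u, v in zip(U[obj], V[t])]
```

Sharing the time factors across all identities (and vice versa) is what compresses the embedding space relative to learning an independent code per (object, time) pair.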
-
Patent number: 11335034
Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content.
Type: Grant
Filed: January 16, 2019
Date of Patent: May 17, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
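The abstract describes a three-step pattern: convolve to get an initial estimate, match a nearby set of pixels, then refine the estimate with the match. A 1-D toy sketch of that pattern, assuming an averaging "convolution", a nearest-value match, and an equal-weight blend (all illustrative choices, not the patented method):

```python
# Toy 1-D sketch of the predict -> match -> refine pattern.

def convolve_estimate(pixels, i):
    """Initial estimate of pixel i: average of its two neighbors."""
    return (pixels[i - 1] + pixels[i + 1]) / 2

def match_nearby(pixels, i, radius=3):
    """Find the pixel within `radius` of i whose value is closest
    to the convolution estimate."""
    est = convolve_estimate(pixels, i)
    lo, hi = max(1, i - radius), min(len(pixels) - 1, i + radius + 1)
    candidates = [j for j in range(lo, hi) if j != i]
    return min(candidates, key=lambda j: abs(pixels[j] - est))

def refine(pixels, i, alpha=0.5):
    """Blend the convolution estimate with the matched pixel's value."""
    est = convolve_estimate(pixels, i)
    j = match_nearby(pixels, i)
    return alpha * est + (1 - alpha) * pixels[j]

pixels = [1.0, 2.0, 0.0, 4.0, 5.0]   # index 2 is the target to predict
```

Restricting the match to a window ("within a distance from the target set of pixels") keeps the search local and cheap.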
-
Patent number: 11238341
Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
Type: Grant
Filed: June 29, 2018
Date of Patent: February 1, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Stephan Marcel Mandt, Yingzhen Li
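The key structural idea above is splitting a sequence's code into one static variable f and per-step dynamic variables z1:T. A deliberately tiny sketch of that split, assuming the sequence mean stands in for f and per-step residuals stand in for z1:T (the patented models learn this split with neural encoders and decoders):

```python
# Toy sketch of the static/dynamic split: f captures what is shared
# across the sequence, z_1..z_T capture per-step variation, and the
# decoder recombines them.

def encode(seq):
    f = sum(seq) / len(seq)      # static aspect, shared across the sequence
    z = [x - f for x in seq]     # dynamic aspects, one per time step
    return f, z

def decode(f, z):
    return [f + zt for zt in z]

f, z = encode([2.0, 4.0, 6.0])
```

Because f is stored once per sequence rather than once per step, this factorization is what enables the higher compression rate the abstract refers to.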
-
Patent number: 11205121
Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
Type: Grant
Filed: June 20, 2018
Date of Patent: December 21, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Stephan Marcel Mandt, Yingzhen Li
-
Patent number: 11068658
Abstract: Systems, methods, and articles of manufacture to perform an operation comprising deriving, based on a corpus of electronic text, a machine learning data model that associates words with corresponding usage contexts over a window of time, according to a diffusion process, wherein the machine learning data model comprises a plurality of skip-gram models, wherein each skip-gram model comprises a word embedding vector and a context embedding vector for a respective time step associated with the respective skip-gram model, generating a smoothed model by applying a variational inference operation over the machine learning data model, and identifying, based on the smoothed model and the corpus of electronic text, a change in a semantic use of a word over at least a portion of the window of time.
Type: Grant
Filed: December 1, 2017
Date of Patent: July 20, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Stephan Marcel Mandt, Robert Bamler
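The abstract describes per-time-step word embeddings linked by a diffusion process, smoothed, and then inspected for semantic drift. A one-dimensional toy sketch of that pipeline, assuming a simple moving average as a crude stand-in for the variational smoothing step (the patented method uses variational inference over skip-gram models):

```python
# Toy sketch: one embedding dimension of one word, tracked over 5 time
# steps. Smoothing averages each step with its neighbors; semantic
# change is read off as drift between the first and last step.

trajectory = [0.0, 0.2, 0.1, 0.9, 1.0]

def smooth(traj):
    """Moving-average stand-in for variational smoothing."""
    out = [traj[0]]
    for t in range(1, len(traj) - 1):
        out.append((traj[t - 1] + traj[t] + traj[t + 1]) / 3)
    out.append(traj[-1])
    return out

def semantic_drift(traj):
    """Net change of the embedding over the window of time."""
    return traj[-1] - traj[0]
```

Smoothing matters because raw per-step skip-gram fits are noisy; tying the steps together through a diffusion prior separates genuine semantic change from estimation noise.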
-
Patent number: 10997476
Abstract: There are provided systems and methods for performing automated content evaluation. In one implementation, the system includes a hardware processor and a system memory storing a software code including a predictive model trained based on an audience response to training content. The hardware processor executes the software code to receive images, each image including facial landmarks of an audience member viewing the content during its duration, and for each image, transforms the facial landmarks to a lower dimensional facial representation, resulting in multiple lower dimensional facial representations of each audience member. For each of a subset of the lower dimensional facial representations of each audience member, the software code utilizes the predictive model to predict one or more responses to the content, resulting in multiple predictions for each audience member, and classifies one or more time segment(s) in the duration of the content based on an aggregate of the predictions.
Type: Grant
Filed: May 8, 2019
Date of Patent: May 4, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Salvator D. Lombardo, Cristina Segalin, Lei Chen, Rajitha D. Navarathna, Stephan Marcel Mandt
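The pipeline above runs landmarks through a dimensionality reduction, a per-frame predictive model, and an aggregation step that labels time segments. A toy sketch of that flow, assuming illustrative features, a threshold rule standing in for the trained predictive model, and majority vote as the aggregate (none of these choices come from the patent):

```python
# Toy sketch: facial landmarks -> low-dimensional representation ->
# per-frame response prediction -> majority-vote segment label.

def lower_dim(landmarks):
    """Collapse (x, y) landmarks to a 2-D representation: mean x, mean y."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def predict_response(rep, threshold=0.5):
    """Stand-in predictive model: a simple threshold on one feature."""
    return "engaged" if rep[1] > threshold else "neutral"

def classify_segment(predictions):
    """Aggregate per-frame predictions for a time segment by majority vote."""
    return max(set(predictions), key=predictions.count)
```

Reducing the landmarks to a low-dimensional representation before prediction is what keeps the per-frame model small and the aggregation over many audience members tractable.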
-
Publication number: 20200226797
Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content.
Type: Application
Filed: January 16, 2019
Publication date: July 16, 2020
Applicant: Disney Enterprises, Inc.
Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
-
Publication number: 20200151524
Abstract: There are provided systems and methods for performing automated content evaluation. In one implementation, the system includes a hardware processor and a system memory storing a software code including a predictive model trained based on an audience response to training content. The hardware processor executes the software code to receive images, each image including facial landmarks of an audience member viewing the content during its duration, and for each image, transforms the facial landmarks to a lower dimensional facial representation, resulting in multiple lower dimensional facial representations of each audience member. For each of a subset of the lower dimensional facial representations of each audience member, the software code utilizes the predictive model to predict one or more responses to the content, resulting in multiple predictions for each audience member, and classifies one or more time segment(s) in the duration of the content based on an aggregate of the predictions.
Type: Application
Filed: May 8, 2019
Publication date: May 14, 2020
Inventors: Salvator D. Lombardo, Cristina Segalin, Lei Chen, Rajitha D. Navarathna, Stephan Marcel Mandt
-
Publication number: 20200090069
Abstract: Systems and methods for compressing target content are disclosed. In one embodiment, a system may include non-transient electronic storage and one or more physical computer processors. The one or more physical computer processors may be configured by machine-readable instructions to obtain the target content comprising one or more frames, wherein a given frame comprises one or more features. The one or more physical computer processors may be configured by machine-readable instructions to obtain a conditioned network. The one or more physical computer processors may be configured by machine-readable instructions to generate decoded target content by applying the conditioned network to the target content.
Type: Application
Filed: January 22, 2019
Publication date: March 19, 2020
Applicant: Disney Enterprises, Inc.
Inventors: Stephan Marcel Mandt, Christopher Schroers, Jun Han, Salvator D. Lombardo
-
Publication number: 20190393903
Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
Type: Application
Filed: June 29, 2018
Publication date: December 26, 2019
Inventors: Stephan Marcel MANDT, Yingzhen LI
-
Publication number: 20190392302
Abstract: Embodiments include applying neural network technologies to encoding/decoding technologies by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
Type: Application
Filed: June 20, 2018
Publication date: December 26, 2019
Inventors: Stephan Marcel MANDT, Yingzhen LI
-
Publication number: 20190026631
Abstract: The disclosure provides an approach for learning latent representations of data using factorized variational autoencoders (FVAEs). The FVAE framework builds a hierarchical Bayesian matrix factorization model on top of a variational autoencoder (VAE) by learning a VAE that has a factorized representation so as to compress the embedding space and enhance generalization and interpretability. In one embodiment, an FVAE application takes as input training data comprising observations of objects, and the FVAE application learns a latent representation of such data. In order to learn the latent representation, the FVAE application is configured to use a probabilistic VAE to jointly learn a latent representation of each of the objects and a corresponding factorization across time and identity.
Type: Application
Filed: July 19, 2017
Publication date: January 24, 2019
Inventors: G. Peter K. CARR, Zhiwei DENG, Rajitha D.B NAVARATHNA, Yisong YUE, Stephan Marcel MANDT
-
Publication number: 20180157644
Abstract: Systems, methods, and articles of manufacture to perform an operation comprising deriving, based on a corpus of electronic text, a machine learning data model that associates words with corresponding usage contexts over a window of time, according to a diffusion process, wherein the machine learning data model comprises a plurality of skip-gram models, wherein each skip-gram model comprises a word embedding vector and a context embedding vector for a respective time step associated with the respective skip-gram model, generating a smoothed model by applying a variational inference operation over the machine learning data model, and identifying, based on the smoothed model and the corpus of electronic text, a change in a semantic use of a word over at least a portion of the window of time.
Type: Application
Filed: December 1, 2017
Publication date: June 7, 2018
Inventors: Stephan Marcel MANDT, Robert BAMLER