Patents by Inventor Yingzhen LI

Yingzhen LI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230409674
    Abstract: A computer-implemented method includes receiving an incorrect prediction output by a trained machine learning model, which has been trained using a plurality of training data items. The method identifies a training data item that caused the incorrect prediction by determining the impact on the trained model's performance of removing that item from the plurality of training data items. The trained model can then be updated to remove the effect of the identified training data item, allowing the model to be corrected automatically in view of poor-quality training data.
    Type: Application
    Filed: September 29, 2022
    Publication date: December 21, 2023
    Inventors: Ryutaro TANNO, Aditya NORI, Melanie Fernandez PRADIER, Yingzhen LI
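The abstract above describes identifying a harmful training point by the change in model performance when that point is removed. A minimal leave-one-out sketch of that idea (not the patented method; the least-squares model, data, and function names are illustrative assumptions):

```python
# Estimate each training point's influence by leave-one-out retraining of
# a 1-D least-squares model, then drop the most harmful point.

def fit_slope(data):
    """Least-squares slope through the origin for (x, y) pairs."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den

def loss(slope, data):
    return sum((y - slope * x) ** 2 for x, y in data) / len(data)

def most_harmful_point(train, val):
    """Index of the training point whose removal most reduces validation loss."""
    base = loss(fit_slope(train), val)
    impacts = []
    for i in range(len(train)):
        rest = train[:i] + train[i + 1:]
        impacts.append(base - loss(fit_slope(rest), val))
    return max(range(len(train)), key=lambda i: impacts[i])

# Clean data follows y = 2x; one mislabelled point corrupts the fit.
train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, -8.0)]
val = [(5.0, 10.0), (6.0, 12.0)]
bad = most_harmful_point(train, val)
corrected = train[:bad] + train[bad + 1:]   # "update" the model's training set
```

Retraining on `corrected` removes the effect of the identified point, which is the correction step the abstract refers to.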
  • Publication number: 20230325667
    Abstract: A method comprising: receiving observed data points each comprising a vector of feature values, wherein for each data point, the respective feature values are values of different features of a feature vector. Each observed data point represents a respective observation of a ground truth as observed in the form of the respective values of the feature vector. The method further comprises learning parameters of a machine-learning model based on the observed data points. The machine-learning model comprises one or more statistical models arranged to model a causal relationship between the feature vector and a latent vector, a classification, and a manipulation vector. The manipulation vector represents an effect of potential manipulations occurring between the ground truth and the observation thereof as observed via the feature vector. The learning comprises learning parameters of the one or more statistical models to map between the feature vector, latent vector, classification and manipulation vector.
    Type: Application
    Filed: June 12, 2023
    Publication date: October 12, 2023
    Inventors: Cheng ZHANG, Yingzhen LI
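The abstract above concerns learning a map between a feature vector, a latent vector, and a manipulation vector. A toy linear stand-in (not the patented model; the matrices and their names are illustrative assumptions) shows how such a joint map can be recovered when features are generated from latent and manipulation components:

```python
import numpy as np

# Features are generated from a latent vector plus a manipulation vector;
# we recover both generative weight matrices jointly by least squares.
rng = np.random.default_rng(0)
W_z = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])  # latent -> features
W_m = np.array([[1.0], [0.0], [0.5]])                  # manipulation -> features

Z = rng.normal(size=(200, 2))   # latent vectors
M = rng.normal(size=(200, 1))   # manipulation vectors
X = Z @ W_z.T + M @ W_m.T       # observed feature vectors (noise-free)

# Learn the joint map [W_z | W_m] from (z, m) -> x by least squares.
ZM = np.hstack([Z, M])
W_hat, *_ = np.linalg.lstsq(ZM, X, rcond=None)
W_z_hat, W_m_hat = W_hat[:2].T, W_hat[2:].T
```

Separating the recovered manipulation weights from the latent weights is what lets the model account for manipulations occurring between the ground truth and its observation.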
  • Patent number: 11715004
    Abstract: A method comprising: receiving observed data points each comprising a vector of feature values, wherein for each data point, the respective feature values are values of different features of a feature vector. Each observed data point represents a respective observation of a ground truth as observed in the form of the respective values of the feature vector. The method further comprises learning parameters of a machine-learning model based on the observed data points. The machine-learning model comprises one or more statistical models arranged to model a causal relationship between the feature vector and a latent vector, a classification, and a manipulation vector. The manipulation vector represents an effect of potential manipulations occurring between the ground truth and the observation thereof as observed via the feature vector. The learning comprises learning parameters of the one or more statistical models to map between the feature vector, latent vector, classification and manipulation vector.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: August 1, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cheng Zhang, Yingzhen Li
  • Publication number: 20230111659
    Abstract: An apparatus has a memory storing a reinforcement learning policy with an optimization component and a data collection component. The apparatus has a regularization component which applies regularization selectively between the optimization component and the data collection component of the reinforcement learning policy. A processor carries out a reinforcement learning process by: triggering execution of an agent according to the policy and with respect to a first task; observing values of variables comprising an observation space of the agent and an action of the agent; and updating the policy using reinforcement learning according to the observed values, taking the regularization into account.
    Type: Application
    Filed: November 2, 2022
    Publication date: April 13, 2023
    Inventors: Sam Michael DEVLIN, Maximilian IGL, Kamil Andrzej CIOSEK, Yingzhen LI, Sebastian TSCHIATSCHEK, Cheng ZHANG, Katja HOFMANN
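The key idea in the abstract above is applying regularization selectively: to the optimization path of the policy but not the data-collection path. A schematic sketch (not the patented apparatus; the scalar policy, noise regularizer, and names are illustrative assumptions):

```python
import random

# Regularization (here, additive weight noise) is applied only on the
# optimization path; the data-collection path acts without it.
random.seed(0)

class Policy:
    def __init__(self):
        self.w = 0.0  # single scalar parameter for illustration

    def act(self, obs):
        """Data-collection component: no regularization."""
        return self.w * obs

    def act_regularized(self, obs, noise=0.1):
        """Optimization component: noise-regularized."""
        return (self.w + random.gauss(0.0, noise)) * obs

    def update(self, obs, target, lr=0.1):
        # Gradient step on squared error, computed through the noisy path.
        pred = self.act_regularized(obs)
        self.w -= lr * 2 * (pred - target) * obs

policy = Policy()
for _ in range(500):
    obs = random.uniform(0.5, 1.5)
    policy.update(obs, target=3.0 * obs)   # true mapping: action = 3 * obs
```

The split keeps exploration-time behaviour clean while the regularizer still shapes the learning updates.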
  • Patent number: 11526812
    Abstract: An apparatus has a memory storing a reinforcement learning policy with an optimization component and a data collection component. The apparatus has a regularization component which applies regularization selectively between the optimization component and the data collection component of the reinforcement learning policy. A processor carries out a reinforcement learning process by: triggering execution of an agent according to the policy and with respect to a first task; observing values of variables comprising an observation space of the agent and an action of the agent; and updating the policy using reinforcement learning according to the observed values, taking the regularization into account.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: December 13, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sam Michael Devlin, Maximilian Igl, Kamil Andrzej Ciosek, Yingzhen Li, Sebastian Tschiatschek, Cheng Zhang, Katja Hofmann
  • Publication number: 20220147818
    Abstract: A computer-implemented method of training an auxiliary machine learning model to predict a set of new parameters of a primary machine learning model, wherein the primary model is configured to transform from an observed subset of a set of real-world features to a predicted version of the set of real-world features.
    Type: Application
    Filed: November 11, 2020
    Publication date: May 12, 2022
    Inventors: Cheng ZHANG, Angus LAMB, Evgeny Sergeevich SAVELIEV, Yingzhen LI, Camilla LONGDEN, Pashmina CAMERON, Sebastian TSCHIATSCHEK, Jose Miguel Hernández LOBATO, Richard TURNER
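The abstract above describes an auxiliary model that outputs the parameters of a primary model, where the primary model maps an observed subset of features to a prediction of the full feature vector. A minimal sketch of that two-model structure (not the patented training procedure; the mean-imputation primary model and all names are illustrative assumptions):

```python
# An "auxiliary" routine produces the parameters of a primary imputation
# model; the primary model fills in unobserved features of a partially
# observed vector.

def auxiliary_fit(rows):
    """Output primary-model parameters: here, per-feature means of the data."""
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def primary_predict(params, observed):
    """Transform an observed subset into a predicted full feature vector."""
    return [observed.get(j, params[j]) for j in range(len(params))]

rows = [[1.0, 10.0, 100.0], [3.0, 30.0, 300.0]]
params = auxiliary_fit(rows)              # parameters for the primary model
full = primary_predict(params, {0: 5.0})  # feature 0 observed, rest predicted
```

In the patented setting the auxiliary model is itself trained to predict these parameters; here the fit is a fixed rule purely to show the division of labour between the two models.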
  • Patent number: 11238341
    Abstract: Embodiments apply neural network technologies to encoding/decoding by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: February 1, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Stephan Marcel Mandt, Yingzhen Li
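The abstract above splits a sequence's representation into a static variable f and dynamic variables z1:T. An illustrative sketch of that split (not the patented codec: the "model" here is a fixed mean/residual transform rather than a trained neural network, and all names are assumptions):

```python
# A sequence is encoded into a static variable f (its mean) and dynamic
# variables z_1..z_T (per-step residuals), then decoded by recombining them.

def encode(seq):
    f = sum(seq) / len(seq)    # static aspect of the whole sequence
    z = [x - f for x in seq]   # dynamic aspects z_1..z_T
    return f, z

def decode(f, z):
    return [f + z_t for z_t in z]

seq = [4.0, 6.0, 5.0, 9.0]
f, z = encode(seq)
recon = decode(f, z)
```

Because f is stored once per sequence while only the z_t vary per step, such a factorization is what makes the higher compression rates mentioned in the abstract plausible.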
  • Publication number: 20210406765
    Abstract: A computer-implemented method of training a model comprising a sequence of stages. Each stage in the sequence comprises a variational autoencoder (VAE) with a respective first encoder arranged to encode a respective subset of the real-world features into a respective latent space representation, and a respective first decoder arranged to decode from the respective latent space representation to a respective decoded version of the respective set of real-world features. At least each stage but the last further comprises a respective second decoder arranged to decode from the respective latent space representation to predict one or more respective actions. Each successive stage following the first further comprises a sequential network arranged to transform the latent representation of the preceding stage into the latent space representation of that successive stage.
    Type: Application
    Filed: August 25, 2020
    Publication date: December 30, 2021
    Inventors: Cheng ZHANG, Yingzhen LI, Sebastian TSCHIATSCHEK, Haiyan YIN, Jooyeon KIM
  • Patent number: 11205121
    Abstract: Embodiments apply neural network technologies to encoding/decoding by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: December 21, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Stephan Marcel Mandt, Yingzhen Li
  • Publication number: 20210097445
    Abstract: An apparatus has a memory storing a reinforcement learning policy with an optimization component and a data collection component. The apparatus has a regularization component which applies regularization selectively between the optimization component and the data collection component of the reinforcement learning policy. A processor carries out a reinforcement learning process by: triggering execution of an agent according to the policy and with respect to a first task; observing values of variables comprising an observation space of the agent and an action of the agent; and updating the policy using reinforcement learning according to the observed values, taking the regularization into account.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 1, 2021
    Inventors: Sam Michael DEVLIN, Maximilian IGL, Kamil Andrzej CIOSEK, Yingzhen LI, Sebastian TSCHIATSCHEK, Cheng ZHANG, Katja HOFMANN
  • Publication number: 20200394512
    Abstract: A method comprising: receiving observed data points each comprising a vector of feature values, wherein for each data point, the respective feature values are values of different features of a feature vector. Each observed data point represents a respective observation of a ground truth as observed in the form of the respective values of the feature vector. The method further comprises learning parameters of a machine-learning model based on the observed data points. The machine-learning model comprises one or more statistical models arranged to model a causal relationship between the feature vector and a latent vector, a classification, and a manipulation vector. The manipulation vector represents an effect of potential manipulations occurring between the ground truth and the observation thereof as observed via the feature vector. The learning comprises learning parameters of the one or more statistical models to map between the feature vector, latent vector, classification and manipulation vector.
    Type: Application
    Filed: July 10, 2019
    Publication date: December 17, 2020
    Inventors: Cheng ZHANG, Yingzhen LI
  • Publication number: 20200349441
    Abstract: A method of operating a neural network, comprising: at each input node of an input layer, weighting a respective input element received by that node by applying a first class of probability distribution, thereby generating a respective set of output parameters describing an output probability distribution; and from each input node, outputting the respective set of output parameters to one or more nodes in a next, hidden layer of the network, thereby propagating the respective set of output parameters through the hidden layers to an output layer; the propagating comprising, at one or more nodes of at least one hidden layer, combining the sets of input parameters and weighting the combination by applying a second class of probability distribution, thereby generating a respective set of output parameters describing an output probability distribution, wherein the first class of probability distribution is more sparsity inducing than the second class of probability distribution.
    Type: Application
    Filed: July 1, 2019
    Publication date: November 5, 2020
    Inventors: Cheng ZHANG, Yordan KIRILOV ZAYKOV, Yingzhen LI, Jose Miguel HERNANDEZ LOBATO, Anna-Lena POPKES, Hiske Catharina OVERWEG
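The abstract above has each node propagate the parameters of an output probability distribution rather than a point value. A toy sketch of one such propagation step (not the patented network: the weight distributions here are fixed rather than learned, and the moment-matching rule and names are illustrative assumptions):

```python
# Each input and each connection carries a Gaussian (mean, variance) pair;
# a linear node combines them and outputs the resulting (mean, variance).

def propagate(inputs, weights):
    """Moment-match the sum of independent input*weight products.

    inputs:  list of (mean, var) pairs, one per input node
    weights: list of (mean, var) pairs, one per connection
    Returns the output node's (mean, var), using
    Var(X*W) = var_x*mean_w**2 + var_w*mean_x**2 + var_x*var_w
    for independent X and W.
    """
    mean = sum(im * wm for (im, _), (wm, _) in zip(inputs, weights))
    var = sum(iv * wm ** 2 + wv * im ** 2 + iv * wv
              for (im, iv), (wm, wv) in zip(inputs, weights))
    return mean, var

inputs = [(1.0, 0.5), (2.0, 0.25)]
# A sparsity-inducing prior concentrates weight mass at zero: here the
# first connection has effectively been pruned to (0, 0) by such a prior.
weights = [(0.0, 0.0), (3.0, 0.1)]
out_mean, out_var = propagate(inputs, weights)
```

Using a more sparsity-inducing distribution class at the input layer, as the abstract describes, drives whole input connections toward the pruned state shown for the first weight.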
  • Publication number: 20190393903
    Abstract: Embodiments apply neural network technologies to encoding/decoding by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Application
    Filed: June 29, 2018
    Publication date: December 26, 2019
    Inventors: Stephan Marcel MANDT, Yingzhen LI
  • Publication number: 20190392302
    Abstract: Embodiments apply neural network technologies to encoding/decoding by training an encoder model and a decoder model using a neural network. Neural network training is used to tune a neural network parameter for the encoder model and a neural network parameter for the decoder model so as to approximate a common objective function. The common objective function may specify a minimized reconstruction error to be achieved by the encoder model and the decoder model when reconstructing (encoding then decoding) training data. The common objective function also specifies, for the encoder and decoder models, a variable f representing static aspects of the training data and a set of variables z1:T representing dynamic aspects of the training data. During runtime, the trained encoder and decoder models are implemented by encoder and decoder machines to encode and decode runtime sequences with a higher compression rate and a lower reconstruction error than in prior approaches.
    Type: Application
    Filed: June 20, 2018
    Publication date: December 26, 2019
    Inventors: Stephan Marcel MANDT, Yingzhen LI