Patents by Inventor Kristoffer Sjöö

Kristoffer Sjöö is a named inventor on the patent filings listed below. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847727
    Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: December 19, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
  • Patent number: 11724201
    Abstract: Various aspects of the subject technology relate to systems, methods, and machine-readable media for generating insights for video games. The method includes gathering information regarding a player for a plurality of video games, the information comprising at least one of in-world state data, player action data, player progression data, and/or real-world events relevant to each video game. The method also includes tracking events in at least one video game of the plurality of video games, the events comprising an action event or a standby event. The method also includes determining that an event of the tracked events is an action event. The method also includes generating insights regarding the action event based on the information gathered regarding the player, the insights for improving the player's performance in the video game. The method also includes relaying the insights to the player to improve the player's performance in the video game.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: August 15, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Harold Henry Chaput, Mattias Teye, Zebin Chen, Wei Wang, Ulf Erik Kristoffer Sjöö, Ulf Martin Lucas Singh-Blom
  • Publication number: 20230123486
    Abstract: Identical to the abstract shown under patent number 11847727 above; the filings in this family share a single abstract.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 20, 2023
    Inventors: Jorge del Val Santos, Linus Gisslen, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
  • Patent number: 11562521
    Abstract: Identical to the abstract shown under patent number 11847727 above; the filings in this family share a single abstract.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: January 24, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
  • Publication number: 20210319610
    Abstract: Identical to the abstract shown under patent number 11847727 above; the filings in this family share a single abstract.
    Type: Application
    Filed: June 22, 2021
    Publication date: October 14, 2021
    Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
  • Patent number: 11049308
    Abstract: Identical to the abstract shown under patent number 11847727 above; the filings in this family share a single abstract.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: June 29, 2021
    Assignee: Electronic Arts Inc.
    Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
  • Publication number: 20200302667
    Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
    Type: Application
    Filed: April 25, 2019
    Publication date: September 24, 2020
    Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
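
The conditional variational autoencoder training procedure recited in the abstract of patent 11847727 (receive training pairs, encode to distribution parameters, sample a latent vector, decode with the audio descriptor, compute a loss, update parameters) can be sketched as a toy single-step loop. This is an illustrative sketch only, not the patented implementation: the "networks" are single linear layers in NumPy, all dimensions are invented, and only the decoder's reconstruction gradient is applied (a real implementation would backpropagate through both encoder and decoder).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions -- the patent does not specify any of these.
D_FACE, D_AUDIO, D_LATENT = 8, 6, 4

# Linear "networks" as plain weight matrices (real models would be deep nets).
W_enc = rng.normal(0, 0.1, (D_FACE + D_AUDIO, 2 * D_LATENT))  # -> [mu, logvar]
W_dec = rng.normal(0, 0.1, (D_LATENT + D_AUDIO, D_FACE))

def train_step(face, audio, lr=0.01):
    """One CVAE training step following the abstract's sequence of operations."""
    global W_dec
    # 1. Encode the (facial position, audio) pair into distribution parameters.
    params = np.concatenate([face, audio]) @ W_enc
    mu, logvar = params[:D_LATENT], params[D_LATENT:]
    # 2. Sample a latent vector via the reparameterization trick.
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=D_LATENT)
    # 3. Decode the latent vector together with the audio descriptor.
    dec_in = np.concatenate([z, audio])
    face_hat = dec_in @ W_dec
    # 4. Loss: reconstruction error vs. the facial position descriptor,
    #    plus the KL divergence of the latent distribution from N(0, I).
    recon = np.mean((face_hat - face) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    loss = recon + kl
    # 5. Update parameters. For brevity only the decoder gets an analytic
    #    gradient step on the reconstruction term.
    grad_dec = np.outer(dec_in, 2.0 * (face_hat - face) / D_FACE)
    W_dec = W_dec - lr * grad_dec
    return float(loss)

# One synthetic training item, stepped a few times.
face = rng.normal(size=D_FACE)
audio = rng.normal(size=D_AUDIO)
losses = [train_step(face, audio) for _ in range(5)]
```

Both loss terms are nonnegative by construction (mean squared error, and the diagonal-Gaussian KL to a standard normal), so each returned loss is a finite nonnegative float.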
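
The insights pipeline recited in patent 11724201 (gather player information across games, track events, act only on action events, generate an insight from the gathered information, relay it to the player) can be sketched in miniature. Everything here is hypothetical scaffolding: the event kinds "action" and "standby" come from the abstract, but the profile fields, game name, and rule-based insight generator are invented stand-ins for whatever models the patent actually covers.

```python
from dataclasses import dataclass

@dataclass
class Event:
    game: str
    kind: str    # "action" or "standby", the two event types in the abstract
    detail: str

# Hypothetical player profile aggregated across games (field names invented).
player_profile = {
    "progression": {"racing-game": 0.4},
    "actions": {"racing-game": ["late_braking"]},
}

def generate_insight(event, profile):
    """Return an insight string for action events, None for standby events."""
    if event.kind != "action":
        return None
    # Toy rule engine: use the gathered action data to tailor the tip.
    if "late_braking" in profile["actions"].get(event.game, []):
        return f"Tip for {event.detail}: brake earlier into sharp corners."
    return f"Tip for {event.detail}: review your recent performance."

# Tracked events; only the action event yields an insight to relay.
events = [Event("racing-game", "standby", "menu"),
          Event("racing-game", "action", "hairpin turn")]
insights = [generate_insight(e, player_profile) for e in events]
```

The standby event produces no insight, while the action event is matched against the player's recorded actions and yields a tailored tip, mirroring the determine-then-generate-then-relay flow in the claim language.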