Patents by Inventor Kristoffer Sjöö
Kristoffer Sjöö has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11847727
Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
Type: Grant
Filed: December 21, 2022
Date of Patent: December 19, 2023
Assignee: Electronic Arts Inc.
Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
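The training steps in the abstract above can be sketched as a single forward pass of a conditional variational autoencoder. This is a minimal illustrative sketch, not the patented implementation: the linear encoder/decoder, the dimensions, and all variable names are assumptions, and the parameter-update step is only indicated in a comment (in practice it would be done by an autodiff framework).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
AUDIO_DIM, FACE_DIM, LATENT_DIM = 16, 8, 4

def encode(face, audio, W_mu, W_logvar):
    """Encoder: maps a (facial position descriptor, audio descriptor) pair
    to distribution parameters (mean, log-variance) of a diagonal Gaussian
    over the latent space."""
    x = np.concatenate([face, audio])
    return W_mu @ x, W_logvar @ x

def sample_latent(mu, logvar, rng):
    """Sample a latent vector via the reparameterization trick:
    z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, audio, W_dec):
    """Decoder: the latent vector, conditioned on the audio descriptor,
    produces a facial position output."""
    return W_dec @ np.concatenate([z, audio])

# Toy parameters and one training data item.
W_mu = rng.standard_normal((LATENT_DIM, FACE_DIM + AUDIO_DIM)) * 0.1
W_logvar = rng.standard_normal((LATENT_DIM, FACE_DIM + AUDIO_DIM)) * 0.1
W_dec = rng.standard_normal((FACE_DIM, LATENT_DIM + AUDIO_DIM)) * 0.1
face = rng.standard_normal(FACE_DIM)    # facial position descriptor
audio = rng.standard_normal(AUDIO_DIM)  # audio descriptor

mu, logvar = encode(face, audio, W_mu, W_logvar)
z = sample_latent(mu, logvar, rng)
face_out = decode(z, audio, W_dec)

# Loss: a reconstruction term comparing the facial position output to the
# descriptor, plus the KL divergence of the latent distribution from N(0, I).
recon = np.mean((face_out - face) ** 2)
kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
loss = recon + kl
# Updating the autoencoder parameters from this loss value would follow here
# (e.g. by backpropagation in an autodiff framework).
```

The KL term is what makes this a variational autoencoder rather than a plain one: it regularizes the latent distribution toward a standard normal so that, at inference time, latent vectors can be sampled directly and decoded against new audio.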
-
Patent number: 11724201
Abstract: Various aspects of the subject technology relate to systems, methods, and machine-readable media for generating insights for video games. The method includes gathering information regarding a player for a plurality of video games, the information comprising at least one of in-world state data, player action data, player progression data, and/or real-world events relevant to each video game. The method also includes tracking events in at least one video game of the plurality of video games, the events comprising an action event or a standby event. The method also includes determining that an event of the tracked events is an action event. The method also includes generating insights regarding the action event based on the information gathered regarding the player, the insights for improving the player's performance in the video game. The method also includes relaying the insights to the player to improve the player's performance in the video game.
Type: Grant
Filed: December 11, 2020
Date of Patent: August 15, 2023
Assignee: Electronic Arts Inc.
Inventors: Harold Henry Chaput, Mattias Teye, Zebin Chen, Wei Wang, Ulf Erik Kristoffer Sjöö, Ulf Martin Lucas Singh-Blom
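The gather-track-classify-relay flow described in this abstract can be sketched in a few lines. This is purely illustrative: the event schema, the `PlayerProfile` structure, and the insight text are assumptions for the sketch, not details from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PlayerProfile:
    """Information gathered about a player across games (illustrative fields)."""
    in_world_state: dict = field(default_factory=dict)  # in-world state data
    actions: list = field(default_factory=list)         # player action data
    progression: dict = field(default_factory=dict)     # player progression data

def is_action_event(event: dict) -> bool:
    """Determine whether a tracked event is an action event (vs. a standby event)."""
    return event.get("kind") == "action"

def generate_insight(event: dict, profile: PlayerProfile) -> str:
    """Generate an insight for an action event from the gathered information."""
    attempts = profile.actions.count(event["name"])
    return (f"'{event['name']}': {attempts} prior attempts recorded; "
            "review the gathered progression data before retrying.")

def relay_insights(events: list, profile: PlayerProfile) -> list:
    """Filter tracked events down to action events and relay insights for each."""
    return [generate_insight(e, profile) for e in events if is_action_event(e)]

# One tracked session: a standby event is ignored, an action event yields an insight.
profile = PlayerProfile(actions=["boss_fight", "boss_fight"])
events = [{"kind": "standby", "name": "idle"},
          {"kind": "action", "name": "boss_fight"}]
insights = relay_insights(events, profile)
```

The key structural point is the split between cheap continuous tracking (all events) and insight generation, which is triggered only when an event is classified as an action event.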
-
Publication number: 20230123486
Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
Type: Application
Filed: December 21, 2022
Publication date: April 20, 2023
Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
-
Patent number: 11562521
Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
Type: Grant
Filed: June 22, 2021
Date of Patent: January 24, 2023
Assignee: Electronic Arts Inc.
Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
-
Publication number: 20210319610
Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
Type: Application
Filed: June 22, 2021
Publication date: October 14, 2021
Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
-
Patent number: 11049308
Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
Type: Grant
Filed: April 25, 2019
Date of Patent: June 29, 2021
Assignee: Electronic Arts Inc.
Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
-
Publication number: 20200302667
Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
Type: Application
Filed: April 25, 2019
Publication date: September 24, 2020
Inventors: Jorge del Val Santos, Linus Gisslén, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye