Patents by Inventor Harold Chaput

Harold Chaput has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12296265
    Abstract: This specification describes a computer-implemented method of generating context-dependent speech audio in a video game. The method comprises obtaining contextual information relating to a state of the video game. The contextual information is inputted into a prosody prediction module. The prosody prediction module comprises a trained machine learning model which is configured to generate predicted prosodic features based on the contextual information. Input data comprising the predicted prosodic features and speech content data associated with the state of the video game is inputted into a speech audio generation module. An encoded representation of the speech content data dependent on the predicted prosodic features is generated using one or more encoders of the speech audio generation module. Context-dependent speech audio is generated, based on the encoded representation, using a decoder of the speech audio generation module.
    Type: Grant
    Filed: January 9, 2024
    Date of Patent: May 13, 2025
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Kilol Gupta, Zahra Shakeri, Gordon Durity, Mohsen Sardari, Harold Chaput, Navid Aghdaie
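The abstract above outlines a prosody-conditioned speech pipeline: game-state context is mapped to predicted prosodic features, which then condition an encoder/decoder that produces speech audio. The PyTorch sketch below is only a rough illustration of that data flow, not Electronic Arts' implementation; the module names, layer sizes, and mel-spectrogram output are all assumptions.

```python
# Hypothetical sketch (not the patented system): a prosody predictor maps game-state
# context features to prosodic features, which condition a simple encoder/decoder
# that turns speech-content embeddings into acoustic (mel) frames.
import torch
import torch.nn as nn

class ProsodyPredictor(nn.Module):
    def __init__(self, context_dim=32, prosody_dim=4):
        super().__init__()
        # e.g. predicts pitch, energy, speaking rate, pause length (assumed features)
        self.net = nn.Sequential(nn.Linear(context_dim, 64), nn.ReLU(), nn.Linear(64, prosody_dim))

    def forward(self, context):
        return self.net(context)

class SpeechAudioGenerator(nn.Module):
    def __init__(self, content_dim=128, prosody_dim=4, hidden=256, n_mels=80):
        super().__init__()
        self.content_encoder = nn.GRU(content_dim, hidden, batch_first=True)
        self.prosody_proj = nn.Linear(prosody_dim, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)

    def forward(self, content, prosody):
        enc, _ = self.content_encoder(content)               # encode speech content
        enc = enc + self.prosody_proj(prosody).unsqueeze(1)   # condition on predicted prosody
        dec, _ = self.decoder(enc)
        return self.to_mel(dec)                              # mel frames for a separate vocoder

context = torch.randn(1, 32)       # placeholder game-state features
content = torch.randn(1, 50, 128)  # placeholder phoneme/text embeddings
prosody = ProsodyPredictor()(context)
mel = SpeechAudioGenerator()(content, prosody)  # shape (1, 50, 80)
```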
  • Patent number: 12293466
    Abstract: A method, computer-readable storage medium, and device for generating a master representation of input models. The method comprises: receiving a first base mesh and a second base mesh, wherein the first base mesh has a first topology and is associated with a first set of blendshapes to deform the first base mesh, the second base mesh has a second topology and is associated with a second set of blendshapes to deform the second base mesh, and the second topology is different from the first topology; combining the first topology and the second topology into a combined mesh topology representation; combining the first set of blendshapes and the second set of blendshapes into a combined blendshape representation; and outputting the combined mesh topology representation and the combined blendshape representation as a master representation, wherein the master representation can be queried with a target topology and blendshape.
    Type: Grant
    Filed: March 24, 2023
    Date of Patent: May 6, 2025
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, David Auclair, Mihai Anghelescu, Harold Chaput
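As a rough illustration of the "master representation" idea in the abstract above, the Python sketch below stores the union of two base-mesh topologies and their blendshape sets and answers queries against a target topology. The class name, field layout, and query semantics are assumptions, not the patented method.

```python
# Hypothetical sketch: a master representation holding combined mesh topologies and
# blendshape sets, queryable by target topology and blendshape name.
from dataclasses import dataclass, field

@dataclass
class MasterRepresentation:
    topologies: dict = field(default_factory=dict)   # name -> list of faces (vertex index tuples)
    blendshapes: dict = field(default_factory=dict)  # name -> {shape_name: per-vertex offsets}

    def combine(self, name, faces, shapes):
        # Fold another base mesh and its blendshapes into the combined representation.
        self.topologies[name] = faces
        self.blendshapes[name] = shapes

    def query(self, target_topology, shape_name):
        # Return the blendshape deltas defined for the requested topology and shape.
        return self.blendshapes[target_topology].get(shape_name)

master = MasterRepresentation()
master.combine("mesh_a", faces=[(0, 1, 2)], shapes={"smile": [(0.0, 0.1, 0.0)] * 3})
master.combine("mesh_b", faces=[(0, 1, 2), (2, 3, 0)], shapes={"frown": [(0.0, -0.1, 0.0)] * 4})
print(master.query("mesh_a", "smile"))
```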
  • Publication number: 20240320920
    Abstract: A method, computer-readable storage medium, and device for generating a master representation of input models. The method comprises: receiving a first base mesh and a second base mesh, wherein the first base mesh has a first topology and is associated with a first set of blendshapes to deform the first base mesh, the second base mesh has a second topology and is associated with a second set of blendshapes to deform the second base mesh, and the second topology is different from the first topology; combining the first topology and the second topology into a combined mesh topology representation; combining the first set of blendshapes and the second set of blendshapes into a combined blendshape representation; and outputting the combined mesh topology representation and the combined blendshape representation as a master representation, wherein the master representation can be queried with a target topology and blendshape.
    Type: Application
    Filed: March 24, 2023
    Publication date: September 26, 2024
    Inventors: Igor Borovikov, David Auclair, Mihai Anghelescu, Harold Chaput
  • Patent number: 11790884
    Abstract: A computer-implemented method of generating speech audio in a video game is provided. The method includes inputting, into a synthesizer module, input data that represents speech content. Source acoustic features for the speech content in the voice of a source speaker are generated and are input, along with a speaker embedding associated with a player of the video game, into an acoustic feature encoder of a voice convertor. One or more acoustic feature encodings are generated as output of the acoustic feature encoder; these are inputted into an acoustic feature decoder of the voice convertor to generate target acoustic features. The target acoustic features are processed with one or more modules to generate speech audio in the voice of the player.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: October 17, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Zahra Shakeri, Jervis Pinto, Kilol Gupta, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kenneth Moss
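The following PyTorch sketch loosely mirrors the voice-conversion stage described above: source acoustic features and a player speaker embedding enter an acoustic feature encoder, and a decoder emits target acoustic features. Module names, dimensions, and the GRU-based architecture are assumptions for illustration only.

```python
# Hypothetical sketch: source acoustic features plus a player speaker embedding pass
# through an acoustic feature encoder, then a decoder produces target acoustic features.
import torch
import torch.nn as nn

class VoiceConvertor(nn.Module):
    def __init__(self, feat_dim=80, spk_dim=64, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim + spk_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)

    def forward(self, source_feats, speaker_emb):
        # Broadcast the speaker embedding across all frames and concatenate it.
        spk = speaker_emb.unsqueeze(1).expand(-1, source_feats.size(1), -1)
        enc, _ = self.encoder(torch.cat([source_feats, spk], dim=-1))
        dec, _ = self.decoder(enc)
        return self.out(dec)  # target acoustic features, e.g. mel frames for a vocoder

source_feats = torch.randn(1, 120, 80)  # placeholder synthesizer output
player_emb = torch.randn(1, 64)         # placeholder player speaker embedding
target_feats = VoiceConvertor()(source_feats, player_emb)
```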
  • Patent number: 11668581
    Abstract: This specification describes a system for generating positions of map items, such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that, when executed by the at least one processor, cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of a region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: June 6, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Han Liu, Yiwei Zhao, Jingwen Liang, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
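As a hedged illustration of the map-item placement idea, the sketch below has a small generator network emit a placement probability per subregion of a map grid and then samples item positions from those probabilities. The network shape, grid size, and Bernoulli sampling step are assumptions, not the patented system.

```python
# Hypothetical sketch: a generator network outputs a placement probability for each
# subregion of a map region; positions are then sampled from those probabilities.
import torch
import torch.nn as nn

class MapItemGenerator(nn.Module):
    def __init__(self, noise_dim=16, grid=(8, 8)):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, grid[0] * grid[1]), nn.Sigmoid(),  # one probability per subregion
        )

    def forward(self, z):
        return self.net(z).view(-1, *self.grid)

gen = MapItemGenerator()
probs = gen(torch.randn(1, 16))               # (1, 8, 8) placement probabilities
placements = torch.bernoulli(probs)           # sample which subregions get a building
positions = placements[0].nonzero().tolist()  # (row, col) indices of placed items
```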
  • Patent number: 11648477
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: May 16, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
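The sketch below illustrates, in broad strokes, the per-feature parameterization described above: separate small networks process crops of a normalized image, and their parameter vectors are concatenated into one character parameter vector. The crop boundaries, network architecture, and output sizes are placeholders, not the patented pipeline.

```python
# Hypothetical sketch: one small network per facial feature emits a parameter vector;
# the vectors are concatenated into a single parameterized character model.
import torch
import torch.nn as nn

class FeatureParamNet(nn.Module):
    def __init__(self, out_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, out_dim),
        )

    def forward(self, crop):
        return self.net(crop)

features = {"eyes": FeatureParamNet(), "nose": FeatureParamNet(), "mouth": FeatureParamNet()}
normalized = torch.randn(1, 3, 128, 128)  # stand-in for the normalized input image
crops = {
    "eyes": normalized[:, :, :64, :],     # placeholder crop regions
    "nose": normalized[:, :, 32:96, :],
    "mouth": normalized[:, :, 64:, :],
}
# Concatenate per-feature parameter vectors into the character parameter vector.
character_params = torch.cat([features[name](crops[name]) for name in features], dim=-1)  # (1, 30)
```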
  • Patent number: 11605388
    Abstract: This specification describes a computer-implemented method of generating speech audio for use in a video game, wherein the speech audio is generated using a voice convertor that has been trained to convert audio data for a source speaker into audio data for a target speaker. The method comprises receiving: (i) source speech audio, and (ii) a target speaker identifier. The source speech audio comprises speech content in the voice of a source speaker. Source acoustic features are determined for the source speech audio. A target speaker embedding associated with the target speaker identifier is generated as output of a speaker encoder of the voice convertor. The target speaker embedding and the source acoustic features are inputted into an acoustic feature encoder of the voice convertor. One or more acoustic feature encodings are generated as output of the acoustic feature encoder. The one or more acoustic feature encodings are derived from the target speaker embedding and the source acoustic features.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: March 14, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Kilol Gupta, Dhaval Shah, Zahra Shakeri, Jervis Pinto, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
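The abstract above adds a speaker encoder that maps a target speaker identifier to a target speaker embedding before the acoustic feature encoder. The tiny sketch below illustrates only that step, using a learned embedding table as a stand-in; the table size and dimensions are assumptions, and the encoder/decoder stage would resemble the earlier voice-conversion sketch.

```python
# Hypothetical sketch: a speaker encoder as an embedding lookup from a target
# speaker identifier to an embedding that conditions the acoustic feature encoder.
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    def __init__(self, num_speakers=100, spk_dim=64):
        super().__init__()
        self.table = nn.Embedding(num_speakers, spk_dim)  # learned per-speaker embeddings

    def forward(self, speaker_id):
        return self.table(speaker_id)

speaker_emb = SpeakerEncoder()(torch.tensor([7]))  # (1, 64) embedding for speaker id 7
```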
  • Publication number: 20220412765
    Abstract: This specification describes a system for generating positions of map items, such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that, when executed by the at least one processor, cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of a region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion.
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Inventors: Han Liu, Yiwei Zhao, Jingwen Liang, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Patent number: 11534690
    Abstract: According to a first aspect of this specification, there is disclosed a computer-implemented method comprising: training, based on an initial behavior goal and using reinforcement learning, a reinforcement-learning model for controlling behavior of a non-playable character in a computer game environment; converting the trained reinforcement-learning model into a behavior tree model for controlling behavior of the non-playable character; editing, based on a user input, the behavior tree model to generate an updated behavior tree model for controlling behavior of the non-playable character; and outputting a final model for controlling non-player character behavior for use in the computer game environment, wherein the model for controlling non-player character behavior is based at least in part on the updated behavior tree model.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: December 27, 2022
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Meng Wu, Harold Chaput, Navid Aghdaie, Kazi Zaman, Yunqi Zhao, Qilian Yu
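To make the RL-to-behavior-tree flow above concrete, the sketch below distills a stand-in policy into a small tree of condition/action nodes and then applies a user edit to one branch. The node classes, the probing of the policy, and the specific edit are illustrative assumptions; the patent does not prescribe this particular conversion.

```python
# Hypothetical sketch: convert a trained policy into an editable tree of
# condition/action nodes, then apply a user edit to one branch.

class ActionNode:
    def __init__(self, action):
        self.action = action
    def tick(self, state):
        return self.action

class ConditionNode:
    def __init__(self, predicate, if_true, if_false):
        self.predicate, self.if_true, self.if_false = predicate, if_true, if_false
    def tick(self, state):
        branch = self.if_true if self.predicate(state) else self.if_false
        return branch.tick(state)

def policy(state):
    # Stand-in for a trained RL policy mapping state -> action.
    return "attack" if state["enemy_distance"] < 5 else "patrol"

# Probe the policy at representative near/far states to wire up equivalent branches.
near_action = policy({"enemy_distance": 1})
far_action = policy({"enemy_distance": 10})
tree = ConditionNode(lambda s: s["enemy_distance"] < 5, ActionNode(near_action), ActionNode(far_action))

tree.if_false = ActionNode("flee")            # user edit: designer overrides the far branch
print(tree.tick({"enemy_distance": 10}))      # -> "flee"
```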
  • Patent number: 11473927
    Abstract: This specification describes a system for generating positions of map items, such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that, when executed by the at least one processor, cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of a region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: October 18, 2022
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Han Liu, Yiwei Zhao, Jingwen Liang, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Publication number: 20220270324
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Application
    Filed: May 13, 2022
    Publication date: August 25, 2022
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Patent number: 11413541
    Abstract: According to an aspect of this specification, there is described a computer-implemented method comprising: receiving input data, the input data comprising data relating to a user of a computer game; generating, based on the input data, one or more candidate challenges for the computer game; determining, using a machine-learned model, whether each of the one or more candidate challenges satisfies a threshold condition, wherein the threshold condition is based on a target challenge difficulty; and in response to a positive determination, outputting the one or more candidate challenges that satisfy the threshold condition for use in the computer game by the user.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: August 16, 2022
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Jesse Harder, Harold Chaput, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
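As an informal illustration of the candidate-challenge flow above, the sketch below generates candidate challenges from player data, scores each with a stand-in difficulty model, and keeps those within a tolerance of a target difficulty. The player fields, difficulty formula, and threshold values are all assumptions.

```python
# Hypothetical sketch: generate candidate challenges, score them with a stand-in
# difficulty model, and keep those close to a target difficulty.
import random

def generate_candidates(player, n=5):
    # Candidates scale loosely with the player's level (placeholder logic).
    return [{"enemies": player["level"] + random.randint(-2, 2)} for _ in range(n)]

def predicted_difficulty(challenge, player):
    # Stand-in for a machine-learned difficulty model.
    return challenge["enemies"] / max(player["level"], 1)

def select_challenges(player, target=1.0, tolerance=0.2):
    candidates = generate_candidates(player)
    return [c for c in candidates
            if abs(predicted_difficulty(c, player) - target) <= tolerance]

print(select_challenges({"level": 10}))
```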
  • Patent number: 11367254
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: June 21, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Publication number: 20220054943
    Abstract: According to a first aspect of this specification, there is disclosed a computer-implemented method comprising: training, based on an initial behavior goal and using reinforcement learning, a reinforcement-learning model for controlling behavior of a non-playable character in a computer game environment; converting the trained reinforcement-learning model into a behavior tree model for controlling behavior of the non-playable character; editing, based on a user input, the behavior tree model to generate an updated behavior tree model for controlling behavior of the non-playable character; and outputting a final model for controlling non-player character behavior for use in the computer game environment, wherein the model for controlling non-player character behavior is based at least in part on the updated behavior tree model.
    Type: Application
    Filed: August 21, 2020
    Publication date: February 24, 2022
    Inventors: Meng Wu, Harold Chaput, Navid Aghdaie, Kazi Zaman, Yunqi Zhao, Qilian Yu
  • Publication number: 20220012244
    Abstract: A videogame metrics query system, and related method, has one or more databases and a speculative cache. The system stores videogame metrics and tracks queries relating to videogame metrics. Based on a received query and the tracked queries, the system generates multiple queries and combines them into a single query that is more computationally efficient to execute. From the results of executing the combined query, the system extracts the query results relevant to the received query and caches the remaining results in the speculative cache.
    Type: Application
    Filed: July 9, 2020
    Publication date: January 13, 2022
    Applicant: ELECTRONIC ARTS INC.
    Inventors: Serena Wang, Kaiyu Liu, Yu Jin, Sundeep Narravula, Harold Chaput, Navid Aghdaie, Kazi Zaman
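The sketch below is a toy version of the speculative-cache idea above: a received query is combined with tracked queries, executed once against a dictionary standing in for the database, and the extra results are cached for later requests. The class and method names, the tracked-query list, and the dictionary store are assumptions.

```python
# Hypothetical sketch: answer a metrics query by combining it with tracked queries,
# returning the requested slice, and caching the rest speculatively.
class MetricsQuerySystem:
    def __init__(self, database):
        self.database = database        # metric name -> value (stand-in for real databases)
        self.speculative_cache = {}
        self.tracked = ["daily_active_users", "session_length"]

    def query(self, metric):
        if metric in self.speculative_cache:
            return self.speculative_cache[metric]      # served from the speculative cache
        combined = set([metric] + self.tracked)        # one combined query
        results = {m: self.database[m] for m in combined if m in self.database}
        answer = results.pop(metric, None)             # slice relevant to the received query
        self.speculative_cache.update(results)         # cache the remaining results
        return answer

db = {"daily_active_users": 120000, "session_length": 34.5, "crash_rate": 0.02}
system = MetricsQuerySystem(db)
print(system.query("crash_rate"))       # executes the combined query
print(system.query("session_length"))   # answered from the speculative cache
```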
  • Patent number: 11217001
    Abstract: A method, computer-readable storage medium, and device for generating an animation sequence are disclosed. The method comprises: receiving an input animation sequence, wherein the input animation sequence comprises character position information over a series of frames and a first style tag; executing an encoder to process the input animation sequence to generate a compressed representation of the input animation sequence, wherein the compressed representation of the input animation sequence comprises a vector representing the input animation sequence; and executing a decoder to generate an output animation sequence, wherein executing the decoder is based on the compressed representation of the input animation sequence, wherein the output animation sequence comprises character position information over a series of frames, and wherein the output animation sequence is based on the input animation sequence and comprises a second style tag.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: January 4, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Yiwei Zhao, Igor Borovikov, Maziar Sanjabi, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
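The sketch below loosely follows the encoder/decoder structure in the abstract above: an animation sequence is compressed to a single vector, combined with a target style embedding, and decoded back into a sequence. The pose dimension, style table, and GRU layers are placeholders, not the patented model.

```python
# Hypothetical sketch: compress an animation sequence to a vector, condition on a
# different style tag, and decode a restyled sequence of character positions.
import torch
import torch.nn as nn

class AnimationStyleTransfer(nn.Module):
    def __init__(self, pose_dim=63, latent=128, num_styles=4):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, latent, batch_first=True)
        self.style_emb = nn.Embedding(num_styles, latent)
        self.decoder = nn.GRU(latent, latent, batch_first=True)
        self.out = nn.Linear(latent, pose_dim)

    def forward(self, sequence, target_style):
        _, h = self.encoder(sequence)              # compressed representation of the input
        z = h[-1] + self.style_emb(target_style)   # condition on the target style tag
        steps = z.unsqueeze(1).expand(-1, sequence.size(1), -1)
        dec, _ = self.decoder(steps)
        return self.out(dec)                       # restyled animation sequence

seq = torch.randn(1, 60, 63)  # 60 frames of placeholder joint positions
restyled = AnimationStyleTransfer()(seq, torch.tensor([2]))
```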
  • Publication number: 20210379493
    Abstract: According to an aspect of this specification, there is described a computer-implemented method comprising: receiving input data, the input data comprising data relating to a user of a computer game; generating, based on the input data, one or more candidate challenges for the computer game; determining, using a machine-learned model, whether each of the one or more candidate challenges satisfies a threshold condition, wherein the threshold condition is based on a target challenge difficulty; and in response to a positive determination, outputting the one or more candidate challenges that satisfy the threshold condition for use in the computer game by the user.
    Type: Application
    Filed: June 3, 2020
    Publication date: December 9, 2021
    Inventors: Jesse Harder, Harold Chaput, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
  • Publication number: 20210383585
    Abstract: A method, computer-readable storage medium, and device for generating an animation sequence are disclosed. The method comprises: receiving an input animation sequence, wherein the input animation sequence comprises character position information over a series of frames and a first style tag; executing an encoder to process the input animation sequence to generate a compressed representation of the input animation sequence, wherein the compressed representation of the input animation sequence comprises a vector representing the input animation sequence; and executing a decoder to generate an output animation sequence, wherein executing the decoder is based on the compressed representation of the input animation sequence, wherein the output animation sequence comprises character position information over a series of frames, and wherein the output animation sequence is based on the input animation sequence and comprises a second style tag.
    Type: Application
    Filed: June 9, 2020
    Publication date: December 9, 2021
    Inventors: Yiwei Zhao, Igor Borovikov, Maziar Sanjabi, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Publication number: 20210327135
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Application
    Filed: April 21, 2020
    Publication date: October 21, 2021
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Publication number: 20210239490
    Abstract: This specification describes a system for generating positions of map items, such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that, when executed by the at least one processor, cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of a region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion.
    Type: Application
    Filed: May 28, 2020
    Publication date: August 5, 2021
    Inventors: Han Liu, Yiwei Zhao, Jingwen Liang, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman