Patents by Inventor Kazi A. Zaman
Kazi A. Zaman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240290316
Abstract: A system for use in video game development to generate expressive speech audio comprises a user interface configured to receive user-input text data and a user selection of a speech style. The system includes a machine-learned synthesizer comprising a text encoder, a speech style encoder and a decoder. The machine-learned synthesizer is configured to generate one or more text encodings derived from the user-input text data, using the text encoder of the machine-learned synthesizer; generate a speech style encoding by processing a set of speech style features associated with the selected speech style using the speech style encoder of the machine-learned synthesizer; combine the one or more text encodings and the speech style encoding to generate one or more combined encodings; and decode the one or more combined encodings with the decoder of the machine-learned synthesizer to generate predicted acoustic features.
Type: Application
Filed: May 7, 2024
Publication date: August 29, 2024
Inventors: Siddharth Gururani, Kilol Gupta, Dhaval Shah, Zahra Shakeri, Jervis Pinto, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
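The abstract above describes a three-part synthesizer (text encoder, speech style encoder, decoder). As a rough illustration only, the PyTorch sketch below wires those parts together in the stated order; every concrete choice (the class name StyleTTSSynthesizer, the GRU and linear layers, the hidden sizes, and the 80-dimensional acoustic output) is an assumption made for the example, not the patented implementation.

```python
# Illustrative sketch only: text encoder + speech style encoder -> combined -> decoder.
import torch
import torch.nn as nn

class StyleTTSSynthesizer(nn.Module):
    def __init__(self, vocab_size=256, n_style_feats=8, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.text_encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.style_encoder = nn.Sequential(nn.Linear(n_style_feats, hidden), nn.Tanh())
        self.decoder = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.to_acoustic = nn.Linear(hidden, n_mels)

    def forward(self, text_ids, style_feats):
        # 1) text encodings derived from the user-input text
        text_enc, _ = self.text_encoder(self.embed(text_ids))            # (B, T, H)
        # 2) a speech style encoding from the selected style's features
        style_enc = self.style_encoder(style_feats)                       # (B, H)
        # 3) combine: broadcast the style encoding over every text step
        combined = torch.cat(
            [text_enc, style_enc.unsqueeze(1).expand(-1, text_enc.size(1), -1)], dim=-1
        )
        # 4) decode the combined encodings into predicted acoustic features
        dec_out, _ = self.decoder(combined)
        return self.to_acoustic(dec_out)                                  # (B, T, n_mels)

if __name__ == "__main__":
    model = StyleTTSSynthesizer()
    mels = model(torch.randint(0, 256, (1, 20)), torch.randn(1, 8))
    print(mels.shape)  # torch.Size([1, 20, 80])
```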
-
Patent number: 12033611
Abstract: A system for use in video game development to generate expressive speech audio comprises a user interface configured to receive user-input text data and a user selection of a speech style. The system includes a machine-learned synthesizer comprising a text encoder, a speech style encoder and a decoder. The machine-learned synthesizer is configured to generate one or more text encodings derived from the user-input text data, using the text encoder of the machine-learned synthesizer; generate a speech style encoding by processing a set of speech style features associated with the selected speech style using the speech style encoder of the machine-learned synthesizer; combine the one or more text encodings and the speech style encoding to generate one or more combined encodings; and decode the one or more combined encodings with the decoder of the machine-learned synthesizer to generate predicted acoustic features.
Type: Grant
Filed: February 28, 2022
Date of Patent: July 9, 2024
Assignee: ELECTRONIC ARTS INC.
Inventors: Siddharth Gururani, Kilol Gupta, Dhaval Shah, Zahra Shakeri, Jervis Pinto, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
-
Patent number: 11668581
Abstract: This specification describes a system for generating positions of map items such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that when executed by the at least one processor cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of a region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion.
Type: Grant
Filed: September 2, 2022
Date of Patent: June 6, 2023
Assignee: ELECTRONIC ARTS INC.
Inventors: Han Liu, Yiwei Zhao, Jingwen Liang, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
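To make the placement pipeline above concrete, here is a minimal sketch in which a generator network maps a noise vector to one placement probability per subregion and positions are then read off that grid. The 16x16 grid, the noise input, the layer sizes, and the thresholding rule are all assumptions made for illustration; the abstract does not disclose these details.

```python
# Illustrative sketch: generator -> per-subregion placement probabilities -> position data.
import torch
import torch.nn as nn

class MapItemGenerator(nn.Module):
    def __init__(self, noise_dim=32, grid=16):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.ReLU(),
            nn.Linear(256, grid * grid),
        )

    def forward(self, z):
        # One placement probability per subregion of the virtual-map region.
        logits = self.net(z).view(-1, self.grid, self.grid)
        return torch.sigmoid(logits)

def positions_from_probabilities(probs, threshold=0.5):
    # Turn per-subregion probabilities into (batch, row, col) position data.
    return (probs > threshold).nonzero(as_tuple=False)

if __name__ == "__main__":
    gen = MapItemGenerator()
    probs = gen(torch.randn(1, 32))
    print(positions_from_probabilities(probs)[:5])
```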
-
Patent number: 11648477
Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
Type: Grant
Filed: May 13, 2022
Date of Patent: May 16, 2023
Assignee: Electronic Arts Inc.
Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
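The sketch below mirrors the per-feature flow in this abstract: normalize the image, crop a region per feature, run a feature-specific network to get a parameter vector, and concatenate the vectors into one parameterized character model. The feature names, crop boxes, normalization, and network shapes are invented for the example and are not taken from the patent.

```python
# Illustrative sketch: one small regressor per facial/body feature, outputs concatenated.
import torch
import torch.nn as nn

# Hypothetical feature crop boxes (x0, y0, x1, y1) within a 64x64 normalized image.
FEATURES = {"eyes": (0, 0, 32, 16), "nose": (8, 16, 24, 32), "mouth": (4, 32, 28, 48)}

class FeatureRegressor(nn.Module):
    def __init__(self, n_params=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(3 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, n_params),
        )

    def forward(self, crop):
        return self.net(crop)

def normalize(image):
    # Stand-in for the normalization step (here, simple intensity standardization).
    return (image - image.mean()) / (image.std() + 1e-6)

def parameterize(image, models):
    image = normalize(image)
    vectors = []
    for name, (x0, y0, x1, y1) in FEATURES.items():
        crop = image[:, :, y0:y1, x0:x1]       # portion of the image for this feature
        vectors.append(models[name](crop))     # per-feature parameter vector
    return torch.cat(vectors, dim=-1)          # combined parameterized character model

if __name__ == "__main__":
    models = nn.ModuleDict({name: FeatureRegressor() for name in FEATURES})
    params = parameterize(torch.rand(1, 3, 64, 64), models)
    print(params.shape)  # torch.Size([1, 30])
```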
-
Patent number: 11605388
Abstract: This specification describes a computer-implemented method of generating speech audio for use in a video game, wherein the speech audio is generated using a voice convertor that has been trained to convert audio data for a source speaker into audio data for a target speaker. The method comprises receiving: (i) source speech audio, and (ii) a target speaker identifier. The source speech audio comprises speech content in the voice of a source speaker. Source acoustic features are determined for the source speech audio. A target speaker embedding associated with the target speaker identifier is generated as output of a speaker encoder of the voice convertor. The target speaker embedding and the source acoustic features are inputted into an acoustic feature encoder of the voice convertor. One or more acoustic feature encodings are generated as output of the acoustic feature encoder. The one or more acoustic feature encodings are derived from the target speaker embedding and the source acoustic features.
Type: Grant
Filed: November 9, 2020
Date of Patent: March 14, 2023
Assignee: Electronic Arts Inc.
Inventors: Kilol Gupta, Dhaval Shah, Zahra Shakeri, Jervis Pinto, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
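A compact sketch of the data flow this abstract walks through: a speaker encoder turns the target speaker identifier into an embedding, which is combined with the source acoustic features inside an acoustic feature encoder. The embedding table, the GRU encoder, and the final projection back to acoustic features are illustrative assumptions; only the overall flow follows the abstract.

```python
# Illustrative sketch: speaker id -> embedding; embedding + source acoustics -> encodings.
import torch
import torch.nn as nn

class VoiceConvertor(nn.Module):
    def __init__(self, n_speakers=10, n_acoustic=80, hidden=128):
        super().__init__()
        self.speaker_encoder = nn.Embedding(n_speakers, hidden)   # id -> target speaker embedding
        self.acoustic_encoder = nn.GRU(n_acoustic + hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_acoustic)              # assumed projection back to acoustics

    def forward(self, source_acoustic, target_speaker_id):
        spk = self.speaker_encoder(target_speaker_id)                          # (B, H)
        spk = spk.unsqueeze(1).expand(-1, source_acoustic.size(1), -1)         # tile over time
        enc, _ = self.acoustic_encoder(torch.cat([source_acoustic, spk], -1))  # encodings from both inputs
        return self.decoder(enc)                                               # features in the target voice

if __name__ == "__main__":
    vc = VoiceConvertor()
    out = vc(torch.randn(1, 50, 80), torch.tensor([3]))
    print(out.shape)  # torch.Size([1, 50, 80])
```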
-
Method and system for automatic and interactive model training using domain knowledge in video games
Patent number: 11565185
Abstract: A computer-implemented method is provided for allowing a user to automatically transform domain knowledge into a machine learning model to be used in real-time operation of video games. The method comprises providing a user interface which allows a user to define domain knowledge relating to a video game by specifying one or more labeling functions; transforming the labeling functions into executable code; labeling raw data relating to the video game using the executable code to obtain labeled data; and applying an automated machine learning module to the labeled data to obtain a machine learning model.
Type: Grant
Filed: March 31, 2020
Date of Patent: January 31, 2023
Assignee: ELECTRONIC ARTS INC.
Inventors: Reza Pourabolghasem, Meredith Trotter, Sundeep Narravula, Navid Aghdaie, Kazi Zaman
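As a loose illustration of the labeling-function flow in this method, the sketch below applies two toy user-defined labeling functions to raw gameplay records and combines their votes into labels that an automated training step could consume. The record fields, the example functions, and the majority-vote combination are assumptions made for the example, not the patented mechanism.

```python
# Illustrative sketch: labeling functions -> labeled data ready for an AutoML step.
from collections import Counter

ABSTAIN = None

def lf_rage_quit(record):          # example labeling function a designer might write
    return "churn_risk" if record["session_minutes"] < 5 and record["losses"] >= 3 else ABSTAIN

def lf_daily_player(record):
    return "retained" if record["sessions_last_week"] >= 5 else ABSTAIN

LABELING_FUNCTIONS = [lf_rage_quit, lf_daily_player]

def label_raw_data(records):
    labeled = []
    for record in records:
        votes = [v for v in (lf(record) for lf in LABELING_FUNCTIONS) if v is not ABSTAIN]
        if votes:
            # Majority vote over the non-abstaining labeling functions.
            labeled.append((record, Counter(votes).most_common(1)[0][0]))
    return labeled

if __name__ == "__main__":
    raw = [
        {"session_minutes": 3, "losses": 4, "sessions_last_week": 1},
        {"session_minutes": 40, "losses": 1, "sessions_last_week": 6},
    ]
    for record, label in label_raw_data(raw):
        print(label, record)
    # The labeled pairs would then be handed to an automated machine learning module.
```
-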
Publication number: 20220412765
Abstract: This specification describes a system for generating positions of map items such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that when executed by the at least one processor cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of a region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion.
Type: Application
Filed: September 2, 2022
Publication date: December 29, 2022
Inventors: Han Liu, Yiwei Zhao, Jingwen Liang, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
-
Patent number: 11534690
Abstract: According to a first aspect of this specification, there is disclosed a computer-implemented method comprising: training, based on an initial behavior goal and using reinforcement-learning, a reinforcement-learning model for controlling behavior of a non-playable character in a computer game environment; converting the trained reinforcement-learning model into a behavior tree model for controlling behavior of the non-playable character; editing, based on a user input, the behavior tree model to generate an updated behavior tree model for controlling behavior of the non-playable character; and outputting a final model for controlling non-player character behavior for use in the computer game environment, wherein the model for controlling non-player character behavior is based at least in part on the updated behavior tree model.
Type: Grant
Filed: August 21, 2020
Date of Patent: December 27, 2022
Assignee: ELECTRONIC ARTS INC.
Inventors: Meng Wu, Harold Chaput, Navid Aghdaie, Kazi Zaman, Yunqi Zhao, Qilian Yu
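The sketch below illustrates only the conversion and editing steps from this abstract: a toy trained policy (a state-to-action mapping standing in for the reinforcement-learning model) is flattened into an editable behavior-tree structure, and one branch is then modified as a user edit might be. The node classes, state names, and editing helper are assumptions for the example.

```python
# Illustrative sketch: trained policy -> behavior tree -> user edit -> final tree.
from dataclasses import dataclass, field

@dataclass
class ActionNode:
    action: str

@dataclass
class ConditionNode:
    condition: str
    children: list = field(default_factory=list)

def policy_to_behavior_tree(policy):
    # One condition branch per observed state; the leaf is the policy's chosen action.
    root = ConditionNode("root")
    for state, action in policy.items():
        branch = ConditionNode(f"state == {state!r}")
        branch.children.append(ActionNode(action))
        root.children.append(branch)
    return root

def edit_tree(tree, condition, new_action):
    # Stand-in for the user edit: swap the action under a matching condition.
    for branch in tree.children:
        if branch.condition == condition:
            branch.children = [ActionNode(new_action)]
    return tree

if __name__ == "__main__":
    trained_policy = {"enemy_visible": "attack", "low_health": "flee"}
    tree = policy_to_behavior_tree(trained_policy)
    tree = edit_tree(tree, "state == 'low_health'", "take_cover")
    print(tree)
```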
-
Patent number: 11473927
Abstract: This specification describes a system for generating positions of map items such as buildings, for placement on a virtual map. The system comprises: at least one processor; and a non-transitory computer-readable medium including executable instructions that when executed by the at least one processor cause the at least one processor to perform at least the following operations: receiving an input at a generator neural network trained for generating map item positions; generating, with the generator neural network, a probability of placing a map item for each subregion of a plurality of subregions of a region of the virtual map; and generating position data of map items for placement on the virtual map using the probability for each subregion.
Type: Grant
Filed: May 28, 2020
Date of Patent: October 18, 2022
Assignee: ELECTRONIC ARTS INC.
Inventors: Han Liu, Yiwei Zhao, Jingwen Liang, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
-
Publication number: 20220270324
Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
Type: Application
Filed: May 13, 2022
Publication date: August 25, 2022
Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
-
Patent number: 11413541
Abstract: According to an aspect of this specification, there is described a computer-implemented method comprising: receiving input data, the input data comprising data relating to a user of a computer game; generating, based on the input data, one or more candidate challenges for the computer game; determining, using a machine-learned model, whether each of the one or more candidate challenges satisfies a threshold condition, wherein the threshold condition is based on a target challenge difficulty; and in response to a positive determination, outputting the one or more candidate challenges that satisfy the threshold condition for use in the computer game by the user.
Type: Grant
Filed: June 3, 2020
Date of Patent: August 16, 2022
Assignee: ELECTRONIC ARTS INC.
Inventors: Jesse Harder, Harold Chaput, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
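A minimal sketch of the loop this abstract describes: generate candidate challenges from player data, score each with a stand-in for the machine-learned difficulty model, and keep those that satisfy a target-difficulty threshold condition. The challenge fields, the scoring formula, and the tolerance band are invented for illustration.

```python
# Illustrative sketch: candidate generation -> difficulty scoring -> threshold filter.
import random

def generate_candidates(player, n=5):
    # Hypothetical candidates: vary the opponent rating around the player's rating.
    return [{"opponent_rating": player["rating"] + random.randint(-200, 200)} for _ in range(n)]

def predicted_difficulty(player, challenge):
    # Stand-in for the machine-learned model's difficulty score in [0, 1].
    gap = challenge["opponent_rating"] - player["rating"]
    return min(max(0.5 + gap / 800, 0.0), 1.0)

def select_challenges(player, target=0.6, tolerance=0.1):
    # Keep only candidates whose predicted difficulty is close to the target.
    candidates = generate_candidates(player)
    return [c for c in candidates
            if abs(predicted_difficulty(player, c) - target) <= tolerance]

if __name__ == "__main__":
    print(select_challenges({"rating": 1500}))
```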
-
Publication number: 20220208170
Abstract: A system for use in video game development to generate expressive speech audio comprises a user interface configured to receive user-input text data and a user selection of a speech style. The system includes a machine-learned synthesizer comprising a text encoder, a speech style encoder and a decoder. The machine-learned synthesizer is configured to generate one or more text encodings derived from the user-input text data, using the text encoder of the machine-learned synthesizer; generate a speech style encoding by processing a set of speech style features associated with the selected speech style using the speech style encoder of the machine-learned synthesizer; combine the one or more text encodings and the speech style encoding to generate one or more combined encodings; and decode the one or more combined encodings with the decoder of the machine-learned synthesizer to generate predicted acoustic features.
Type: Application
Filed: February 28, 2022
Publication date: June 30, 2022
Inventors: Siddharth Gururani, Kilol Gupta, Dhaval Shah, Zahra Shakeri, Jervis Pinto, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
-
Patent number: 11367254
Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
Type: Grant
Filed: April 21, 2020
Date of Patent: June 21, 2022
Assignee: Electronic Arts Inc.
Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
-
Patent number: 11295721
Abstract: A system for use in video game development to generate expressive speech audio comprises a user interface configured to receive user-input text data and a user selection of a speech style. The system includes a machine-learned synthesizer comprising a text encoder, a speech style encoder and a decoder. The machine-learned synthesizer is configured to generate one or more text encodings derived from the user-input text data, using the text encoder of the machine-learned synthesizer; generate a speech style encoding by processing a set of speech style features associated with the selected speech style using the speech style encoder of the machine-learned synthesizer; combine the one or more text encodings and the speech style encoding to generate one or more combined encodings; and decode the one or more combined encodings with the decoder of the machine-learned synthesizer to generate predicted acoustic features.
Type: Grant
Filed: April 3, 2020
Date of Patent: April 5, 2022
Assignee: ELECTRONIC ARTS INC.
Inventors: Siddharth Gururani, Kilol Gupta, Dhaval Shah, Zahra Shakeri, Jervis Pinto, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
-
Publication number: 20220054943
Abstract: According to a first aspect of this specification, there is disclosed a computer-implemented method comprising: training, based on an initial behavior goal and using reinforcement-learning, a reinforcement-learning model for controlling behavior of a non-playable character in a computer game environment; converting the trained reinforcement-learning model into a behavior tree model for controlling behavior of the non-playable character; editing, based on a user input, the behavior tree model to generate an updated behavior tree model for controlling behavior of the non-playable character; and outputting a final model for controlling non-player character behavior for use in the computer game environment, wherein the model for controlling non-player character behavior is based at least in part on the updated behavior tree model.
Type: Application
Filed: August 21, 2020
Publication date: February 24, 2022
Inventors: Meng Wu, Harold Chaput, Navid Aghdaie, Kazi Zaman, Yunqi Zhao, Qilian Yu
-
Publication number: 20220012244
Abstract: A videogame metrics query system, and related method, has one or more databases and a speculative cache. The system stores videogame metrics and tracks queries relating to videogame metrics. The system generates multiple queries, based on a received query and tracked queries. The system generates a combined query that is more computationally efficient to execute than running the queries separately. From executing the combined query, the system extracts query results relevant to the received query, and caches remaining results in the speculative cache.
Type: Application
Filed: July 9, 2020
Publication date: January 13, 2022
Applicant: ELECTRONIC ARTS INC.
Inventors: Serena Wang, Kaiyu Liu, Yu Jin, Sundeep Narravula, Harold Chaput, Navid Aghdaie, Kazi Zaman
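The sketch below illustrates the speculative-caching idea in this abstract: a received metrics query is merged with queries known to be requested alongside it, the combined query runs once, the portion relevant to the received query is returned, and the remainder is parked in a speculative cache for later hits. The in-memory data, the co-query table, and the cache policy are assumptions made for the example.

```python
# Illustrative sketch: received query + tracked co-queries -> combined query -> speculative cache.
from collections import defaultdict

METRICS = [  # toy per-match metric rows
    {"game": "fifa", "metric": "goals", "value": 3},
    {"game": "fifa", "metric": "assists", "value": 2},
    {"game": "apex", "metric": "kills", "value": 7},
]

speculative_cache = {}
# Hypothetical tracking table: which metric queries tend to arrive together.
co_queried = defaultdict(set, {("fifa", "goals"): {("fifa", "assists")}})

def run(keys):
    # Stand-in for executing one combined query against the metrics database.
    return {k: [r["value"] for r in METRICS if (r["game"], r["metric"]) == k] for k in keys}

def query(game, metric):
    key = (game, metric)
    if key in speculative_cache:                 # served from the speculative cache
        return speculative_cache.pop(key)
    combined_keys = {key} | co_queried[key]      # combined query covering likely follow-ups
    results = run(combined_keys)
    for extra_key, rows in results.items():
        if extra_key != key:
            speculative_cache[extra_key] = rows  # cache the remaining results
    return results[key]

if __name__ == "__main__":
    print(query("fifa", "goals"))    # executes the combined query
    print(query("fifa", "assists"))  # answered from the speculative cache
```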
-
Patent number: 11217001
Abstract: A method, computer-readable storage medium, and device for generating an animation sequence are disclosed. The method comprises: receiving an input animation sequence, wherein the input animation sequence comprises character position information over a series of frames and a first style tag; executing an encoder to process the input animation sequence to generate a compressed representation of the input animation sequence, wherein the compressed representation of the input animation sequence comprises a vector representing the input animation sequence; and executing a decoder to generate an output animation sequence, wherein executing the decoder is based on the compressed representation of the input animation sequence, wherein the output animation sequence comprises character position information over a series of frames, and wherein the output animation sequence is based on the input animation sequence and comprises a second style tag.
Type: Grant
Filed: June 9, 2020
Date of Patent: January 4, 2022
Assignee: Electronic Arts Inc.
Inventors: Yiwei Zhao, Igor Borovikov, Maziar Sanjabi, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
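To show the encoder/decoder split in this abstract in code form, the sketch below compresses a sequence of character positions into a single vector and decodes it back into a sequence conditioned on a target style tag. The GRU layers, the style-tag embedding, and all dimensions are illustrative assumptions rather than the patented model.

```python
# Illustrative sketch: animation sequence -> compressed vector -> restyled animation sequence.
import torch
import torch.nn as nn

class AnimationStyleTransfer(nn.Module):
    def __init__(self, pos_dim=3, n_styles=4, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(pos_dim, hidden, batch_first=True)
        self.style_embed = nn.Embedding(n_styles, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_pos = nn.Linear(hidden, pos_dim)

    def forward(self, positions, target_style, out_frames):
        _, h = self.encoder(positions)                      # compressed representation (1, B, H)
        z = h[-1] + self.style_embed(target_style)          # condition on the second style tag
        dec_in = z.unsqueeze(1).expand(-1, out_frames, -1)  # repeat for each output frame
        out, _ = self.decoder(dec_in)
        return self.to_pos(out)                             # character positions per output frame

if __name__ == "__main__":
    model = AnimationStyleTransfer()
    walk = torch.randn(1, 30, 3)                            # 30 frames of (x, y, z) positions
    stylized = model(walk, torch.tensor([2]), out_frames=30)
    print(stylized.shape)  # torch.Size([1, 30, 3])
```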
-
Publication number: 20210379493
Abstract: According to an aspect of this specification, there is described a computer-implemented method comprising: receiving input data, the input data comprising data relating to a user of a computer game; generating, based on the input data, one or more candidate challenges for the computer game; determining, using a machine-learned model, whether each of the one or more candidate challenges satisfies a threshold condition, wherein the threshold condition is based on a target challenge difficulty; and in response to a positive determination, outputting the one or more candidate challenges that satisfy the threshold condition for use in the computer game by the user.
Type: Application
Filed: June 3, 2020
Publication date: December 9, 2021
Inventors: Jesse Harder, Harold Chaput, Mohsen Sardari, Navid Aghdaie, Kazi Zaman
-
Publication number: 20210383585
Abstract: A method, computer-readable storage medium, and device for generating an animation sequence are disclosed. The method comprises: receiving an input animation sequence, wherein the input animation sequence comprises character position information over a series of frames and a first style tag; executing an encoder to process the input animation sequence to generate a compressed representation of the input animation sequence, wherein the compressed representation of the input animation sequence comprises a vector representing the input animation sequence; and executing a decoder to generate an output animation sequence, wherein executing the decoder is based on the compressed representation of the input animation sequence, wherein the output animation sequence comprises character position information over a series of frames, and wherein the output animation sequence is based on the input animation sequence and comprises a second style tag.
Type: Application
Filed: June 9, 2020
Publication date: December 9, 2021
Inventors: Yiwei Zhao, Igor Borovikov, Maziar Sanjabi, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
-
Patent number: 11179631
Abstract: A computer-implemented method for providing video game content is provided. The method comprises monitoring a request rate of requests to provide video game content; and in response to the request rate exceeding a threshold request rate: initialising at least one instance of a first machine learning model, wherein the first machine learning model is configured to provide an output which is approximate to the output of a second machine learning model from which the first machine learning model is derived, the first machine learning model being produced by a model derivation process to have a faster response time compared to the second machine learning model; and providing video game content, wherein providing the video game content comprises generating an output responsive to the specified input using the at least one instance of the first machine learning model.
Type: Grant
Filed: March 18, 2020
Date of Patent: November 23, 2021
Assignee: ELECTRONIC ARTS INC.
Inventors: Tushar Bansal, Reza Pourabolghasem, Sundeep Narravula, Navid Aghdaie, Kazi Zaman
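A small sketch of the switching behaviour described above: while the measured request rate stays at or below a threshold, requests are served by the full model; once the rate exceeds the threshold, a faster approximate model derived from it takes over. The one-second window, the threshold value, and the two stub models are assumptions made for the example.

```python
# Illustrative sketch: rate monitoring gates a switch from the full model to a derived, faster model.
import time
from collections import deque

REQUEST_WINDOW_S = 1.0
THRESHOLD_RPS = 100

def full_model(x):        # slower, more accurate model (stub)
    return x * 0.98

def derived_model(x):     # faster approximation derived from the full model (stub)
    return x

class ContentServer:
    def __init__(self):
        self.timestamps = deque()

    def _request_rate(self, now):
        # Requests per second over a sliding window.
        while self.timestamps and now - self.timestamps[0] > REQUEST_WINDOW_S:
            self.timestamps.popleft()
        return len(self.timestamps) / REQUEST_WINDOW_S

    def provide_content(self, x):
        now = time.monotonic()
        self.timestamps.append(now)
        model = derived_model if self._request_rate(now) > THRESHOLD_RPS else full_model
        return model(x)

if __name__ == "__main__":
    server = ContentServer()
    print([round(server.provide_content(1.0), 2) for _ in range(5)])
```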