Patents by Inventor Igor Borovikov

Igor Borovikov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11724184
    Abstract: A graphics rendering system is disclosed for generating and streaming graphics data of a 3D environment from a server for rendering on a client in 2.5D. 2D textures can be transmitted in advance of frames showing the textures. Data transmitted for each frame can include 2D vertex positions of 2D meshes and depth data. The 2D vertex positions can be positions on a 2D projection as seen from a viewpoint within the 3D environment. Data for each frame can include changes to vertex positions and/or depth data. A prediction system can be used to predict when new objects will be displayed, and textures of those new objects can be transmitted in advance.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: August 15, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Igor Borovikov, Mohsen Sardari
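
As a rough illustration of the streaming idea summarized in the entry above (not the patented implementation), a minimal Python sketch of a per-frame 2.5D payload might look like the following; the structure names, fields, and delta encoding are assumptions made for the example:

```python
# Hypothetical sketch of a 2.5D streaming payload; not the patented design.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TexturePrefetch:
    texture_id: str   # 2D texture sent ahead of the frames that will use it
    data: bytes       # compressed texture payload

@dataclass
class FramePayload:
    frame_index: int
    # 2D vertex positions of each mesh, projected from the 3D viewpoint.
    vertices_2d: List[Tuple[float, float]] = field(default_factory=list)
    # Per-vertex depth so the client can composite the meshes in 2.5D.
    depths: List[float] = field(default_factory=list)

def encode_deltas(prev: FramePayload,
                  curr_vertices: List[Tuple[float, float]]) -> List[Tuple[int, float, float]]:
    """Encode only the vertices that moved since the previous frame."""
    deltas = []
    for i, (x, y) in enumerate(curr_vertices):
        px, py = prev.vertices_2d[i]
        if (x, y) != (px, py):
            deltas.append((i, x - px, y - py))
    return deltas
```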
  • Patent number: 11679334
    Abstract: The present disclosure provides a system that automatically analyzes telemetric data, biometric data, and other data associated with a gameplay session to identify events occurring during the gameplay session. The telemetric data is generated by the game application during the gameplay session. The biometric data can be generated by input devices that capture data associated with the user. The system can be configured to identify the segments associated with recorded gameplay events from the gameplay session and use the gameplay data associated with the events to create and output video data for a gameplay segment.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: June 20, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Mohamed Marwan Mattar, Igor Borovikov
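
To make the pipeline concrete, here is a minimal Python sketch of how telemetry events and biometric samples could be combined to pick highlight segments; the event format, heart-rate threshold, and padding are assumptions for illustration, not details from the patent:

```python
# Hypothetical sketch of segment selection from telemetry + biometric streams.
from typing import Dict, List, Tuple

def find_highlight_segments(
    telemetry: List[Dict],     # e.g. {"t": 12.3, "event": "goal_scored"}
    biometrics: List[Dict],    # e.g. {"t": 12.5, "heart_rate": 128}
    hr_threshold: float = 110.0,
    padding_s: float = 5.0,
) -> List[Tuple[float, float]]:
    """Return (start, end) times, in seconds, worth exporting as video clips."""
    segments = []
    for event in telemetry:
        t = event["t"]
        # Keep a gameplay event if biometric data shows elevated arousal near it.
        excited = any(
            abs(sample["t"] - t) < padding_s and sample["heart_rate"] > hr_threshold
            for sample in biometrics
        )
        if excited:
            segments.append((max(0.0, t - padding_s), t + padding_s))
    return segments

# Example usage with toy data:
clips = find_highlight_segments(
    [{"t": 30.0, "event": "boss_defeated"}],
    [{"t": 31.0, "heart_rate": 132}],
)
```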
  • Publication number: 20230177755
    Abstract: Systems and methods for identifying one or more facial expression parameters associated with a pose of a character are disclosed. A system may execute a game development application to identify facial expression parameters for a particular pose of a character. The system may receive an input identifying the pose of the character. Further, the system may provide the input to a machine learning model. The machine learning model may be trained based on a plurality of poses and expected facial expression parameters for each pose. Further, the machine learning model can identify a latent representation of the input. Based on the latent representation of the input, the machine learning model can generate one or more facial expression parameters of the character and output the one or more facial expression parameters. The system may also generate a facial expression of the character and output the facial expression.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 8, 2023
    Inventors: Wolfram Sebastian Starke, Igor Borovikov, Harold Henry Chaput
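
A minimal PyTorch sketch of the general pose-to-expression mapping described in the entry above might look like this; the layer sizes, latent dimension, and blendshape-style output are assumptions, not the patented model:

```python
# Hypothetical pose -> latent -> facial expression parameters model.
import torch
import torch.nn as nn

class PoseToExpression(nn.Module):
    def __init__(self, pose_dim: int = 63, latent_dim: int = 32, expr_dim: int = 51):
        super().__init__()
        # Encoder: pose input (e.g. joint rotations) -> latent representation.
        self.encoder = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Decoder: latent representation -> facial expression parameters.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, expr_dim))

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(pose))

model = PoseToExpression()
expression_params = model(torch.randn(1, 63))   # one pose -> 51 expression weights
```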
  • Patent number: 11648477
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: May 16, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
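
As an illustration of the per-feature architecture described above (assumed details only, not the patented networks), one might give each facial feature its own small encoder and concatenate the resulting parameter vectors:

```python
# Hypothetical per-feature encoders whose outputs are combined into one model.
import torch
import torch.nn as nn

FEATURES = ["eyes", "nose", "mouth", "jaw", "hair"]

class FeatureEncoder(nn.Module):
    def __init__(self, out_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, out_dim))

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        return self.net(crop)

encoders = {name: FeatureEncoder() for name in FEATURES}

def build_character_parameters(crops: dict) -> torch.Tensor:
    """Run each feature crop through its own network and concatenate the results."""
    vectors = [encoders[name](crops[name]) for name in FEATURES]
    return torch.cat(vectors, dim=-1)   # combined parameterized character model

# Example: 64x64 crops per feature taken from a normalized input image.
params = build_character_parameters(
    {name: torch.randn(1, 3, 64, 64) for name in FEATURES})
```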
  • Patent number: 11574557
    Abstract: Various aspects of the subject technology relate to systems, methods, and machine-readable media for learning a foreign language. The method includes executing a video game in a first human language. The method includes pausing gameplay of the video game for a paused time instance. The method includes executing a digital mini-puzzle game during the paused time instance in the gameplay of the video game, the digital mini-puzzle game executed in a second human language, the digital mini-puzzle game executed utilizing assets of the video game. The method includes receiving a response to the digital mini-puzzle game from a player-computing device corresponding to a player, the response comprising at least one of the first human language or the second human language. The method includes determining a score of the response corresponding to the player based at least in part on a comparison of the response with translation pairs in a database.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: February 7, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Mohsen Sardari
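
A toy Python sketch of the scoring step, assuming a simple exact-match rule against stored translation pairs (the pair format and the scoring rule are assumptions made for illustration):

```python
# Hypothetical scoring of a mini-puzzle response against translation pairs.
from typing import Dict, List

TRANSLATION_PAIRS: List[Dict[str, str]] = [
    {"en": "sword", "es": "espada"},
    {"en": "shield", "es": "escudo"},
]

def score_response(prompt_en: str, response_es: str) -> float:
    """Return 1.0 for a matching translation, 0.0 otherwise."""
    for pair in TRANSLATION_PAIRS:
        if pair["en"] == prompt_en.lower():
            return 1.0 if pair["es"] == response_es.strip().lower() else 0.0
    return 0.0

print(score_response("Sword", "espada"))   # 1.0
```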
  • Publication number: 20230009378
    Abstract: An imitation learning system may learn how to play a video game based on user interactions by a tester or other user of the video game. The imitation learning system may develop an imitation learning model based, at least in part, on the tester's interaction with the video game and the corresponding state of the video game to determine or predict actions that may be performed when interacting with the video game. The imitation learning system may use the imitation learning model to control automated agents that can play additional instances of the video game. Further, as the user continues to interact with the video game during testing, the imitation learning model may continue to be updated. Thus, the interactions by the automated agents with the video game may, over time, closely mimic the interactions of the user, enabling multiple tests of the video game to be performed simultaneously.
    Type: Application
    Filed: August 12, 2022
    Publication date: January 12, 2023
    Inventors: Igor Borovikov, Jesse Hans Stokes Harder, Thomas Patrick O'Neill, Jonathan Albert Rein, Avery H. Lee, Pawel Piotr Wrotek, Graham Michael Parker, David Vincent
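
As a rough behavioral-cloning sketch of the imitation-learning idea above (the state encoding and nearest-neighbor policy are assumptions, not the patented system):

```python
# Hypothetical imitation tester: log (state, action) pairs, fit a simple policy.
from sklearn.neighbors import KNeighborsClassifier

class ImitationTester:
    def __init__(self):
        self.states, self.actions = [], []
        self.policy = KNeighborsClassifier(n_neighbors=1)

    def record(self, state_vector, action_id):
        """Log one observed tester interaction and refresh the model."""
        self.states.append(state_vector)
        self.actions.append(action_id)
        self.policy.fit(self.states, self.actions)

    def act(self, state_vector):
        """Predict the action an automated test agent should take in this state."""
        return int(self.policy.predict([state_vector])[0])

tester = ImitationTester()
tester.record([0.2, 0.5, 0.1], action_id=3)    # tester pressed "jump" in this state
print(tester.act([0.21, 0.52, 0.09]))          # agent imitates: 3
```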
  • Patent number: 11504619
    Abstract: A video reenactment system and method analyze a video clip that a video game player wishes to reenact and maps objects and actions within the video clip to virtual objects and virtual actions within the video game. A reenactment script indicating a sequence of virtual objects and virtual actions as mapped to objects and actions in the video clip is generated using a video translation model and stored for use in reenacting the video clip. The reenactment script can be used within the video game to reenact the objects and actions of the video clip. The reenactment of the video clip may be interactive, where a player may assume control within the reenactment and, when the player relinquishes control, the reenactment will continue at the appropriate point in the sequence of actions by skipping the actions already performed by the player.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: November 22, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Harold Henry Chaput, Nitish Victor, Mohsen Sardari
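
A minimal sketch of a reenactment script and interactive playback, assuming a simple step list and index-based skipping (the names and structure are hypothetical, not the patented script format):

```python
# Hypothetical reenactment script: ordered (virtual object, virtual action) steps.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Step:
    actor: str      # virtual object mapped from the source clip
    action: str     # virtual action mapped from the source clip

def play_reenactment(script: List[Step], performed_by_player: Set[int]) -> List[Step]:
    """Return the steps the engine should still play back automatically."""
    return [step for i, step in enumerate(script) if i not in performed_by_player]

script = [Step("striker", "dribble"), Step("striker", "shoot"), Step("keeper", "dive")]
remaining = play_reenactment(script, performed_by_player={1})   # player took the shot
```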
  • Patent number: 11446570
    Abstract: An imitation learning system may learn how to play a video game based on user interactions by a tester or other user of the video game. The imitation learning system may develop an imitation learning model based, at least in part, on the tester's interaction with the video game and the corresponding state of the video game to determine or predict actions that may be performed when interacting with the video game. The imitation learning system may use the imitation learning model to control automated agents that can play additional instances of the video game. Further, as the user continues to interact with the video game during testing, the imitation learning model may continue to be updated. Thus, the interactions by the automated agents with the video game may, over time, closely mimic the interactions of the user, enabling multiple tests of the video game to be performed simultaneously.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: September 20, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Jesse Hans Stokes Harder, Thomas Patrick O'Neill, Jonathan Albert Rein, Avery H. Lee, Pawel Piotr Wrotek, Graham Michael Parker, David Vincent
  • Publication number: 20220270324
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Application
    Filed: May 13, 2022
    Publication date: August 25, 2022
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Patent number: 11367254
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: June 21, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Patent number: 11276216
    Abstract: Systems and methods for generating a customized virtual animal character are disclosed. A system may obtain video data or other media depicting a real animal, and then may provide the obtained media to one or more machine learning models configured to learn visual appearance and behavior information regarding the particular animal depicted in the video or other media. The system may then generate a custom visual appearance model and a custom behavior model corresponding to the real animal, which may subsequently be used to render, within a virtual environment of a video game, a virtual animal character that resembles the real animal in appearance and in-game behavior.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: March 15, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Igor Borovikov, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie, Kazi Atif-Uz Zaman, Kenneth Alan Moss
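
As an illustrative sketch only, a custom animal character could pair learned appearance parameters with a learned behavior model; the Markov-style transition table below is an assumption made for the example, not the patented behavior model:

```python
# Hypothetical virtual animal: appearance parameters + sampled behaviors.
import random

class AnimalCharacter:
    def __init__(self, appearance_params, behavior_transitions):
        # e.g. fur color and body proportions inferred from the source video
        self.appearance_params = appearance_params
        # e.g. {"idle": {"walk": 0.6, "idle": 0.4}, ...} learned from video
        self.behavior_transitions = behavior_transitions
        self.state = "idle"

    def step(self) -> str:
        """Sample the next in-game behavior from the learned transition distribution."""
        options = self.behavior_transitions[self.state]
        self.state = random.choices(list(options), weights=list(options.values()))[0]
        return self.state

dog = AnimalCharacter(
    appearance_params={"fur": "brown", "size": 0.8},
    behavior_transitions={"idle": {"walk": 0.6, "idle": 0.4},
                          "walk": {"idle": 0.5, "walk": 0.5}})
print(dog.step())
```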
  • Publication number: 20220032202
    Abstract: The present disclosure provides a system that automatically analyzes telemetric data, biometric data, and other data associated with a gameplay session to identify events occurring during the gameplay session. The telemetric data is generated by the game application during the gameplay session. The biometric data can be generated by input devices that capture data associated with the user. The system can be configured to identify the segments associated with recorded gameplay events from the gameplay session and use the gameplay data associated with the events to create and output video data for a gameplay segment.
    Type: Application
    Filed: August 12, 2021
    Publication date: February 3, 2022
    Inventors: Mohamed Marwan Mattar, Igor Borovikov
  • Patent number: 11217001
    Abstract: A method, computer-readable storage medium, and device for generating an animation sequence are disclosed. The method comprises: receiving an input animation sequence, wherein the input animation sequence comprises character position information over a series of frames and a first style tag; executing an encoder to process the input animation sequence to generate a compressed representation of the input animation sequence, wherein the compressed representation of the input animation sequence comprises a vector representing the input animation sequence; and executing a decoder to generate an output animation sequence, wherein executing the decoder is based on the compressed representation of the input animation sequence, wherein the output animation sequence comprises character position information over a series of frames, and wherein the output animation sequence is based on the input animation sequence and comprises a second style tag.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: January 4, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Yiwei Zhao, Igor Borovikov, Maziar Sanjabi, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
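
A minimal PyTorch sketch of the encode-compress-decode flow with a style tag, assuming a flat linear encoder/decoder and an embedding for the style conditioning (all dimensions are made up for the example and are not the patented architecture):

```python
# Hypothetical animation style transfer: clip -> latent vector -> restyled clip.
import torch
import torch.nn as nn

class StyleTransferAnimator(nn.Module):
    def __init__(self, frame_dim: int = 72, num_frames: int = 60,
                 latent_dim: int = 64, num_styles: int = 8):
        super().__init__()
        in_dim = frame_dim * num_frames
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, latent_dim))
        self.style_embedding = nn.Embedding(num_styles, latent_dim)
        self.decoder = nn.Linear(latent_dim, in_dim)
        self.shape = (num_frames, frame_dim)

    def forward(self, clip: torch.Tensor, target_style: torch.Tensor) -> torch.Tensor:
        # Compress the input sequence, shift it toward the target style, decode.
        z = self.encoder(clip) + self.style_embedding(target_style)
        return self.decoder(z).view(-1, *self.shape)

model = StyleTransferAnimator()
clip = torch.randn(1, 60, 72)                 # character positions per frame
restyled = model(clip, torch.tensor([3]))     # same motion, second style tag
```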
  • Publication number: 20210383585
    Abstract: A method, computer-readable storage medium, and device for generating an animation sequence are disclosed. The method comprises: receiving an input animation sequence, wherein the input animation sequence comprises character position information over a series of frames and a first style tag; executing an encoder to process the input animation sequence to generate a compressed representation of the input animation sequence, wherein the compressed representation of the input animation sequence comprises a vector representing the input animation sequence; and executing a decoder to generate an output animation sequence, wherein executing the decoder is based on the compressed representation of the input animation sequence, wherein the output animation sequence comprises character position information over a series of frames, and wherein the output animation sequence is based on the input animation sequence and comprises a second style tag.
    Type: Application
    Filed: June 9, 2020
    Publication date: December 9, 2021
    Inventors: Yiwei Zhao, Igor Borovikov, Maziar Sanjabi, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Publication number: 20210346798
    Abstract: An imitation learning system may learn how to play a video game based on user interactions by a tester or other user of the video game. The imitation learning system may develop an imitation learning model based, at least in part, on the tester's interaction with the video game and the corresponding state of the video game to determine or predict actions that may be performed when interacting with the video game. The imitation learning system may use the imitation learning model to control automated agents that can play additional instances of the video game. Further, as the user continues to interact with the video game during testing, the imitation learning model may continue to be updated. Thus, the interactions by the automated agents with the video game may, over time, closely mimic the interactions of the user, enabling multiple tests of the video game to be performed simultaneously.
    Type: Application
    Filed: May 8, 2020
    Publication date: November 11, 2021
    Inventors: Igor Borovikov, Jesse Hans Stokes Harder, Thomas Patrick O'Neill, Jonathan Albert Rein, Avery H. Lee, Pawel Piotr Wrotek, Graham Michael Parker, David Vincent
  • Publication number: 20210327135
    Abstract: A method, computer-readable storage medium, and device for generating a character model. The method comprises: receiving an input image of a reference subject; processing the input image to generate a normalized image; identifying a set of features present in the normalized image, wherein each feature in the set of features corresponds to a portion of a head or body of the reference subject; for each feature in the set of features, processing at least a portion of the normalized image including the feature by a neural network model corresponding to the feature to generate a parameter vector corresponding to the feature; and combining the parameter vectors output by respective neural network models corresponding to respective features in the set of features to generate a parameterized character model corresponding to the reference subject in the input image.
    Type: Application
    Filed: April 21, 2020
    Publication date: October 21, 2021
    Inventors: Igor Borovikov, Pawel Piotr Wrotek, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kazi Zaman
  • Publication number: 20210316212
    Abstract: A graphics rendering system is disclosed for generating and streaming graphics data of a 3D environment from a server for rendering on a client in 2.5D. 2D textures can be transmitted in advance of frames showing the textures. Data transmitted for each frame can include 2D vertex positions of 2D meshes and depth data. The 2D vertex positions can be positions on a 2D projection as seen from a viewpoint within the 3D environment. Data for each frame can include changes to vertex positions and/or depth data. A prediction system can be used to predict when new objects will be displayed, and textures of those new objects can be transmitted in advance.
    Type: Application
    Filed: April 26, 2021
    Publication date: October 14, 2021
    Inventors: Igor Borovikov, Mohsen Sardari
  • Patent number: 11110353
    Abstract: Systems and methods for utilizing a video game console to monitor the player's video game, detect when a particular gameplay situation occurs during the player's video game experience, and collect game state data corresponding to how the player reacts to the particular gameplay situation or an effect of the reaction. In some cases, the video game console can receive an exploratory rule set to apply during the particular gameplay situation. In some cases, the video game console can trigger the particular gameplay situation. A system can receive the game state data from many video game consoles and train a rule set based on the game state data. Advantageously, the system can save computational resources by utilizing the players' video game experience to train the rule set.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: September 7, 2021
    Assignee: Electronic Arts Inc.
    Inventors: Caedmon Somers, Jason Rupert, Igor Borovikov, Ahmad Beirami, Yunqi Zhao, Mohsen Sardari, John Kolen, Navid Aghdaie, Kazi Atif-Uz Zaman
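
A small Python sketch of the client-side idea, assuming a server-provided rule set keyed by situation name and a callback that uploads the collected game state for centralized training (all names are hypothetical):

```python
# Hypothetical console-side hook: apply an exploratory rule, report game state.
from typing import Callable, Dict, List

class ExploratoryRuleClient:
    def __init__(self, rule_set: Dict[str, str], upload: Callable[[Dict], None]):
        self.rule_set = rule_set      # e.g. {"two_on_one_breakaway": "pass_early"}
        self.upload = upload          # sends collected game state to the server
        self.collected: List[Dict] = []

    def on_situation(self, situation: str, game_state: Dict) -> str:
        """Apply the exploratory rule if one exists and record the outcome."""
        action = self.rule_set.get(situation, "default_behavior")
        record = {"situation": situation, "action": action, "state": game_state}
        self.collected.append(record)
        self.upload(record)
        return action

client = ExploratoryRuleClient({"two_on_one_breakaway": "pass_early"}, upload=print)
client.on_situation("two_on_one_breakaway", {"puck_x": 0.7, "defenders": 1})
```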
  • Patent number: 11090568
    Abstract: The present disclosure provides a system that automatically analyzes telemetric data, biometric data, and other data associated with a gameplay session to identify events occurring during the gameplay session. The telemetric data is generated by the game application during the gameplay session. The biometric data can be generated by input devices that capture data associated with the user. The system can be configured to identify the segments associated with recorded gameplay events from the gameplay session and use the gameplay data associated with the events to create and output video data for a gameplay segment.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: August 17, 2021
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Mohamed Marwan Mattar, Igor Borovikov
  • Patent number: 10987579
    Abstract: A graphics rendering system is disclosed for generating and streaming graphics data of a 3D environment from a server for rendering on a client in 2.5D. 2D textures can be transmitted in advance of frames showing the textures. Data transmitted for each frame can include 2D vertex positions of 2D meshes and depth data. The 2D vertex positions can be positions on a 2D projection as seen from a viewpoint within the 3D environment. Data for each frame can include changes to vertex positions and/or depth data. A prediction system can be used to predict when new objects will be displayed, and textures of those new objects can be transmitted in advance.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: April 27, 2021
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Igor Borovikov, Mohsen Sardari