Patents by Inventor Wolfram Sebastian Starke

Wolfram Sebastian Starke has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230394735
    Abstract: Embodiments of the systems and methods described herein provide a dynamic animation generation system that can apply a real-life video clip of a character in motion to a first neural network to obtain rough motion data, such as pose information, for each frame of the video clip, and overlay the pose information on top of the video clip to generate a modified video clip. The system can identify a sliding window that includes a current frame, past frames, and future frames of the modified video clip, and apply the modified video clip to a second neural network to predict the next frame. The dynamic animation generation system can then advance the sliding window so that it includes the predicted frame, and apply the new sliding window to the second neural network to predict the frame after that.
    Type: Application
    Filed: June 5, 2023
    Publication date: December 7, 2023
    Inventors: Mingyi Shi, Yiwei Zhao, Wolfram Sebastian Starke, Mohsen Sardari, Navid Aghdaie
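The sliding-window prediction loop described in the abstract above can be sketched as an autoregressive rollout. The trained second neural network is not public, so a linear extrapolation of the last two poses stands in for it here, and the window is simplified to past frames only; function names and shapes are assumptions for illustration.

```python
import numpy as np

def predict_next_frame(window):
    # Stand-in for the second neural network: linearly extrapolate
    # from the last two poses in the window (illustrative only).
    return 2 * window[-1] - window[-2]

def autoregressive_rollout(poses, window_size, n_future):
    """Slide a window over per-frame pose data, predict the next frame,
    then advance the window to include the prediction."""
    frames = [np.asarray(p, dtype=float) for p in poses]
    for _ in range(n_future):
        window = frames[-window_size:]
        frames.append(predict_next_frame(window))
    return frames
```

Because each predicted frame re-enters the window, errors compound over long rollouts, which is why the described system grounds the window in pose data extracted from the real video clip.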
  • Patent number: 11830121
    Abstract: In some embodiments, the dynamic animation generation system can provide a deep learning framework to produce a large variety of martial arts movements in a controllable manner from unstructured motion capture data. The system can imitate animation layering using neural networks, with the aim of overcoming challenges when mixing, blending, and editing movements from unaligned motion sources. The system can synthesize movements from given reference motions and simple user controls, and can not only generate unseen locomotion sequences but also reconstruct signature motions of different fighters. To achieve this, the dynamic animation generation system can adopt a modular framework composed of a motion generator, which maps the trajectories of a number of key joints and the root trajectory to the full-body motion, and a set of control modules that map user inputs to such trajectories.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: November 28, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Wolfram Sebastian Starke, Yiwei Zhao, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie
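The modular split described in the abstract above, control modules producing key-joint/root trajectories and a motion generator lifting them to full-body motion, can be sketched as two composed functions. Both bodies here are hypothetical stand-ins (a straight-line trajectory and a fixed per-joint offset) for what are, in the patent, learned networks; all names and shapes are assumptions.

```python
import numpy as np

def control_module(user_input, horizon):
    """Hypothetical control module: turn a 2-D user direction into a
    straight-line root trajectory over `horizon` frames."""
    direction = np.asarray(user_input, dtype=float)
    steps = np.linspace(0.0, 1.0, horizon)[:, None]
    return steps * direction                        # shape (horizon, 2)

def motion_generator(root_trajectory, n_joints=4):
    """Stand-in motion generator: lift the root trajectory to a full-body
    pose sequence by offsetting each joint (the real generator is learned)."""
    offsets = np.arange(n_joints)[None, :, None] * 0.1
    return root_trajectory[:, None, :] + offsets    # shape (horizon, n_joints, 2)

def synthesize(user_input, horizon=8):
    return motion_generator(control_module(user_input, horizon))
```

The value of the split is that new control modules (different fighters, different input devices) can be swapped in without retraining the motion generator.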
  • Patent number: 11816772
    Abstract: Systems and methods for using a deep learning framework to customize animation of an in-game character of a video game. The system can be preconfigured with animation rule sets corresponding to various animations. Each animation can comprise a series of distinct poses that collectively form the particular animation. The system can provide an animation-editing interface that enables a user of the video game to modify at least one pose or frame of the animation. The system can realistically extrapolate these modifications across some or all portions of the animation. Additionally or alternatively, the system can realistically extrapolate the modifications across other types of animations.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: November 14, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Wolfram Sebastian Starke, Harold Henry Chaput
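The core idea in the abstract above, a user edits one pose and the system spreads that edit across the rest of the animation, can be illustrated with a hand-rolled stand-in: propagate the edit delta to neighboring frames with a Gaussian falloff. The patent uses a learned extrapolation instead; this function and its parameters are assumptions for illustration only.

```python
import numpy as np

def propagate_edit(frames, edited_index, edited_pose, falloff=3.0):
    """Spread a user's edit of one pose across neighboring frames with a
    Gaussian falloff (a simple stand-in for learned extrapolation)."""
    frames = np.asarray(frames, dtype=float).copy()
    delta = np.asarray(edited_pose, dtype=float) - frames[edited_index]
    for i in range(len(frames)):
        weight = np.exp(-((i - edited_index) ** 2) / (2 * falloff ** 2))
        frames[i] = frames[i] + weight * delta
    return frames
```

The edited frame receives the full delta (weight 1.0) while distant frames are barely touched, which keeps the overall animation coherent.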
  • Publication number: 20230316615
    Abstract: A computing system may provide functionality for controlling an animated model to perform actions and transitions between them. The system may determine, from among a plurality of edges from a first node of a control graph to respective other nodes of the graph, a selected edge from the first node to a selected node. The system may then determine controls for an animated model in a simulation based at least in part on the selected edge, control data associated with the selected node, a current simulation state of the simulation, and a machine-learned algorithm; determine an updated simulation state based at least in part on the controls for the animated model; and adapt one or more parameters of the machine-learned algorithm based at least in part on the updated simulation state and a desired simulation state.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Inventors: Zhaoming Xie, Wolfram Sebastian Starke, Harold Henry Chaput
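The control-graph structure in the abstract above can be sketched as a small data structure: nodes carry control data, edges define which action transitions are allowed, and edge selection scores the reachable nodes. The patent scores edges with a machine-learned policy conditioned on simulation state; here `score` is any caller-supplied function, and the whole skeleton is an illustrative assumption.

```python
class ControlGraph:
    """Skeleton of a control graph: nodes carry control data,
    edges define which action transitions are allowed."""

    def __init__(self):
        self.control_data = {}   # node name -> control parameters
        self.edges = {}          # node name -> reachable node names

    def add_node(self, name, controls):
        self.control_data[name] = controls
        self.edges.setdefault(name, [])

    def add_edge(self, src, dst):
        self.edges[src].append(dst)

    def select_edge(self, node, score):
        """Pick the outgoing edge whose target node's control data scores
        highest under a caller-supplied scoring function."""
        return max(self.edges[node], key=lambda dst: score(self.control_data[dst]))
```

Restricting transitions to graph edges guarantees the animated model only attempts transitions the designers marked as feasible.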
  • Publication number: 20230310998
    Abstract: The present disclosure provides a periodic autoencoder that can generate a general motion manifold structure from the local periodicity of movement, parameterized by phase, frequency, and amplitude. The periodic autoencoder is a novel neural network architecture that can learn periodic features from large unstructured motion datasets in an unsupervised manner. Character movements can be decomposed into multiple latent channels that capture the non-linear periodicity of different body segments during synchronous, asynchronous, and transition movements while progressing forward in time, capturing both the spatial and the temporal data associated with the movements.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Inventors: Wolfram Sebastian Starke, Harold Henry Chaput
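The phase/frequency/amplitude parameterization named in the abstract above can be made concrete with a classical signal-processing stand-in: estimate those three parameters for one latent channel from its dominant FFT component. The actual periodic autoencoder learns these parameters end to end rather than computing them analytically; this function is illustrative only.

```python
import numpy as np

def periodic_params(signal, dt):
    """Estimate the amplitude, frequency, and phase of one latent channel
    from its dominant FFT component (a classical stand-in for the
    parameters the periodic autoencoder learns)."""
    signal = np.asarray(signal, dtype=float)
    spectrum = np.fft.rfft(signal - signal.mean())
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    k = np.argmax(np.abs(spectrum[1:])) + 1      # skip the DC bin
    amplitude = 2.0 * np.abs(spectrum[k]) / len(signal)
    return amplitude, freqs[k], np.angle(spectrum[k])
```

For a channel of the form A·cos(2πft + φ), the triple (A, f, φ) is recovered exactly when f falls on an FFT bin.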
  • Publication number: 20230267668
    Abstract: The present disclosure provides embodiments for joint twist generation for animation. The system can utilize a neural network, also referred to as a deep neural network, which applies machine learning processes to create animation data that is more life-like and realistic. The system can obtain a set of axis vectors for a rig of a virtual character model; obtain a twist model for the rig; input the set of axis vectors to the twist model to obtain a set of twist vectors; and determine animation data based on the set of axis vectors and the set of twist vectors.
    Type: Application
    Filed: February 23, 2022
    Publication date: August 24, 2023
    Inventors: Wolfram Sebastian Starke, Harold Henry Chaput, Yiwei Zhao
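For context on what "twist" means for a rig joint, the standard analytic tool is the swing-twist decomposition, which splits a joint rotation into a twist about the bone axis and a swing perpendicular to it. The patent's contribution is a learned twist model rather than this closed form; the sketch below shows only the classical decomposition, with quaternions in (w, x, y, z) order.

```python
import numpy as np

def twist_about_axis(q, axis):
    """Swing-twist decomposition: the twist component of quaternion
    q = (w, x, y, z) about a unit axis (standard analytic technique)."""
    w, x, y, z = q
    axis = np.asarray(axis, dtype=float)
    proj = np.dot([x, y, z], axis) * axis      # vector part along `axis`
    twist = np.array([w, *proj])
    norm = np.linalg.norm(twist)
    if norm < 1e-12:                           # 180-degree swing: twist undefined
        return np.array([1.0, 0.0, 0.0, 0.0])
    return twist / norm
```

A rotation purely about the chosen axis is returned unchanged, while a rotation perpendicular to it yields the identity twist, which is the behavior a rig's twist bones rely on.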
  • Publication number: 20230237724
    Abstract: Systems and methods are provided for enhanced animation generation based on motion mapping with local bone phases. An example method includes accessing first animation control information generated for a first frame of an electronic game, including local bone phases representing phase information associated with contacts between a plurality of rigid bodies of an in-game character and an in-game environment. The method further includes executing a local motion matching process for each of the local bone phases and generating, for a second frame of the electronic game, a second pose of the character model based on the matched local poses.
    Type: Application
    Filed: January 23, 2023
    Publication date: July 27, 2023
    Inventors: Wolfram Sebastian Starke, Yiwei Zhao, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie
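The per-bone matching step in the abstract above can be sketched as follows: for each bone, pick the database frame whose stored local phase is closest on the circle to the bone's current phase, then assemble the matched local poses into one target pose. The database format and distance metric here are hypothetical simplifications of the patent's motion-matching machinery.

```python
import numpy as np

def match_local_poses(current_phases, database):
    """Per-bone motion matching sketch: select, for each bone, the
    database frame with the nearest circular phase, then assemble the
    matched local poses into one target pose."""
    matched = []
    for bone, phase in enumerate(current_phases):
        dists = [abs(np.angle(np.exp(1j * (entry["phases"][bone] - phase))))
                 for entry in database]
        best = int(np.argmin(dists))
        matched.append(database[best]["pose"][bone])
    return np.array(matched)
```

Matching per bone rather than per whole pose is the point: different body parts can draw from different database frames, so asynchronous contacts (e.g. hands and feet out of step) are handled naturally.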
  • Publication number: 20230186541
    Abstract: Systems and methods for using a deep learning framework to customize animation of an in-game character of a video game. The system can be preconfigured with animation rule sets corresponding to various animations. Each animation can comprise a series of distinct poses that collectively form the particular animation. The system can provide an animation-editing interface that enables a user of the video game to modify at least one pose or frame of the animation. The system can realistically extrapolate these modifications across some or all portions of the animation. Additionally or alternatively, the system can realistically extrapolate the modifications across other types of animations.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 15, 2023
    Inventors: Wolfram Sebastian Starke, Harold Henry Chaput
  • Publication number: 20230186543
    Abstract: Use of pose prediction models enables runtime animation to be generated for an electronic game. The pose prediction model can predict a character pose based on joint data for the character's pose in a previous frame. Further, by using environment data, it is possible to modify the prediction of the character pose based on the particular environment in which the character is located. Advantageously, the use of machine learning enables prediction of character movement in environments in which it is difficult or impossible to obtain motion capture data.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 15, 2023
    Inventors: Wolfram Sebastian Starke, Harold Henry Chaput
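The two inputs named in the abstract above, previous-frame joint data and environment data, can be sketched as one autoregressive prediction step. A linear map stands in for the trained pose prediction model; the function name, weights, and shapes are assumptions for illustration.

```python
import numpy as np

def predict_pose(prev_joints, env_features, w_pose, w_env, bias):
    """One autoregressive step: predict the next pose from the previous
    frame's joint data, modified by environment features (a linear
    stand-in for the trained pose prediction model)."""
    prev_joints = np.asarray(prev_joints, dtype=float)
    env_features = np.asarray(env_features, dtype=float)
    return w_pose @ prev_joints + w_env @ env_features + bias
```

With zero environment features the prediction reduces to the pose-only term, showing how the environment acts as a modifier on the base prediction.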
  • Publication number: 20230177755
    Abstract: Systems and methods for identifying one or more facial expression parameters associated with a pose of a character are disclosed. A system may execute a game development application to identify facial expression parameters for a particular pose of a character. The system may receive an input identifying the pose of the character. Further, the system may provide the input to a machine learning model. The machine learning model may be trained based on a plurality of poses and expected facial expression parameters for each pose. Further, the machine learning model can identify a latent representation of the input. Based on the latent representation of the input, the machine learning model can generate one or more facial expression parameters of the character and output the one or more facial expression parameters. The system may also generate a facial expression of the character and output the facial expression.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 8, 2023
    Inventors: Wolfram Sebastian Starke, Igor Borovikov, Harold Henry Chaput
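The pose-to-latent-to-expression pipeline in the abstract above can be sketched minimally as a bottleneck mapping: encode the pose into a latent representation, then decode it into facial expression parameters. Linear layers with a tanh latent stand in for the trained machine learning model; every name and dimension here is an assumption.

```python
import numpy as np

def expression_from_pose(pose, encoder_w, decoder_w):
    """Map a character pose through a latent bottleneck to facial
    expression parameters (linear stand-in for the trained model)."""
    latent = np.tanh(encoder_w @ np.asarray(pose, dtype=float))
    return decoder_w @ latent
```

The latent bottleneck is what lets the model generalize: poses that encode to nearby latent points yield similar expression parameters even if they never appeared in training.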
  • Patent number: 11670030
    Abstract: Embodiments of the systems and methods described herein provide a dynamic animation generation system that can apply a real-life video clip of a character in motion to a first neural network to obtain rough motion data, such as pose information, for each frame of the video clip, and overlay the pose information on top of the video clip to generate a modified video clip. The system can identify a sliding window that includes a current frame, past frames, and future frames of the modified video clip, and apply the modified video clip to a second neural network to predict the next frame. The dynamic animation generation system can then advance the sliding window so that it includes the predicted frame, and apply the new sliding window to the second neural network to predict the frame after that.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: June 6, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Mingyi Shi, Yiwei Zhao, Wolfram Sebastian Starke, Mohsen Sardari, Navid Aghdaie
  • Publication number: 20230033290
    Abstract: Systems and methods are provided for enhanced animation generation based on motion mapping with local bone phases. An example method includes accessing first animation control information generated for a first frame of an electronic game, including local bone phases representing phase information associated with contacts between a plurality of rigid bodies of an in-game character and an in-game environment. The method further includes executing a local motion matching process for each of the local bone phases and generating, for a second frame of the electronic game, a second pose of the character model based on the matched local poses.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 2, 2023
    Inventors: Wolfram Sebastian Starke, Yiwei Zhao, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie
  • Patent number: 11562523
    Abstract: Systems and methods are provided for enhanced animation generation based on motion mapping with local bone phases. An example method includes accessing first animation control information generated for a first frame of an electronic game, including local bone phases representing phase information associated with contacts between a plurality of rigid bodies of an in-game character and an in-game environment. The method further includes executing a local motion matching process for each of the local bone phases and generating, for a second frame of the electronic game, a second pose of the character model based on the matched local poses.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: January 24, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Wolfram Sebastian Starke, Yiwei Zhao, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie
  • Publication number: 20230005203
    Abstract: Embodiments of the systems and methods described herein provide a dynamic animation generation system that can apply a real-life video clip of a character in motion to a first neural network to obtain rough motion data, such as pose information, for each frame of the video clip, and overlay the pose information on top of the video clip to generate a modified video clip. The system can identify a sliding window that includes a current frame, past frames, and future frames of the modified video clip, and apply the modified video clip to a second neural network to predict the next frame. The dynamic animation generation system can then advance the sliding window so that it includes the predicted frame, and apply the new sliding window to the second neural network to predict the frame after that.
    Type: Application
    Filed: July 1, 2021
    Publication date: January 5, 2023
    Inventors: Mingyi Shi, Yiwei Zhao, Wolfram Sebastian Starke, Mohsen Sardari, Navid Aghdaie
  • Patent number: 11321895
    Abstract: Techniques for automated generation of digital character animation are described, implemented by an animation generation system via a computing device. These techniques enable the animation generation system to generate an animation of a digital character automatically and without user intervention, responsive to a user input specifying a target action, such that the digital character can perform a complex set of actions in a precise and realistic manner within an environment contained in digital content, e.g., an animation as part of a digital video.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: May 3, 2022
    Assignee: Adobe Inc.
    Inventors: Wolfram Sebastian Starke, Jun Saito
  • Publication number: 20210375021
    Abstract: Techniques for automated generation of digital character animation are described, implemented by an animation generation system via a computing device. These techniques enable the animation generation system to generate an animation of a digital character automatically and without user intervention, responsive to a user input specifying a target action, such that the digital character can perform a complex set of actions in a precise and realistic manner within an environment contained in digital content, e.g., an animation as part of a digital video.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Applicant: Adobe Inc.
    Inventors: Wolfram Sebastian Starke, Jun Saito