Patents by Inventor Elaheh Akhoundi

Elaheh Akhoundi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230398456
    Abstract: Systems and methods are provided for enhanced pose generation based on generative modeling. An example method includes accessing an autoencoder trained based on poses of real-world persons, each pose being defined based on location information associated with joints, with the autoencoder being trained to map an input pose to a feature encoding associated with a latent feature space. Information identifying, at least, a first pose and a second pose associated with a character configured for inclusion in an in-game world is obtained via user input, with each of the poses being defined based on location information associated with the joints and with the joints being included on a skeleton associated with the character. Feature encodings associated with the first pose and the second pose are generated based on the autoencoder. Output poses are generated based on transition information associated with the first pose and the second pose.
    Type: Application
    Filed: May 11, 2023
    Publication date: December 14, 2023
    Inventor: Elaheh Akhoundi
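The idea in the abstract above — encoding two poses into a latent feature space and decoding interpolated encodings into transition poses — can be sketched as follows. This is a minimal illustration with hypothetical joint counts, latent size, and untrained random weights standing in for a trained autoencoder; it is not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_JOINTS, LATENT_DIM = 22, 8          # hypothetical skeleton and latent sizes
POSE_DIM = NUM_JOINTS * 3               # (x, y, z) location per joint

# Random weights stand in for a trained linear encoder/decoder.
W_enc = rng.normal(scale=0.1, size=(POSE_DIM, LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))

def encode(pose):
    """Map a pose to its feature encoding in the latent space."""
    return pose @ W_enc

def decode(z):
    """Map a feature encoding back to a full pose."""
    return z @ W_dec

def transition_poses(pose_a, pose_b, steps=5):
    """Decode points interpolated between the two feature encodings."""
    z_a, z_b = encode(pose_a), encode(pose_b)
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([decode((1 - t) * z_a + t * z_b) for t in ts])

pose_a = rng.normal(size=POSE_DIM)      # first user-specified pose
pose_b = rng.normal(size=POSE_DIM)      # second user-specified pose
frames = transition_poses(pose_a, pose_b)
print(frames.shape)                     # (5, 66)
```

In practice the encoder and decoder would be trained nonlinear networks, but the interpolate-in-latent-space pattern is the same.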
  • Patent number: 11836843
    Abstract: Systems and methods are provided for enhanced pose generation based on conditional modeling of inverse kinematics. An example method includes accessing an autoencoder trained based on poses, with each pose being defined based on location information of joints, and the autoencoder being trained based on conditional information indicating positions of a subset of the joints. The autoencoder is trained to reconstruct, via a latent variable space, each pose based on the conditional information. Information specifying positions of the subset of the joints is obtained via an interactive user interface and the latent variable space is sampled. An output is generated for inclusion in the interactive user interface based on the sampling and the positions.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: December 5, 2023
    Assignee: Electronic Arts Inc.
    Inventors: Elaheh Akhoundi, Fabio Zinno
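The conditional-inverse-kinematics idea above — a decoder conditioned on the positions of a joint subset, with the latent variable space sampled to produce a full pose — can be sketched like this. Sizes, weights, and the subset choice are hypothetical; a trained conditional autoencoder would replace the random linear decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_JOINTS, SUBSET_JOINTS, LATENT_DIM = 22, 4, 8   # hypothetical sizes
POSE_DIM, COND_DIM = NUM_JOINTS * 3, SUBSET_JOINTS * 3

# Random weights stand in for a trained conditional decoder.
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM + COND_DIM, POSE_DIM))

def decode(z, cond):
    """Reconstruct a full pose from a latent sample plus the conditional
    information (positions of the constrained joint subset)."""
    return np.concatenate([z, cond]) @ W_dec

def sample_pose(cond, rng):
    z = rng.standard_normal(LATENT_DIM)   # sample the latent variable space
    return decode(z, cond)

cond = rng.normal(size=COND_DIM)          # e.g. hand/foot positions from the UI
pose = sample_pose(cond, rng)
print(pose.shape)                         # (66,)
```

Repeated sampling with the same conditioning would yield a family of full-body poses all consistent with the user-specified joint positions.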
  • Publication number: 20230316616
    Abstract: This specification relates to the generation of animation data using recurrent neural networks. According to a first aspect of this specification, there is described a computer-implemented method comprising: sampling an initial hidden state of a recurrent neural network (RNN) from a distribution; generating, using the RNN, a sequence of frames of animation from the initial hidden state of the RNN and an initial set of animation data comprising a known initial frame of animation, the generating comprising, for each generated frame of animation in the sequence of frames of animation: inputting, into the RNN, a respective set of animation data comprising the previous frame of animation data in the sequence of frames of animation; generating, using the RNN and based on a current hidden state of the RNN, the frame of animation data; and updating the hidden state of the RNN based on the input respective set of animation data.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 5, 2023
    Inventor: Elaheh Akhoundi
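The generation loop described above — sample an initial hidden state, then repeatedly feed the previous frame into the RNN, emit the next frame, and update the hidden state — can be sketched with a plain tanh RNN. Frame and hidden sizes are hypothetical and the random weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(2)
FRAME_DIM, HIDDEN_DIM = 66, 32          # hypothetical animation-frame and state sizes

# Random weights stand in for trained RNN parameters.
W_xh = rng.normal(scale=0.1, size=(FRAME_DIM, HIDDEN_DIM))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))  # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(HIDDEN_DIM, FRAME_DIM))   # hidden -> output frame

def generate(initial_frame, num_frames, rng):
    h = rng.standard_normal(HIDDEN_DIM)       # sample the initial hidden state
    frame, frames = initial_frame, []
    for _ in range(num_frames):
        # Update the hidden state from the previous frame, then emit the next one.
        h = np.tanh(frame @ W_xh + h @ W_hh)
        frame = h @ W_hy
        frames.append(frame)
    return np.stack(frames)

seq = generate(rng.normal(size=FRAME_DIM), num_frames=10, rng=rng)
print(seq.shape)                              # (10, 66)
```

Sampling the initial hidden state from a distribution, rather than zero-initializing it, is what lets the same known initial frame yield varied continuations.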
  • Patent number: 11648480
    Abstract: Systems and methods are provided for enhanced pose generation based on generative modeling. An example method includes accessing an autoencoder trained based on poses of real-world persons, each pose being defined based on location information associated with joints, with the autoencoder being trained to map an input pose to a feature encoding associated with a latent feature space. Information identifying, at least, a first pose and a second pose associated with a character configured for inclusion in an in-game world is obtained via user input, with each of the poses being defined based on location information associated with the joints and with the joints being included on a skeleton associated with the character. Feature encodings associated with the first pose and the second pose are generated based on the autoencoder. Output poses are generated based on transition information associated with the first pose and the second pose.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: May 16, 2023
    Assignee: Electronic Arts Inc.
    Inventor: Elaheh Akhoundi
  • Patent number: 11625880
    Abstract: According to a first aspect of this specification, there is described a computer-implemented method of tagging video frames. The method comprises generating, using a frame tagging model, a tag for each of a plurality of frames of an animation sequence. The frame tagging model comprises: a first neural network portion configured to process, for each frame of the plurality of frames, a plurality of features associated with the frame and generate an encoded representation for the frame. The frame tagging model further comprises a second neural network portion configured to receive input comprising the encoded representations of each frame and generate output indicative of a tag for each of the plurality of frames.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 11, 2023
    Assignee: Electronic Arts Inc.
    Inventor: Elaheh Akhoundi
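The two-part tagging model above — a first network portion that encodes each frame's features independently, and a second portion that looks across the encodings to emit a tag per frame — can be sketched as follows. All sizes are hypothetical, the weights are untrained, and the simple neighbor-averaging used here is only a stand-in for whatever sequence model the second portion actually uses.

```python
import numpy as np

rng = np.random.default_rng(3)
FEATURE_DIM, ENC_DIM, NUM_TAGS = 20, 16, 5   # hypothetical sizes

W_enc = rng.normal(scale=0.1, size=(FEATURE_DIM, ENC_DIM))
W_tag = rng.normal(scale=0.1, size=(ENC_DIM, NUM_TAGS))

def tag_frames(frame_features):
    # First portion: encode each frame's features independently.
    encoded = np.maximum(frame_features @ W_enc, 0.0)
    # Second portion (stand-in): blend each encoding with its neighbors
    # so every tag decision sees some cross-frame context.
    context = (encoded + np.roll(encoded, 1, axis=0)
               + np.roll(encoded, -1, axis=0)) / 3.0
    scores = context @ W_tag
    return scores.argmax(axis=1)             # one tag index per frame

tags = tag_frames(rng.normal(size=(30, FEATURE_DIM)))  # 30-frame sequence
print(tags.shape)                                      # (30,)
```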
  • Publication number: 20220254083
    Abstract: According to a first aspect of this specification, there is described a computer-implemented method of tagging video frames. The method comprises generating, using a frame tagging model, a tag for each of a plurality of frames of an animation sequence. The frame tagging model comprises: a first neural network portion configured to process, for each frame of the plurality of frames, a plurality of features associated with the frame and generate an encoded representation for the frame. The frame tagging model further comprises a second neural network portion configured to receive input comprising the encoded representations of each frame and generate output indicative of a tag for each of the plurality of frames.
    Type: Application
    Filed: February 9, 2021
    Publication date: August 11, 2022
    Inventor: Elaheh Akhoundi
  • Publication number: 20220198733
    Abstract: Systems and methods are provided for enhanced pose generation based on conditional modeling of inverse kinematics. An example method includes accessing an autoencoder trained based on poses, with each pose being defined based on location information of joints, and the autoencoder being trained based on conditional information indicating positions of a subset of the joints. The autoencoder is trained to reconstruct, via a latent variable space, each pose based on the conditional information. Information specifying positions of the subset of the joints is obtained via an interactive user interface and the latent variable space is sampled. An output is generated for inclusion in the interactive user interface based on the sampling and the positions.
    Type: Application
    Filed: December 30, 2021
    Publication date: June 23, 2022
    Inventors: Elaheh Akhoundi, Fabio Zinno
  • Patent number: 11232621
    Abstract: Systems and methods are provided for enhanced animation generation based on conditional modeling. An example method includes accessing an autoencoder trained based on poses and conditional information associated with the poses, each pose being defined based on location information associated with joints, and the conditional information for each pose reflecting prior poses of the pose, with the autoencoder being trained to reconstruct, via a latent variable space, each pose based on the conditional information. Poses in a sequence of poses are obtained via an interactive user interface, and the latent variable space is sampled. An output pose is generated based on the sampling, the output pose being included in the interactive user interface.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: January 25, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Elaheh Akhoundi, Fabio Zinno
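The conditional-modeling idea above — a decoder conditioned on the prior poses in a sequence, with the latent space sampled to produce the next output pose — can be rolled forward autoregressively. This sketch uses hypothetical sizes, a two-pose conditioning window, and random weights in place of the trained model.

```python
import numpy as np

rng = np.random.default_rng(4)
POSE_DIM, LATENT_DIM, WINDOW = 66, 8, 2   # hypothetical sizes; WINDOW = prior poses used
COND_DIM = POSE_DIM * WINDOW

# Random weights stand in for a trained conditional decoder.
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM + COND_DIM, POSE_DIM))

def next_pose(prior_poses, rng):
    """Sample the latent space and decode, conditioned on the prior poses."""
    z = rng.standard_normal(LATENT_DIM)
    cond = np.concatenate(prior_poses)     # conditional information
    return np.concatenate([z, cond]) @ W_dec

poses = [rng.normal(size=POSE_DIM) for _ in range(WINDOW)]  # seed poses
for _ in range(5):                         # roll the animation forward
    poses.append(next_pose(poses[-WINDOW:], rng))
print(len(poses))                          # 7
```

Feeding each generated pose back in as conditioning is what turns a single-pose model into an animation generator.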
  • Patent number: 11217003
    Abstract: Systems and methods are provided for enhanced pose generation based on conditional modeling of inverse kinematics. An example method includes accessing an autoencoder trained based on poses, with each pose being defined based on location information of joints, and the autoencoder being trained based on conditional information indicating positions of a subset of the joints. The autoencoder is trained to reconstruct, via a latent variable space, each pose based on the conditional information. Information specifying positions of the subset of the joints is obtained via an interactive user interface and the latent variable space is sampled. An output is generated for inclusion in the interactive user interface based on the sampling and the positions.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: January 4, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Elaheh Akhoundi, Fabio Zinno
  • Publication number: 20210312689
    Abstract: Systems and methods are provided for enhanced pose generation based on conditional modeling of inverse kinematics. An example method includes accessing an autoencoder trained based on poses, with each pose being defined based on location information of joints, and the autoencoder being trained based on conditional information indicating positions of a subset of the joints. The autoencoder is trained to reconstruct, via a latent variable space, each pose based on the conditional information. Information specifying positions of the subset of the joints is obtained via an interactive user interface and the latent variable space is sampled. An output is generated for inclusion in the interactive user interface based on the sampling and the positions.
    Type: Application
    Filed: April 30, 2020
    Publication date: October 7, 2021
    Inventors: Elaheh Akhoundi, Fabio Zinno
  • Publication number: 20210308580
    Abstract: Systems and methods are provided for enhanced pose generation based on generative modeling. An example method includes accessing an autoencoder trained based on poses of real-world persons, each pose being defined based on location information associated with joints, with the autoencoder being trained to map an input pose to a feature encoding associated with a latent feature space. Information identifying, at least, a first pose and a second pose associated with a character configured for inclusion in an in-game world is obtained via user input, with each of the poses being defined based on location information associated with the joints and with the joints being included on a skeleton associated with the character. Feature encodings associated with the first pose and the second pose are generated based on the autoencoder. Output poses are generated based on transition information associated with the first pose and the second pose.
    Type: Application
    Filed: April 6, 2020
    Publication date: October 7, 2021
    Inventor: Elaheh Akhoundi
  • Publication number: 20210312688
    Abstract: Systems and methods are provided for enhanced animation generation based on conditional modeling. An example method includes accessing an autoencoder trained based on poses and conditional information associated with the poses, each pose being defined based on location information associated with joints, and the conditional information for each pose reflecting prior poses of the pose, with the autoencoder being trained to reconstruct, via a latent variable space, each pose based on the conditional information. Poses in a sequence of poses are obtained via an interactive user interface, and the latent variable space is sampled. An output pose is generated based on the sampling, the output pose being included in the interactive user interface.
    Type: Application
    Filed: April 6, 2020
    Publication date: October 7, 2021
    Inventors: Elaheh Akhoundi, Fabio Zinno