Patents by Inventor Ivan Belonogov

Ivan Belonogov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11410364
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method may include receiving frames of a source video with the head and the face of a source actor. The method may then proceed with generating sets of source pose parameters that represent positions of the head and facial expressions of the source actor. The method may further include receiving at least one target image including the target head and the target face of a target person, determining target identity information associated with the target face, and generating an output video based on the target identity information and the sets of source pose parameters. Each frame of the output video can include an image of the target face modified to mimic at least one of the positions of the head of the source actor and at least one of the facial expressions of the source actor.
    (An illustrative code sketch of this reenactment pipeline appears after this listing.)
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Yurii Volkov, Pavel Savchenkov, Maxim Lukin, Ivan Belonogov, Nikolai Smirnov, Aleksandr Mashrabov
  • Patent number: 11157557
    Abstract: An example method for searching and ranking personalized videos commences with receiving a user request via a communication chat between a user and another user. The user request includes a phrase or emoji. The method performs, based on the user request, a search of a pool of personalized videos to determine a subset of relevant personalized videos. The personalized videos are associated with text messages. The method further includes determining first rankings of the relevant personalized videos. The method then proceeds with selecting, based on the first rankings, a pre-determined number of personalized videos from the subset of relevant personalized videos. The method then determines second rankings of the selected personalized videos and presents the selected personalized videos within the communication chat in an order based on the second rankings. The personalized videos of the first subpool and the personalized videos of the second subpool are ranked independently.
    (An illustrative code sketch of this two-stage search and ranking flow appears after this listing.)
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: October 26, 2021
    Assignee: Snap Inc.
    Inventors: Alexander Mashrabov, Evgenii Krokhalev, Sofia Savinova, Ivan Babanin, Ivan Belonogov
  • Publication number: 20200233903
    Abstract: An example method for searching and ranking personalized videos commences with receiving a user request via a communication chat between a user and another user. The user request includes a phrase or emoji. The method performs, based on the user request, a search of a pool of personalized videos to determine a subset of relevant personalized videos. The personalized videos are associated with text messages. The method further includes determining first rankings of the relevant personalized videos. The method then proceeds with selecting, based on the first rankings, a pre-determined number of personalized videos from the subset of relevant personalized videos. The method then determines second rankings of the selected personalized videos and presents the selected personalized videos within the communication chat in an order based on the second rankings. The personalized videos of the first subpool and the personalized videos of the second subpool are ranked independently.
    Type: Application
    Filed: October 30, 2019
    Publication date: July 23, 2020
    Inventors: Alexander Mashrabov, Evgenii Krokhalev, Sofia Savinova, Ivan Babanin, Ivan Belonogov
  • Publication number: 20200234480
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method may include receiving frames of a source video with the head and the face of a source actor. The method may then proceed with generating sets of source pose parameters that represent positions of the head and facial expressions of the source actor. The method may further include receiving at least one target image including the target head and the target face of a target person, determining target identity information associated with the target face, and generating an output video based on the target identity information and the sets of source pose parameters. Each frame of the output video can include an image of the target face modified to mimic at least one of the positions of the head of the source actor and at least one of the facial expressions of the source actor.
    Type: Application
    Filed: October 24, 2019
    Publication date: July 23, 2020
    Inventors: Yurii Volkov, Pavel Savchenkov, Maxim Lukin, Ivan Belonogov, Nikolai Smirnov, Aleksandr Mashrabov
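The abstract for patent 11410364 (and publication 20200234480) describes a reenactment pipeline: estimate pose and expression parameters from each source frame, encode the target person's identity from a target image, and render output frames that combine the two. The sketch below is only an illustration of that flow under assumed interfaces; the function names, parameter shapes, and stub models (extract_pose_params, extract_identity, render_frame) are hypothetical stand-ins, not the patented implementation.

```python
# Illustrative sketch only: every name and model here is a hypothetical stand-in,
# not the actual method claimed in the patent.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class PoseParams:
    """Hypothetical per-frame parameters: head position/rotation plus expression."""
    head_pose: np.ndarray    # e.g. rotation and translation coefficients
    expression: np.ndarray   # e.g. facial-expression coefficients


def extract_pose_params(source_frames: List[np.ndarray]) -> List[PoseParams]:
    """Stand-in for a model that estimates head pose and expression per source frame."""
    return [PoseParams(head_pose=np.zeros(6), expression=np.zeros(32))
            for _ in source_frames]


def extract_identity(target_image: np.ndarray) -> np.ndarray:
    """Stand-in for a model that encodes the target person's identity."""
    return np.zeros(256)


def render_frame(identity: np.ndarray, pose: PoseParams) -> np.ndarray:
    """Stand-in renderer: combines target identity with source pose and expression."""
    return np.zeros((256, 256, 3), dtype=np.uint8)


def reenact(source_frames: List[np.ndarray], target_image: np.ndarray) -> List[np.ndarray]:
    """Drive the target face with the source actor's head motion and expressions."""
    poses = extract_pose_params(source_frames)   # per-frame pose + expression
    identity = extract_identity(target_image)    # identity of the target person
    return [render_frame(identity, p) for p in poses]


if __name__ == "__main__":
    src = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(4)]
    tgt = np.zeros((256, 256, 3), dtype=np.uint8)
    out = reenact(src, tgt)
    print(len(out), out[0].shape)  # 4 (256, 256, 3)
```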
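The abstract for patent 11157557 (and publication 20200233903) describes a two-stage flow: search a pool of personalized videos against the user's phrase or emoji, rank the relevant subset, keep a pre-determined number of top results, then re-rank that shortlist before presenting it in the chat. The sketch below only illustrates that shape; the data model and both scoring functions are invented placeholders, not the ranking signals used in the patent.

```python
# Illustrative sketch only: the data model, scoring functions, and thresholds
# are hypothetical, not the patented implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class PersonalizedVideo:
    video_id: str
    text_message: str  # text associated with the video, matched against the request


def search(pool: List[PersonalizedVideo], request: str) -> List[PersonalizedVideo]:
    """Keep videos whose associated text is relevant to the user request."""
    terms = request.lower().split()
    return [v for v in pool if any(t in v.text_message.lower() for t in terms)]


def first_ranking_score(video: PersonalizedVideo, request: str) -> float:
    """Stand-in first-stage relevance score (here: term overlap with the request)."""
    terms = set(request.lower().split())
    words = set(video.text_message.lower().split())
    return len(terms & words) / max(len(terms), 1)


def second_ranking_score(video: PersonalizedVideo) -> float:
    """Stand-in second-stage score (e.g. a personalization or popularity signal)."""
    return float(len(video.video_id))  # placeholder signal


def rank_for_chat(pool: List[PersonalizedVideo], request: str,
                  top_n: int = 5) -> List[PersonalizedVideo]:
    """Search, rank, shortlist, and re-rank the videos shown in the chat."""
    relevant = search(pool, request)                         # subset of relevant videos
    first = sorted(relevant,
                   key=lambda v: first_ranking_score(v, request),
                   reverse=True)                             # first rankings
    shortlist = first[:top_n]                                # pre-determined number of videos
    return sorted(shortlist, key=second_ranking_score,
                  reverse=True)                              # second rankings -> display order


if __name__ == "__main__":
    pool = [
        PersonalizedVideo("v1", "happy birthday to you"),
        PersonalizedVideo("v2", "good morning sunshine"),
        PersonalizedVideo("v3", "happy friday"),
    ]
    for v in rank_for_chat(pool, "happy birthday"):
        print(v.video_id, v.text_message)
```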