Patents by Inventor Pavel Savchenkov

Pavel Savchenkov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210327404
    Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; generating, based on the sequence of sets of acoustic features, a sequence of sets of mouth key points; generating, based on the sequence of sets of mouth key points, a sequence of sets of facial key points; generating, based on the sequence of sets of facial key points and the target image, a sequence of frames; and generating, based on the sequence of frames, an output video. Each of the frames includes the target face modified based on at least one set of mouth key points of the sequence of sets of mouth key points.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 21, 2021
    Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov
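The abstract above describes a staged pipeline: text → acoustic features → mouth key points → facial key points → frames → video. Purely as an illustrative sketch of that data flow, with every function below an invented stand-in (the patent does not disclose code; a real system would use a text-to-speech acoustic model, learned key-point predictors, and a neural renderer), the stages might be chained like this:

```python
# Toy sketch of the staged pipeline described in the abstract.
# All functions are hypothetical stand-ins, not code from the patent.

def text_to_acoustic_features(text):
    # Stand-in: one "feature set" per character of the input text.
    return [{"energy": ord(c) % 7} for c in text]

def acoustic_to_mouth_keypoints(acoustic_seq):
    # Stand-in: derive a small set of mouth key points per feature set.
    return [[(f["energy"], f["energy"] + 1)] for f in acoustic_seq]

def mouth_to_facial_keypoints(mouth_seq):
    # Stand-in: extend each mouth set with fixed facial landmarks.
    return [mouth + [(0, 0), (10, 10)] for mouth in mouth_seq]

def render_frames(facial_seq, target_image):
    # Stand-in: a "frame" pairs the target image with one key-point set.
    return [{"image": target_image, "keypoints": kp} for kp in facial_seq]

def assemble_video(frames):
    return {"frames": frames, "length": len(frames)}

def reenact(text, target_image):
    acoustic = text_to_acoustic_features(text)
    mouth = acoustic_to_mouth_keypoints(acoustic)
    facial = mouth_to_facial_keypoints(mouth)
    frames = render_frames(facial, target_image)
    return assemble_video(frames)

video = reenact("hi", "target.png")
# One frame per acoustic feature set, i.e. one per input character here.
```

Note the one-to-one correspondence the abstract implies: each set of acoustic features ultimately drives exactly one output frame.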
  • Publication number: 20210327117
    Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face of a first person. The method further includes receiving a target image with a second face of a second person. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face and a background in the target image. The 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Application
    Filed: May 20, 2021
    Publication date: October 21, 2021
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
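The core operation in the abstract above is applying a two-dimensional deformation to the target image. As a toy illustration only: the deformation here is a uniform per-pixel shift and the "image" a small grid of numbers, whereas the patented method derives its deformations from a driving frame and the target image, which this sketch does not attempt.

```python
# Toy sketch: applying a 2D deformation field to an "image".
# The uniform-shift deformation is invented for illustration.

def apply_2d_deformation(image, deformation):
    """image: 2D list of pixels; deformation: (dy, dx) shift to apply."""
    h, w = len(image), len(image[0])
    dy, dx = deformation
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx  # sample from the shifted location
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out

target = [[1, 2],
          [3, 4]]
shifted = apply_2d_deformation(target, (1, 0))  # move content down one row
```

The design point the abstract emphasizes is that the deformation acts directly on the 2D target image, rather than on a 3D face model.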
  • Patent number: 11114086
    Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: September 7, 2021
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov
  • Patent number: 11049310
    Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face. The method further includes receiving a target image with a second face. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations, wherein the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: June 29, 2021
    Assignee: Snap Inc.
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
  • Publication number: 20210012090
    Abstract: Provided are systems and methods for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions; determining, based on the target face, one or more target facial expressions; and synthesizing, using a parametric face model, an output face. The output face includes the target face with the one or more target facial expressions modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
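The final step in the abstract above combines a synthesized output face with separately generated mouth and eyes regions into one frame. A toy compositing sketch, with region coordinates and pixel values invented purely for illustration (the patent generates the regions with a deep neural network, which is not modeled here):

```python
# Toy sketch of compositing separately generated regions into one frame.
# All coordinates and values are invented stand-ins.

def paste_region(frame, region, top, left):
    # Overwrite a rectangular block of `frame` with `region`, in place.
    for i, row in enumerate(region):
        for j, value in enumerate(row):
            frame[top + i][left + j] = value
    return frame

def compose_frame(output_face, mouth_region, eyes_region):
    frame = [row[:] for row in output_face]  # copy the base face
    paste_region(frame, eyes_region, 0, 0)   # eyes near the top
    paste_region(frame, mouth_region, 2, 0)  # mouth lower down
    return frame

face = [[0] * 3 for _ in range(4)]
frame = compose_frame(face, mouth_region=[[9, 9, 9]], eyes_region=[[5, 5, 5]])
```

The split mirrors the abstract's design: the parametric model handles the face as a whole, while mouth and eyes (the hardest regions to render photorealistically) are produced separately and merged in.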
  • Patent number: 10789453
    Abstract: Provided are systems and methods for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression, and determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face. The output face includes the target face with the target facial expression modified to imitate the source facial expression. The method includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: September 29, 2020
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Publication number: 20200234690
    Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.
    Type: Application
    Filed: July 11, 2019
    Publication date: July 23, 2020
    Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov
  • Publication number: 20200234034
    Abstract: Provided are systems and methods for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression, and determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face. The output face includes the target face with the target facial expression modified to imitate the source facial expression. The method includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
    Type: Application
    Filed: January 18, 2019
    Publication date: July 23, 2020
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Publication number: 20200234480
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method may include receiving frames of a source video showing the head and face of a source actor. The method may then proceed with generating sets of source pose parameters that represent head positions and facial expressions of the source actor. The method may further include receiving at least one target image including the target head and the target face of a target person, determining target identity information associated with the target face, and generating an output video based on the target identity information and the sets of source pose parameters. Each frame of the output video can include an image of the target face modified to mimic at least one of the head positions and at least one of the facial expressions of the source actor.
    Type: Application
    Filed: October 24, 2019
    Publication date: July 23, 2020
    Inventors: Yurii Volkov, Pavel Savchenkov, Maxim Lukin, Ivan Belonogov, Nikolai Smirnov, Aleksandr Mashrabov
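The abstract above separates per-person identity information (extracted once from the target image) from per-frame pose parameters (extracted from each source frame). A toy sketch of that factorization, where every structure and function is an invented stand-in for the learned encoders and generator the patent describes:

```python
# Toy sketch of the identity/pose factorization from the abstract:
# identity is extracted once, then combined with each source pose to
# produce an output frame. All structures are invented stand-ins.

def extract_identity(target_image):
    # Stand-in for an identity encoding of the target face.
    return {"person": target_image}

def generate_frame(identity, pose):
    # Stand-in for a generator conditioned on identity and pose.
    return {"person": identity["person"],
            "head_rotation": pose["rotation"],
            "expression": pose["expression"]}

def animate(target_image, source_poses):
    identity = extract_identity(target_image)
    return [generate_frame(identity, pose) for pose in source_poses]

poses = [{"rotation": 0, "expression": "neutral"},
         {"rotation": 15, "expression": "smile"}]
frames = animate("target.png", poses)
```

Extracting identity once and reusing it across every frame is what lets each output frame keep the target person's appearance while taking its head position and expression from the source actor.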
  • Publication number: 20200234482
    Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face. The method further includes receiving a target image with a second face. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations, wherein the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Application
    Filed: January 18, 2019
    Publication date: July 23, 2020
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
  • Patent number: 9881208
    Abstract: Provided are methods and systems for recognizing characters such as mathematical expressions or chemical formulas. An example method comprises receiving and processing an image with a pre-processing module to obtain one or more candidate regions; extracting features of each candidate region with a feature-extracting module, such as a convolutional neural network (CNN); encoding the features into a distributive representation for each candidate region separately using an encoding module, such as a first long short-term memory (LSTM) based neural network; decoding the distributive representation into output representations using a decoding module, such as a second LSTM-based recurrent neural network; and combining the output representations into an output expression, which is output in a computer-readable format or a markup language.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: January 30, 2018
    Assignee: Machine Learning Works, LLC
    Inventors: Pavel Savchenkov, Evgeny Savinov, Mikhail Trofimov, Sergey Kiyan, Aleksei Esin
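The abstract above describes a recognition pipeline: candidate regions → per-region features → encoded representation → decoded symbols → combined expression. The sketch below shows only that data flow; the CNN and the two LSTM modules are replaced with trivial invented stand-ins that happen to round-trip each character, so the "recognized" expression simply equals the input:

```python
# Toy sketch of the recognition pipeline in the abstract. The real
# CNN / LSTM-encoder / LSTM-decoder modules are replaced with trivial
# stand-ins; only the staged data flow is illustrated.

def propose_regions(image):
    # Stand-in pre-processing: each "candidate region" is one character.
    return list(image)

def extract_features(region):
    return ord(region)               # stand-in for CNN features

def encode(features):
    return features * 2              # stand-in for the LSTM encoder

def decode(representation):
    return chr(representation // 2)  # stand-in for the LSTM decoder

def recognize(image):
    symbols = [decode(encode(extract_features(r)))
               for r in propose_regions(image)]
    return "".join(symbols)          # combine into an output expression

result = recognize("x+1")
```

In the patented design, decoding each region into a distributive representation before combining is what allows multi-symbol structures (fractions, subscripts, chemical groups) to be emitted as markup rather than isolated characters.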
  • Publication number: 20170364744
    Abstract: Provided are methods and systems for recognizing characters such as mathematical expressions or chemical formulas. An example method comprises receiving and processing an image with a pre-processing module to obtain one or more candidate regions; extracting features of each candidate region with a feature-extracting module, such as a convolutional neural network (CNN); encoding the features into a distributive representation for each candidate region separately using an encoding module, such as a first long short-term memory (LSTM) based neural network; decoding the distributive representation into output representations using a decoding module, such as a second LSTM-based recurrent neural network; and combining the output representations into an output expression, which is output in a computer-readable format or a markup language.
    Type: Application
    Filed: June 20, 2016
    Publication date: December 21, 2017
    Inventors: Pavel Savchenkov, Evgeny Savinov, Mikhail Trofimov, Sergey Kiyan, Aleksei Esin