Patents by Inventor Pavel Savchenkov

Pavel Savchenkov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104789
    Abstract: A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text and processed with a text-to-image model to generate an image. The coordinates of a face in the generated image are determined, and the face of the user or another person is added to the image at that location. The final image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
    Type: Application
    Filed: September 22, 2022
    Publication date: March 28, 2024
    Inventors: Arnab Ghosh, Jian Ren, Pavel Savchenkov, Sergey Tulyakov
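The pipeline in the abstract above lends itself to a short illustration: build a prompt from the conversation text, render an image with a text-to-image model, locate a face box in the result, and composite the user's face there. The sketch below assumes hypothetical `TextToImageModel`, `detect_face_box`, and `paste_face` helpers; none of these names or details come from the filing.

```python
import numpy as np

class TextToImageModel:
    """Hypothetical stand-in for any text-to-image model (e.g. a diffusion model)."""
    def generate(self, prompt: str) -> np.ndarray:
        # A real model would render the prompt; here we return a blank RGB canvas.
        return np.zeros((512, 512, 3), dtype=np.uint8)

def build_model_input_text(conversation_text: str) -> str:
    # Derive model input text from the conversation text (the filing leaves the exact mapping open).
    return f"illustration of: {conversation_text.strip()}"

def detect_face_box(image: np.ndarray) -> tuple[int, int, int, int]:
    # Hypothetical face detector returning (x, y, width, height) of a face in the image.
    h, w = image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def paste_face(image: np.ndarray, face: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    # Resize the user's face crop to the detected box and composite it into the image.
    x, y, bw, bh = box
    rows = np.linspace(0, face.shape[0] - 1, bh).astype(int)
    cols = np.linspace(0, face.shape[1] - 1, bw).astype(int)
    out = image.copy()
    out[y:y + bh, x:x + bw] = face[rows][:, cols]
    return out

def generate_conversation_image(conversation_text: str, user_face: np.ndarray) -> np.ndarray:
    model = TextToImageModel()
    image = model.generate(build_model_input_text(conversation_text))
    return paste_face(image, user_face, detect_face_box(image))

final_image = generate_conversation_image("let's get pizza tonight",
                                          np.zeros((128, 128, 3), dtype=np.uint8))
```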
  • Publication number: 20240078838
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving a target video that includes at least one target frame, where the at least one target frame includes a target face, receiving a scenario including a series of source facial expressions, determining, based on the target face, a target facial expression of the target face, synthesizing, based on a parametric face model and a texture model, an output face including the target face, where the target facial expression of the target face is modified to imitate a source facial expression of the series of source facial expressions, and generating, based on the output face, a frame of an output video. The parametric face model includes a template mesh pre-generated based on historical images of faces of a plurality of individuals, where the template mesh includes a pre-determined number of vertices.
    Type: Application
    Filed: November 15, 2023
    Publication date: March 7, 2024
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
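The parametric face model described above (a template mesh with a fixed number of vertices, deformed per face) is commonly realised as a linear morphable model. The sketch below is a minimal illustration under that assumption; the basis shapes, coefficient counts, and the `ParametricFaceModel`/`reenact` names are hypothetical, and the texture model is omitted.

```python
import numpy as np

class ParametricFaceModel:
    """Illustrative linear face model: a fixed-topology template mesh deformed by
    identity and expression coefficients. Dimensions are made up, not from the filing."""

    def __init__(self, template: np.ndarray, id_basis: np.ndarray, expr_basis: np.ndarray):
        self.template = template      # (V, 3) pre-generated template mesh, V fixed
        self.id_basis = id_basis      # (V, 3, K_id) identity blendshapes
        self.expr_basis = expr_basis  # (V, 3, K_expr) expression blendshapes

    def mesh(self, identity: np.ndarray, expression: np.ndarray) -> np.ndarray:
        # Every face shares the same vertex count; only the coefficients change.
        return self.template + self.id_basis @ identity + self.expr_basis @ expression

def reenact(model: ParametricFaceModel, target_identity: np.ndarray,
            source_expression: np.ndarray) -> np.ndarray:
    # Keep the target's identity, swap in the source actor's expression.
    return model.mesh(target_identity, source_expression)

V, K_ID, K_EXPR = 5023, 40, 20  # vertex and coefficient counts are assumptions
model = ParametricFaceModel(np.zeros((V, 3)), np.zeros((V, 3, K_ID)), np.zeros((V, 3, K_EXPR)))
output_mesh = reenact(model, np.zeros(K_ID), np.zeros(K_EXPR))
```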
  • Patent number: 11915355
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method includes receiving a source frame of a source video, where the source frame includes a head and a face of a source actor, generating source pose parameters corresponding to a pose of the head and a facial expression of the source actor; receiving a target image including a target head and a target face of a target person, determining target identity information associated with the target head and the target face of the target person, replacing source identity information in the source pose parameters with the target identity information to obtain further source pose parameters, and generating an output frame of an output video that includes a modified image of the target face and the target head adopting the pose of the head and the facial expression of the source actor.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: February 27, 2024
    Assignee: Snap Inc.
    Inventors: Yurii Volkov, Pavel Savchenkov, Nikolai Smirnov, Aleksandr Mashrabov
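The central step of this method is swapping the identity component inside the source pose parameters while keeping the source actor's head pose and facial expression. A minimal sketch of that parameter split is below; the `PoseParameters` fields and dimensions are assumptions for illustration, not the filing's actual representation.

```python
from dataclasses import dataclass, replace
import numpy as np

@dataclass
class PoseParameters:
    """Hypothetical split of the parameters the abstract describes: head pose and
    expression come from the source actor, identity from the target person."""
    head_pose: np.ndarray   # e.g. rotation/translation of the head
    expression: np.ndarray  # facial expression coefficients
    identity: np.ndarray    # person-specific identity embedding

def retarget(source_params: PoseParameters, target_identity: np.ndarray) -> PoseParameters:
    # Keep the source actor's head pose and expression, swap in the target identity.
    return replace(source_params, identity=target_identity)

def render_output_frame(params: PoseParameters) -> np.ndarray:
    # Placeholder renderer; the filing would use a learned generator here.
    return np.zeros((256, 256, 3), dtype=np.uint8)

source = PoseParameters(head_pose=np.zeros(6), expression=np.zeros(20), identity=np.zeros(128))
frame = render_output_frame(retarget(source, target_identity=np.ones(128)))
```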
  • Publication number: 20240062008
    Abstract: A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text and processed with a text-to-image model to generate an image. The generated image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
    Type: Application
    Filed: August 17, 2022
    Publication date: February 22, 2024
    Inventors: Arnab Ghosh, Jian Ren, Pavel Savchenkov, Sergey Tulyakov
  • Patent number: 11861936
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
    Type: Grant
    Filed: July 21, 2022
    Date of Patent: January 2, 2024
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
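The abstract above splits the source face parameters into a portion determined directly by the visible part of the face and a portion predicted for the occluded rest. The sketch below only illustrates that split; the parameter counts and the estimator/predictor functions are hypothetical stand-ins for whatever fitting and prediction models the filing uses.

```python
import numpy as np

def estimate_visible_parameters(visible_pixels: np.ndarray, n_visible: int) -> np.ndarray:
    # Hypothetical fitting step: recover the parameters that the visible portion of the
    # face determines directly (in practice an optimizer or encoder network).
    return np.zeros(n_visible)

def predict_hidden_parameters(visible_params: np.ndarray, n_hidden: int) -> np.ndarray:
    # Hypothetical predictor for the rest of the face, conditioned on the visible portion.
    return np.zeros(n_hidden)

def source_face_parameters(visible_pixels: np.ndarray) -> np.ndarray:
    first = estimate_visible_parameters(visible_pixels, n_visible=40)
    second = predict_hidden_parameters(first, n_hidden=24)
    # The full source parameter vector combines the measured and the predicted portions.
    return np.concatenate([first, second])

params = source_face_parameters(np.zeros((128, 128, 3), dtype=np.uint8))
```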
  • Publication number: 20230351998
    Abstract: Systems and methods for text and audio-based real-time face reenactment are provided. An example method includes receiving an input text and a target image, where the target image includes a target face, generating, based on the input text, a sequence of sets of acoustic features corresponding to the input text, generating, based on the sequence of sets of acoustic features, a sequence of sets of mouth key points, generating, based on the sequence of sets of mouth key points, a sequence of sets of facial key points, determining, based on the sequence of sets of facial key points, a sequence of deformations of the target face, and applying the sequence of deformations to the target image, thereby generating a sequence of frames of an output video.
    Type: Application
    Filed: July 4, 2023
    Publication date: November 2, 2023
    Applicant: Snap Inc.
    Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov
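The text- and audio-driven pipeline above is a chain of stages: text to acoustic features, acoustic features to mouth key points, mouth key points to full facial key points, and key points to per-frame deformations of the target image. The sketch below mirrors only that data flow; every stage is a hypothetical stub, and the feature and key-point counts are assumptions.

```python
import numpy as np

# Each stage is a stand-in for a learned model; only the data flow follows the abstract.

def text_to_acoustic_features(text: str, frames: int = 30) -> np.ndarray:
    return np.zeros((frames, 80))                # e.g. one spectrogram slice per frame

def acoustic_to_mouth_keypoints(acoustic: np.ndarray) -> np.ndarray:
    return np.zeros((acoustic.shape[0], 20, 2))  # 20 mouth points per frame (assumed)

def mouth_to_facial_keypoints(mouth: np.ndarray) -> np.ndarray:
    return np.zeros((mouth.shape[0], 68, 2))     # full facial layout per frame (assumed)

def keypoints_to_deformation(face_kps: np.ndarray, target_kps: np.ndarray) -> np.ndarray:
    return face_kps - target_kps                 # per-point displacement of the target face

def apply_deformation(image: np.ndarray, deformation: np.ndarray) -> np.ndarray:
    # Placeholder warp; a real pipeline would move pixels by the key-point displacements.
    return image.copy()

def reenact(text: str, target_image: np.ndarray, target_kps: np.ndarray) -> list[np.ndarray]:
    acoustic = text_to_acoustic_features(text)
    mouth = acoustic_to_mouth_keypoints(acoustic)
    face = mouth_to_facial_keypoints(mouth)
    return [apply_deformation(target_image, keypoints_to_deformation(kps, target_kps))
            for kps in face]

frames = reenact("hello there", np.zeros((256, 256, 3), dtype=np.uint8), np.zeros((68, 2)))
```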
  • Patent number: 11741940
    Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; generating, based on the sequence of sets of acoustic features, a sequence of sets of mouth key points; generating, based on the sequence of sets of mouth key points, a sequence of sets of facial key points; generating, by the computing device and based on the sequence of sets of the facial key points and the target image, a sequence of frames; and generating, based on the sequence of frames, an output video. Each of the frames includes the target face modified based on at least one set of mouth key points of the sequence of sets of mouth key points.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: August 29, 2023
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov
  • Publication number: 20230110916
    Abstract: Provided are systems and methods for portrait animation. An example method includes receiving, by a computing device, a scenario video, where the scenario video includes information concerning a first face, receiving, by the computing device, a target image, where the target image includes a second face, determining, by the computing device and based on the target image and the information concerning the first face, two-dimensional (2D) deformations of the second face in the target image, and applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Application
    Filed: December 14, 2022
    Publication date: April 13, 2023
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
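Applying 2D deformations to a target image, as in the method above, amounts to warping the image by a displacement field derived from the driving face. A minimal sketch is below, assuming a dense per-pixel flow field and nearest-neighbour sampling; the filing does not specify this particular representation.

```python
import numpy as np

def apply_2d_deformation(target: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp a target image by a dense 2D deformation field (H x W x 2 pixel offsets).
    Nearest-neighbour sampling keeps the sketch short; the real method may use a
    different warp and interpolation."""
    h, w = target.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    return target[src_y, src_x]

# One output frame per scenario frame: the flow is derived from the first (driving)
# face and then applied to the second (target) face.
target_image = np.zeros((256, 256, 3), dtype=np.uint8)
identity_flow = np.zeros((256, 256, 2))      # zero motion -> output equals the target
output_frame = apply_2d_deformation(target_image, identity_flow)
```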
  • Patent number: 11568589
    Abstract: Disclosed are systems and methods for portrait animation. An example method includes receiving, by a computing device, a scenario video, where the scenario video includes at least one input frame and the at least one input frame includes a first face, receiving, by the computing device, a target image, where the target image includes a second face, determining, by the computing device and based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face in the target image, where the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression of the first face, and applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: January 31, 2023
    Assignee: Snap Inc.
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
  • Publication number: 20220392133
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method includes receiving a source frame of a source video, where the source frame includes a head and a face of a source actor, generating source pose parameters corresponding to a pose of the head and a facial expression of the source actor; receiving a target image including a target head and a target face of a target person, determining target identity information associated with the target head and the target face of the target person, replacing source identity information in the source pose parameters with the target identity information to obtain further source pose parameters, and generating an output frame of an output video that includes a modified image of the target face and the target head adopting the pose of the head and the facial expression of the source actor.
    Type: Application
    Filed: August 5, 2022
    Publication date: December 8, 2022
    Inventors: Yurii Volkov, Pavel Savchenkov, Nikolai Smirnov, Aleksandr Mashrabov
  • Publication number: 20220358784
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
    Type: Application
    Filed: July 21, 2022
    Publication date: November 10, 2022
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Publication number: 20220319231
    Abstract: The subject technology receives frames of a source media content, the frames of the source media content including representations of a head and a face of a source actor. The subject technology generates, based at least in part on the frames of the source media content, sets of source pose parameters. The subject technology receives at least one target image, the at least one target image including representations of a target head and a target face of a target entity. The subject technology provides the sets of source pose parameters to a neural network to determine facial landmarks for head turns and facial expressions. The subject technology generates, based at least in part on the sets of source pose parameters and the facial landmarks for head turns and facial expressions, an output media content. The subject technology provides augmented reality content based at least in part on the output media content for display on a computing device.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 6, 2022
    Inventors: Alexey Pankov, Pavel Savchenkov
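The step above feeds the sets of source pose parameters to a neural network that determines facial landmarks for head turns and facial expressions. The sketch below shows one plausible shape for such a network, a small multilayer perceptron in PyTorch; the layer sizes, the 64-dimensional pose vector, and the 68-landmark layout are assumptions, not details from the filing.

```python
import torch
from torch import nn

POSE_DIM, NUM_LANDMARKS = 64, 68  # assumed sizes for illustration only

# Maps one set of source pose parameters to 2D facial landmarks.
landmark_net = nn.Sequential(
    nn.Linear(POSE_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_LANDMARKS * 2),  # x, y for each landmark
)

def landmarks_for_frames(pose_params: torch.Tensor) -> torch.Tensor:
    # pose_params: (num_frames, POSE_DIM) -> (num_frames, NUM_LANDMARKS, 2)
    return landmark_net(pose_params).view(-1, NUM_LANDMARKS, 2)

frame_landmarks = landmarks_for_frames(torch.zeros(10, POSE_DIM))
```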
  • Publication number: 20220284654
    Abstract: Disclosed are systems and methods for portrait animation. An example method includes receiving, by a computing device, a scenario video, where the scenario video includes at least one input frame and the at least one input frame includes a first face, receiving, by the computing device, a target image, where the target image includes a second face, determining, by the computing device and based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face in the target image, where the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression of the first face, and applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Application
    Filed: May 24, 2022
    Publication date: September 8, 2022
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
  • Publication number: 20220270332
    Abstract: A methodology for training a machine learning model to generate color-neutral input face images is described. For each training face image from a training dataset that is used for training the model, the training system generates an input face image that has the color and lighting of a randomly selected image from the set of color source images and the facial features and expression of a face object from the training face image. Because, during training, the machine learning model is "confused" by changing the color and lighting of a training face image to a randomly selected different color and lighting, the trained machine learning model generates a color-neutral embedding representing facial features from the training face image.
    Type: Application
    Filed: May 12, 2022
    Publication date: August 25, 2022
    Inventors: Pavel Savchenkov, Yurii Volkov, Jeremy Baker Voss
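The training trick above gives each training face the color and lighting of a randomly selected color-source image, so the learned embedding cannot rely on color. The sketch below uses a per-channel mean/std color transfer as one simple way to realise that step; the actual transform and the function names are assumptions, not from the filing.

```python
import numpy as np

def transfer_color(train_face: np.ndarray, color_source: np.ndarray) -> np.ndarray:
    """Give the training face the per-channel color statistics of the color-source image."""
    face = train_face.astype(np.float64)
    src = color_source.astype(np.float64)
    for c in range(3):
        f = face[..., c]
        face[..., c] = (f - f.mean()) / (f.std() + 1e-6) * src[..., c].std() + src[..., c].mean()
    return np.clip(face, 0, 255).astype(np.uint8)

def make_training_input(train_face: np.ndarray, color_sources: list[np.ndarray],
                        rng: np.random.Generator) -> np.ndarray:
    # Randomizing color/lighting per sample pushes the model to encode only facial
    # features and expression, i.e. a color-neutral embedding.
    source = color_sources[rng.integers(len(color_sources))]
    return transfer_color(train_face, source)

rng = np.random.default_rng(0)
sample = make_training_input(np.zeros((128, 128, 3), dtype=np.uint8),
                             [np.full((64, 64, 3), 200, dtype=np.uint8)], rng)
```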
  • Patent number: 11410364
    Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method may include receiving frames of a source video with the head and the face of a source actor. The method may then proceed with generating sets of source pose parameters that represent positions of the head and facial expressions of the source actor. The method may further include receiving at least one target image including the target head and the target face of a target person, determining target identity information associated with the target face, and generating an output video based on the target identity information and the sets of source pose parameters. Each frame of the output video can include an image of the target face modified to mimic at least one of the positions of the head of the source actor and at least one of the facial expressions of the source actor.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Yurii Volkov, Pavel Savchenkov, Maxim Lukin, Ivan Belonogov, Nikolai Smirnov, Aleksandr Mashrabov
  • Patent number: 11410457
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using a parametric face model, an output face. The output face includes the target face. The one or more target facial expressions are modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
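The final step of the method above combines the synthesized output face with separately generated mouth and eyes regions. The sketch below does this with soft masks and alpha blending, which is an assumption for illustration; the filing does not commit to a specific compositing scheme.

```python
import numpy as np

def combine_regions(output_face: np.ndarray,
                    mouth: np.ndarray, mouth_mask: np.ndarray,
                    eyes: np.ndarray, eyes_mask: np.ndarray) -> np.ndarray:
    """Blend separately generated mouth and eyes regions into the synthesized face.
    Masks are H x W arrays in [0, 1]; soft masks give seamless edges."""
    frame = output_face.astype(np.float64)
    frame = frame * (1 - mouth_mask[..., None]) + mouth.astype(np.float64) * mouth_mask[..., None]
    frame = frame * (1 - eyes_mask[..., None]) + eyes.astype(np.float64) * eyes_mask[..., None]
    return frame.astype(np.uint8)

h, w = 256, 256
frame = combine_regions(np.zeros((h, w, 3), dtype=np.uint8),
                        np.zeros((h, w, 3), dtype=np.uint8), np.zeros((h, w)),
                        np.zeros((h, w, 3), dtype=np.uint8), np.zeros((h, w)))
```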
  • Patent number: 11393152
    Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving a scenario video with at least one input frame. The input frame includes a first face of a first person. The method further includes receiving a target image with a second face of a second person. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face and a background in the target image. The 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying the 2D deformations to the target image to obtain at least one output frame of an output video.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: July 19, 2022
    Assignee: Snap Inc.
    Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
  • Publication number: 20220172438
    Abstract: In some embodiments, users' experience of engaging with augmented reality technology is enhanced by providing a process, referred to as face animation synthesis, that replaces an actor's face in the frames of a video with a user's face from the user's portrait image. The resulting face in the frames of the video retains the facial expressions, as well as color and lighting, of the actor's face but, at the same time, has the likeness of the user's face. An example face animation synthesis experience can be made available to users of a messaging system by providing a face animation synthesis augmented reality component.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Inventors: Pavel Savchenkov, Yurii Volkov, Jeremy Baker Voss
  • Patent number: 11335069
    Abstract: In some embodiments, users' experience of engaging with augmented reality technology is enhanced by providing a process, referred to as face animation synthesis, that replaces an actor's face in the frames of a video with a user's face from the user's portrait image. The resulting face in the frames of the video retains the facial expressions, as well as color and lighting, of the actor's face but, at the same time, has the likeness of the user's face. An example face animation synthesis experience can be made available to users of a messaging system by providing a face animation synthesis augmented reality component.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: May 17, 2022
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Yurii Volkov, Jeremy Baker Voss
  • Publication number: 20210327404
    Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; generating, based on the sequence of sets of acoustic features, a sequence of sets of mouth key points; generating, based on the sequence of sets of mouth key points, a sequence of sets of facial key points; generating, by the computing device and based on the sequence of sets of the facial key points and the target image, a sequence of frames; and generating, based on the sequence of frames, an output video. Each of the frames includes the target face modified based on at least one set of mouth key points of the sequence of sets of mouth key points.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 21, 2021
    Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov