Patents by Inventor Alexey Pchelnikov

Alexey Pchelnikov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078838
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving a target video that includes at least one target frame, where the at least one target frame includes a target face, receiving a scenario including a series of source facial expressions, determining, based on the target face, a target facial expression of the target face, synthesizing, based on a parametric face model and a texture model, an output face including the target face, where the target facial expression of the target face is modified to imitate a source facial expression of the series of source facial expressions, and generating, based on the output face, a frame of an output video. The parametric face model includes a template mesh pre-generated based on historical images of faces of a plurality of individuals, where the template mesh includes a pre-determined number of vertices.
    Type: Application
    Filed: November 15, 2023
    Publication date: March 7, 2024
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
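The abstract above describes a parametric face model built around a template mesh with a fixed number of vertices, whose expression is modified to imitate a source expression. The sketch below illustrates that general idea with a toy blendshape-style model; the class name, the random bases, and the coefficient layout are all illustrative assumptions, not the patented implementation.

```python
import numpy as np

class ParametricFaceModel:
    """Toy stand-in: a fixed-vertex template mesh plus expression blendshapes."""

    def __init__(self, n_vertices=5023, n_expr=10, seed=0):
        rng = np.random.default_rng(seed)
        # Pre-generated template mesh with a pre-determined number of vertices.
        self.template = rng.normal(size=(n_vertices, 3))
        # Expression basis: one small vertex-displacement field per coefficient.
        self.expr_basis = rng.normal(size=(n_expr, n_vertices, 3)) * 0.01

    def synthesize(self, expr_coeffs):
        # Output mesh = template deformed by the expression coefficients.
        return self.template + np.tensordot(expr_coeffs, self.expr_basis, axes=1)

def transfer_expression(model, source_expr):
    # The target identity stays the template; its expression coefficients are
    # replaced by the source's, so the output face imitates the source expression.
    return model.synthesize(source_expr)

model = ParametricFaceModel()
source_expr = np.zeros(10)
source_expr[3] = 1.0            # activate one (hypothetical) source blendshape
mesh = transfer_expression(model, source_expr)
print(mesh.shape)  # (5023, 3)
```

In a real pipeline the source coefficients would be estimated per frame of the scenario and the synthesized mesh rendered with the texture model to produce each output frame.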
  • Publication number: 20240071131
    Abstract: The subject technology displays first augmented reality content on a computing device, the first augmented reality content comprising a first output media content. The subject technology provides for display a plurality of selectable graphical items, each of the selectable graphical items corresponding to a different augmented reality content including a set of media content modified utilizing facial synthesis. The subject technology receives a selection of one of the plurality of selectable graphical items. The subject technology, based at least in part on the selection, identifies second augmented reality content. The subject technology provides the second augmented reality content for display on the computing device.
    Type: Application
    Filed: November 2, 2023
    Publication date: February 29, 2024
    Inventors: Ivan Babanin, Valerii Fisiun, Diana Maksimova, Alexey Pchelnikov
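The selection flow in the abstract above — selectable graphical items, each mapped to a different piece of augmented-reality content built from facial-synthesis media — reduces to a lookup from the selected item to its content. A minimal sketch, with all names and the registry structure being assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ARContent:
    name: str
    media: list = field(default_factory=list)  # facial-synthesis media frames

# Each selectable graphical item corresponds to a different AR content.
registry = {
    "item_1": ARContent("first_ar_content", ["frame_a"]),
    "item_2": ARContent("second_ar_content", ["frame_b", "frame_c"]),
}

def on_selection(item_id):
    # Based at least in part on the selection, identify the second AR
    # content and return it for display on the computing device.
    return registry[item_id]

selected = on_selection("item_2")
print(selected.name)  # second_ar_content
```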
  • Patent number: 11861936
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
    Type: Grant
    Filed: July 21, 2022
    Date of Patent: January 2, 2024
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Patent number: 11816926
    Abstract: The subject technology displays first augmented reality content on a computing device, the first augmented reality content comprising a first output media content. The subject technology provides for display a plurality of selectable graphical items, each of the selectable graphical items corresponding to a different augmented reality content including a set of media content modified utilizing facial synthesis. The subject technology receives a selection of one of the plurality of selectable graphical items. The subject technology, based at least in part on the selection, identifies second augmented reality content. The subject technology provides the second augmented reality content for display on the computing device.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: November 14, 2023
    Assignee: Snap Inc.
    Inventors: Ivan Babanin, Valerii Fisiun, Diana Maksimova, Alexey Pchelnikov
  • Publication number: 20230290098
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method includes receiving a sequence of frame images, face area parameters corresponding to positions of a face area in a frame image of the sequence of frame images, and facial landmark parameters corresponding to the frame image of the sequence of frame images, receiving an image of a source face, modifying, based on the facial landmark parameters corresponding to the frame image, the image of the source face to obtain a further face image featuring the source face adopting a facial expression corresponding to the facial landmark parameters, and inserting the further face image into the frame image at a position determined by the face area parameters corresponding to the frame image, thereby generating an output frame of an output video.
    Type: Application
    Filed: May 22, 2023
    Publication date: September 14, 2023
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
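The template-based flow in the abstract above has two steps per frame: modify the source-face image to match that frame's facial-landmark parameters, then insert the result at the position given by the face-area parameters. The sketch below stands in a toy nearest-neighbour resize for the landmark-driven modification; the function names and the box-shaped parameters are illustrative assumptions.

```python
import numpy as np

def fit_to_landmarks(source_face, landmark_box):
    # Toy stand-in for landmark-driven modification: resample the source
    # face to the size implied by the frame's landmark parameters.
    h, w = landmark_box
    ys = np.linspace(0, source_face.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, source_face.shape[1] - 1, w).astype(int)
    return source_face[np.ix_(ys, xs)]

def insert_face(frame, face_img, top_left):
    # Insert the modified face image at the position determined by the
    # face-area parameters for this frame.
    y, x = top_left
    h, w = face_img.shape[:2]
    out = frame.copy()
    out[y:y + h, x:x + w] = face_img
    return out

frame = np.zeros((100, 100), dtype=np.uint8)          # template frame image
source_face = np.full((40, 40), 255, dtype=np.uint8)  # source-face image
fitted = fit_to_landmarks(source_face, (20, 20))
output_frame = insert_face(frame, fitted, (10, 30))
print(output_frame[15, 35])  # 255: inside the inserted face area
```

Repeating this over the sequence of frame images, with per-frame landmark and face-area parameters, yields the output video.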
  • Patent number: 11694417
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of skin masks defining positions of a skin area of at least one body part in the frame images. The method may continue with receiving an image of a source face. The method may further include determining color data associated with the source face. The method may include recoloring the skin area of the at least one body part in a frame image and inserting the image of the source face into the frame image at a position determined by the face area parameters corresponding to the frame image to generate an output frame of an output video.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: July 4, 2023
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
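The recoloring step in the abstract above matches the skin area inside a mask to color data taken from the source face. A minimal sketch using a simple mean-color transfer as a stand-in for that color adjustment; the function name and the statistics used are assumptions:

```python
import numpy as np

def recolor_skin(frame, skin_mask, source_mean):
    # Shift colors inside the skin mask so their mean matches the color
    # data (here: a mean color) determined from the source face.
    out = frame.astype(np.float32)
    frame_mean = out[skin_mask].mean(axis=0)
    out[skin_mask] += source_mean - frame_mean
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2] = 100                          # "skin" region of the body part
mask = np.zeros((4, 4), dtype=bool)
mask[:2] = True                          # skin mask for this frame image
recolored = recolor_skin(frame, mask, source_mean=np.array([120.0, 90.0, 80.0]))
print(recolored[0, 0])  # [120  90  80]
```

The unmasked pixels are untouched, so only the skin area of the body part changes color before the source face is inserted.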
  • Publication number: 20220358784
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
    Type: Application
    Filed: July 21, 2022
    Publication date: November 10, 2022
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Publication number: 20220319060
    Abstract: The subject technology receives frames of a source media content, the frames of the source media content including representations of a head and a face of a source actor. The subject technology generates sets of source pose parameters. The subject technology receives at least one target image, the at least one target image including representations of a target head and a target face of a target entity. The subject technology generates, based at least in part on the sets of source pose parameters, an output media content, where each frame of the output media content includes an image of the target face. The subject technology provides an online advertisement based at least in part on the output media content for display on a computing device.
    Type: Application
    Filed: March 24, 2022
    Publication date: October 6, 2022
    Inventors: Alexandr Marinenko, Aleksandr Mashrabov, Alexey Pchelnikov
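In the abstract above, per-frame sets of source pose parameters drive generation of output frames showing the target face. The sketch below reduces a pose-parameter set to a single in-plane rotation angle applied to target-face key points — a deliberate simplification; the parameterization and names are assumptions.

```python
import math

def apply_pose(point, angle):
    # Rotate one target-face key point by the source pose angle.
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

target_points = [(1.0, 0.0), (0.0, 1.0)]        # target-face key points
source_pose_params = [0.0, math.pi / 2]          # one pose per source frame

# One output frame per set of source pose parameters, each showing the
# target face re-posed to follow the source actor.
output_frames = [[apply_pose(p, a) for p in target_points]
                 for a in source_pose_params]
print(len(output_frames))  # 2
```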
  • Publication number: 20220319229
    Abstract: The subject technology displays first augmented reality content on a computing device, the first augmented reality content comprising a first output media content. The subject technology provides for display a plurality of selectable graphical items, each of the selectable graphical items corresponding to a different augmented reality content including a set of media content modified utilizing facial synthesis. The subject technology receives a selection of one of the plurality of selectable graphical items. The subject technology, based at least in part on the selection, identifies second augmented reality content. The subject technology provides the second augmented reality content for display on the computing device.
    Type: Application
    Filed: March 25, 2022
    Publication date: October 6, 2022
    Inventors: Ivan Babanin, Valerii Fisiun, Diana Maksimova, Alexey Pchelnikov
  • Patent number: 11410457
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using a parametric face model, an output face. The output face includes the target face. The one or more target facial expressions are modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
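The final step in the abstract above combines the synthesized output face with separately generated mouth and eyes regions. The sketch below shows that compositing with mask-based pasting; the constant images stand in for the deep-network outputs, and the region boundaries are illustrative assumptions.

```python
import numpy as np

def combine(output_face, mouth, mouth_mask, eyes, eyes_mask):
    # Paste the generated mouth and eyes regions over the synthesized
    # face to produce one frame of the output video.
    frame = output_face.copy()
    frame[mouth_mask] = mouth[mouth_mask]
    frame[eyes_mask] = eyes[eyes_mask]
    return frame

face = np.full((8, 8), 1, dtype=np.uint8)    # synthesized output face
mouth = np.full((8, 8), 2, dtype=np.uint8)   # generated mouth region
eyes = np.full((8, 8), 3, dtype=np.uint8)    # generated eyes region
mouth_mask = np.zeros((8, 8), dtype=bool); mouth_mask[6:8, 2:6] = True
eyes_mask = np.zeros((8, 8), dtype=bool);  eyes_mask[2:3, 1:7] = True
frame = combine(face, mouth, mouth_mask, eyes, eyes_mask)
print(frame[7, 3], frame[2, 2], frame[0, 0])  # 2 3 1
```

Generating the mouth and eyes with a dedicated network and compositing them last helps these high-detail regions stay sharp in the final frame.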
  • Publication number: 20220172449
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of skin masks defining positions of a skin area of at least one body part in the frame images. The method may continue with receiving an image of a source face. The method may further include determining color data associated with the source face. The method may include recoloring the skin area of the at least one body part in a frame image and inserting the image of the source face into the frame image at a position determined by the face area parameters corresponding to the frame image to generate an output frame of an output video.
    Type: Application
    Filed: February 18, 2022
    Publication date: June 2, 2022
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
  • Patent number: 11288880
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of facial landmark parameters defining positions of facial landmarks in the frame images. The method may continue with receiving an image of a source face. The method may further include generating an output video. The generation of the output video may include modifying a frame image of the sequence of frame images. Specifically, the image of the source face may be modified to obtain a further image featuring the source face adopting a facial expression corresponding to the facial landmark parameters. The further image may be inserted into the frame image at a position determined by face area parameters corresponding to the frame image.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: March 29, 2022
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
  • Publication number: 20210012090
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using a parametric face model, an output face. The output face includes the target face. The one or more target facial expressions are modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Patent number: 10789453
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression. The method includes determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face. The output face includes the target face, wherein the target facial expression is modified to imitate the source facial expression. The method includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face with the mouth and eyes regions to generate a frame of an output video.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: September 29, 2020
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Publication number: 20200234508
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of facial landmark parameters defining positions of facial landmarks in the frame images. The method may continue with receiving an image of a source face. The method may further include generating an output video. The generation of the output video may include modifying a frame image of the sequence of frame images. Specifically, the image of the source face may be modified to obtain a further image featuring the source face adopting a facial expression corresponding to the facial landmark parameters. The further image may be inserted into the frame image at a position determined by face area parameters corresponding to the frame image.
    Type: Application
    Filed: October 23, 2019
    Publication date: July 23, 2020
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
  • Publication number: 20200234034
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression. The method includes determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face. The output face includes the target face, wherein the target facial expression is modified to imitate the source facial expression. The method includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face with the mouth and eyes regions to generate a frame of an output video.
    Type: Application
    Filed: January 18, 2019
    Publication date: July 23, 2020
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov