Patents by Inventor Dmitry Matov

Dmitry Matov has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240153227
    Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface. (A sketch of this flow appears after this entry.)
    Type: Application
    Filed: January 4, 2024
    Publication date: May 9, 2024
    Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
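
A minimal sketch of the AR-component flow described in the abstract above (publication 20240153227). Everything here is an illustrative assumption: the classes ARComponent and MessagingClient, the blend() placeholder, and the camera-view handling are hypothetical and do not reflect Snap's actual implementation or API.

```python
from dataclasses import dataclass, field
from typing import Any, List

def blend(frame: Any, portrait: Any) -> dict:
    # Placeholder compositing step; a real system would modify the frame
    # (e.g., swap or re-render a face) using the portrait image.
    return {"frame": frame, "portrait": portrait}

@dataclass
class ARComponent:
    # The target media content object: frames of an animation or live-action video.
    target_media: List[Any]

    def apply_portrait(self, portrait: Any) -> List[Any]:
        # Modify every frame of the target media using the portrait image.
        return [blend(frame, portrait) for frame in self.target_media]

@dataclass
class MessagingClient:
    portrait_image: Any                           # previously captured image of the user
    camera_view: List[Any] = field(default_factory=list)

    def on_component_selected(self, component: ARComponent) -> None:
        # Engaging the component's user-selectable element loads it and shows
        # the modified media in the camera view user interface.
        self.camera_view.extend(component.apply_portrait(self.portrait_image))

client = MessagingClient(portrait_image="portrait.png")
client.on_component_selected(ARComponent(target_media=["frame0", "frame1"]))
print(client.camera_view)  # two modified frames, ready for display
```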
  • Publication number: 20240078838
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving a target video that includes at least one target frame containing a target face, receiving a scenario including a series of source facial expressions, determining, based on the target face, a target facial expression of the target face, synthesizing, based on a parametric face model and a texture model, an output face including the target face, where the target facial expression is modified to imitate a source facial expression of the series, and generating, based on the output face, a frame of an output video. The parametric face model includes a template mesh pre-generated based on historical images of faces of a plurality of individuals, where the template mesh includes a pre-determined number of vertices. (A sketch of the template-mesh model appears after this entry.)
    Type: Application
    Filed: November 15, 2023
    Publication date: March 7, 2024
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
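
One common way to realize a "template mesh with a pre-determined number of vertices" is a linear blendshape model. The sketch below is a hedged illustration of that formulation; the vertex count, the number of expression coefficients, and the random basis data are assumptions, and the abstract does not commit to a linear model.

```python
import numpy as np

# Illustrative dimensions; the patent only says the template mesh has a
# pre-determined number of vertices.
N_VERTICES = 5023
N_EXPRESSIONS = 50

rng = np.random.default_rng(0)
# Template mesh pre-generated from many face images (random stand-in here).
template_mesh = rng.standard_normal((N_VERTICES, 3))
# Expression basis: each coefficient deforms the template slightly.
expression_basis = rng.standard_normal((N_EXPRESSIONS, N_VERTICES, 3)) * 0.01

def synthesize_face(expression_params: np.ndarray) -> np.ndarray:
    # Deform the template mesh with expression coefficients. Imitating a
    # source expression then means re-rendering with the source's coefficients.
    offsets = np.tensordot(expression_params, expression_basis, axes=1)
    return template_mesh + offsets

source_expression = rng.standard_normal(N_EXPRESSIONS)
output_vertices = synthesize_face(source_expression)
print(output_vertices.shape)  # (5023, 3): the deformed template mesh
```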
  • Patent number: 11869164
    Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: January 9, 2024
    Assignee: Snap Inc.
    Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
  • Patent number: 11861936
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face. (A sketch of the parameter-completion step appears after this entry.)
    Type: Grant
    Filed: July 21, 2022
    Date of Patent: January 2, 2024
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
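
The distinctive step in patent 11861936 is predicting the portion of the source-face parameters that the visible part of the face cannot determine. A minimal sketch, assuming a linear regressor as a stand-in for the unspecified predictor and using illustrative parameter counts:

```python
import numpy as np

N_PARAMS = 100   # assumed size of the full parameter vector of the face model
N_VISIBLE = 60   # assumed number of parameters fit from the visible portion

rng = np.random.default_rng(1)
# Stand-in for trained predictor weights mapping visible to hidden parameters.
W = rng.standard_normal((N_PARAMS - N_VISIBLE, N_VISIBLE)) * 0.1

def complete_face_parameters(visible_params: np.ndarray) -> np.ndarray:
    # Predict the second (occluded) portion from the first (visible) portion
    # and return the full source-face parameter vector.
    hidden_params = W @ visible_params
    return np.concatenate([visible_params, hidden_params])

visible = rng.standard_normal(N_VISIBLE)  # fitted from the visible face region
full = complete_face_parameters(visible)
print(full.shape)  # (100,): ready to combine with the target face parameters
```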
  • Publication number: 20220358784
    Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
    Type: Application
    Filed: July 21, 2022
    Publication date: November 10, 2022
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Publication number: 20220292794
    Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
    Type: Application
    Filed: May 25, 2022
    Publication date: September 15, 2022
    Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
  • Patent number: 11410457
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using a parametric face model, an output face. The output face includes the target face, with the one or more target facial expressions modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video. (A sketch of the compositing step appears after this entry.)
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
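
Patent 11410457 combines the synthesized output face with a mouth region and an eyes region produced by a deep neural network. Below is a hedged sketch of mask-based compositing; the image size, the mask placements, and the random placeholder images are assumptions, and the network itself is not modeled.

```python
import numpy as np

H, W = 256, 256
rng = np.random.default_rng(2)

output_face = rng.random((H, W, 3))   # synthesized from the parametric model
mouth_region = rng.random((H, W, 3))  # stand-ins for deep-network outputs
eyes_region = rng.random((H, W, 3))

# Binary masks marking where each region is pasted (placement is assumed).
mouth_mask = np.zeros((H, W, 1))
mouth_mask[180:220, 96:160] = 1.0
eyes_mask = np.zeros((H, W, 1))
eyes_mask[90:120, 64:192] = 1.0

def composite(face, mouth, eyes, m_mask, e_mask):
    # Combine the output face with the mouth and eyes regions to produce
    # one frame of the output video.
    frame = face * (1 - m_mask) + mouth * m_mask
    return frame * (1 - e_mask) + eyes * e_mask

frame = composite(output_face, mouth_region, eyes_region, mouth_mask, eyes_mask)
print(frame.shape)  # (256, 256, 3)
```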
  • Patent number: 11354872
    Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: June 7, 2022
    Assignee: Snap Inc.
    Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
  • Publication number: 20220148276
    Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
    Type: Application
    Filed: November 11, 2020
    Publication date: May 12, 2022
    Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
  • Publication number: 20210012090
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using a parametric face model, an output face. The output face includes the target face, with the one or more target facial expressions modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Patent number: 10789453
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression, and determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face that includes the target face, wherein the target facial expression is modified to imitate the source facial expression. The method includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face with the mouth and eyes regions to generate a frame of an output video. (A sketch of the per-frame loop appears after this entry.)
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: September 29, 2020
    Assignee: Snap Inc.
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
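
The abstract of patent 10789453 implies a per-frame loop: estimate the facial expression in each source frame, then re-synthesize the target face with that expression. A minimal sketch with both stages stubbed out; the estimator and renderer below are placeholders, not the patented models.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_expression(frame: np.ndarray) -> np.ndarray:
    # Placeholder for fitting the parametric face model to a source frame.
    return np.full(50, frame.mean())

def render_target_face(expression: np.ndarray) -> np.ndarray:
    # Placeholder for synthesizing the target face imitating the expression.
    return np.full((256, 256, 3), expression.mean())

source_video = [rng.random((256, 256, 3)) for _ in range(4)]

output_video = []
for source_frame in source_video:
    expr = estimate_expression(source_frame)        # source facial expression
    output_video.append(render_target_face(expr))   # target face imitating it

print(len(output_video), output_video[0].shape)  # 4 (256, 256, 3)
```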
  • Publication number: 20200234034
    Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression, and determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face that includes the target face, wherein the target facial expression is modified to imitate the source facial expression. The method includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face with the mouth and eyes regions to generate a frame of an output video.
    Type: Application
    Filed: January 18, 2019
    Publication date: July 23, 2020
    Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
  • Publication number: 20180182434
    Abstract: Methods and systems for generating video previews are provided. In one embodiment, a method includes acquiring a video. The method includes extracting features of the video. The method further includes determining, based on the features, a genre of the video. The method can proceed with selecting, based on the features and the genre, a time fragment of the video. The method further includes cropping the time fragment to a rectangular shape to fit a screen of a mobile device positioned vertically. The method further includes compressing the cropped fragment into a low-bitrate video fragment. (A sketch of the cropping and compression steps appears after this entry.)
    Type: Application
    Filed: December 27, 2016
    Publication date: June 28, 2018
    Inventors: Aleksei Esin, Dmitry Matov, Grigorii Fefelov, Eugene Krokhalev
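
The preview pipeline in publication 20180182434 ends with cropping the selected fragment to a vertical rectangle and compressing it at a low bitrate. The sketch below stubs the feature- and genre-driven fragment selection and performs the final steps with standard ffmpeg options; the 9:16 crop, the 360x640 output size, and the 300 kb/s bitrate are illustrative assumptions.

```python
import subprocess

def select_fragment(features: dict, genre: str) -> tuple:
    # Placeholder: the patented method picks the fragment from the extracted
    # features and the detected genre.
    return 10.0, 5.0  # start second, duration in seconds

def make_preview(src: str, dst: str, start: float, duration: float) -> None:
    # Crop the centered 9:16 region for a vertically held phone, scale it
    # down, and encode at a low bitrate without audio.
    subprocess.run([
        "ffmpeg", "-ss", str(start), "-t", str(duration), "-i", src,
        "-vf", "crop=ih*9/16:ih,scale=360:640",
        "-b:v", "300k",
        "-an",
        dst,
    ], check=True)

start, duration = select_fragment(features={}, genre="sports")
make_preview("input.mp4", "preview.mp4", start, duration)
```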
  • Publication number: 20170364492
    Abstract: A web content enrichment system can match an image to the text of web content. When the text includes a snippet, the image matched to the text enriches the snippet to enhance results of a search engine. When the text is contained in a webpage, the matched image enriches the webpage to enhance user perception and understanding of the page. The process of matching images to text involves extracting features of a plurality of images and features of a plurality of text documents, calculating scores of the images based on the extracted features, and selecting one image per text document based on the scores using a machine-learning algorithm. The result of the matching can be provided to a web content module for storage, incorporation into the result lists of the search engine, or delivery to a user. (A sketch of the scoring step appears after this entry.)
    Type: Application
    Filed: June 20, 2016
    Publication date: December 21, 2017
    Inventors: Philipp Pushnyakov, Eugene Krokhalev, Dmitry Matov
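
The matching step in publication 20170364492 scores candidate images against text documents and keeps the best image per document. The sketch below substitutes cosine similarity over random feature vectors for the patent's learned scoring model; all dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
image_features = rng.standard_normal((200, 128))  # one row per candidate image
text_features = rng.standard_normal((50, 128))    # one row per text document

def normalize(x: np.ndarray) -> np.ndarray:
    # Unit-normalize rows so the dot product below is cosine similarity.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Score every image against every text document, then take the argmax per
# document, i.e., select one image per snippet or webpage.
scores = normalize(text_features) @ normalize(image_features).T  # (50, 200)
best_image_per_text = scores.argmax(axis=1)

print(best_image_per_text[:5])  # indices of images chosen for the first documents
```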