Patents by Inventor Dmitry Matov
Dmitry Matov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240153227
Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Application
Filed: January 4, 2024
Publication date: May 9, 2024
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
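As a rough illustration of the modification step this abstract describes, the minimal sketch below blends a previously captured portrait image into a placeholder region of one frame of the target media content. The region coordinates, the blending weight, and all names here are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: blend a portrait image into a placeholder region
# of a target media frame. Region layout and alpha are assumptions.
import numpy as np

def apply_ar_component(frame: np.ndarray,
                       portrait: np.ndarray,
                       region: tuple[int, int, int, int],
                       alpha: float = 1.0) -> np.ndarray:
    """Blend `portrait` into the `region` (y, x, h, w) of `frame`."""
    y, x, h, w = region
    # Nearest-neighbor resize of the portrait to the placeholder region.
    rows = np.arange(h) * portrait.shape[0] // h
    cols = np.arange(w) * portrait.shape[1] // w
    resized = portrait[rows][:, cols]
    out = frame.copy()
    blended = alpha * resized + (1.0 - alpha) * out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = blended.astype(frame.dtype)
    return out

# Usage: composite a 32x24 portrait into one 120x160 RGB frame.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
portrait = np.full((32, 24, 3), 200, dtype=np.uint8)
modified = apply_ar_component(frame, portrait, region=(10, 20, 64, 48))
```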
-
Publication number: 20240078838
Abstract: Provided are systems and methods for face reenactment. An example method includes receiving a target video that includes at least one target frame containing a target face; receiving a scenario that includes a series of source facial expressions; determining, based on the target face, a target facial expression of the target face; synthesizing, based on a parametric face model and a texture model, an output face that includes the target face with the target facial expression modified to imitate a source facial expression from the series; and generating, based on the output face, a frame of an output video. The parametric face model includes a template mesh pre-generated from historical images of the faces of a plurality of individuals, where the template mesh has a pre-determined number of vertices.
Type: Application
Filed: November 15, 2023
Publication date: March 7, 2024
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
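A parametric face model built around a fixed-topology template mesh is commonly realized as a linear deformation model, which suggests a short sketch of the reenactment step: keep the target's identity coefficients and swap in expression coefficients from the scenario. The linear form, basis sizes, and names below are assumptions for illustration; the abstract does not specify the model at this level of detail.

```python
# Sketch of a linear parametric face model (assumed form): mesh vertices
# are the template deformed by identity and expression components.
import numpy as np

def synthesize_vertices(template, id_basis, expr_basis, id_coeffs, expr_coeffs):
    """Deform the template mesh by identity and expression components."""
    # template: (V, 3); id_basis: (V, 3, Ki); expr_basis: (V, 3, Ke)
    return template + id_basis @ id_coeffs + expr_basis @ expr_coeffs

rng = np.random.default_rng(0)
V, Ki, Ke = 5000, 40, 20            # vertex count and basis sizes (assumed)
template = rng.normal(size=(V, 3))
id_basis = rng.normal(size=(V, 3, Ki))
expr_basis = rng.normal(size=(V, 3, Ke))

target_id = rng.normal(size=Ki)     # identity fitted to the target face
source_expr = rng.normal(size=Ke)   # one expression from the scenario

# Reenactment: target identity driven by the source expression.
out_vertices = synthesize_vertices(template, id_basis, expr_basis,
                                   target_id, source_expr)
```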
-
Patent number: 11869164
Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Grant
Filed: May 25, 2022
Date of Patent: January 9, 2024
Assignee: Snap Inc.
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Patent number: 11861936
Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data that includes a visible portion of a source face; determining, based on the visible portion, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion; predicting, based partially on the visible portion, a second portion of the source face parameters that corresponds to the rest of the source face; receiving a target video that includes a target face; determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face; and synthesizing, using the parametric face model and based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
Type: Grant
Filed: July 21, 2022
Date of Patent: January 2, 2024
Assignee: Snap Inc.
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
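The distinctive idea in this abstract, estimating face-model parameters for the visible portion of a face and predicting the parameters for the occluded remainder, can be illustrated with a deliberately simple stand-in: a linear least-squares predictor fitted on synthetic data. The split sizes, the linear form of the predictor, and the training data are all assumptions; the patent does not commit to this form.

```python
# Sketch: complete a partially observed parameter vector. A linear map
# learned by least squares stands in for the unspecified predictor.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_vis, n_hid = 1000, 30, 10      # assumed split of the parameters

vis_train = rng.normal(size=(n_train, n_vis))
true_map = rng.normal(size=(n_vis, n_hid))
hid_train = vis_train @ true_map + 0.01 * rng.normal(size=(n_train, n_hid))

# Fit the predictor: hidden ~ visible @ W (ordinary least squares).
W, *_ = np.linalg.lstsq(vis_train, hid_train, rcond=None)

def complete_parameters(visible: np.ndarray) -> np.ndarray:
    """Return the full parameter vector: visible part plus predicted rest."""
    return np.concatenate([visible, visible @ W])

full_params = complete_parameters(rng.normal(size=n_vis))  # shape (40,)
```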
-
Publication number: 20220358784
Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data that includes a visible portion of a source face; determining, based on the visible portion, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion; predicting, based partially on the visible portion, a second portion of the source face parameters that corresponds to the rest of the source face; receiving a target video that includes a target face; determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face; and synthesizing, using the parametric face model and based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
Type: Application
Filed: July 21, 2022
Publication date: November 10, 2022
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
-
Publication number: 20220292794
Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Application
Filed: May 25, 2022
Publication date: September 15, 2022
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Patent number: 11410457
Abstract: Provided are systems and methods for photorealistic real-time face reenactment. An example method includes receiving a target video that includes a target face and a scenario that includes a series of source facial expressions; determining, based on the target face, one or more target facial expressions; and synthesizing, using a parametric face model, an output face. The output face includes the target face, with the one or more target facial expressions modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
Type: Grant
Filed: September 28, 2020
Date of Patent: August 9, 2022
Assignee: Snap Inc.
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
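The final step described here, combining the synthesized face with separately generated mouth and eyes regions, can be sketched as masked compositing. The mask placement and the mocked network outputs below are assumptions made only to keep the example self-contained.

```python
# Sketch: paste generated mouth and eyes regions over the synthesized
# face using boolean masks. Region placement is an assumption.
import numpy as np

def combine_regions(face, mouth, eyes, mouth_mask, eyes_mask):
    """Overlay the mouth and eyes regions onto the synthesized face."""
    out = face.copy()
    out[mouth_mask] = mouth[mouth_mask]
    out[eyes_mask] = eyes[eyes_mask]
    return out

h, w = 256, 256
face = np.zeros((h, w, 3), dtype=np.uint8)        # synthesized output face
mouth = np.full((h, w, 3), 128, dtype=np.uint8)   # mocked network output
eyes = np.full((h, w, 3), 64, dtype=np.uint8)     # mocked network output
mouth_mask = np.zeros((h, w), dtype=bool)
eyes_mask = np.zeros((h, w), dtype=bool)
mouth_mask[180:210, 100:156] = True               # assumed region placement
eyes_mask[90:110, 70:186] = True

output_frame = combine_regions(face, mouth, eyes, mouth_mask, eyes_mask)
```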
-
Patent number: 11354872
Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Grant
Filed: November 11, 2020
Date of Patent: June 7, 2022
Assignee: Snap Inc.
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Publication number: 20220148276
Abstract: The technical problem of creating an augmented reality (AR) experience that is accessible from a camera view user interface provided with a messaging client, and that can also perform a modification based on a previously captured image of a user, is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be an animation or a live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Application
Filed: November 11, 2020
Publication date: May 12, 2022
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Publication number: 20210012090
Abstract: Provided are systems and methods for photorealistic real-time face reenactment. An example method includes receiving a target video that includes a target face and a scenario that includes a series of source facial expressions; determining, based on the target face, one or more target facial expressions; and synthesizing, using a parametric face model, an output face. The output face includes the target face, with the one or more target facial expressions modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
Type: Application
Filed: September 28, 2020
Publication date: January 14, 2021
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
-
Patent number: 10789453
Abstract: Provided are systems and methods for photorealistic real-time face reenactment. An example method includes receiving a target video that includes a target face and a source video that includes a source face; determining, based on the target face, a target facial expression; determining, based on the source face, a source facial expression; and synthesizing, using a parametric face model, an output face. The output face includes the target face, with the target facial expression modified to imitate the source facial expression. The method further includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face with the mouth and eyes regions to generate a frame of an output video.
Type: Grant
Filed: January 18, 2019
Date of Patent: September 29, 2020
Assignee: Snap Inc.
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
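At a high level, this variant drives the target face with expressions extracted frame by frame from a source video rather than from a pre-authored scenario. The skeleton below shows only the per-frame loop, with stubbed-out model calls; everything inside the stubs is a placeholder, since the abstract does not specify those components.

```python
# Sketch of the per-frame reenactment loop: estimate the source
# expression, re-synthesize the target face with it, collect frames.
import numpy as np

def estimate_expression(frame: np.ndarray) -> np.ndarray:
    """Stub: expression coefficients fitted to one source frame."""
    return np.zeros(20)

def synthesize_face(expr: np.ndarray) -> np.ndarray:
    """Stub: target identity rendered with the given expression."""
    return np.zeros((256, 256, 3), dtype=np.uint8)

def render_output(source_video: np.ndarray) -> np.ndarray:
    frames = []
    for source_frame in source_video:
        expr = estimate_expression(source_frame)
        frames.append(synthesize_face(expr))  # target face, source expression
    return np.stack(frames)

source_video = np.zeros((3, 256, 256, 3), dtype=np.uint8)  # mocked input
output_video = render_output(source_video)
```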
-
Publication number: 20200234034
Abstract: Provided are systems and methods for photorealistic real-time face reenactment. An example method includes receiving a target video that includes a target face and a source video that includes a source face; determining, based on the target face, a target facial expression; determining, based on the source face, a source facial expression; and synthesizing, using a parametric face model, an output face. The output face includes the target face, with the target facial expression modified to imitate the source facial expression. The method further includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face with the mouth and eyes regions to generate a frame of an output video.
Type: Application
Filed: January 18, 2019
Publication date: July 23, 2020
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
-
Publication number: 20180182434
Abstract: Methods and systems for generating video previews are provided. In one embodiment, a method includes acquiring a video and extracting features of the video. The method further includes determining, based on the features, a genre of the video, and selecting, based on the features and the genre, a time fragment of the video. The method further includes cropping the time fragment to a rectangular shape that fits the screen of a vertically oriented mobile device, and compressing the cropped fragment into a low-bitrate video fragment.
Type: Application
Filed: December 27, 2016
Publication date: June 28, 2018
Inventors: Aleksei Esin, Dmitry Matov, Grigorii Fefelov, Eugene Krokhalev
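Two of the steps in this abstract lend themselves to a short sketch: selecting the best time fragment and center-cropping frames to a vertical aspect ratio. The per-second interest scores and the 9:16 aspect ratio below are assumptions for illustration, not values from the application.

```python
# Sketch: pick the highest-scoring fragment of a video, then center-crop
# a landscape frame for a vertically held phone screen.
import numpy as np

def select_fragment(scores: np.ndarray, fragment_len: int) -> int:
    """Return the start index whose window has the highest mean score."""
    window = np.convolve(scores, np.ones(fragment_len) / fragment_len,
                         mode="valid")
    return int(np.argmax(window))

def crop_vertical(frame: np.ndarray, aspect: float = 9 / 16) -> np.ndarray:
    """Center-crop a landscape frame to a vertical aspect ratio."""
    h, w = frame.shape[:2]
    new_w = int(h * aspect)
    x0 = (w - new_w) // 2
    return frame[:, x0:x0 + new_w]

scores = np.random.default_rng(2).random(300)  # per-second scores (assumed)
start = select_fragment(scores, fragment_len=10)
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
vertical = crop_vertical(frame)                # 720 x 405 crop
```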
-
Publication number: 20170364492
Abstract: A web content enrichment system can match an image to the text of web content. When the text of the web content includes a snippet, the image matched to the text enriches the snippet to enhance the results of a search engine. When the text of the web content includes text contained in a webpage, the image matched to this text enriches the webpage to enhance user perception and understanding of the webpage. The process of matching images to text involves extracting features of a plurality of images and of a plurality of text documents, calculating scores for the images based on the extracted features, and selecting one image per text document based on the scores using a machine-learning algorithm. The result of the matching can be provided to a web content module for storage, for incorporation into the result lists of the search engine, or for delivery to a user.
Type: Application
Filed: June 20, 2016
Publication date: December 21, 2017
Inventors: Philipp Pushnyakov, Eugene Krokhalev, Dmitry Matov
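The matching step described here, scoring candidate images against each text document and keeping the best image per document, can be sketched with cosine similarity over feature vectors. The mocked embeddings and the dot-product score below are assumptions standing in for the unspecified features and machine-learning scorer.

```python
# Sketch: match the best image to each text document by cosine
# similarity between feature vectors. Features are mocked.
import numpy as np

def best_image_per_text(text_emb: np.ndarray,
                        image_emb: np.ndarray) -> np.ndarray:
    """Return the index of the top-scoring image for each text document."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    i = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    scores = t @ i.T                    # (n_texts, n_images) similarity
    return scores.argmax(axis=1)

rng = np.random.default_rng(3)
texts = rng.normal(size=(5, 64))       # document feature vectors (assumed)
images = rng.normal(size=(20, 64))     # image feature vectors (assumed)
matches = best_image_per_text(texts, images)  # one image index per document
```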