Patents by Inventor Aleksandr Mashrabov
Aleksandr Mashrabov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210303622
Abstract: A system for searching and ranking modifiable videos in a multimedia messaging application (MMA) is provided. In one example embodiment, the system includes a database configured to store modifiable videos, the modifiable videos being associated with text messages and rankings, a processor, and a memory storing processor-executable codes, wherein the processor is configured to implement the following operations upon executing the processor-executable codes: receiving, via the MMA, an input of a user; selecting, based on the input, a list of relevant modifiable videos from the database; rendering, via the MMA, the list of relevant modifiable videos for viewing by the user; determining that the user has shared, via the MMA, a modifiable video from the list; storing information concerning the list and the shared modifiable video in a statistical log; and updating, based on the information in the statistical log, the rankings of the modifiable videos in the database.
Type: Application
Filed: March 31, 2020
Publication date: September 30, 2021
Inventors: Jeremy Voss, Victor Shaburov, Aleksandr Mashrabov, Dmitriy Matov, Hanna Rulevska, Dmytro Ishchenko
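The ranking-update step described in this abstract can be illustrated with a toy sketch. Everything here (the log shape, the share-rate scoring, the function name `update_rankings`) is a hypothetical stand-in for the unspecified ranking logic, assuming the statistical log records which videos were shown and which were shared:

```python
from collections import defaultdict

def update_rankings(statistical_log):
    """Recompute a share-rate ranking for modifiable videos from a log of
    (shown_video_id, was_shared) events — a rough illustration of the
    ranking-update operation named in the abstract."""
    shown = defaultdict(int)
    shared = defaultdict(int)
    for video_id, was_shared in statistical_log:
        shown[video_id] += 1
        if was_shared:
            shared[video_id] += 1
    # Score each shown video by its share rate; unseen videos get no entry.
    return {vid: shared[vid] / shown[vid] for vid in shown}

log = [("v1", True), ("v1", False), ("v2", True), ("v2", True)]
rankings = update_rankings(log)
# "v2" is shared every time it is shown, so it outranks "v1".
```

A production system would presumably smooth these rates and blend them with the text-relevance signal from the user's input, but the log-then-rerank loop is the same shape.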
-
Patent number: 11114086
Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.
Type: Grant
Filed: July 11, 2019
Date of Patent: September 7, 2021
Assignee: Snap Inc.
Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov
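The text → acoustic features → scenario data → frames pipeline from this abstract can be sketched end to end. Each function below is a deliberately trivial placeholder (one "feature set" per character, a mouth-opening scalar as the scenario data); the real system would use a TTS acoustic model and a face renderer at those stages:

```python
def text_to_acoustic_features(text):
    # Placeholder: one acoustic "feature set" per character. A real system
    # would run a text-to-speech acoustic model here.
    return [float(ord(c) % 10) for c in text]

def features_to_scenario(features):
    # Map each acoustic feature set to scenario data — here just a
    # mouth-opening amount in [0, 1] for pronouncing that sound.
    return [f / 10.0 for f in features]

def render_frames(target_face, scenario):
    # One frame per scenario entry: the target face paired with the
    # modification it should adopt in that frame.
    return [{"face": target_face, "mouth_open": m} for m in scenario]

scenario = features_to_scenario(text_to_acoustic_features("hi"))
frames = render_frames("target.png", scenario)
```

The key structural point the abstract makes — audio and face motion are synthesized from the same acoustic-feature sequence, which keeps them in sync — corresponds to `scenario` and the (omitted) audio both being derived from `text_to_acoustic_features`.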
-
Patent number: 11049310
Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving, by a computing device, a scenario video with at least one input frame. The input frame includes a first face. The method further includes receiving a target image with a second face. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations, wherein the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
Type: Grant
Filed: January 18, 2019
Date of Patent: June 29, 2021
Assignee: Snap Inc.
Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
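The "applying the 2D deformations" step amounts to warping the target image by a per-pixel displacement field. A minimal nearest-neighbour sketch (the displacement-field representation and the function name are assumptions; the patent does not specify how the deformations are parameterized):

```python
import numpy as np

def apply_2d_deformation(image, flow):
    """Warp `image` by a per-pixel displacement field `flow`, where
    flow[y, x] = (dy, dx) says where to sample the source pixel from.
    Nearest-neighbour sampling keeps the sketch short; real warps would
    interpolate bilinearly."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, w - 1).astype(int)
    return image[src_y, src_x]

img = np.arange(16).reshape(4, 4)
# Shift content one column to the right: every pixel samples its left neighbour.
flow = np.zeros((4, 4, 2))
flow[..., 1] = -1
out = apply_2d_deformation(img, flow)
```

Because the deformation is a plain 2D warp of the target image (rather than a full 3D re-render), it is cheap enough for the real-time claim in the title.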
-
Publication number: 20210019929
Abstract: Provided are systems and methods for single image-based body animation. An example method includes receiving an input image that includes a body of a person and segmenting the input image into a body portion and a background portion. The method further includes fitting a model to the body portion. The model is configured to receive a set of pose parameters representing a pose of the body and generate an output image including an image of the body adopting the pose. The method further includes receiving a series of further sets of pose parameters, each representing at least one of further poses of the body. The further sets of pose parameters are generated using a generic model. The method also includes generating a series of output images of the body adopting the further poses and generating an output video based on the series of output images.
Type: Application
Filed: October 2, 2020
Publication date: January 21, 2021
Inventors: Egor Nemchinov, Sergei Gorbatyuk, Aleksandr Mashrabov, Egor Spirin, Iaroslav Sokolov, Andrei Smirdin, Igor Tukh
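The fit-once, pose-many-times structure of this method can be sketched with stand-ins. `fit_model` below returns a toy closure in place of the fitted generative model, and the pose tuples are hypothetical; only the control flow (one output image per pose-parameter set, collected into a video) mirrors the abstract:

```python
def fit_model(body_portion):
    """Stand-in for fitting a generative model to the segmented body:
    returns a callable that maps pose parameters to an 'output image'
    record of the body adopting that pose."""
    def model(pose_params):
        return {"body": body_portion, "pose": tuple(pose_params)}
    return model

def animate(segmented_body, pose_sequence):
    model = fit_model(segmented_body)            # fit once to the input image
    frames = [model(p) for p in pose_sequence]   # one output image per pose set
    return frames                                # a real system would encode these as video

video = animate("body_portion", [(0.0,), (0.5,), (1.0,)])
```

The abstract's "generic model" corresponds to wherever `pose_sequence` comes from: the driving poses are produced independently of the person in the input image, which is what lets a single photo be animated with arbitrary motions.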
-
Publication number: 20210012090
Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a scenario including a series of source facial expressions, determining, based on the target face, one or more target facial expressions, and synthesizing, using a parametric face model, an output face. The output face includes the target face. The one or more target facial expressions are modified to imitate the source facial expressions. The method further includes generating, based on a deep neural network, a mouth region and an eyes region, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
Type: Application
Filed: September 28, 2020
Publication date: January 14, 2021
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
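The abstract does not say which parametric face model is used; a common choice in face reenactment is a linear blendshape model, where a face is an identity mesh plus a weighted sum of expression basis vectors. A toy sketch under that assumption (the tiny mesh size and the basis are illustrative only):

```python
import numpy as np

def synthesize_face(identity_mesh, expression_basis, expression_weights):
    """Linear blendshape model: identity plus a weighted combination of
    expression basis vectors. Reenactment transfers expressions by taking
    the weights from the source face and the identity from the target."""
    return identity_mesh + expression_basis @ expression_weights

identity = np.zeros(6)          # flattened neutral target mesh (toy size)
basis = np.eye(6)[:, :2]        # two expression blendshapes as columns
weights = np.array([0.3, 0.7])  # weights estimated from the source expression
face = synthesize_face(identity, basis, weights)
```

In this framing, "modified to imitate the source facial expressions" means re-synthesizing the target identity with source-derived `weights`; the mouth and eye regions are then generated separately by a neural network because a coarse parametric mesh renders them poorly.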
-
Publication number: 20200410735
Abstract: Provided are systems and methods for animating a single image of a human body and applying effects. An example method includes providing a database of motions; receiving an input image including a body of a person; receiving a user input including a motion selected from the database of motions; segmenting the input image into a body portion and a background portion; fitting the body portion to a hair model; generating, based on the body portion and the selected motion, a video featuring the body of the person repeating the selected motion, where generating the video includes detecting positions of key points associated with a head of the person in a frame of the video, generating an image of hair of the person based on the positions of the key points and the hair model, and inserting the image of the hair in the frame; and displaying the generated video.
Type: Application
Filed: September 14, 2020
Publication date: December 31, 2020
Inventors: Sergei Gorbatyuk, Nikolai Smirnov, Aleksandr Mashrabov, Egor Nemchinov
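The per-frame hair-insertion step can be sketched as anchoring a generated hair image at a position derived from the detected head key points. Using the key-point centroid as the anchor is an assumption; the abstract only says the positions of the key points drive the insertion:

```python
def insert_hair(frame, hair_image, key_points):
    """Attach a generated hair image to a frame at the head position
    implied by detected key points (here: their centroid) — a minimal
    stand-in for the insertion step in the abstract."""
    cx = sum(x for x, _ in key_points) / len(key_points)
    cy = sum(y for _, y in key_points) / len(key_points)
    frame = dict(frame)  # avoid mutating the caller's frame
    frame["hair"] = {"image": hair_image, "anchor": (cx, cy)}
    return frame

out = insert_hair({"frame_id": 0}, "generated_hair.png", [(10, 20), (30, 40)])
```

Re-detecting the key points in every frame is what keeps the hair tracking the head as the selected motion plays out.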
-
Publication number: 20200388064
Abstract: Provided are systems and methods for single image-based body animation. An example method includes receiving an input image, the input image including a body of a person; segmenting the input image into a body portion and a background portion, wherein the body portion includes pixels corresponding to the body of the person; fitting a model to the body portion, wherein the model is configured to receive pose parameters representing a pose of the body and generate an output image including an image of the body adopting the pose; receiving a series of further pose parameters, each of the series of further pose parameters representing one of further poses of the body; providing each of the series of further pose parameters to the model to generate a series of output images of the body adopting the further poses; and generating, based on the series of output images, an output video.
Type: Application
Filed: June 7, 2019
Publication date: December 10, 2020
Inventors: Egor Nemchinov, Sergei Gorbatyuk, Aleksandr Mashrabov, Egor Spirin, Iaroslav Sokolov, Andrei Smirdin, Igor Tukh
-
Patent number: 10839586
Abstract: Provided are systems and methods for single image-based body animation. An example method includes receiving an input image, the input image including a body of a person; segmenting the input image into a body portion and a background portion, wherein the body portion includes pixels corresponding to the body of the person; fitting a model to the body portion, wherein the model is configured to receive pose parameters representing a pose of the body and generate an output image including an image of the body adopting the pose; receiving a series of further pose parameters, each of the series of further pose parameters representing one of further poses of the body; providing each of the series of further pose parameters to the model to generate a series of output images of the body adopting the further poses; and generating, based on the series of output images, an output video.
Type: Grant
Filed: June 7, 2019
Date of Patent: November 17, 2020
Assignee: Snap Inc.
Inventors: Egor Nemchinov, Sergei Gorbatyuk, Aleksandr Mashrabov, Egor Spirin, Iaroslav Sokolov, Andrei Smirdin, Igor Tukh
-
Patent number: 10789453
Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression. The method includes determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face. The output face includes the target face, wherein the target facial expression is modified to imitate the source facial expression. The method includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
Type: Grant
Filed: January 18, 2019
Date of Patent: September 29, 2020
Assignee: Snap Inc.
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
-
Patent number: 10776981
Abstract: Provided are systems and methods for animating a single image of a human body and applying effects. An example method includes providing, by a computing device, a database of motions; receiving, by the computing device, an input image, the input image including a body of a person; receiving, by the computing device, a user input including a motion selected from the database of motions; segmenting, by the computing device, the input image into a body portion and a background portion; generating, by the computing device and based on the body portion and the selected motion, a video featuring the body of the person repeating the selected motion; displaying, by the computing device, the generated video; receiving, by the computing device, a further user input including clothes, scene, illumination effect, and additional objects; and, while generating the video, modifying frames of the video based on the further user input.
Type: Grant
Filed: August 27, 2019
Date of Patent: September 15, 2020
Assignee: Snap Inc.
Inventors: Sergei Gorbatyuk, Nikolai Smirnov, Aleksandr Mashrabov, Egor Nemchinov
-
Publication number: 20200234034
Abstract: Provided are systems and a method for photorealistic real-time face reenactment. An example method includes receiving a target video including a target face and a source video including a source face. The method includes determining, based on the target face, a target facial expression. The method includes determining, based on the source face, a source facial expression. The method includes synthesizing, using a parametric face model, an output face. The output face includes the target face, wherein the target facial expression is modified to imitate the source facial expression. The method includes generating, based on a deep neural network, mouth and eyes regions, and combining the output face, the mouth region, and the eyes region to generate a frame of an output video.
Type: Application
Filed: January 18, 2019
Publication date: July 23, 2020
Inventors: Pavel Savchenkov, Dmitry Matov, Aleksandr Mashrabov, Alexey Pchelnikov
-
Publication number: 20200234482
Abstract: Provided are systems and methods for photorealistic real-time portrait animation. An example method includes receiving, by a computing device, a scenario video with at least one input frame. The input frame includes a first face. The method further includes receiving a target image with a second face. The method further includes determining, based on the at least one input frame and the target image, two-dimensional (2D) deformations, wherein the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression and a head orientation of the first face. The method further includes applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
Type: Application
Filed: January 18, 2019
Publication date: July 23, 2020
Inventors: Eugene Krokhalev, Aleksandr Mashrabov, Pavel Savchenkov
-
Publication number: 20200234690
Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.
Type: Application
Filed: July 11, 2019
Publication date: July 23, 2020
Inventors: Pavel Savchenkov, Maxim Lukin, Aleksandr Mashrabov
-
Publication number: 20200234480
Abstract: Provided are systems and methods for realistic head turns and face animation synthesis. An example method may include receiving frames of a source video with the head and the face of a source actor. The method may then proceed with generating sets of source pose parameters that represent positions of the head and facial expressions of the source actor. The method may further include receiving at least one target image including the target head and the target face of a target person, determining target identity information associated with the target face, and generating an output video based on the target identity information and the sets of source pose parameters. Each frame of the output video can include an image of the target face modified to mimic at least one of the positions of the head of the source actor and at least one of the facial expressions of the source actor.
Type: Application
Filed: October 24, 2019
Publication date: July 23, 2020
Inventors: Yurii Volkov, Pavel Savchenkov, Maxim Lukin, Ivan Belonogov, Nikolai Smirnov, Aleksandr Mashrabov