Patents by Inventor Roman Golobokov
Roman Golobokov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240153227
Abstract: The technical problem of creating an augmented reality (AR) experience that is both accessible from a camera view user interface provided with a messaging client and able to perform a modification based on a previously captured image of a user is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be animation or live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Application
Filed: January 4, 2024
Publication date: May 9, 2024
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Publication number: 20240104954
Abstract: The subject technology captures first image data by a computing device, the first image data comprising a target face of a target actor and facial expressions of the target actor, the facial expressions including lip movements. The subject technology generates, based at least in part on frames of a source media content, sets of source pose parameters. The subject technology receives a selection of a particular facial expression from a set of facial expressions. The subject technology generates, based at least in part on sets of source pose parameters and the selection of the particular facial expression, an output media content. The subject technology provides augmented reality content based at least in part on the output media content for display on the computing device.
Type: Application
Filed: December 8, 2023
Publication date: March 28, 2024
Inventors: Roman Golobokov, Alexandr Marinenko, Aleksandr Mashrabov, Aleksei Bromot, Grigoriy Tkachenko
-
Publication number: 20240089364
Abstract: Provided are systems and methods for customizing modifiable videos. An example method includes analyzing recent messages associated with a user in a multimedia messaging application to determine a context of the recent messages, determining, based on the context, a property of a modifiable feature, selecting, based on the context, a list of relevant modifiable videos from a database configured to store modifiable videos associated with a preset modifiable feature, replacing a property of the preset modifiable feature in relevant modifiable videos of the list of relevant modifiable videos with the property of the modifiable feature, and rendering the list of relevant modifiable videos for viewing by the user, where the rendering includes displaying the modifiable feature in the relevant modifiable videos.
Type: Application
Filed: November 15, 2023
Publication date: March 14, 2024
Inventors: Jeremy Voss, Victor Shaburov, Ivan Babanin, Aleksandr Mashrabov, Roman Golobokov
-
Publication number: 20240087204
Abstract: Described are systems and methods for generating personalized videos with customized text messages. An example method includes receiving an input text, a video template including a sequence of frame images, and at least one parameter for animation of the input text across the sequence of frame images, generating, based on the input text and the at least one parameter for animation, a configuration file including a text style for the input text for a frame in the sequence of frame images, and rendering, based on the configuration file, an output frame of an output video, where the output frame includes the frame in the sequence of frame images and a layer, and where the layer includes the input text stylized based on the text style. The method further includes providing an option enabling a user to change the at least one parameter for animation.
Type: Application
Filed: November 15, 2023
Publication date: March 14, 2024
Inventors: Alexander Mashrabov, Victor Shaburov, Sofia Savinova, Dmitriy Matov, Andrew Osipov, Ivan Semenov, Roman Golobokov
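The configuration file described in the abstract above, with a text style generated per frame from animation parameters, can be pictured as a simple data structure. The patent does not disclose a format; the following is a minimal illustrative sketch in which all field names and the linear-interpolation scheme are hypothetical.

```python
# Hypothetical sketch of a per-frame text-style configuration: one style
# record per frame image, interpolated from start/end animation parameters.
# Field names are illustrative, not from the patent.

def build_config(input_text, num_frames, start_size=12.0, end_size=48.0):
    """Linearly interpolate a font size across the frame sequence and
    emit one style record per frame."""
    config = []
    for i in range(num_frames):
        t = i / max(num_frames - 1, 1)  # 0.0 at the first frame, 1.0 at the last
        config.append({
            "frame": i,
            "text": input_text,
            "style": {
                "font_size": start_size + t * (end_size - start_size),
                "opacity": t,  # fade the text in over the animation
            },
        })
    return config

cfg = build_config("Happy Birthday!", num_frames=5)
```

A renderer would then draw the text layer for frame `i` using `cfg[i]["style"]`, and regenerating `cfg` after the user edits an animation parameter corresponds to the change option the abstract mentions.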
-
Patent number: 11895260
Abstract: A system for customizing modifiable videos of a multimedia messaging application (MMA) is provided. In one example embodiment, the system includes at least one processor and a memory storing processor-executable codes, wherein the at least one processor is configured to analyze recent messages of a user to determine a context of the recent messages; determine, based on the context, a customized feature; select, based on the context, a list of relevant modifiable videos from a database configured to store modifiable videos, the modifiable videos being associated with a preset modifiable feature; replace the preset modifiable feature in the relevant modifiable videos with the customized feature; and render a modifiable video from the list of relevant modifiable videos for viewing by the user, the rendering including displaying the customized feature in the relevant modifiable videos.
Type: Grant
Filed: November 10, 2021
Date of Patent: February 6, 2024
Assignee: Snap Inc.
Inventors: Jeremy Voss, Victor Shaburov, Ivan Babanin, Aleksandr Mashrabov, Roman Golobokov
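The pipeline in the abstract above (analyze recent messages, derive a context, select matching videos, swap the preset feature for a customized one) can be sketched with toy data. None of this is from the patent: the keyword-counting "context", the tag-based selection, and the `_hat` feature naming are all hypothetical simplifications.

```python
# Illustrative sketch of context-driven selection and feature replacement.
# All names and data are hypothetical, not from the patent.

def determine_context(recent_messages):
    """Naive context detection: the most frequent keyword wins."""
    counts = {}
    for msg in recent_messages:
        for word in msg.lower().split():
            word = word.strip("!?.,")  # drop trailing punctuation
            if word:
                counts[word] = counts.get(word, 0) + 1
    return max(counts, key=counts.get) if counts else None

def select_videos(context, database):
    """Pick videos tagged with the detected context and replace the
    preset modifiable feature with a context-specific one."""
    selected = []
    for video in database:
        if context in video["tags"]:
            # copy the record so the stored preset is left untouched
            selected.append(dict(video, feature=f"{context}_hat"))
    return selected

db = [
    {"id": 1, "tags": ["birthday", "party"], "feature": "plain_hat"},
    {"id": 2, "tags": ["holiday"], "feature": "plain_hat"},
]
ctx = determine_context(["happy birthday!!", "birthday dinner tonight?"])
videos = select_videos(ctx, db)
```

A production system would infer context with a trained classifier rather than keyword counts, but the select-then-replace shape stays the same.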
-
Patent number: 11875600
Abstract: The subject technology captures first image data by a computing device, the first image data comprising a target face of a target actor and facial expressions of the target actor, the facial expressions including lip movements. The subject technology generates, based at least in part on frames of a source media content, sets of source pose parameters. The subject technology receives a selection of a particular facial expression from a set of facial expressions. The subject technology generates, based at least in part on sets of source pose parameters and the selection of the particular facial expression, an output media content. The subject technology provides augmented reality content based at least in part on the output media content for display on the computing device.
Type: Grant
Filed: March 29, 2022
Date of Patent: January 16, 2024
Assignee: Snap Inc.
Inventors: Roman Golobokov, Alexandr Marinenko, Aleksandr Mashrabov, Aleksei Bromot, Grigoriy Tkachenko
-
Patent number: 11869164
Abstract: The technical problem of creating an augmented reality (AR) experience that is both accessible from a camera view user interface provided with a messaging client and able to perform a modification based on a previously captured image of a user is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be animation or live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Grant
Filed: May 25, 2022
Date of Patent: January 9, 2024
Assignee: Snap Inc.
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Patent number: 11842433
Abstract: Described are systems and methods for generating personalized videos with customized text messages. An example method commences with receiving an input text and a video template. The video template includes a sequence of frame images and text parameters defining an animation of the input text for the sequence of frame images. The method continues with rendering an output video. The output video includes the sequence of frame images featuring the input text rendered according to the text parameters. The method further includes providing a user with an option to change at least one text parameter of the text parameters. The method continues with dynamically changing, by the at least one computing resource, the input text according to the at least one text parameter. The method further includes providing, by the at least one computing resource, the output video to at least one further computing resource via a communication chat.
Type: Grant
Filed: March 17, 2022
Date of Patent: December 12, 2023
Assignee: Snap Inc.
Inventors: Alexander Mashrabov, Victor Shaburov, Sofia Savinova, Dmitriy Matov, Andrew Osipov, Ivan Semenov, Roman Golobokov
-
Publication number: 20230290098
Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method includes receiving a sequence of frame images, face area parameters corresponding to positions of a face area in a frame image of the sequence of frame images, and facial landmark parameters corresponding to the frame image of the sequence of frame images, receiving an image of a source face, modifying, based on the facial landmark parameters corresponding to the frame image, the image of the source face to obtain a further face image featuring the source face adopting a facial expression corresponding to the facial landmark parameters, and inserting the further face image into the frame image at a position determined by the face area parameters corresponding to the frame image, thereby generating an output frame of an output video.
Type: Application
Filed: May 22, 2023
Publication date: September 14, 2023
Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
-
Patent number: 11694417
Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of skin masks defining positions of a skin area of a part of the at least one body in the frame images. The method may continue with receiving an image of a source face. The method may further include determining color data associated with the source face. The method may include recoloring the skin area of the part of the at least one body in the frame image and inserting the image of the source face into the frame image at a position determined by face area parameters corresponding to the frame image to generate an output frame of an output video.
Type: Grant
Filed: February 18, 2022
Date of Patent: July 4, 2023
Assignee: Snap Inc.
Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
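The recoloring step in the abstract above can be illustrated with a toy version: shift the template's masked skin pixels toward the mean color of the source face so the body's skin tone matches the inserted face. This is a sketch under assumed data structures (a pixel dict and a mask set); the patent does not specify the color-transfer method.

```python
# Illustrative skin-recoloring sketch: add the per-channel offset between
# the source face's mean color and the template skin's mean color to every
# masked pixel. Structures and the offset scheme are hypothetical.

def mean_color(pixels):
    """Channel-wise mean of a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def recolor_skin(frame, skin_mask, source_face_pixels):
    """frame: {(x, y): (r, g, b)}; skin_mask: set of (x, y) positions
    taken from the template's skin mask for this frame."""
    target_mean = mean_color([frame[p] for p in skin_mask])
    source_mean = mean_color(source_face_pixels)
    offset = tuple(s - t for s, t in zip(source_mean, target_mean))
    out = dict(frame)  # leave the template frame untouched
    for p in skin_mask:
        out[p] = tuple(min(255, max(0, round(c + d)))
                       for c, d in zip(frame[p], offset))
    return out

frame = {(0, 0): (200, 150, 120), (0, 1): (210, 160, 130), (1, 0): (10, 10, 10)}
mask = {(0, 0), (0, 1)}
face = [(180, 130, 100)]  # sampled source-face pixels
out = recolor_skin(frame, mask, face)
```

Pixels outside the mask (here the background pixel at `(1, 0)`) are left unchanged, which is the point of shipping a per-frame skin mask in the template.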
-
Publication number: 20220319230
Abstract: The subject technology captures first image data by a computing device, the first image data comprising a target face of a target actor and facial expressions of the target actor, the facial expressions including lip movements. The subject technology generates, based at least in part on frames of a source media content, sets of source pose parameters. The subject technology receives a selection of a particular facial expression from a set of facial expressions. The subject technology generates, based at least in part on sets of source pose parameters and the selection of the particular facial expression, an output media content. The subject technology provides augmented reality content based at least in part on the output media content for display on the computing device.
Type: Application
Filed: March 29, 2022
Publication date: October 6, 2022
Inventors: Roman Golobokov, Alexandr Marinenko, Aleksandr Mashrabov
-
Publication number: 20220321804
Abstract: The subject technology receives at least one signal from a computing device, the at least one signal comprising at least one of a current time, battery power, sensor information, or location information. The subject technology generates a digital sticker, the digital sticker including graphical content indicating information based at least in part on the at least one signal and media content including an image of a target face, the image of the target face being modified based on at least one of sets of source pose parameters to mimic at least one of positions of a head of a source actor and at least one of facial expressions of the source actor. The subject technology provides augmented reality content for display on a computing device, the augmented reality content including the digital sticker as an overlay on at least a portion of the augmented reality content.
Type: Application
Filed: March 28, 2022
Publication date: October 6, 2022
Inventors: Nikita Demidov, Roman Golobokov, Alina Melnyk, Jeremy Baker Voss
-
Publication number: 20220292794
Abstract: The technical problem of creating an augmented reality (AR) experience that is both accessible from a camera view user interface provided with a messaging client and able to perform a modification based on a previously captured image of a user is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be animation or live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Application
Filed: May 25, 2022
Publication date: September 15, 2022
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Publication number: 20220207812
Abstract: Described are systems and methods for generating personalized videos with customized text messages. An example method commences with receiving an input text and a video template. The video template includes a sequence of frame images and text parameters defining an animation of the input text for the sequence of frame images. The method continues with rendering an output video. The output video includes the sequence of frame images featuring the input text rendered according to the text parameters. The method further includes providing a user with an option to change at least one text parameter of the text parameters. The method continues with dynamically changing, by the at least one computing resource, the input text according to the at least one text parameter. The method further includes providing, by the at least one computing resource, the output video to at least one further computing resource via a communication chat.
Type: Application
Filed: March 17, 2022
Publication date: June 30, 2022
Inventors: Alexander Mashrabov, Victor Shaburov, Sofia Savinova, Dmitriy Matov, Andrew Osipov, Ivan Semenov, Roman Golobokov
-
Patent number: 11354872
Abstract: The technical problem of creating an augmented reality (AR) experience that is both accessible from a camera view user interface provided with a messaging client and able to perform a modification based on a previously captured image of a user is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be animation or live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Grant
Filed: November 11, 2020
Date of Patent: June 7, 2022
Assignee: Snap Inc.
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Publication number: 20220172449
Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of skin masks defining positions of a skin area of a part of the at least one body in the frame images. The method may continue with receiving an image of a source face. The method may further include determining color data associated with the source face. The method may include recoloring the skin area of the part of the at least one body in the frame image and inserting the image of the source face into the frame image at a position determined by face area parameters corresponding to the frame image to generate an output frame of an output video.
Type: Application
Filed: February 18, 2022
Publication date: June 2, 2022
Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
-
Publication number: 20220148276
Abstract: The technical problem of creating an augmented reality (AR) experience that is both accessible from a camera view user interface provided with a messaging client and able to perform a modification based on a previously captured image of a user is addressed by providing an AR component. When a user, while accessing the messaging client, engages a user-selectable element representing the AR component in the camera view user interface, the messaging system loads the AR component in the messaging client. The AR component comprises a target media content object, which can be animation or live-action video. The loaded AR component accesses a portrait image associated with the user and modifies the target media content using the portrait image. The resulting target media content object is displayed in the camera view user interface.
Type: Application
Filed: November 11, 2020
Publication date: May 12, 2022
Inventors: Roman Golobokov, Aleksandr Mashrabov, Dmitry Matov, Jeremy Baker Voss
-
Patent number: 11308677
Abstract: Described are systems and methods for generating personalized videos with customized text messages. An example method may commence with receiving a video template. The video template may include a sequence of frame images and preset text parameters defining an animation of a text. The method may continue with generating a configuration file based on the text and the preset text parameters. The configuration file may include text parameters defining rendering the text for each of the frame images. The method may further include receiving an input text and rendering an output video comprising the sequence of frame images featuring the input text rendered according to the text parameters. The rendering may be performed based on the configuration file. The method may continue with sending the output video to a further computing device via a communication chat.
Type: Grant
Filed: October 23, 2019
Date of Patent: April 19, 2022
Assignee: Snap Inc.
Inventors: Alexander Mashrabov, Victor Shaburov, Sofia Savinova, Dmitriy Matov, Andrew Osipov, Ivan Semenov, Roman Golobokov
-
Publication number: 20220100534
Abstract: A preview personalization system for generating and presenting previews of personalized media content at a client device, wherein the previews may be personalized, in real-time, based on one or more attributes which may include user profile data and contextual data accessed by the client device, according to certain example embodiments.
Type: Application
Filed: August 11, 2021
Publication date: March 31, 2022
Inventors: Roman Golobokov, Sergei Vasilenko
-
Patent number: 11288880
Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of facial landmark parameters defining positions of facial landmarks in the frame images. The method may continue with receiving an image of a source face. The method may further include generating an output video. The generation of the output video may include modifying a frame image of the sequence of frame images. Specifically, the image of the source face may be modified to obtain a further image featuring the source face adopting a facial expression corresponding to the facial landmark parameters. The further image may be inserted into the frame image at a position determined by face area parameters corresponding to the frame image.Type: Grant
Filed: October 23, 2019
Date of Patent: March 29, 2022
Assignee: Snap Inc.
Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobokov
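The final compositing step in the abstract above (inserting the re-posed source face into the frame at the position given by the face area parameters) reduces to a positioned paste. The sketch below elides the landmark-driven re-posing entirely and shows only the insertion; the pixel-grid representation and function name are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of per-frame face insertion: paste the (already modified)
# source face image into the template frame at the face-area position.
# Data structures and names are hypothetical.

def insert_face(frame, face_image, face_area):
    """frame: 2-D list of pixel values; face_image: 2-D list of pixel
    values; face_area: (top, left) insertion position for this frame."""
    top, left = face_area
    out = [row[:] for row in frame]  # copy so the template frame is untouched
    for dy, face_row in enumerate(face_image):
        for dx, pixel in enumerate(face_row):
            out[top + dy][left + dx] = pixel
    return out

frame = [[0] * 4 for _ in range(4)]   # 4x4 template frame of background pixels
face = [[1, 1], [1, 1]]               # 2x2 stand-in for the warped source face
output_frame = insert_face(frame, face, face_area=(1, 2))
```

Because the face area parameters vary per frame, repeating this over the template's frame sequence makes the source face track the original actor's head through the output video; a real implementation would also blend edges rather than overwrite pixels outright.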