Patents by Inventor Victor Shaburov

Victor Shaburov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11089238
    Abstract: Disclosed are systems and methods for providing personalized videos featuring multiple persons. An example method includes receiving a user selection of a video having at least one frame with at least a target face and at least one further target face, and receiving an image of a source face and a further image of a further source face. The method further includes modifying the image of the source face to generate an image of a modified source face and modifying the further image of the further source face to generate an image of a modified further source face. In the at least one frame of the video, the target face is replaced with the image of the modified source face and the at least one further target face is replaced with the image of the modified further source face to generate a personalized video. The personalized video is sent to at least one further user.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 10, 2021
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Alexander Mashrabov, Grigoriy Tkachenko, Ivan Semenov
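The multi-face replacement flow this abstract describes can be sketched in simplified form. Everything below is an illustrative assumption (the function names, the dictionary-based frame representation, and the expression-transfer step), not Snap's actual implementation:

```python
# Hypothetical sketch: each target face slot in a frame is replaced with a
# source face modified to adopt the target's expression.

def modify_source_face(source_face, target_expression):
    """Stand-in for the modification step: pair the source face with the
    expression taken from the target face."""
    return {"face": source_face, "expression": target_expression}

def personalize_frame(frame, source_faces):
    """Replace each target face in the frame with a modified source face."""
    personalized = dict(frame)
    personalized["faces"] = [
        modify_source_face(src, tgt["expression"])
        for tgt, src in zip(frame["faces"], source_faces)
    ]
    return personalized

def personalize_video(frames, source_faces):
    return [personalize_frame(f, source_faces) for f in frames]

frames = [{"faces": [{"id": "target_a", "expression": "smile"},
                     {"id": "target_b", "expression": "wink"}]}]
video = personalize_video(frames, ["user_photo", "friend_photo"])
```

The key structural point, mirrored from the abstract, is that each target face maps to its own (independently modified) source face, so two users can appear in one personalized video.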
  • Publication number: 20210192193
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: February 10, 2021
    Publication date: June 24, 2021
    Inventors: Victor Shaburov, Yurii Monastyrshyn
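The pipeline in this abstract (detect face, locate landmarks, align a mesh, measure deformation over the image sequence, map deformation to a reference emotion) can be illustrated with a toy numeric model. The reference-emotion table, thresholds, and deformation metric below are invented for illustration; the patent does not specify them:

```python
# Toy walk-through of the mesh-deformation emotion pipeline. A face mesh is
# reduced to a list of landmark coordinates; deformation is the average
# per-point displacement between consecutive frames.

REFERENCE_EMOTIONS = {"anger": 0.9, "annoyance": 0.6, "neutral": 0.1}

def mesh_deformation(landmarks_t0, landmarks_t1):
    """Average displacement of aligned mesh points between two frames."""
    return sum(abs(a - b) for a, b in zip(landmarks_t0, landmarks_t1)) / len(landmarks_t0)

def classify_emotion(deformation):
    """Pick the reference emotion whose magnitude is closest to the observed one."""
    return min(REFERENCE_EMOTIONS, key=lambda e: abs(REFERENCE_EMOTIONS[e] - deformation))

def analyze_sequence(landmark_frames):
    """Find the strongest deformation over the sequence and report its emotion."""
    deformations = [mesh_deformation(a, b)
                    for a, b in zip(landmark_frames, landmark_frames[1:])]
    # The returned dict stands in for the "communication bearing data" of the abstract.
    return {"emotion": classify_emotion(max(deformations))}
```

A real system would use a dense 3-D face mesh and learned emotion references; this sketch only shows the shape of the data flow.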
  • Patent number: 11044217
    Abstract: Systems and methods are provided for receiving a first media content item associated with a first interactive object of an interactive message, receiving a second media content item associated with a second interactive object of the interactive message, generating a third media content item based on the first media content item and second media content item, wherein the third media content item comprises combined features of the first media content item and the second media content item, and causing display of the generated third media content item.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: June 22, 2021
    Assignee: Snap Inc.
    Inventors: Grygoriy Kozhemiak, Oleksandr Pyshchenko, Victor Shaburov, Trevor Stephenson, Aleksei Stoliar
  • Patent number: 11037372
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: June 15, 2021
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
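The selection logic this abstract outlines (pick AR elements preconfigured for physical places near the client device) can be sketched with flat 2-D coordinates. The place table, coordinate system, and proximity radius are assumptions made for this example:

```python
# Minimal sketch of context-based AR element selection: AR elements belong to
# physical places, and a place's elements are selected when the client's
# location falls within a fixed radius of that place.
import math

PLACES = [
    {"name": "coffee_shop", "location": (0.0, 0.0),
     "ar_elements": ["logo_overlay", "menu_card"]},
    {"name": "stadium", "location": (5.0, 5.0),
     "ar_elements": ["team_banner"]},
]

def select_ar_elements(client_location, radius=1.0):
    """Return preconfigured AR elements for places near the client device."""
    selected = []
    for place in PLACES:
        dx = client_location[0] - place["location"][0]
        dy = client_location[1] - place["location"][1]
        if math.hypot(dx, dy) <= radius:
            selected.extend(place["ar_elements"])
    return selected
```

Real location data would use latitude/longitude and additional context inputs (time, device state); the radius check stands in for whatever proximity logic the system actually applies.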
  • Publication number: 20210166732
    Abstract: A computer implemented method of triggering events in a video, the method comprising: providing a list of objects with their states and corresponding events in the video such that each state from the list triggers at least one event of the corresponding events, wherein each object from the list has at least one state triggering at least one event of the corresponding events from the list in the video; detecting at least one object from the list that is at least partially and at least occasionally presented in frames of the video; tracking the at least one object and its state; and triggering at least one event of the corresponding events from the list in the video when the state of the at least one object matches one of its states from the list.
    Type: Application
    Filed: February 9, 2021
    Publication date: June 3, 2021
    Inventors: Elena Shaburova, Victor Shaburov
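The state-to-event mechanism in this abstract reduces to a lookup: track each listed object's state per frame and fire the events registered for that (object, state) pair. The objects, states, and event names below are invented for illustration:

```python
# Toy version of the state -> event triggering loop. Frames are represented as
# {object: state} dicts; EVENT_TABLE plays the role of the "list of objects
# with their states and corresponding events".

EVENT_TABLE = {
    ("mouth", "open"): ["play_sound"],
    ("hand", "raised"): ["show_sticker", "pause_video"],
}

def process_frames(frames):
    """Track each object's state per frame and trigger the matching events."""
    triggered = []
    for frame in frames:
        for obj, state in frame.items():
            # Fire events only when the observed state matches a listed state.
            triggered.extend(EVENT_TABLE.get((obj, state), []))
    return triggered

events = process_frames([{"mouth": "closed"},
                         {"mouth": "open", "hand": "raised"}])
```

In a production system the per-frame states would come from a tracker running on the video; here they are supplied directly to keep the triggering logic visible.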
  • Patent number: 10963679
    Abstract: Methods and systems for recognizing emotions in video are disclosed. One example method includes the steps of receiving a video including images, detecting a face of an individual in the images, mapping the detected face to a model including at least two separated points in space corresponding to detectable emotions, each of the at least two separated points in space representing a plurality of example faces corresponding to one of the detectable emotions, and determining the emotion of the individual from the detectable emotions based on the proximity of the detected face to the at least two separated points in space.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: March 30, 2021
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
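This abstract describes a nearest-point model: each detectable emotion is a point in a feature space summarizing many example faces, and the detected face takes the emotion of the closest point. A 2-D feature space and the specific points below are assumptions for illustration:

```python
# Sketch of proximity-based emotion classification: map a face to feature
# coordinates, then return the emotion whose reference point is nearest.
import math

EMOTION_POINTS = {      # each point stands for a cluster of example faces
    "happy": (1.0, 1.0),
    "sad": (-1.0, -1.0),
}

def classify_face(face_features):
    """Return the emotion whose reference point is closest to the face."""
    return min(EMOTION_POINTS,
               key=lambda e: math.dist(face_features, EMOTION_POINTS[e]))
```

Note the contrast with the mesh-deformation approach in the entries above: here emotion is read from a static position in feature space rather than from motion over a sequence of frames.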
  • Patent number: 10950271
    Abstract: A computer implemented method of triggering events in a video, the method comprising: providing a list of objects with their states and corresponding events in the video such that each state from the list triggers at least one event of the corresponding events, wherein each object from the list has at least one state triggering at least one event of the corresponding events from the list in the video; detecting at least one object from the list that is at least partially and at least occasionally presented in frames of the video; tracking the at least one object and its state; and triggering at least one event of the corresponding events from the list in the video when the state of the at least one object matches one of its states from the list.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: March 16, 2021
    Assignee: Snap Inc.
    Inventors: Elena Shaburova, Victor Shaburov
  • Patent number: 10949655
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: March 16, 2021
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10943371
    Abstract: A system for customizing soundtracks and hairstyles in modifiable videos of a multimedia messaging application (MMA) is provided. In one example embodiment, the system includes a processor and a memory storing processor-executable codes, wherein the processor is configured to receive, via the MMA, a modifiable video and an image of a user including an image of a face and an image of hair; determine that the image of hair is modifiable; modify the image of hair and generate a further image including the modified image of hair and the image of the face; generate, based on the further image and the modifiable video, a personalized video, wherein the personalized video includes a part of the further image of the user and a part of the modifiable video, and add a soundtrack to the personalized video based on predetermined criteria.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 9, 2021
    Assignee: Snap Inc.
    Inventors: Jeremy Voss, Victor Shaburov, Ivan Semenov, Diana Maximova, Alina Berezhko
  • Publication number: 20210014183
    Abstract: Systems and methods are provided for sending serialized data for an interactive message comprising a first session data item to a second computing device to render the interactive message using the first session data item and display the rendered interactive message comprising a first media content item associated with a first interactive object and receiving, from the second computing device, a second media content item associated with a second interactive object of the interactive message. The systems and methods further provide for generating a second session data item for the second interactive object of the interactive message, adding the second session data item to the serialized data, and sending the serialized data to a third computing device to render the interactive message using the serialized data and display the rendered interactive message comprising the first media content item and the second media content item.
    Type: Application
    Filed: September 28, 2020
    Publication date: January 14, 2021
    Inventors: Grygoriy Kozhemiak, Victor Shaburov, Trevor Stephenson
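The session-data flow in this abstract (device A serializes its session item, device B appends its own, device C renders from the combined data) can be sketched with JSON. JSON and the field names are assumptions for this example, not the actual wire format:

```python
# Rough sketch of accumulating session data items across devices for an
# interactive message. Each device appends its contribution to the serialized
# payload before forwarding it.
import json

def add_session_item(serialized, item):
    """Append a new session data item to the serialized interactive message."""
    data = json.loads(serialized)
    data["session_items"].append(item)
    return json.dumps(data)

# Device A produces the first session data item; device B adds the second.
first = json.dumps({"session_items": [{"object": "first", "media": "photo_a"}]})
combined = add_session_item(first, {"object": "second", "media": "video_b"})
```

A third device receiving `combined` can render both media content items from the one payload, which is the point of carrying session data items in the serialized message.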
  • Publication number: 20200410227
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Application
    Filed: September 16, 2020
    Publication date: December 31, 2020
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
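The key idea in this abstract is that the histogram threshold used to mark object pixels is adjusted dynamically based on the detected direction of the object part. The threshold values and the direction rule below are invented purely to make the mechanism concrete:

```python
# Illustrative sketch of direction-dependent histogram thresholding for
# segmentation. Pixels are plain intensity values; direction is in degrees.

def dynamic_threshold(direction, base=128):
    """Adjust the intensity threshold based on the part's orientation.

    Assumption for illustration: near-vertical parts (60-120 degrees) get a
    looser (lower) threshold so more pixels count as the object."""
    return base - 20 if 60 <= direction <= 120 else base

def segment(pixels, direction):
    """Mark pixels whose intensity exceeds the direction-adjusted threshold."""
    t = dynamic_threshold(direction)
    return [p > t for p in pixels]
```

The segmented region would then be replaced with a graphical interface element aligned to the same direction, per the abstract.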
  • Patent number: 10834040
    Abstract: Systems and methods are provided for sending serialized data for an interactive message comprising a first session data item to a second computing device to render the interactive message using the first session data item and display the rendered interactive message comprising a first media content item associated with a first interactive object and receiving, from the second computing device, a second media content item associated with a second interactive object of the interactive message. The systems and methods further provide for generating a second session data item for the second interactive object of the interactive message, adding the second session data item to the serialized data, and sending the serialized data to a third computing device to render the interactive message using the serialized data and display the rendered interactive message comprising the first media content item and the second media content item.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: November 10, 2020
    Assignee: Snap Inc.
    Inventors: Grygoriy Kozhemiak, Victor Shaburov, Trevor Stephenson
  • Patent number: 10810418
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: October 20, 2020
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
  • Publication number: 20200258307
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Application
    Filed: January 16, 2020
    Publication date: August 13, 2020
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
  • Publication number: 20200236301
    Abstract: Disclosed are systems and methods for providing personalized videos featuring multiple persons. An example method includes receiving a user selection of a video having at least one frame with at least a target face and at least one further target face, and receiving an image of a source face and a further image of a further source face. The method further includes modifying the image of the source face to generate an image of a modified source face and modifying the further image of the further source face to generate an image of a modified further source face. In the at least one frame of the video, the target face is replaced with the image of the modified source face and the at least one further target face is replaced with the image of the modified further source face to generate a personalized video. The personalized video is sent to at least one further user.
    Type: Application
    Filed: October 7, 2019
    Publication date: July 23, 2020
    Inventors: Victor Shaburov, Alexander Mashrabov, Grigoriy Tkachenko, Ivan Semenov
  • Publication number: 20200236297
    Abstract: Disclosed are systems and methods for providing personalized videos. An example method includes storing one or more preprocessed videos. The one or more preprocessed videos may include at least one frame with at least a target face. The method may continue with receiving an image of a source face, for example, by receiving a user selection of a further image and segmenting the further image into portions including the source face and a background. The method may then proceed with modifying the one or more preprocessed videos to generate one or more personalized videos. The modification may include modifying the image of the source face to generate an image of a modified source face. The modified source face may adopt a facial expression of the target face. The modification may further include replacing the at least one target face with the image of the modified source face.
    Type: Application
    Filed: October 7, 2019
    Publication date: July 23, 2020
    Inventors: Victor Shaburov, Alexander Mashrabov, Grigoriy Tkachenko, Ivan Semenov
  • Publication number: 20200234508
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of facial landmark parameters defining positions of facial landmarks in the frame images. The method may continue with receiving an image of a source face. The method may further include generating an output video. The generation of the output video may include modifying a frame image of the sequence of frame images. Specifically, the image of the source face may be modified to obtain a further image featuring the source face adopting a facial expression corresponding to the facial landmark parameters. The further image may be inserted into the frame image at a position determined by face area parameters corresponding to the frame image.
    Type: Application
    Filed: October 23, 2019
    Publication date: July 23, 2020
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobkov
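The template-driven generation in this abstract pairs each frame image with face-area parameters (where to insert the face) and facial-landmark parameters (which expression the face should adopt). The data shapes below are assumptions chosen to show that per-frame pairing:

```python
# Simplified sketch of template-based personalized video generation: the
# configuration data drives, frame by frame, where the modified source face is
# inserted and which expression it adopts.

def render_video(config, source_face):
    """Produce output frames from parallel sequences of frame images,
    face-area parameters, and landmark parameters."""
    output = []
    for frame, area, landmarks in zip(config["frames"],
                                      config["face_areas"],
                                      config["landmarks"]):
        # The modified source face adopts the expression encoded by this
        # frame's landmark parameters.
        modified = {"face": source_face, "expression_from": landmarks}
        output.append({"frame": frame, "insert_at": area, "face": modified})
    return output

config = {"frames": ["f0", "f1"],
          "face_areas": [(10, 10), (12, 10)],
          "landmarks": ["lm0", "lm1"]}
video = render_video(config, "selfie")
```

Separating the template (frames plus parameter sequences) from the source face is what lets one template be reused for any number of users.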
  • Publication number: 20200234483
    Abstract: Described are systems and methods for generating personalized videos with customized text messages. An example method may commence with receiving a video template. The video template may include a sequence of frame images and preset text parameters defining an animation of a text. The method may continue with generating a configuration file based on the text and the preset text parameters. The configuration file may include text parameters defining rendering the text for each of the frame images. The method may further include receiving an input text and rendering an output video comprising the sequence of frame images featuring the input text rendered according to the text parameters. The rendering may be performed based on the configuration file. The method may continue with sending the output video to a further computing device via a communication chat.
    Type: Application
    Filed: October 23, 2019
    Publication date: July 23, 2020
    Inventors: Alexander Mashrabov, Victor Shaburov, Sofia Savinova, Dmitriy Matov, Andrew Osipov, Ivan Semenov, Roman Golobkov
  • Publication number: 20200106729
    Abstract: Systems and methods are provided for receiving a first media content item associated with a first interactive object of an interactive message, receiving a second media content item associated with a second interactive object of the interactive message, generating a third media content item based on the first media content item and second media content item, wherein the third media content item comprises combined features of the first media content item and the second media content item, and causing display of the generated third media content item.
    Type: Application
    Filed: December 4, 2019
    Publication date: April 2, 2020
    Inventors: Grygoriy Kozhemiak, Oleksandr Pyshchenko, Victor Shaburov, Trevor Stephenson, Aleksei Stoliar
  • Patent number: 10599917
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: March 24, 2020
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn