Patents by Inventor Victor Shaburov

Victor Shaburov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200410227
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Application
    Filed: September 16, 2020
    Publication date: December 31, 2020
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
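A minimal sketch of the dynamic-threshold idea described in the abstract above (the function names, the direction-based scaling rule, and all numbers are illustrative assumptions, not taken from the patent):

```python
import math

def dynamic_threshold(base_threshold, direction_deg, sensitivity=0.2):
    """Scale a histogram threshold by the direction of the tracked portion.

    Hypothetical rule: the further the portion points away from vertical,
    the looser the threshold, since foreshortening spreads its histogram.
    """
    deviation = abs(math.sin(math.radians(direction_deg)))
    return base_threshold * (1.0 - sensitivity * deviation)

def segment_pixels(histogram_values, threshold):
    """Mark pixels whose histogram value clears the threshold."""
    return [v >= threshold for v in histogram_values]

# At 45 degrees the threshold relaxes, admitting mid-valued pixels.
mask = segment_pixels([0.1, 0.5, 0.9], dynamic_threshold(0.5, 45.0))
```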
  • Patent number: 10834040
    Abstract: Systems and methods are provided for sending serialized data for an interactive message comprising a first session data item to a second computing device to render the interactive message using the first session data item and display the rendered interactive message comprising a first media content item associated with a first interactive object and receiving, from the second computing device, a second media content item associated with a second interactive object of the interactive message. The systems and methods further provide for generating a second session data item for the second interactive object of the interactive message, adding the second session data item to the serialized data, and sending the serialized data to a third computing device to render the interactive message using the serialized data and display the rendered interactive message comprising the first media content item and the second media content item.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: November 10, 2020
    Assignee: Snap Inc.
    Inventors: Grygoriy Kozhemiak, Victor Shaburov, Trevor Stephenson
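The pass-along session flow above can be sketched as follows (the JSON payload shape and field names are illustrative assumptions, not the patent's actual serialization format):

```python
import json

def add_session_item(serialized, user_id, media_ref):
    """Append a session data item for a newly captured media content item."""
    data = json.loads(serialized)
    data["sessions"].append({"user": user_id, "media": media_ref})
    return json.dumps(data)

def render(serialized):
    """Render the interactive message from all accumulated session items."""
    data = json.loads(serialized)
    return [item["media"] for item in data["sessions"]]

# The first device seeds the message; each later device adds its own item
# before forwarding the serialized data onward.
payload = json.dumps({"sessions": []})
payload = add_session_item(payload, "user_a", "selfie_a.jpg")
payload = add_session_item(payload, "user_b", "selfie_b.jpg")
```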
  • Patent number: 10810418
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: October 20, 2020
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
  • Publication number: 20200258307
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Application
    Filed: January 16, 2020
    Publication date: August 13, 2020
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
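A toy version of the context-input selection described above, using planar coordinates in place of real geolocation (the place records and the distance cutoff are hypothetical):

```python
import math

def pick_ar_elements(device_loc, places, max_dist=0.5):
    """Select preconfigured AR elements for physical places near the device."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return [p["element"] for p in places if dist(device_loc, p["loc"]) <= max_dist]

# Each place carries an element preconfigured to its own design scheme.
places = [
    {"loc": (0.0, 0.0), "element": "cafe_logo_overlay"},
    {"loc": (5.0, 5.0), "element": "museum_banner"},
]
selected = pick_ar_elements((0.1, 0.2), places)
```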
  • Publication number: 20200236301
    Abstract: Disclosed are systems and methods for providing personalized videos featuring multiple persons. An example method includes receiving a user selection of a video having at least one frame with at least a target face and at least one further target face and receiving an image of a source face and a further image of a further source face. The method further includes modifying the image of the source face to generate an image of a modified source face and modifying the further image of the further source face to generate an image of a modified further source face. In the at least one frame of the video, the target face is replaced with the image of the modified source face and the at least one further target face is replaced with the image of the modified further source face to generate a personalized video. The personalized video is sent to at least one further user.
    Type: Application
    Filed: October 7, 2019
    Publication date: July 23, 2020
    Inventors: Victor Shaburov, Alexander Mashrabov, Grigoriy Tkachenko, Ivan Semenov
  • Publication number: 20200234483
    Abstract: Described are systems and methods for generating personalized videos with customized text messages. An example method may commence with receiving a video template. The video template may include a sequence of frame images and preset text parameters defining an animation of a text. The method may continue with generating a configuration file based on the text and the preset text parameters. The configuration file may include text parameters defining rendering the text for each of the frame images. The method may further include receiving an input text and rendering an output video comprising the sequence of frame images featuring the input text rendered according to the text parameters. The rendering may be performed based on the configuration file. The method may continue with sending the output video to a further computing device via a communication chat.
    Type: Application
    Filed: October 23, 2019
    Publication date: July 23, 2020
    Inventors: Alexander Mashrabov, Victor Shaburov, Sofia Savinova, Dmitriy Matov, Andrew Osipov, Ivan Semenov, Roman Golobkov
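A simplified sketch of the configuration-file workflow above, animating only a text scale parameter (the parameter names and the linear interpolation are assumptions for illustration; the patent does not specify them):

```python
def build_config(text_params, num_frames):
    """Expand preset text parameters into per-frame rendering parameters."""
    start, end = text_params["start_scale"], text_params["end_scale"]
    step = (end - start) / max(num_frames - 1, 1)
    return [{"frame": i, "scale": start + step * i} for i in range(num_frames)]

def render_video(frames, input_text, config):
    """Attach the input text to each frame as the configuration dictates."""
    return [
        {"image": frame, "text": input_text, "scale": cfg["scale"]}
        for frame, cfg in zip(frames, config)
    ]

config = build_config({"start_scale": 1.0, "end_scale": 2.0}, 3)
video = render_video(["f0", "f1", "f2"], "Happy Birthday!", config)
```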
  • Publication number: 20200234508
    Abstract: Disclosed are systems and methods for template-based generation of personalized videos. An example method may commence with receiving video configuration data including a sequence of frame images, a sequence of face area parameters defining positions of a face area in the frame images, and a sequence of facial landmark parameters defining positions of facial landmarks in the frame images. The method may continue with receiving an image of a source face. The method may further include generating an output video. The generation of the output video may include modifying a frame image of the sequence of frame images. Specifically, the image of the source face may be modified to obtain a further image featuring the source face adopting a facial expression corresponding to the facial landmark parameters. The further image may be inserted into the frame image at a position determined by face area parameters corresponding to the frame image.
    Type: Application
    Filed: October 23, 2019
    Publication date: July 23, 2020
    Inventors: Victor Shaburov, Alexander Mashrabov, Dmitriy Matov, Sofia Savinova, Alexey Pchelnikov, Roman Golobkov
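The template-based generation loop above can be outlined as follows; the `warp` callable standing in for expression transfer, and all data shapes, are hypothetical placeholders rather than the patented method:

```python
def generate_output_video(frames, face_areas, landmarks, source_face, warp):
    """Insert a warped source face into each frame at its face-area position."""
    out = []
    for frame, area, marks in zip(frames, face_areas, landmarks):
        face = warp(source_face, marks)  # adopt the frame's facial expression
        out.append({"frame": frame, "face": face, "position": area})
    return out

# Toy warp: tag the face with the expression implied by the landmarks.
warp = lambda face, marks: f"{face}:{marks['expression']}"
video = generate_output_video(
    frames=["f0", "f1"],
    face_areas=[(10, 20), (12, 21)],
    landmarks=[{"expression": "smile"}, {"expression": "laugh"}],
    source_face="src",
    warp=warp,
)
```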
  • Publication number: 20200236297
    Abstract: Disclosed are systems and methods for providing personalized videos. An example method includes storing one or more preprocessed videos. The one or more preprocessed videos may include at least one frame with at least a target face. The method may continue with receiving an image of a source face, for example, by receiving a user selection of a further image and segmenting the further image into portions including the source face and a background. The method may then proceed with modifying the one or more preprocessed videos to generate one or more personalized videos. The modification may include modifying the image of the source face to generate an image of a modified source face. The modified source face may adopt a facial expression of the target face. The modification may further include replacing the at least one target face with the image of the modified source face.
    Type: Application
    Filed: October 7, 2019
    Publication date: July 23, 2020
    Inventors: Victor Shaburov, Alexander Mashrabov, Grigoriy Tkachenko, Ivan Semenov
  • Publication number: 20200106729
    Abstract: Systems and methods are provided for receiving a first media content item associated with a first interactive object of an interactive message, receiving a second media content item associated with a second interactive object of the interactive message, generating a third media content item based on the first media content item and second media content item, wherein the third media content item comprises combined features of the first media content item and the second media content item, and causing display of the generated third media content item.
    Type: Application
    Filed: December 4, 2019
    Publication date: April 2, 2020
    Inventors: Grygoriy Kozhemiak, Oleksandr Pyshchenko, Victor Shaburov, Trevor Stephenson, Aleksei Stoliar
  • Patent number: 10599917
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: March 24, 2020
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
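A toy nearest-match classifier in the spirit of the abstract above, reducing a mesh deformation to two made-up features; the reference values and the alerting rule are invented for illustration:

```python
REFERENCE_EMOTIONS = {
    # Hypothetical reference deformations: (mouth_curve, brow_raise)
    "happy": (0.8, 0.2),
    "angry": (-0.6, -0.7),
    "neutral": (0.0, 0.0),
}

def classify_emotion(deformation):
    """Match a mesh deformation to the nearest reference facial emotion."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_EMOTIONS, key=lambda e: dist(deformation, REFERENCE_EMOTIONS[e]))

def emotion_alert(deformation, flagged=("angry",)):
    """Generate a communication bearing data associated with the emotion."""
    emotion = classify_emotion(deformation)
    return {"emotion": emotion, "alert": emotion in flagged}
```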
  • Patent number: 10567321
    Abstract: Systems and methods are provided for receiving a first media content item associated with a first interactive object of an interactive message, receiving a second media content item associated with a second interactive object of the interactive message, generating a third media content item based on the first media content item and second media content item, wherein the third media content item comprises combined features of the first media content item and the second media content item, and causing display of the generated third media content item.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: February 18, 2020
    Assignee: Snap Inc.
    Inventors: Grygoriy Kozhemiak, Oleksandr Pyshchenko, Victor Shaburov, Trevor Stephenson, Aleksei Stoliar
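A minimal sketch of combining two media content items into a third (the layer-based representation is an assumption; the patent does not describe its media format):

```python
def combine_media(first, second):
    """Compose a third media content item from two captured items.

    Toy combination: stack the second item's layers over the first's.
    """
    return {
        "layers": first["layers"] + second["layers"],
        "sources": [first["id"], second["id"]],
    }

third = combine_media(
    {"id": "m1", "layers": ["photo_a"]},
    {"id": "m2", "layers": ["sticker_b"]},
)
```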
  • Patent number: 10565795
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: February 18, 2020
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
  • Publication number: 20200053034
    Abstract: Systems and methods are provided for sending serialized data for an interactive message comprising a first session data item to a second computing device to render the interactive message using the first session data item and display the rendered interactive message comprising a first media content item associated with a first interactive object and receiving, from the second computing device, a second media content item associated with a second interactive object of the interactive message. The systems and methods further provide for generating a second session data item for the second interactive object of the interactive message, adding the second session data item to the serialized data, and sending the serialized data to a third computing device to render the interactive message using the serialized data and display the rendered interactive message comprising the first media content item and the second media content item.
    Type: Application
    Filed: October 16, 2019
    Publication date: February 13, 2020
    Inventors: Grygoriy Kozhemiak, Victor Shaburov, Trevor Stephenson
  • Patent number: 10523606
    Abstract: Systems and methods are provided for sending serialized data for an interactive message comprising a first session data item to a second computing device to render the interactive message using the first session data item and display the rendered interactive message comprising a first media content item associated with a first interactive object and receiving, from the second computing device, a second media content item associated with a second interactive object of the interactive message. The systems and methods further provide for generating a second session data item for the second interactive object of the interactive message, adding the second session data item to the serialized data, and sending the serialized data to a third computing device to render the interactive message using the serialized data and display the rendered interactive message comprising the first media content item and the second media content item.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: December 31, 2019
    Assignee: Snap Inc.
    Inventors: Grygoriy Kozhemiak, Victor Shaburov, Trevor Stephenson
  • Patent number: 10496947
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: December 3, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
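The work quality metric above might be aggregated as in this sketch; the "share of frames without a negative emotion" formula is a hypothetical stand-in, not the patented metric:

```python
from collections import Counter

def work_quality(emotions, negative=("angry", "annoyed", "distressed")):
    """Aggregate per-frame emotions into a work quality parameter in [0, 1].

    Hypothetical metric: fraction of frames free of a negative emotion.
    """
    counts = Counter(emotions)
    bad = sum(counts[e] for e in negative)
    return 1.0 - bad / len(emotions)

score = work_quality(["neutral", "happy", "angry", "neutral"])
```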
  • Patent number: 10438631
    Abstract: A method for real time video processing for retouching an object in a video is presented. The method includes providing an object in the video stream, where the object is at least partially and at least occasionally presented in frames of the video. The method sets a degree of retouching and generates a list of at least one element of the object selected based on a request of retouching and the degree of retouching. The method detects the at least one element of the object in the video and parameters of the at least one element and calculates new parameters of the at least one element according to the degree of retouching. Characteristic points are detected for each of the at least one element of the object and a mesh is generated based on the characteristic points for each of the at least one element of the object. The at least one element of the object in the video is tracked by aligning the mesh for each of the at least one element with a position of the corresponding each of the at least one element.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: October 8, 2019
    Assignee: Snap Inc.
    Inventors: Elena Shaburova, Victor Shaburov
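The "calculate new parameters according to the degree of retouching" step above could be a simple blend, as in this sketch (the parameter names and target values are invented; the patent does not define them):

```python
def retouch_parameters(params, degree):
    """Blend element parameters toward target values by the retouch degree.

    `params` maps each element parameter to (current_value, target_value);
    degree 0.0 leaves the element untouched, 1.0 applies full retouching.
    """
    return {
        name: value + degree * (target - value)
        for name, (value, target) in params.items()
    }

new = retouch_parameters(
    {"smoothness": (0.2, 1.0), "blemish_opacity": (0.9, 0.0)},
    degree=0.5,
)
```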
  • Publication number: 20190207884
    Abstract: Systems and methods are provided for sending serialized data for an interactive message comprising a first session data item to a second computing device to render the interactive message using the first session data item and display the rendered interactive message comprising a first media content item associated with a first interactive object and receiving, from the second computing device, a second media content item associated with a second interactive object of the interactive message. The systems and methods further provide for generating a second session data item for the second interactive object of the interactive message, adding the second session data item to the serialized data, and sending the serialized data to a third computing device to render the interactive message using the serialized data and display the rendered interactive message comprising the first media content item and the second media content item.
    Type: Application
    Filed: January 2, 2018
    Publication date: July 4, 2019
    Inventors: Grygoriy Kozhemiak, Victor Shaburov, Trevor Stephenson
  • Publication number: 20190207885
    Abstract: Systems and methods are provided for receiving a first media content item associated with a first interactive object of an interactive message, receiving a second media content item associated with a second interactive object of the interactive message, generating a third media content item based on the first media content item and second media content item, wherein the third media content item comprises combined features of the first media content item and the second media content item, and causing display of the generated third media content item.
    Type: Application
    Filed: December 31, 2018
    Publication date: July 4, 2019
    Inventors: Grygoriy Kozhemiak, Oleksandr Pyshchenko, Victor Shaburov, Trevor Stephenson, Aleksei Stoliar
  • Publication number: 20190156112
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: January 29, 2019
    Publication date: May 23, 2019
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10283162
    Abstract: A computer implemented method of triggering events in a video, the method comprising: providing a list of objects with their states and corresponding events in video such that each state from the list triggers at least one event of the corresponding events, wherein each object from the list has at least one state triggering at least one event of the corresponding events from the list in video; detecting at least one object from the list that at least partially and at least occasionally is presented in frames of the video; tracking the at least one object and its state; triggering at least one event of the corresponding events from the list in video in case the state of the at least one object matches with one of its states from the list.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: May 7, 2019
    Assignee: Avatar Merger Sub II, LLC
    Inventors: Elena Shaburova, Victor Shaburov
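The state-to-event triggering described in the abstract above can be sketched as a lookup over tracked object states (the trigger table and state names are illustrative assumptions):

```python
def check_triggers(tracked_state, triggers):
    """Fire the events whose object/state pair matches the tracked state."""
    events = []
    for (obj, state), evts in triggers.items():
        if tracked_state.get(obj) == state:
            events.extend(evts)
    return events

# Each object/state pair from the list triggers its corresponding events.
triggers = {
    ("mouth", "open"): ["play_sound"],
    ("eyes", "closed"): ["apply_filter", "pause"],
}
fired = check_triggers({"mouth": "open", "eyes": "open"}, triggers)
```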