Patents by Inventor Yurii Monastyrshyn

Yurii Monastyrshyn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10991395
    Abstract: A computer-implemented method for real time video processing for changing a color of an object in a video, the method being performed in connection with a computerized system comprising a processing unit and a memory, the method comprising: providing an object in the video that is at least partially and at least occasionally presented in frames of the video; detecting the object in the video, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a set of node points on the created mesh based on a request for changing color, the set of node points defining an area the color of which is to be changed; and transforming the frames of the video in such a way that the object's color is changed within the defined area when the object is presented in frames of the video.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: April 27, 2021
    Assignee: Snap Inc.
    Inventors: Elena Shaburova, Yurii Monastyrshyn
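
    The color-change step described in this entry can be pictured with a short sketch. This is a minimal, hypothetical Python example assuming the feature reference points and mesh tracking are already handled by some landmark tracker; only the recoloring of the node-point region is shown, and the hue shift stands in for whatever color transformation is requested rather than being the patented method.

        # Minimal sketch: recolor the region enclosed by mesh node points.
        # Assumes node points are already tracked per frame; the hue shift
        # is an illustrative stand-in for the requested color change.
        import numpy as np
        import cv2

        def recolor_region(frame, node_points, hue_shift=30):
            """Shift the hue of pixels inside the polygon given by node_points."""
            mask = np.zeros(frame.shape[:2], dtype=np.uint8)
            cv2.fillPoly(mask, [np.asarray(node_points, dtype=np.int32)], 255)
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hsv[..., 0] = (hsv[..., 0].astype(int) + hue_shift) % 180  # OpenCV hue is 0-179
            recolored = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
            return np.where(mask[..., None] == 255, recolored, frame)
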
  • Patent number: 10963679
    Abstract: Methods and systems for recognizing emotions in video are disclosed. One example method includes the steps of receiving a video including images, detecting a face of an individual in the images, mapping the detected face to a model including at least two separated points in space corresponding to detectable emotions, each of the at least two separated points in space representing a plurality of example faces corresponding to one of the detectable emotions, and determining the emotion of the individual from the detectable emotions based on a proximity of the detected face to the at least two separated points in space.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: March 30, 2021
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
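
    A rough way to picture the "proximity to separated points in space" idea from this entry is nearest-centroid classification. The embedding vectors, emotion names, and centroid values below are toy assumptions, not the patent's model.

        # Nearest-centroid classification as a stand-in for "proximity to
        # separated points in space". Each centroid is the mean embedding
        # of example faces for one emotion; the 2-D vectors are toy values.
        import numpy as np

        def classify_emotion(face_embedding, centroids):
            """centroids: dict mapping emotion name -> mean embedding vector."""
            distances = {name: np.linalg.norm(face_embedding - vec)
                         for name, vec in centroids.items()}
            return min(distances, key=distances.get)

        centroids = {"happy": np.array([0.9, 0.1]), "angry": np.array([0.1, 0.9])}
        print(classify_emotion(np.array([0.8, 0.2]), centroids))  # -> happy
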
  • Patent number: 10949655
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: March 16, 2021
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
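
    The mesh-deformation matching in this entry can be sketched as comparing observed vertex displacements against reference deformations for known emotions. The mesh representation and the scoring rule below are illustrative assumptions.

        # Sketch: match an observed mesh deformation against reference
        # deformations for known emotions. Meshes are (N, 2) arrays of
        # vertex coordinates; the emotion set is illustrative.
        import numpy as np

        def match_emotion(neutral_mesh, observed_mesh, reference_deformations):
            deformation = observed_mesh - neutral_mesh
            scores = {emotion: np.linalg.norm(deformation - ref)
                      for emotion, ref in reference_deformations.items()}
            return min(scores, key=scores.get)  # closest reference wins
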
  • Publication number: 20210006759
    Abstract: Systems, devices, media, and methods are presented for receiving a set of images in a video stream, converting one or more images of the set of images to a set of single channel images, generating a set of approximation images from the set of single channel images, and generating a set of binarized images by thresholding the set of approximation images.
    Type: Application
    Filed: September 17, 2020
    Publication date: January 7, 2021
    Inventor: Yurii Monastyrshyn
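
    One plausible reading of this pipeline, sketched below: convert each frame to a single channel, build an approximation image (a Gaussian blur stands in here), and binarize by thresholding. The blur kernel and Otsu thresholding are assumptions, not the patented operations.

        # Hypothetical pipeline: single channel -> approximation image
        # (Gaussian blur here) -> binarization by thresholding (Otsu).
        import cv2

        def binarize(frame):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # single-channel image
            approx = cv2.GaussianBlur(gray, (9, 9), 0)      # approximation image
            _, binary = cv2.threshold(approx, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return binary
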
  • Publication number: 20200410227
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Application
    Filed: September 16, 2020
    Publication date: December 31, 2020
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
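
    The dynamically modified histogram threshold in this entry can be pictured with the sketch below: a color histogram is back-projected onto the frame, and the binarization threshold is adjusted from the estimated direction of the tracked portion. The direction-to-threshold mapping is invented purely for illustration.

        # Sketch of histogram-based segmentation with a direction-dependent
        # threshold. `hist` would come from cv2.calcHist over a sample of
        # the object; the threshold formula is invented.
        import numpy as np
        import cv2

        def segment(frame, hist, direction_deg):
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            # Dynamically modify the threshold from the estimated direction.
            threshold = 50 + 30 * abs(np.cos(np.radians(direction_deg)))
            return (backproj > threshold).astype(np.uint8) * 255
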
  • Patent number: 10812766
    Abstract: Systems, devices, media, and methods are presented for receiving a set of images in a video stream, converting one or more images of the set of images to a set of single channel images, generating a set of approximation images from the set of single channel images, and generating a set of binarized images by thresholding the set of approximation images.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: October 20, 2020
    Assignee: Snap Inc.
    Inventor: Yurii Monastyrshyn
  • Patent number: 10810418
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: October 20, 2020
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
  • Publication number: 20200186747
    Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
    Type: Application
    Filed: February 12, 2020
    Publication date: June 11, 2020
    Inventors: Yurii Monastyrshyn, Illia Tulupov
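
    A hypothetical sketch of linking a drawn graphical representation to face positions: store stroke points as offsets from a reference landmark, then re-project them against that landmark's position in each new frame so the drawing follows the face. The landmark choice and drawing style are assumptions.

        # Sketch: store a stroke as offsets from a reference landmark and
        # re-render it against that landmark's position in each frame, so
        # the drawing stays linked to the face.
        import numpy as np
        import cv2

        def anchor_stroke(stroke_px, landmark_px):
            """Convert screen-space stroke points to landmark-relative offsets."""
            return [np.subtract(p, landmark_px) for p in stroke_px]

        def render_stroke(frame, offsets, landmark_px, color=(0, 0, 255)):
            pts = np.int32([np.add(o, landmark_px) for o in offsets])
            cv2.polylines(frame, [pts], isClosed=False, color=color, thickness=2)
            return frame
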
  • Patent number: 10609324
    Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: March 31, 2020
    Assignee: Snap Inc.
    Inventors: Yurii Monastyrshyn, Illia Tulupov
  • Patent number: 10599917
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: March 24, 2020
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Publication number: 20190377458
    Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. A movement of at least a portion of the face relative to a first augmented reality element is detected; presentation of the first augmented reality element is modified, and at least one second augmented reality element is presented.
    Type: Application
    Filed: August 21, 2019
    Publication date: December 12, 2019
    Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
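
    The interaction loop in this entry might look roughly like the sketch below: when a tracked face point moves within range of the currently presented element, its presentation is modified and the next element is presented. The element structure and hit radius are invented for illustration.

        # Toy interaction loop: a tracked face point "hits" the active AR
        # element when it moves within range; the element is modified and
        # the next one is presented. Structure and radius are assumptions.
        import numpy as np

        def update_ar(face_point, elements, active_idx, hit_radius=40):
            """Return the index of the element to present next."""
            el = elements[active_idx]
            if np.linalg.norm(np.subtract(face_point, el["pos"])) < hit_radius:
                el["hit"] = True  # modify presentation of the first element
                return min(active_idx + 1, len(elements) - 1)
            return active_idx
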
  • Patent number: 10496947
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: December 3, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
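
    The step from per-frame emotions to work quality metrics could, for example, look like the sketch below; the specific metric (share of frames showing negative emotions) is an assumption, not the patent's definition.

        # Illustrative aggregation of per-frame emotion labels into a simple
        # quality parameter; the metric definition is assumed.
        from collections import Counter

        NEGATIVE = {"angry", "annoyed", "distressed"}

        def quality_metrics(frame_emotions):
            counts = Counter(frame_emotions)
            negative = sum(counts[e] for e in NEGATIVE)
            return {"negative_ratio": negative / max(len(frame_emotions), 1)}

        print(quality_metrics(["happy", "angry", "neutral", "angry"]))
        # -> {'negative_ratio': 0.5}
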
  • Publication number: 20190354344
    Abstract: An audio control system can control interactions with an application or device using keywords spoken by a user of the device. The audio control system can use machine learning models (e.g., a neural network model) trained to recognize one or more keywords. Which machine learning model is activated can depend on the active location in the application or device. Responsive to detecting keywords, different actions are performed by the device, such as navigation to a pre-specified area of the application.
    Type: Application
    Filed: May 16, 2018
    Publication date: November 21, 2019
    Inventors: Xin Chen, Yurii Monastyrshyn, Fedir Poliakov, Shubham Vij
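
    A toy sketch of the context-dependent dispatch described in this entry: the active screen selects which keyword set (standing in for a trained keyword-spotting model) is listened for, and a detected keyword maps to a navigation action. All screen names, keywords, and actions below are invented.

        # Toy dispatch: the active screen selects the keyword set; a
        # detected keyword maps to an action. All names are invented.
        KEYWORDS_BY_SCREEN = {
            "camera": {"snap": "take_photo", "flip": "switch_camera"},
            "chat":   {"send": "send_message"},
        }

        def handle_transcript(screen, transcript):
            for word, action in KEYWORDS_BY_SCREEN.get(screen, {}).items():
                if word in transcript.lower():
                    return action  # e.g., navigate to a pre-specified area
            return None

        print(handle_transcript("camera", "ok, flip it"))  # -> switch_camera
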
  • Patent number: 10430016
    Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. A movement of at least a portion of the face relative to a first augmented reality element is detected; presentation of the first augmented reality element is modified, and at least one second augmented reality element is presented.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: October 1, 2019
    Assignee: Snap Inc.
    Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
  • Publication number: 20190196663
    Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. A movement of at least a portion of the face relative to a first augmented reality element is detected; presentation of the first augmented reality element is modified, and at least one second augmented reality element is presented.
    Type: Application
    Filed: December 22, 2017
    Publication date: June 27, 2019
    Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
  • Publication number: 20190156112
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: January 29, 2019
    Publication date: May 23, 2019
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10255948
    Abstract: A computer-implemented method for real time video processing for changing a color of an object in a video, the method being performed in connection with a computerized system comprising a processing unit and a memory, the method comprising: providing an object in the video that is at least partially and at least occasionally presented in frames of the video; detecting the object in the video, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a set of node points on the created mesh based on a request for changing color, the set of node points defining an area the color of which is to be changed; and transforming the frames of the video in such a way that the object's color is changed within the defined area when the object is presented in frames of the video.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: April 9, 2019
    Assignee: Avatar Merger Sub II, LLC
    Inventors: Elena Shaburova, Yurii Monastyrshyn
  • Patent number: 10255488
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: April 9, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10235562
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: March 19, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10116901
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify image quality of background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring, changing a resolution, colors, or other parameters.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: October 30, 2018
    Assignee: Avatar Merger Sub II, LLC
    Inventors: Victor Shaburov, Yurii Monastyrshyn
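
    The background modification in this entry can be sketched as masking the face region and blurring everything outside it; below, a convex hull of landmark points stands in for the patent's virtual face mesh, and the blur kernel size is arbitrary.

        # Sketch: mask the face region (convex hull of landmark points
        # stands in for the virtual face mesh) and blur everything else.
        import numpy as np
        import cv2

        def blur_background(frame, face_points, ksize=(31, 31)):
            mask = np.zeros(frame.shape[:2], dtype=np.uint8)
            hull = cv2.convexHull(np.asarray(face_points, dtype=np.int32))
            cv2.fillConvexPoly(mask, hull, 255)
            blurred = cv2.GaussianBlur(frame, ksize, 0)
            return np.where(mask[..., None] == 255, frame, blurred)
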