Patents by Inventor Yurii Monastyrshyn

Yurii Monastyrshyn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119396
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
    Type: Application
    Filed: December 15, 2023
    Publication date: April 11, 2024
    Inventors: Victor Shaburov, Yurii Monastyrshyn
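
A minimal sketch of the metric-aggregation step described in the abstract above (and in the granted counterpart below), assuming the mesh-deformation classifier, which is not shown, has already produced one emotion label per analyzed frame; the metric names and the set of negative emotions are illustrative assumptions, not taken from the patent:

```python
from collections import Counter

# Assumed set of "negative" emotions; the patent only speaks of reference
# facial emotions and does not define this grouping.
NEGATIVE = {"anger", "disgust", "sadness", "fear"}

def work_quality_metrics(frame_emotions):
    """Aggregate per-frame emotion labels into simple work quality metrics.

    frame_emotions: list of emotion labels, one per analyzed video frame.
    Returns a dict of illustrative work quality parameters.
    """
    if not frame_emotions:
        return {"negativity_ratio": 0.0, "dominant_emotion": None}
    counts = Counter(frame_emotions)
    negative = sum(counts[e] for e in NEGATIVE)
    return {
        # Share of frames showing a negative emotion: lower is better.
        "negativity_ratio": negative / len(frame_emotions),
        # Most frequent emotion over the call segment.
        "dominant_emotion": counts.most_common(1)[0][0],
    }

# Example: a mostly neutral call with a short stretch of anger.
print(work_quality_metrics(["neutral"] * 8 + ["anger"] * 2))
```
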
  • Patent number: 11922356
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: March 5, 2024
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Publication number: 20230362327
    Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods then generate a graphical representation of the user input linked to those positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
    Type: Application
    Filed: July 18, 2023
    Publication date: November 9, 2023
    Inventors: Yurii Monastyrshyn, Illia Tulupov
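
A minimal sketch of the anchoring idea in the abstract above, assuming face landmarks are available from any face tracker: each drawn point is stored as an offset from its nearest landmark so it can be re-rendered as the face moves. Treating the offset as rigid (no rotation or scaling) is a simplification:

```python
import numpy as np

def anchor_strokes(stroke_pts, landmarks):
    """Express each drawn point as (nearest landmark index, offset)."""
    anchors = []
    for p in np.asarray(stroke_pts, dtype=float):
        d = np.linalg.norm(landmarks - p, axis=1)   # distance to each landmark
        i = int(d.argmin())
        anchors.append((i, p - landmarks[i]))
    return anchors

def render_strokes(anchors, landmarks):
    """Re-project anchored points onto the landmarks of a new frame."""
    return np.array([landmarks[i] + off for i, off in anchors])

# Toy example: two "landmarks" move 5 px right between frames; the drawn
# point linked to the nearer landmark follows it.
frame1 = np.array([[100.0, 100.0], [200.0, 100.0]])
frame2 = frame1 + [5.0, 0.0]
anchors = anchor_strokes([[105.0, 110.0]], frame1)
print(render_strokes(anchors, frame2))   # -> [[110. 110.]]
```
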
  • Publication number: 20230319126
    Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting, in real time, a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies, in real time, the first special effect to the video stream to generate a video stream having the first special effect and transmits, in real time, the video stream having the first special effect via the network. The processor then causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
    Type: Application
    Filed: June 6, 2023
    Publication date: October 5, 2023
    Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
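
A minimal sketch of the trigger-to-effect routing in the abstract above; the trigger names and the placeholder effects are invented for illustration, and network transmission is abstracted away:

```python
import numpy as np

# Hypothetical registry mapping triggers to frame-level effects; the patent
# abstract does not name concrete effects, so these are placeholders.
EFFECTS = {
    "brighten": lambda f: np.clip(f * 1.2, 0, 255).astype(f.dtype),
    "noir": lambda f: np.repeat(f.mean(axis=2, keepdims=True), 3, axis=2).astype(f.dtype),
}

def process_stream(frames, triggers):
    """Apply the effect associated with the most recent trigger, per frame.

    frames:   iterable of HxWx3 uint8 arrays (the live video stream).
    triggers: dict of frame_index -> trigger name (e.g. from viewer input).
    """
    effect = None
    for i, frame in enumerate(frames):
        if i in triggers:                          # trigger received
            effect = EFFECTS.get(triggers[i])      # look up associated effect
        yield effect(frame) if effect else frame   # "transmit" the frame

frames = [np.full((2, 2, 3), 100, np.uint8)] * 3
for out in process_stream(frames, {1: "brighten"}):
    print(out[0, 0])   # [100 100 100], then [120 120 120] twice
```
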
  • Patent number: 11750770
    Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods then generate a graphical representation of the user input linked to those positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: September 5, 2023
    Assignee: Snap Inc.
    Inventors: Yurii Monastyrshyn, Illia Tulupov
  • Patent number: 11711414
    Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting, in real time, a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies, in real time, the first special effect to the video stream to generate a video stream having the first special effect and transmits, in real time, the video stream having the first special effect via the network. The processor then causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: July 25, 2023
    Assignee: Snap Inc.
    Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
  • Patent number: 11676412
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Grant
    Filed: September 16, 2020
    Date of Patent: June 13, 2023
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
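
One way to read the direction-dependent thresholding in the abstract above, sketched under heavy assumptions: the adaptation rule below (relax the threshold as the tracked portion tilts away from vertical) is invented for illustration, since the abstract does not specify the rule itself:

```python
import numpy as np

def directional_threshold(base_thresh, direction_deg, tilt_relief=0.15):
    """Adjust a histogram threshold based on the tracked portion's direction.

    The linear relief rule here is an assumption: intensity along the object
    (e.g. a fingertip) is taken to vary more as it tilts off vertical, so the
    threshold is relaxed proportionally to the tilt.
    """
    tilt = abs(np.sin(np.radians(direction_deg)))   # 0 = vertical, 1 = horizontal
    return base_thresh * (1.0 - tilt_relief * tilt)

def segment(gray, thresh):
    """Mark pixels whose intensity clears the direction-adjusted threshold."""
    return gray >= thresh

gray = np.array([[40, 200], [180, 60]], dtype=np.uint8)
t = directional_threshold(170, direction_deg=60)
print(t)                 # ~147.9: relaxed relative to the base of 170
print(segment(gray, t))  # boolean mask of pixels assigned to the portion
```
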
  • Patent number: 11652956
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: May 16, 2023
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
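
A minimal sketch of the final alerting step in the abstract above, assuming the mesh-based classifier (not shown) has already labeled each frame with an emotion; the distress set, run length, and notify callback are illustrative assumptions:

```python
# Assumed set of emotions that should trigger a communication; the patent
# abstract mentions angry, annoyed, or distressed customers.
DISTRESS = {"anger", "annoyance", "distress"}

def monitor(frame_emotions, notify, min_run=30):
    """Call notify(emotion) once a distress emotion holds for min_run frames."""
    run, last = 0, None
    for emotion in frame_emotions:
        # Extend the run only while the same distress emotion repeats.
        run = run + 1 if emotion == last and emotion in DISTRESS else 1
        last = emotion
        if emotion in DISTRESS and run == min_run:
            notify(emotion)   # e.g. send a message to the supervisor's client

# Example: about one second of sustained anger at 30 fps triggers the alert.
monitor(["neutral"] * 5 + ["anger"] * 30, notify=print, min_run=30)
```
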
  • Patent number: 11543929
    Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface; a subset of the augmented reality elements and the face may be depicted contemporaneously. When movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified and at least one second augmented reality element is presented.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: January 3, 2023
    Assignee: Snap Inc.
    Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
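
A minimal sketch of the interaction loop in the abstract above, assuming a face tracker supplies one tracked point (e.g., the nose tip) per frame; the element positions and hit radius are invented for illustration:

```python
import math

# AR elements presented one after another, in normalized screen coordinates.
ELEMENTS = [(0.3, 0.4), (0.7, 0.5), (0.5, 0.8)]

def step(face_pt, active_idx, hit_radius=0.08):
    """Advance to the next AR element when the face reaches the active one."""
    if active_idx >= len(ELEMENTS):
        return active_idx                           # sequence finished
    if math.dist(face_pt, ELEMENTS[active_idx]) < hit_radius:
        return active_idx + 1                       # modify: hide it, show next
    return active_idx

# Example: the tracked face point moves onto the first two elements in turn.
idx = 0
for face in [(0.1, 0.1), (0.31, 0.41), (0.69, 0.52)]:
    idx = step(face, idx)
print(idx)   # 2: the first two elements were reached and replaced
```
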
  • Publication number: 20220365748
    Abstract: An audio control system can control interactions with an application or device using keywords spoken by a user of the device. The audio control system can use machine learning models (e.g., a neural network model) trained to recognize one or more keywords. Which machine learning model is activated can depend on the active location in the application or device. Responsive to detecting keywords, different actions are performed by the device, such as navigation to a pre-specified area of the application.
    Type: Application
    Filed: July 29, 2022
    Publication date: November 17, 2022
    Inventors: Xin Chen, Yurii Monastyrshyn, Fedir Poliakov, Shubham Vij
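
A minimal sketch of the location-dependent model routing described above; the screen names, keywords, and actions are placeholders, and a trained keyword spotter (e.g., a small neural network per keyword set) is stood in for by naive substring matching:

```python
# Hypothetical mapping from the active app location to a keyword model,
# where each model maps its keywords to an action name.
MODELS = {
    "camera": {"snap": "capture_photo", "flip": "switch_camera"},
    "chat":   {"send": "send_message"},
}

def detect(model, audio_text):
    """Stand-in for a trained keyword spotter: naive substring matching."""
    return [kw for kw in model if kw in audio_text.lower()]

def handle_audio(location, audio_text, dispatch):
    model = MODELS.get(location, {})   # activate the model for this screen
    for kw in detect(model, audio_text):
        dispatch(model[kw])            # e.g. navigate or trigger the action

handle_audio("camera", "ok, snap it", dispatch=print)   # -> capture_photo
```
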
  • Patent number: 11487501
    Abstract: An audio control system can control interactions with an application or device using keywords spoken by a user of the device. The audio control system can use machine learning models (e.g., a neural network model) trained to recognize one or more keywords. Which machine learning model is activated can depend on the active location in the application or device. Responsive to detecting keywords, different actions are performed by the device, such as navigation to a pre-specified area of the application.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: November 1, 2022
    Assignee: Snap Inc.
    Inventors: Xin Chen, Yurii Monastyrshyn, Fedir Poliakov, Shubham Vij
  • Patent number: 11450085
    Abstract: Systems, devices, media, and methods are presented for receiving a set of images in a video stream; receiving input that selects an image capture mode icon from a plurality of image capture mode icons; selecting a blur operation from a plurality of blur operations based on the image capture mode icon selected by the input; and modifying the video stream based on the selected blur operation.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: September 20, 2022
    Assignee: Snap Inc.
    Inventor: Yurii Monastyrshyn
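
A minimal sketch of the icon-to-blur routing in the abstract above, using OpenCV's standard blur operations; the icon names are invented, since the abstract only says a blur operation is selected based on the chosen icon:

```python
import cv2
import numpy as np

# Hypothetical capture-mode icons, each mapped to a standard blur operation.
BLUR_OPS = {
    "soft":   lambda img: cv2.GaussianBlur(img, (9, 9), 0),
    "box":    lambda img: cv2.blur(img, (9, 9)),
    "median": lambda img: cv2.medianBlur(img, 9),
}

def modify_stream(frames, selected_icon):
    """Apply the blur operation selected by the capture-mode icon."""
    op = BLUR_OPS[selected_icon]
    return [op(f) for f in frames]

frames = [np.random.randint(0, 256, (64, 64, 3), np.uint8)]
out = modify_stream(frames, "soft")
print(out[0].shape)   # (64, 64, 3): same frames, blurred per the icon
```
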
  • Publication number: 20220166816
    Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting, in real time, a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies, in real time, the first special effect to the video stream to generate a video stream having the first special effect and transmits, in real time, the video stream having the first special effect via the network. The processor then causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
    Type: Application
    Filed: November 30, 2021
    Publication date: May 26, 2022
    Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
  • Patent number: 11290682
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring it or by changing its resolution, colors, or other parameters.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: March 29, 2022
    Inventors: Victor Shaburov, Yurii Monastyrshyn
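
A minimal sketch of the final background-modification step in the abstract above, assuming the mesh-based separation (not shown) has already produced a mask for the object of interest:

```python
import cv2
import numpy as np

def modify_background(frame, face_mask, ksize=21):
    """Blur everything outside the face mask; keep the face sharp.

    face_mask: HxW array marking the object of interest, assumed to come
    from the mesh-based separation step described in the abstract.
    """
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
    mask3 = np.repeat(face_mask.astype(bool)[..., None], 3, axis=2)
    # Keep original pixels inside the mask, blurred pixels outside it.
    return np.where(mask3, frame, blurred)

frame = np.random.randint(0, 256, (120, 160, 3), np.uint8)
mask = np.zeros((120, 160), np.uint8)
mask[30:90, 50:110] = 1                     # stand-in for the mesh region
print(modify_background(frame, mask).shape)  # (120, 160, 3)
```

Changing the background's resolution or colors, as the abstract also mentions, would slot in the same way: compute the alternative background once, then composite it through the same mask.
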
  • Publication number: 20220078370
    Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods then generate a graphical representation of the user input linked to those positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
    Type: Application
    Filed: November 18, 2021
    Publication date: March 10, 2022
    Inventors: Yurii Monastyrshyn, Illia Tulupov
  • Patent number: 11212482
    Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods then generate a graphical representation of the user input linked to those positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: December 28, 2021
    Assignee: Snap Inc.
    Inventors: Yurii Monastyrshyn, Illia Tulupov
  • Patent number: 11212331
    Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting, in real time, a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies, in real time, the first special effect to the video stream to generate a video stream having the first special effect and transmits, in real time, the video stream having the first special effect via the network. The processor then causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: December 28, 2021
    Assignee: Snap Inc.
    Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
  • Publication number: 20210223919
    Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface; a subset of the augmented reality elements and the face may be depicted contemporaneously. When movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified and at least one second augmented reality element is presented.
    Type: Application
    Filed: April 6, 2021
    Publication date: July 22, 2021
    Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
  • Publication number: 20210192193
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: February 10, 2021
    Publication date: June 24, 2021
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10996811
    Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface; a subset of the augmented reality elements and the face may be depicted contemporaneously. When movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified and at least one second augmented reality element is presented.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: May 4, 2021
    Assignee: Snap Inc.
    Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko