Patents by Inventor Yurii Monastyrshin

Yurii Monastyrshin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9747573
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion. (An illustrative sketch of this mesh-deformation emotion pipeline follows the listing.)
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: August 29, 2017
    Assignee: Avatar Merger Sub II, LLC
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Patent number: 9232189
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring it or by changing its resolution, colors, or other parameters. (An illustrative sketch of this background-separation step follows the listing.)
    Type: Grant
    Filed: March 18, 2015
    Date of Patent: January 5, 2016
    Assignee: Avatar Merger Sub II, LLC
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Publication number: 20150286858
    Abstract: Methods and systems for videoconferencing include recognition of emotions of one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions (face mimics), determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: March 18, 2015
    Publication date: October 8, 2015
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Publication number: 20150195491
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring it or by changing its resolution, colors, or other parameters.
    Type: Application
    Filed: March 18, 2015
    Publication date: July 9, 2015
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Publication number: 20150193718
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
    Type: Application
    Filed: March 23, 2015
    Publication date: July 9, 2015
    Inventors: Victor Shaburov, Yurii Monastyrshin
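
The emotion-recognition filings above (patent 9747573 and publications 20150193718 and 20150286858) describe the same basic pipeline: align a virtual face mesh to a detected face, measure how the mesh deforms over a sequence of images, and match that deformation against a set of reference facial emotions. Below is a minimal, illustrative sketch of the matching step only, assuming a fixed mesh size, a small set of reference emotions, and a nearest-neighbour distance rule; these choices and all names are assumptions made for the example, not details taken from the patents.

```python
"""Illustrative sketch only: classify a facial emotion by comparing the
deformation of an aligned face mesh against reference deformations.
Mesh size, reference set, and matching rule are assumptions, not the
patented method."""
import numpy as np

N_POINTS = 68  # assumed mesh size; the filings do not fix a number


def mesh_deformation(current: np.ndarray, neutral: np.ndarray) -> np.ndarray:
    """Per-vertex displacement of the aligned mesh from its neutral pose."""
    return current - neutral


def classify_emotion(deformation: np.ndarray,
                     references: dict[str, np.ndarray]) -> str:
    """Pick the reference facial emotion whose deformation template is
    closest (Euclidean distance over all mesh vertices)."""
    return min(references,
               key=lambda name: np.linalg.norm(deformation - references[name]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neutral = rng.normal(size=(N_POINTS, 2))
    # Synthetic reference deformations stand in for whatever labelled
    # reference set a real system would be built from.
    references = {name: rng.normal(scale=0.1, size=(N_POINTS, 2))
                  for name in ("neutral", "happy", "angry", "sad")}
    # A frame whose mesh has drifted toward the "angry" template.
    frame_mesh = neutral + references["angry"] + rng.normal(
        scale=0.01, size=(N_POINTS, 2))
    print(classify_emotion(mesh_deformation(frame_mesh, neutral), references))
```

In a real system the mesh would come from a face tracker and the reference deformations from labelled data; the synthetic arrays here only keep the example self-contained.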
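
Patent 9232189 and publication 20150195491 describe separating the object of interest from the background using the aligned face mesh and then degrading the background (blurring, changing resolution or colors). The sketch below illustrates that idea with OpenCV; the convex-hull mask built from mesh points, the Gaussian blur, and the synthetic demo frame are assumptions made for illustration, not the method as claimed.

```python
"""Illustrative sketch only: keep the region enclosed by face-mesh points
sharp and blur the rest of the frame. The masking and blurring choices
are assumptions for this example."""
import cv2
import numpy as np


def blur_background(frame: np.ndarray, mesh_points: np.ndarray,
                    ksize: int = 31) -> np.ndarray:
    """Blur everything outside the convex hull of the mesh points."""
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    hull = cv2.convexHull(mesh_points.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 255)
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)
    return np.where(mask[..., None] == 255, frame, blurred)


if __name__ == "__main__":
    # Synthetic frame and a square stand-in for a face mesh, so the demo
    # runs without a camera or face tracker.
    frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
    mesh = np.array([[120, 60], [200, 60], [200, 180], [120, 180]])
    out = blur_background(frame, mesh)
    print(out.shape, out.dtype)
```

Other background modifications mentioned in the abstracts, such as lowering resolution or shifting colors, would replace the blur step while keeping the same mesh-derived mask.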