Patents by Inventor Yurii Monastyrshyn
Yurii Monastyrshyn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10991395
Abstract: A computer-implemented method for real time video processing for changing a color of an object in a video, the method being performed in connection with a computerized system comprising a processing unit and a memory, the method comprising: providing an object in the video that at least partially and at least occasionally is presented in frames of the video; detecting the object in the video, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a set of node points on the created mesh based on a request for changing color, the set of node points defining an area the color of which is to be changed; and transforming the frames of the video in such a way that the object's color is changed within the defined area when the object is presented in frames of the video.
Type: Grant
Filed: February 15, 2019
Date of Patent: April 27, 2021
Assignee: Snap Inc.
Inventors: Elena Shaburova, Yurii Monastyrshyn
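The final step of the abstract above — recoloring only the pixels inside the area that the node points define — can be sketched in a few lines. This is an illustrative toy, not the patented implementation: the ray-casting point-in-polygon test, the function names, and the tiny frame are all invented for the example.

```python
# Hypothetical sketch: recolor pixels of a frame that fall inside an
# area defined by node points on the tracked mesh. Illustrative only.

def point_in_polygon(x, y, poly):
    """Return True if (x, y) lies inside the polygon [(px, py), ...] (ray casting)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def recolor_area(frame, node_points, new_color):
    """Replace the color of every pixel whose center lies inside the node-point polygon."""
    out = [row[:] for row in frame]
    for y, row in enumerate(out):
        for x in range(len(row)):
            if point_in_polygon(x + 0.5, y + 0.5, node_points):
                row[x] = new_color
    return out

# 6x6 frame of white pixels; recolor a square area red.
frame = [[(255, 255, 255)] * 6 for _ in range(6)]
area = [(1, 1), (4, 1), (4, 4), (1, 4)]
result = recolor_area(frame, area, (255, 0, 0))
```

In the claimed method the polygon would come from node points generated on the face mesh and re-aligned every frame, so the recolored area follows the tracked object.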
-
Patent number: 10963679
Abstract: Methods and systems for recognizing emotions in video are disclosed. One example method includes the steps of receiving a video including images, detecting a face of an individual in the images, mapping the detected face to a model including at least two separated points in space corresponding to detectable emotions, each of the at least two separated points in space representing a plurality of example faces corresponding to one of the detectable emotions, and determining the emotion of the individual from the detectable emotions based on a proximity of the detected face to the at least two separated points in space.
Type: Grant
Filed: March 12, 2019
Date of Patent: March 30, 2021
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
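The proximity-based classification this abstract describes amounts to nearest-point (nearest-centroid) matching: each detectable emotion is a point in some feature space built from example faces, and a detected face takes the emotion of the closest point. A minimal sketch, with invented 2-D toy "embeddings" in place of real face features:

```python
# Nearest-point emotion classification, as a toy sketch of the abstract's
# idea. The 2-D coordinates standing in for face features are invented.
import math

def centroid(points):
    """One point in space representing a plurality of example faces."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(face, emotion_points):
    """Return the emotion whose point in space is closest to the face."""
    return min(emotion_points, key=lambda e: math.dist(face, emotion_points[e]))

# Two separated points in space, each built from example faces.
emotion_points = {
    "happy": centroid([(0.9, 0.8), (1.0, 1.1), (1.1, 0.9)]),
    "sad": centroid([(-1.0, -0.9), (-0.8, -1.1), (-1.2, -1.0)]),
}
label = classify((0.7, 0.6), emotion_points)  # a face near the "happy" cluster
```

A real system would map faces into a learned feature space rather than 2-D, but the decision rule — minimum distance to the separated points — is the same.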
-
Patent number: 10949655
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: January 29, 2019
Date of Patent: March 16, 2021
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
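The deformation-matching step can be pictured as comparing the per-vertex displacement of the aligned mesh against reference deformations, one per facial emotion. The sketch below is an assumption-laden toy: the 3-vertex "mesh", the reference displacement templates, and the squared-distance match are all invented for illustration.

```python
# Toy sketch of matching a mesh deformation to a reference facial emotion.
# Meshes and reference templates are invented illustrative data.

def deformation(mesh_a, mesh_b):
    """Per-vertex displacement between two aligned mesh states."""
    return [(bx - ax, by - ay) for (ax, ay), (bx, by) in zip(mesh_a, mesh_b)]

def closest_emotion(deform, references):
    """Pick the reference facial emotion whose deformation is nearest."""
    def dist(d, r):
        return sum((dx - rx) ** 2 + (dy - ry) ** 2
                   for (dx, dy), (rx, ry) in zip(d, r))
    return min(references, key=lambda name: dist(deform, references[name]))

neutral = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]    # toy 3-vertex face mesh
smiling = [(0.0, -0.1), (1.0, -0.1), (0.5, 1.0)]  # mouth corners raised

references = {
    "happy": [(0.0, -0.1), (0.0, -0.1), (0.0, 0.0)],
    "angry": [(0.0, 0.1), (0.0, 0.1), (0.0, -0.05)],
}
emotion = closest_emotion(deformation(neutral, smiling), references)
```

In the claimed method the "communication bearing data associated with the facial emotion" would then be generated from the matched label — e.g., an alert to a supervisor.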
-
Publication number: 20210006759
Abstract: Systems, devices, media, and methods are presented for receiving a set of images in a video stream, converting one or more images of the set of images to a set of single channel images, generating a set of approximation images from the set of single channel images, and generating a set of binarized images by thresholding the set of approximation images.
Type: Application
Filed: September 17, 2020
Publication date: January 7, 2021
Inventor: Yurii Monastyrshyn
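The three-stage pipeline in this abstract — single-channel conversion, approximation, thresholding — can be sketched end to end. The abstract does not say what "approximation" means concretely, so the box blur below is an assumed stand-in, and the threshold value is arbitrary:

```python
# Hedged sketch of the pipeline: single-channel conversion, approximation
# (assumed here to be a 3x3 box blur), then binarization by threshold.

def to_single_channel(image):
    """Average the RGB channels into one channel."""
    return [[sum(px) / 3 for px in row] for px_row in [None] for row in image]

def approximate(gray):
    """3x3 box-blur approximation image (edges clamped)."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9
    return out

def binarize(approx, threshold=128):
    """Binarized image: 1 where the approximation meets the threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in approx]

# A one-frame "video stream" of 2x2 RGB pixels: white column, black column.
frames = [[[(255, 255, 255), (0, 0, 0)], [(255, 255, 255), (0, 0, 0)]]]
binarized = [binarize(approximate(to_single_channel(f))) for f in frames]
```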
-
Publication number: 20200410227
Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
Type: Application
Filed: September 16, 2020
Publication date: December 31, 2020
Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
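The key move in this abstract is making the segmentation threshold a function of the tracked portion's direction. The abstract gives no formula, so the sketch below invents one (a sinusoidal shift away from an upright pose) purely to show the shape of the idea:

```python
# Illustrative only: a histogram over pixel intensities plus a threshold
# that is dynamically modified by the portion's direction. The mapping
# from direction to threshold is an invented placeholder, not the patent's.
import math

def histogram(gray, bins=8, max_val=256):
    """Count pixel intensities into equal-width bins."""
    hist = [0] * bins
    for v in gray:
        hist[min(int(v * bins / max_val), bins - 1)] += 1
    return hist

def dynamic_threshold(base, direction_deg, gain=0.2):
    """Shift the base threshold as the portion tilts away from vertical."""
    return base * (1.0 + gain * abs(math.sin(math.radians(direction_deg))))

pixels = [10, 20, 200, 210, 220, 30]
hist = histogram(pixels)
t_upright = dynamic_threshold(128, 0)   # portion pointing straight up
t_tilted = dynamic_threshold(128, 90)   # portion pointing sideways
mask = [1 if v >= t_tilted else 0 for v in pixels]  # pixels kept as the portion
```

The masked pixels would then be replaced by the graphical interface element, rotated to the same direction.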
-
Patent number: 10812766
Abstract: Systems, devices, media, and methods are presented for receiving a set of images in a video stream, converting one or more images of the set of images to a set of single channel images, generating a set of approximation images from the set of single channel images, and generating a set of binarized images by thresholding the set of approximation images.
Type: Grant
Filed: July 18, 2018
Date of Patent: October 20, 2020
Assignee: Snap Inc.
Inventor: Yurii Monastyrshyn
-
Patent number: 10810418
Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
Type: Grant
Filed: September 5, 2018
Date of Patent: October 20, 2020
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
-
Publication number: 20200186747
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
Type: Application
Filed: February 12, 2020
Publication date: June 11, 2020
Inventors: Yurii Monastyrshyn, Illia Tulupov
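"Linked to positions on the portion of the face" suggests storing the user's drawing relative to a face anchor so it re-renders wherever the face moves. A minimal sketch of that idea, with an invented single-landmark anchor (a real system would use the full set of face positions):

```python
# Sketch: store user input (drawn points) as offsets from a face anchor,
# then re-project onto the face's position in each new frame. Illustrative.

def link_to_face(stroke_points, face_anchor):
    """Convert screen-space points into offsets from a face landmark."""
    ax, ay = face_anchor
    return [(x - ax, y - ay) for x, y in stroke_points]

def render_on_frame(linked_points, face_anchor):
    """Re-project the linked drawing onto the face's position in a new frame."""
    ax, ay = face_anchor
    return [(x + ax, y + ay) for x, y in linked_points]

stroke = [(105, 52), (107, 55)]             # drawn while the nose was at (100, 50)
linked = link_to_face(stroke, (100, 50))
moved = render_on_frame(linked, (120, 80))  # face moved; drawing follows it
```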
-
Patent number: 10609324
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
Type: Grant
Filed: July 18, 2016
Date of Patent: March 31, 2020
Assignee: Snap Inc.
Inventors: Yurii Monastyrshyn, Illia Tulupov
-
Patent number: 10599917
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: December 1, 2017
Date of Patent: March 24, 2020
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Publication number: 20190377458
Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. A movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified, and at least one second augmented reality element is presented.
Type: Application
Filed: August 21, 2019
Publication date: December 12, 2019
Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
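The interaction loop this abstract describes — elements appear in sequence, and moving the face onto the current element modifies it and brings up the next — can be sketched as a small state machine. Element positions, the overlap test, and the "collected" state are all invented illustrative details:

```python
# Toy sketch of the face-driven AR interaction loop; data is illustrative.

def overlaps(face_pos, element, radius=10):
    """True when the face position lies within `radius` of the element."""
    fx, fy = face_pos
    ex, ey = element["pos"]
    return (fx - ex) ** 2 + (fy - ey) ** 2 <= radius ** 2

def step(face_pos, elements, current):
    """Modify the current element and advance when the face reaches it."""
    if current < len(elements) and overlaps(face_pos, elements[current]):
        elements[current]["state"] = "collected"
        current += 1
    return current

elements = [{"pos": (50, 50), "state": "shown"},
            {"pos": (90, 40), "state": "shown"}]
current = step((52, 48), elements, 0)  # face moved onto the first element
```

Run once per frame against the tracked face position, this advances through the sequentially presented elements as the abstract describes.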
-
Patent number: 10496947
Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
Type: Grant
Filed: August 28, 2017
Date of Patent: December 3, 2019
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
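Once per-frame emotions are recognized (as in the related patents above), the work quality parameter is an aggregation over them. The abstract does not define the metric, so the share-of-negative-frames score below is an invented example of what such a parameter could look like:

```python
# Hedged sketch: aggregate per-frame facial emotions into a work quality
# parameter. The specific metric and emotion labels are invented.

NEGATIVE = {"angry", "annoyed", "distressed"}

def work_quality(frame_emotions):
    """1.0 when no negative emotions were detected; lower otherwise."""
    if not frame_emotions:
        return None
    negative = sum(1 for e in frame_emotions if e in NEGATIVE)
    return 1.0 - negative / len(frame_emotions)

score = work_quality(["neutral", "happy", "angry", "neutral"])
```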
-
Publication number: 20190354344
Abstract: An audio control system can control interactions with an application or device using keywords spoken by a user of the device. The audio control system can use machine learning models (e.g., a neural network model) trained to recognize one or more keywords. Which machine learning model is activated can depend on the active location in the application or device. Responsive to detecting keywords, different actions are performed by the device, such as navigation to a pre-specified area of the application.
Type: Application
Filed: May 16, 2018
Publication date: November 21, 2019
Inventors: Xin Chen, Yurii Monastyrshyn, Fedir Poliakov, Shubham Vij
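Setting the recognition model aside, the control flow is a location-dependent keyword dispatch: the active location selects which keyword set is live, and a detected keyword maps to an action. The locations, keywords, and action names below are invented examples:

```python
# Sketch of location-dependent keyword dispatch; all names are illustrative.
# In the described system a per-location ML model would do the detection;
# here the keyword is assumed to be already recognized.

KEYWORD_ACTIONS = {
    "camera": {"snap": "capture_photo", "flip": "switch_camera"},
    "chat": {"send": "send_message", "back": "go_to_camera"},
}

def handle_keyword(active_location, detected_keyword):
    """Return the action for a keyword, given the active app location."""
    live_keywords = KEYWORD_ACTIONS.get(active_location, {})
    return live_keywords.get(detected_keyword)

action = handle_keyword("camera", "snap")
```

Note how the same spoken word can do nothing, or something different, depending on where in the application the user is — which is the point of activating models per location.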
-
Patent number: 10430016
Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. A movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified, and at least one second augmented reality element is presented.
Type: Grant
Filed: December 22, 2017
Date of Patent: October 1, 2019
Assignee: Snap Inc.
Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
-
Publication number: 20190196663
Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. A movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified, and at least one second augmented reality element is presented.
Type: Application
Filed: December 22, 2017
Publication date: June 27, 2019
Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
-
Publication number: 20190156112
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Application
Filed: January 29, 2019
Publication date: May 23, 2019
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 10255948
Abstract: A computer-implemented method for real time video processing for changing a color of an object in a video, the method being performed in connection with a computerized system comprising a processing unit and a memory, the method comprising: providing an object in the video that at least partially and at least occasionally is presented in frames of the video; detecting the object in the video, wherein said detection comprises detecting feature reference points of the object; tracking the detected object in the video, wherein the tracking comprises creating a mesh that is based on the detected feature reference points of the object and aligning the mesh to the object in each frame; generating a set of node points on the created mesh based on a request for changing color, the set of node points defining an area the color of which is to be changed; and transforming the frames of the video in such a way that the object's color is changed within the defined area when the object is presented in frames of the video.
Type: Grant
Filed: July 13, 2016
Date of Patent: April 9, 2019
Assignee: Avatar Merger Sub II, LLC
Inventors: Elena Shaburova, Yurii Monastyrshyn
-
Patent number: 10255488
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: December 1, 2017
Date of Patent: April 9, 2019
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 10235562
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: November 17, 2017
Date of Patent: March 19, 2019
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 10116901
Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring, or by changing its resolution, colors, or other parameters.
Type: Grant
Filed: January 4, 2016
Date of Patent: October 30, 2018
Assignee: Avatar Merger Sub II, LLC
Inventors: Victor Shaburov, Yurii Monastyrshyn
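The separation-then-modify step can be sketched with a mask: pixels covered by the mesh-derived face mask keep their values, and everything else is treated as background and degraded. The 1-D "image", the mask, and the mean-fill "blur" below are invented simplifications of the described method:

```python
# Sketch: keep face pixels sharp, replace background pixels with the mean
# background value (a crude stand-in for blurring). Illustrative toy data.

def modify_background(gray, face_mask):
    """Return the image with background pixels replaced by their mean."""
    background = [v for v, is_face in zip(gray, face_mask) if not is_face]
    fill = sum(background) / len(background)
    return [v if is_face else fill for v, is_face in zip(gray, face_mask)]

# 1-D toy "image"; True marks pixels inside the mesh-derived face region.
gray = [200, 210, 50, 60, 70]
face_mask = [True, True, False, False, False]
out = modify_background(gray, face_mask)
```

A real implementation would operate on 2-D frames and could swap the mean fill for a Gaussian blur, a resolution drop, or a color change, as the abstract lists.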