Patents by Inventor Yurii Monastyrshyn
Yurii Monastyrshyn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240119396
Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
Type: Application
Filed: December 15, 2023
Publication date: April 11, 2024
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 11922356
Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
Type: Grant
Filed: October 29, 2019
Date of Patent: March 5, 2024
Assignee: SNAP INC.
Inventors: Victor Shaburov, Yurii Monastyrshyn
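The final step of the claimed method, turning per-frame emotion classifications into a work quality parameter, can be illustrated with a toy aggregation. This is a minimal sketch with hypothetical emotion labels and a simple frame-counting heuristic; the patent does not disclose how the work quality parameter is actually computed.

```python
from collections import Counter

# Hypothetical labels; the patent's reference facial emotions are not enumerated here.
NEGATIVE_EMOTIONS = {"angry", "annoyed", "distressed"}

def work_quality_score(frame_emotions):
    """Aggregate per-frame emotion labels into a 0..1 work quality parameter.

    frame_emotions: one emotion label per analyzed video frame.
    Returns the fraction of frames showing no negative emotion.
    """
    if not frame_emotions:
        return 1.0
    counts = Counter(frame_emotions)
    negative = sum(counts[e] for e in NEGATIVE_EMOTIONS)
    return 1.0 - negative / len(frame_emotions)
```

For example, a session with one angry frame out of four would yield a quality parameter of 0.75 under this heuristic.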
-
Publication number: 20230362327
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to those positions and render the graphical representation within frames of the video stream in real time.
Type: Application
Filed: July 18, 2023
Publication date: November 9, 2023
Inventors: Yurii Monastyrshyn, Illia Tulupov
-
Publication number: 20230319126
Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting in real-time a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies in real-time the first special effect to the video stream to generate a video stream having the first special effect and transmits in real-time the video stream having the first special effect via the network. The processor causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
Type: Application
Filed: June 6, 2023
Publication date: October 5, 2023
Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
-
Patent number: 11750770
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to those positions and render the graphical representation within frames of the video stream in real time.
Type: Grant
Filed: November 18, 2021
Date of Patent: September 5, 2023
Assignee: Snap Inc.
Inventors: Yurii Monastyrshyn, Illia Tulupov
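The core idea in the entry above, linking a user-drawn graphic to positions on the face so it tracks across frames, can be sketched as storing the drawing relative to a face landmark and re-projecting it each frame. This is an illustrative simplification with assumed 2-D point coordinates; the patent does not specify this representation.

```python
def anchor_stroke_to_face(stroke_points, landmark):
    """Store a drawn stroke relative to a face landmark so it tracks the face.

    stroke_points: list of (x, y) screen coordinates from user input.
    landmark: (x, y) position of a face reference point in the same frame.
    Returns stroke offsets relative to the landmark.
    """
    lx, ly = landmark
    return [(x - lx, y - ly) for x, y in stroke_points]

def render_stroke(offsets, landmark):
    """Re-project the stored stroke onto a new frame's landmark position."""
    lx, ly = landmark
    return [(dx + lx, dy + ly) for dx, dy in offsets]
```

As the landmark moves in later frames, `render_stroke` redraws the same stroke at the face's new position.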
-
Patent number: 11711414
Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting in real-time a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies in real-time the first special effect to the video stream to generate a video stream having the first special effect and transmits in real-time the video stream having the first special effect via the network. The processor causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
Type: Grant
Filed: November 30, 2021
Date of Patent: July 25, 2023
Assignee: Snap Inc.
Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
-
Patent number: 11676412
Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
Type: Grant
Filed: September 16, 2020
Date of Patent: June 13, 2023
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
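The dynamically modified histogram threshold from the entry above can be sketched as picking an intensity cutoff from a pixel histogram, with the cutoff quantile nudged by the tracked portion's direction. The direction-to-quantile rule here is an invented heuristic, not the patent's actual formula.

```python
import math

def dynamic_threshold(histogram, direction_deg, base_quantile=0.5):
    """Pick an intensity threshold from a 256-bin histogram, shifted by direction.

    histogram: list of 256 pixel-intensity counts.
    direction_deg: orientation of the tracked portion (e.g., a hand), in degrees.
    The quantile rises as the portion tilts away from vertical — an
    illustrative heuristic only.
    """
    tilt = abs(math.sin(math.radians(direction_deg)))
    quantile = min(0.95, max(0.05, base_quantile + 0.2 * tilt))
    total = sum(histogram)
    running = 0
    # Walk the histogram until the cumulative count reaches the quantile.
    for value, count in enumerate(histogram):
        running += count
        if running >= quantile * total:
            return value
    return 255
```

Pixels above the returned threshold would then be classified as belonging to the tracked portion.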
-
Patent number: 11652956
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: February 10, 2021
Date of Patent: May 16, 2023
Assignee: SNAP INC.
Inventors: Victor Shaburov, Yurii Monastyrshyn
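The final claimed step, generating a communication bearing data associated with the detected emotion, can be sketched as building a small alert payload for the supervisor's side. The field names and escalation rule are assumptions for illustration; the patent only requires that the communication carry emotion data.

```python
# Hypothetical set of emotions that warrant supervisor attention.
ALERT_EMOTIONS = {"angry", "annoyed", "distressed"}

def emotion_alert(participant_id, emotion):
    """Build a communication carrying detected-emotion data for a supervisor.

    participant_id and the payload shape are illustrative; 'escalate' flags
    whether the detected emotion belongs to the negative set above.
    """
    return {
        "participant": participant_id,
        "emotion": emotion,
        "escalate": emotion in ALERT_EMOTIONS,
    }
```

A supervisor client could filter on the `escalate` flag to surface only distressed-customer calls.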
-
Patent number: 11543929
Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. When movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified and at least one second augmented reality element is presented.
Type: Grant
Filed: April 6, 2021
Date of Patent: January 3, 2023
Assignee: Snap Inc.
Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
-
Publication number: 20220365748
Abstract: An audio control system can control interactions with an application or device using keywords spoken by a user of the device. The audio control system can use machine learning models (e.g., a neural network model) trained to recognize one or more keywords. Which machine learning model is activated can depend on the active location in the application or device. Responsive to detecting keywords, different actions are performed by the device, such as navigation to a pre-specified area of the application.
Type: Application
Filed: July 29, 2022
Publication date: November 17, 2022
Inventors: Xin Chen, Yurii Monastyrshyn, Fedir Poliakov, Shubham Vij
-
Patent number: 11487501
Abstract: An audio control system can control interactions with an application or device using keywords spoken by a user of the device. The audio control system can use machine learning models (e.g., a neural network model) trained to recognize one or more keywords. Which machine learning model is activated can depend on the active location in the application or device. Responsive to detecting keywords, different actions are performed by the device, such as navigation to a pre-specified area of the application.
Type: Grant
Filed: May 16, 2018
Date of Patent: November 1, 2022
Assignee: Snap Inc.
Inventors: Xin Chen, Yurii Monastyrshyn, Fedir Poliakov, Shubham Vij
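The routing idea above, activating a different keyword model depending on the active location in the app, can be sketched as a per-screen lookup table. The screen names, keywords, and actions below are invented for illustration; the patent does not list them.

```python
class AudioControlRouter:
    """Activate a keyword vocabulary based on the active screen, then map
    recognized keywords to actions. All names here are hypothetical."""

    def __init__(self):
        # One keyword-to-action table per app location; in the patented system
        # each table would correspond to a separately trained ML model.
        self.models = {
            "camera": {"snap": "capture_photo", "record": "start_video"},
            "chat": {"send": "send_message"},
        }

    def recognize(self, screen, spoken_keywords):
        """Return actions for keywords the active screen's model recognizes."""
        model = self.models.get(screen, {})
        return [model[k] for k in spoken_keywords if k in model]
```

Note how "send" is ignored on the camera screen: only the active location's model is consulted.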
-
Patent number: 11450085
Abstract: Systems, devices, media, and methods are presented for receiving a set of images in a video stream; receiving input that selects an image capture mode icon from a plurality of image capture mode icons; selecting a blur operation from a plurality of blur operations based on the image capture mode icon selected by the input; and modifying the video stream based on the selected blur operation.
Type: Grant
Filed: September 17, 2020
Date of Patent: September 20, 2022
Assignee: Snap Inc.
Inventor: Yurii Monastyrshyn
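The icon-to-blur selection above can be sketched as a table mapping capture-mode icons to blur operations applied per frame. The icon names and the 1-D box blur stand-in are assumptions; the patent does not specify which blur operations are offered.

```python
def box_blur_1d(samples, radius=1):
    """Simple 1-D box blur over a list of intensities (illustrative stand-in
    for a real 2-D image blur)."""
    n = len(samples)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = samples[lo:hi]
        out.append(sum(window) / len(window))
    return out

# Hypothetical icon-to-operation table; stronger icons get a wider blur.
BLUR_OPERATIONS = {
    "portrait_icon": lambda s: box_blur_1d(s, radius=1),
    "dreamy_icon": lambda s: box_blur_1d(s, radius=2),
}

def modify_stream(icon, frames):
    """Select the blur operation for the tapped capture-mode icon and apply it
    to every frame of the stream."""
    blur = BLUR_OPERATIONS[icon]
    return [blur(frame) for frame in frames]
```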
-
Publication number: 20220166816
Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting in real-time a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies in real-time the first special effect to the video stream to generate a video stream having the first special effect and transmits in real-time the video stream having the first special effect via the network. The processor causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
Type: Application
Filed: November 30, 2021
Publication date: May 26, 2022
Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
-
Patent number: 11290682
Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring or by changing its resolution, colors, or other parameters.
Type: Grant
Filed: September 25, 2018
Date of Patent: March 29, 2022
Inventors: Victor Shaburov, Yurii Monastyrshyn
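The last step above, modifying only the background once it has been separated from the face, can be sketched with a per-pixel mask. The face mask is assumed to be given (in the patent it comes from the virtual face mesh), and quantizing background values is an invented stand-in for lowering background quality.

```python
def modify_background(pixels, face_mask, downscale=4):
    """Reduce background detail while leaving the face region untouched.

    pixels: flat list of grayscale values for one frame.
    face_mask: same-length list of booleans; True marks pixels in the face
               region separated via the virtual face mesh (assumed given here).
    Background pixels are quantized to coarser levels to mimic degraded quality.
    """
    return [p if is_face else (p // downscale) * downscale
            for p, is_face in zip(pixels, face_mask)]
```

In a real pipeline this would run on every frame, with the mask re-derived as the tracked face moves.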
-
Publication number: 20220078370
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to those positions and render the graphical representation within frames of the video stream in real time.
Type: Application
Filed: November 18, 2021
Publication date: March 10, 2022
Inventors: Yurii Monastyrshyn, Illia Tulupov
-
Patent number: 11212482
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to those positions and render the graphical representation within frames of the video stream in real time.
Type: Grant
Filed: February 12, 2020
Date of Patent: December 28, 2021
Assignee: Snap Inc.
Inventors: Yurii Monastyrshyn, Illia Tulupov
-
Patent number: 11212331
Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting in real-time a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on a plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines that a first special effect of the plurality of special effects is associated with the trigger. The processor applies in real-time the first special effect to the video stream to generate a video stream having the first special effect and transmits in real-time the video stream having the first special effect via the network. The processor causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
Type: Grant
Filed: January 31, 2019
Date of Patent: December 28, 2021
Assignee: Snap Inc.
Inventors: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
-
Publication number: 20210223919
Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. When movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified and at least one second augmented reality element is presented.
Type: Application
Filed: April 6, 2021
Publication date: July 22, 2021
Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
-
Publication number: 20210192193
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Application
Filed: February 10, 2021
Publication date: June 24, 2021
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 10996811
Abstract: Systems, devices, media, and methods are presented for controlling a user interface with an object depicted within the user interface. The systems and methods initiate an augmented reality mode configured to present augmented reality elements within a graphical user interface. A face is detected within a field of view of an image capture component and presented within the graphical user interface. In response to detecting the face, the systems and methods sequentially present a set of augmented reality elements within the graphical user interface. A subset of the augmented reality elements and the face may be depicted contemporaneously. When movement of at least a portion of the face relative to a first augmented reality element is detected, presentation of the first augmented reality element is modified and at least one second augmented reality element is presented.
Type: Grant
Filed: August 21, 2019
Date of Patent: May 4, 2021
Assignee: Snap Inc.
Inventors: Yurii Monastyrshyn, Oleksandr Pyshchenko
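The sequential AR interaction above, where the face reaching the first element modifies it and reveals the next, can be sketched as a proximity test driving a queue of elements. The point representation, hit radius, and "consume and advance" rule are illustrative assumptions.

```python
def face_hits_element(face_point, element, radius=10):
    """Return True when a tracked face point (e.g., the mouth) reaches an AR
    element. face_point is (x, y); element is a dict with 'x' and 'y' keys."""
    dx = face_point[0] - element["x"]
    dy = face_point[1] - element["y"]
    return dx * dx + dy * dy <= radius * radius

def step_sequence(face_point, elements):
    """If the face reaches the first (active) element, remove it from the
    sequence so the next element becomes active; otherwise leave it as is."""
    if elements and face_hits_element(face_point, elements[0]):
        return elements[1:]
    return elements
```

Calling `step_sequence` once per frame with the current face position advances the element sequence as the user moves.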