Patents by Inventor Victor Shaburov
Victor Shaburov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10255488
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g.
Type: Grant
Filed: December 1, 2017
Date of Patent: April 9, 2019
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 10235562
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: November 17, 2017
Date of Patent: March 19, 2019
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
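The final matching step the abstract describes — deciding which reference facial emotion a mesh deformation corresponds to — can be sketched as a nearest-neighbor lookup. The reference vectors, dimensions, and function names below are illustrative assumptions, not taken from the patent.

```python
import math

# Hypothetical reference emotions, each represented by a toy vector of
# mesh-vertex displacements (a real system would use many more values).
REFERENCE_EMOTIONS = {
    "neutral": [0.0, 0.0, 0.0],
    "angry":   [0.8, -0.6, 0.4],
    "happy":   [-0.5, 0.7, 0.2],
}

def classify_deformation(deformation):
    """Return the reference emotion whose deformation vector lies
    nearest (by Euclidean distance) to the observed mesh deformation."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_EMOTIONS,
               key=lambda name: dist(REFERENCE_EMOTIONS[name], deformation))
```

A deformation close to the "angry" reference vector would be classified accordingly, and the resulting label could then be carried in the communication the abstract mentions.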
-
Patent number: 10116901
Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring or by changing its resolution, colors, or other parameters.
Type: Grant
Filed: January 4, 2016
Date of Patent: October 30, 2018
Assignee: Avatar Merger Sub II, LLC
Inventors: Victor Shaburov, Yurii Monastyrshyn
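The background-modification step — keep the object of interest intact, degrade everything else — can be sketched over a flat list of pixel intensities. The mask representation and the posterizing "degrade" function are toy assumptions standing in for blurring or resolution/color changes; they are not the patent's actual processing.

```python
def modify_background(image, face_mask, degrade=lambda px: round(px / 10) * 10):
    """Return a copy of `image` (a flat list of pixel intensities) in
    which pixels outside the face mask are passed through `degrade`,
    a stand-in for blurring or lowering resolution/color detail."""
    return [px if in_face else degrade(px)
            for px, in_face in zip(image, face_mask)]
```

In a real pipeline the mask would come from separating the face via the aligned virtual face mesh, and `degrade` would be a proper blur kernel.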
-
Patent number: 10102423
Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
Type: Grant
Filed: June 30, 2016
Date of Patent: October 16, 2018
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
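The idea of dynamically modifying a histogram threshold based on a tracked direction can be sketched as follows. The specific relaxation formula, the sensitivity parameter, and both function names are hypothetical — the patent does not disclose this exact rule.

```python
import math

def dynamic_threshold(base, direction_deg, sensitivity=0.2):
    """Toy direction-aware threshold: the further the tracked portion
    tilts from vertical (0 degrees), the more the histogram threshold
    is relaxed so pixels along that direction still match."""
    return base * (1 - sensitivity * abs(math.sin(math.radians(direction_deg))))

def segment(pixels, threshold):
    """Mark each pixel as object (True) or background (False)."""
    return [p >= threshold for p in pixels]
```

With `base=100`, a vertical portion keeps the full threshold while a horizontal one (90 degrees) relaxes it by the sensitivity factor, letting more boundary pixels count as part of the object.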
-
Publication number: 20180253901
Abstract: A context-based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. These elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
Type: Application
Filed: July 19, 2017
Publication date: September 6, 2018
Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
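The location-based selection the abstract describes can be sketched as a proximity filter over preconfigured places. The data shapes, place names, and radius below are illustrative assumptions only.

```python
def pick_ar_elements(device_location, places, radius=0.5):
    """Return the preconfigured AR elements of every physical place
    within `radius` (same units as the coordinates) of the device."""
    def near(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= radius
    return [element
            for place in places if near(device_location, place["location"])
            for element in place["elements"]]

# Hypothetical places, each carrying elements styled to its design scheme.
PLACES = [
    {"location": (0.0, 0.0), "elements": ["cafe_logo_overlay"]},
    {"location": (5.0, 5.0), "elements": ["stadium_banner"]},
]
```

A device near the first place would receive only that place's preconfigured elements for rendering over the live feed.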
-
Publication number: 20180075292
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Application
Filed: November 17, 2017
Publication date: March 15, 2018
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Publication number: 20180005026
Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
Type: Application
Filed: June 30, 2016
Publication date: January 4, 2018
Inventors: Victor Shaburov, Sergey Kotsur, Yurii Monastyrshyn, Alexander Pischenko
-
Patent number: 9852328
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: February 10, 2017
Date of Patent: December 26, 2017
Assignee: SNAP INC.
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 9747573
Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual, such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
Type: Grant
Filed: March 23, 2015
Date of Patent: August 29, 2017
Assignee: Avatar Merger Sub II, LLC
Inventors: Victor Shaburov, Yurii Monastyrshin
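Once per-frame emotions are recognized, the work quality parameter the abstract mentions could be as simple as the fraction of frames without a negative emotion. The metric, the emotion labels, and the function name are illustrative assumptions, not the patent's claimed formula.

```python
def work_quality(emotion_log, negative=("angry", "annoyed", "distressed")):
    """Toy work-quality parameter: the fraction of observed frames in
    which the agent's recognized facial emotion was not negative."""
    if not emotion_log:
        return None  # no observations, no metric
    return sum(e not in negative for e in emotion_log) / len(emotion_log)
```

Aggregating such a score per agent over a shift would give the kind of workforce-optimization signal the abstract describes.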
-
Publication number: 20170154211
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Application
Filed: February 10, 2017
Publication date: June 1, 2017
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 9576190
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Grant
Filed: March 18, 2015
Date of Patent: February 21, 2017
Assignee: Snap Inc.
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Publication number: 20170019633
Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring or by changing its resolution, colors, or other parameters.
Type: Application
Filed: January 4, 2016
Publication date: January 19, 2017
Inventors: Victor Shaburov, Yurii Monastyrshyn
-
Patent number: 9232189
Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring or by changing its resolution, colors, or other parameters.
Type: Grant
Filed: March 18, 2015
Date of Patent: January 5, 2016
Assignee: AVATAR MERGER SUB II, LLC
Inventors: Victor Shaburov, Yurii Monastyrshin
-
Publication number: 20150286858
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects facial expressions, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
Type: Application
Filed: March 18, 2015
Publication date: October 8, 2015
Inventors: Victor Shaburov, Yurii Monastyrshin
-
Publication number: 20150221136
Abstract: Method for real-time video processing for retouching an object in a video, comprising: providing an object in the video stream, the object being at least partially and at least occasionally presented in frames of the video; setting a degree of retouching; generating a list of at least one element of the object selected based on a request of retouching and the degree of retouching; detecting the at least one element of the object in the video and parameters of the at least one element; calculating new parameters of the at least one element according to the degree of retouching; detecting characteristic points for each of the at least one element of the object; generating a mesh based on the characteristic points for each of the at least one element of the object; tracking the at least one element of the object in the video, wherein the tracking comprises aligning the mesh for each of the at least one element with a position of the corresponding each of the at least one element; and transforming the frames of th
Type: Application
Filed: June 25, 2014
Publication date: August 6, 2015
Inventors: Elena Shaburova, Victor Shaburov
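The step of "calculating new parameters of the at least one element according to the degree of retouching" might be realized as a simple interpolation toward target values. The linear rule, parameter names, and targets below are hypothetical — the claim does not specify how the new parameters are computed.

```python
def retouch_parameters(params, targets, degree):
    """Move each detected element parameter toward its idealized target
    in proportion to the requested degree of retouching
    (0.0 = untouched, 1.0 = fully retouched)."""
    return {name: value + degree * (targets[name] - value)
            for name, value in params.items()}
```

The resulting parameters would then drive the mesh-based frame transformation the claim goes on to describe.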
-
Publication number: 20150221338
Abstract: A computer-implemented method of triggering events in a video, the method comprising: providing a list of objects with their states and corresponding events in the video, such that each state from the list triggers at least one of the corresponding events, wherein each object from the list has at least one state triggering at least one of the corresponding events from the list in the video; detecting at least one object from the list that is at least partially and at least occasionally presented in frames of the video; tracking the at least one object and its state; and triggering at least one of the corresponding events from the list in the video when the state of the at least one object matches one of its states from the list.
Type: Application
Filed: June 25, 2014
Publication date: August 6, 2015
Inventors: Elena Shaburova, Victor Shaburov
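The state-to-event mapping this claim describes can be sketched as a lookup keyed on (object, state) pairs. The table contents, event names, and function name are illustrative assumptions only.

```python
def trigger_events(tracked_states, event_table):
    """Fire every event whose configured (object, state) pair matches
    the state currently tracked for that object."""
    return [event
            for obj, state in tracked_states.items()
            for event in event_table.get((obj, state), [])]

# Hypothetical configuration: which tracked state triggers which event.
EVENT_TABLE = {
    ("face", "smiling"): ["play_confetti"],
    ("hand", "raised"): ["take_snapshot"],
}
```

On each frame, the tracker would update `tracked_states` and the returned events would be executed in the video.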
-
Publication number: 20150195491
Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify the image quality of the background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring or by changing its resolution, colors, or other parameters.
Type: Application
Filed: March 18, 2015
Publication date: July 9, 2015
Inventors: Victor Shaburov, Yurii Monastyrshin
-
Publication number: 20150193718
Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual, such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
Type: Application
Filed: March 23, 2015
Publication date: July 9, 2015
Inventors: Victor Shaburov, Yurii Monastyrshin
-
Patent number: 8689174
Abstract: Methods and apparatus, including computer program products, related to extensibility of pattern components in a visual modeling language environment. A pattern component may implement an interface, the pattern component may be received (e.g., by a compiler), and a determination may be made as to whether components of the interface are implemented by the pattern component. If so, a reference to the interface is bound to the pattern component (e.g., a function call referencing a function signature of an interface may be substituted with a call to a function having the same name of the pattern component). A role may be assigned to a pattern component of a visual modeling environment of an application development system and a behavior may be performed based on the role assigned to the pattern component.
Type: Grant
Filed: December 28, 2006
Date of Patent: April 1, 2014
Assignee: SAP AG
Inventors: Victor Shaburov, Ulf Fildebrandt, Markus Cherdron, Vinay Nath Penmatsa, Rachel Ebner, Frank Seeger, Peter Giese
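The check-then-bind flow the abstract describes — verify that a pattern component implements every member of an interface before binding a reference to it — can be sketched with a structural (duck-typing) check. The function names and the sample component are hypothetical, not SAP's actual API.

```python
def implements(component, interface_methods):
    """True when the component exposes a callable for every name the
    interface requires — the determination made before binding."""
    return all(callable(getattr(component, name, None))
               for name in interface_methods)

def bind_interface(component, interface_methods):
    """Bind the interface reference to the component only when the
    component actually implements the interface."""
    if not implements(component, interface_methods):
        raise TypeError("component does not implement the interface")
    return component

class TablePattern:
    """Hypothetical pattern component exposing a render() method."""
    def render(self):
        return "<table/>"
```

After binding, calls made through the interface reference resolve to the component's same-named methods, mirroring the substitution the abstract mentions.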
-
Patent number: 8312382
Abstract: Methods and apparatus, including computer program products, for developing user interface applications using configurable patterns and for executing such pattern-based applications. The applications can be developed by generating a graphic representation of a pattern, which can include multiple pattern elements having prearranged user interface elements. The pattern can specify predefined actions that can be performed using the user interface elements, and the graphic representation can include graphic objects corresponding to the pattern elements. Application development can further include receiving user input identifying a selected graphic object and modifying the graphic representation to display information regarding the pattern element corresponding to the selected graphic object.
Type: Grant
Filed: May 11, 2004
Date of Patent: November 13, 2012
Assignee: SAP AG
Inventors: Yuval Gilboa, Frank Stienhans, Gennady Shumakher, Peter Giese, Victor Shaburov, Adi Kavaler, Vinay Nath Penmatsa