Patents by Inventor Victor Shaburov

Victor Shaburov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10255488
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: April 9, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 10235562
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: March 19, 2019
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
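The emotion-classification step this abstract describes (matching a detected mesh deformation against a set of reference facial emotions) can be sketched roughly as follows. The reference deformations, emotion names, and the nearest-match rule are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: a face-mesh deformation vector is matched against
# per-emotion reference deformations; the closest reference wins.
# All numbers and emotion labels here are invented for illustration.
import math

# Toy references: mean per-vertex displacement per emotion.
REFERENCE_EMOTIONS = {
    "neutral": [0.0, 0.0, 0.0],
    "angry":   [0.8, -0.5, 0.3],
    "happy":   [-0.4, 0.9, 0.1],
}

def classify_deformation(deformation):
    """Pick the reference emotion whose deformation is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_EMOTIONS, key=lambda e: dist(REFERENCE_EMOTIONS[e], deformation))

print(classify_deformation([0.7, -0.4, 0.2]))  # prints "angry"
```

A production system would of course compare full mesh-vertex displacement fields over many frames rather than three-element toy vectors.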
  • Patent number: 10116901
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify image quality of background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring, changing a resolution, colors, or other parameters.
    Type: Grant
    Filed: January 4, 2016
    Date of Patent: October 30, 2018
    Assignee: Avatar Merger Sub II, LLC
    Inventors: Victor Shaburov, Yurii Monastyrshyn
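The background-modification step above (separate the face from the background using the mesh, then degrade only the background) can be sketched with a one-dimensional toy "image". The foreground mask stands in for the mesh-based separation, and the box blur is one of the parameter changes the abstract lists; both are illustrative assumptions:

```python
# Illustrative sketch: given a per-pixel foreground mask (derived, in the
# patent, from the virtual face mesh), blur only the background pixels.
# A 1-D pixel row keeps the sketch short; the blur is a simple box filter.
def blur_background(pixels, mask, radius=1):
    out = []
    for i, (p, fg) in enumerate(zip(pixels, mask)):
        if fg:
            out.append(p)  # foreground (face) pixels pass through unchanged
        else:
            # background pixels are replaced by a windowed average
            lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
            window = pixels[lo:hi]
            out.append(sum(window) / len(window))
    return out

print(blur_background([10, 200, 210, 220, 30], [0, 1, 1, 1, 0]))
# prints [105.0, 200, 210, 220, 125.0]
```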
  • Patent number: 10102423
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: October 16, 2018
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
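The dynamic-threshold idea above (a histogram threshold that adapts to the direction of the tracked portion of the object) might be sketched as follows. The cosine-based scaling rule and the sample values are invented for illustration; the patent does not disclose this particular formula:

```python
# Hedged sketch: the threshold used to decide whether a pixel belongs to the
# tracked portion is adjusted as a function of the portion's direction.
# The scaling rule below is an assumption made for illustration.
import math

def dynamic_threshold(base_threshold, direction_deg):
    """Relax the threshold as the portion turns away from front-facing (0 deg)."""
    tilt = abs(math.cos(math.radians(direction_deg)))
    return base_threshold * (0.5 + 0.5 * tilt)

def segment(pixel_values, threshold):
    """Mark each pixel as belonging to the portion (True) or not (False)."""
    return [v >= threshold for v in pixel_values]

t = dynamic_threshold(100, 60)        # 100 * (0.5 + 0.5 * cos 60 deg) ≈ 75
print(segment([50, 80, 120], t))      # prints [False, True, True]
```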
  • Publication number: 20180253901
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Application
    Filed: July 19, 2017
    Publication date: September 6, 2018
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
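The context-input selection above (pick a preconfigured AR element based on the client's location relative to nearby physical places) can be sketched as a nearest-place lookup. The place records, distance metric, and cutoff radius are all illustrative assumptions:

```python
# Sketch: given the client's coordinates and places that have preconfigured
# AR elements, return the element of the nearest place within a radius.
# Place data and the distance cutoff are invented for illustration.
import math

PLACES = [
    {"name": "coffee_shop", "lat": 40.0, "lon": -74.0, "element": "latte_overlay"},
    {"name": "stadium",     "lat": 40.5, "lon": -74.5, "element": "team_banner"},
]

def pick_ar_element(lat, lon, max_dist=0.2):
    def dist(p):
        return math.hypot(p["lat"] - lat, p["lon"] - lon)
    nearby = [p for p in PLACES if dist(p) <= max_dist]
    if not nearby:
        return None  # no preconfigured place in range; fall back to defaults
    return min(nearby, key=dist)["element"]

print(pick_ar_element(40.01, -74.02))  # prints "latte_overlay"
```

A real implementation would use geodesic distance and additional context inputs (time, movement, etc.); planar distance keeps the sketch minimal.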
  • Publication number: 20180075292
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: November 17, 2017
    Publication date: March 15, 2018
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Publication number: 20180005026
    Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Victor Shaburov, Sergey Kotsur, Yurii Monastyrshyn, Alexander Pischenko
  • Patent number: 9852328
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: December 26, 2017
    Assignee: SNAP INC.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 9747573
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: August 29, 2017
    Assignee: Avatar Merger Sub II, LLC
    Inventors: Victor Shaburov, Yurii Monastyrshin
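The metric-generation step above (turn per-frame facial emotions into a work quality parameter) can be sketched as a weighted average. The emotion weights are an assumption invented for illustration; the patent does not specify this aggregation:

```python
# Illustrative sketch: per-frame emotions detected for a call-center agent
# are aggregated into one work-quality parameter. Weights are assumptions.
EMOTION_WEIGHTS = {"happy": 1.0, "neutral": 0.75, "annoyed": 0.25, "angry": 0.0}

def work_quality(frame_emotions):
    """Mean weight of the emotions observed across the video frames."""
    if not frame_emotions:
        return None
    return sum(EMOTION_WEIGHTS[e] for e in frame_emotions) / len(frame_emotions)

print(work_quality(["happy", "neutral", "neutral", "annoyed"]))  # prints 0.6875
```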
  • Publication number: 20170154211
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: February 10, 2017
    Publication date: June 1, 2017
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 9576190
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Grant
    Filed: March 18, 2015
    Date of Patent: February 21, 2017
    Assignee: Snap Inc.
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Publication number: 20170019633
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify image quality of background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring, changing a resolution, colors, or other parameters.
    Type: Application
    Filed: January 4, 2016
    Publication date: January 19, 2017
    Inventors: Victor Shaburov, Yurii Monastyrshyn
  • Patent number: 9232189
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify image quality of background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring, changing a resolution, colors, or other parameters.
    Type: Grant
    Filed: March 18, 2015
    Date of Patent: January 5, 2016
    Assignee: AVATAR MERGER SUB II, LLC.
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Publication number: 20150286858
    Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflect face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
    Type: Application
    Filed: March 18, 2015
    Publication date: October 8, 2015
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Publication number: 20150221136
    Abstract: Method for real time video processing for retouching an object in a video, comprising: providing an object in the video stream, the object being at least partially and at least occasionally presented in frames of the video; setting a degree of retouching; generating a list of at least one element of the object selected based on a request of retouching and the degree of retouching; detecting the at least one element of the object in the video and parameters of the at least one element; calculating new parameters of the at least one element according to the degree of retouching; detecting characteristic points for each of the at least one element of the object; generating a mesh based on the characteristic points for each of the at least one element of the object; tracking the at least one element of the object in the video, wherein the tracking comprises aligning the mesh for each of the at least one element with a position of the corresponding each of the at least one element; and transforming the frames of the video.
    Type: Application
    Filed: June 25, 2014
    Publication date: August 6, 2015
    Inventors: Elena Shaburova, Victor Shaburov
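The "degree of retouching" calculation above (new element parameters computed from the detected parameters and the chosen degree) can be sketched as a simple interpolation. The linear rule and the parameter names are assumptions for illustration:

```python
# Sketch: a new parameter value for a retouched element (e.g. skin
# brightness) is interpolated between the detected value and a target
# value according to the degree of retouching. Linear blending is an
# assumption, not taken from the patent.
def retouch_parameter(detected, target, degree):
    """degree = 0.0 leaves the element unchanged; 1.0 applies full retouching."""
    if not 0.0 <= degree <= 1.0:
        raise ValueError("degree must be in [0, 1]")
    return detected + degree * (target - detected)

print(retouch_parameter(detected=120, target=180, degree=0.5))  # prints 150.0
```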
  • Publication number: 20150221338
    Abstract: A computer implemented method of triggering events in a video, the method comprising: providing a list of objects with their states and corresponding events in video such that each state from the list triggers at least one event of the corresponding events, wherein each object from the list has at least one state triggering at least one event of the corresponding events from the list in video; detecting at least one object from the list that at least partially and at least occasionally is presented in frames of the video; tracking the at least one object and its state; triggering at least one event of the corresponding events from the list in video in case the state of the at least one object matches with one of its states from the list.
    Type: Application
    Filed: June 25, 2014
    Publication date: August 6, 2015
    Inventors: Elena Shaburova, Victor Shaburov
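The state-to-event mapping this abstract claims (each listed object state triggers its corresponding events when the tracked object matches that state) can be sketched as a lookup table. The object names, states, and events are invented for illustration:

```python
# Minimal sketch of event triggering: a list maps (object, state) pairs to
# the events they trigger; matches in the current frame fire those events.
# All names below are illustrative.
TRIGGERS = {
    ("mouth", "open"):  ["play_sound"],
    ("eyes", "closed"): ["pause_video", "show_hint"],
}

def triggered_events(tracked):
    """tracked: list of (object, state) pairs observed in the current frame."""
    events = []
    for obj_state in tracked:
        events.extend(TRIGGERS.get(obj_state, []))
    return events

print(triggered_events([("mouth", "open"), ("eyes", "open")]))
# prints ['play_sound']
```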
  • Publication number: 20150195491
    Abstract: Methods and systems for real-time video processing can be used in video conferencing to modify image quality of background. One example method includes the steps of receiving a video including a sequence of images, identifying at least one object of interest (e.g., a face) in one or more of the images, detecting feature reference points of the at least one object of interest, and tracking the at least one object of interest in the video. The tracking may comprise aligning a virtual face mesh to the at least one object of interest in one or more of the images. Further, a background is identified in the images by separating the at least one object of interest from each image based on the virtual face mesh. The background is then modified in each of the images by blurring, changing a resolution, colors, or other parameters.
    Type: Application
    Filed: March 18, 2015
    Publication date: July 9, 2015
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Publication number: 20150193718
    Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
    Type: Application
    Filed: March 23, 2015
    Publication date: July 9, 2015
    Inventors: Victor Shaburov, Yurii Monastyrshin
  • Patent number: 8689174
    Abstract: Methods and apparatus, including computer program products, related to extensibility of pattern components in a visual modeling language environment. A pattern component may implement an interface, the pattern component may be received (e.g., by a compiler), and a determination may be made as to whether components of the interface are implemented by the pattern component. If so, a reference to the interface is bound to the pattern component (e.g., a function call referencing a function signature of an interface may be substituted with a call to a function having the same name of the pattern component). A role may be assigned to a pattern component of a visual modeling environment of an application development system and a behavior may be performed based on the role assigned to the pattern component.
    Type: Grant
    Filed: December 28, 2006
    Date of Patent: April 1, 2014
    Assignee: SAP AG
    Inventors: Victor Shaburov, Ulf Fildebrandt, Markus Cherdron, Vinay Nath Penmatsa, Rachel Ebner, Frank Seeger, Peter Giese
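The interface-binding check above (accept a pattern component only if it implements every member of the interface, then dispatch interface calls to the component's same-named functions) can be sketched in duck-typed form. The interface members and component are invented; the patent concerns a visual modeling environment, not this Python idiom:

```python
# Hedged sketch: a pattern component is bound to an interface only if it
# implements all interface members; afterwards, calls by interface name
# resolve to the component's function of the same name.
# Interface and component names are illustrative.
INTERFACE = ["render", "get_title"]

class HeaderPattern:
    def render(self):
        return "<header/>"
    def get_title(self):
        return "Header"

def bind(component, interface=INTERFACE):
    missing = [m for m in interface if not callable(getattr(component, m, None))]
    if missing:
        raise TypeError(f"component does not implement: {missing}")
    return component

bound = bind(HeaderPattern())
print(bound.render())  # prints "<header/>"
```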
  • Patent number: 8312382
    Abstract: Methods and apparatus, including computer program products, for developing user interface applications using configurable patterns and for executing such pattern-based applications. The applications can be developed by generating a graphic representation of a pattern, which can include multiple pattern elements having prearranged user interface elements. The pattern can specify predefined actions that can be performed using the user interface elements, and the graphic representation can include graphic objects corresponding to the pattern elements. Application development can further include receiving user input identifying a selected graphic object and modifying the graphic representation to display information regarding the pattern element corresponding to the selected graphic object.
    Type: Grant
    Filed: May 11, 2004
    Date of Patent: November 13, 2012
    Assignee: SAP AG
    Inventors: Yuval Gilboa, Frank Stienhans, Gennady Shumakher, Peter Giese, Victor Shaburov, Adi Kavaler, Vinay Nath Penmatsa
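The configurable-pattern idea above (a pattern prearranges UI elements with predefined actions, and selecting a graphic object surfaces details of the corresponding pattern element) can be sketched with a small data model. The pattern contents and display format are illustrative assumptions:

```python
# Sketch: a UI pattern maps pattern elements to prearranged UI elements and
# their predefined actions; selecting an object shows its element's details.
# The "search" pattern below is invented for illustration.
SEARCH_PATTERN = {
    "search_bar":  {"element": "TextInput", "actions": ["submit_query"]},
    "result_list": {"element": "Table",     "actions": ["sort", "open_row"]},
}

def describe_selection(pattern, selected):
    """Return display info for the pattern element behind a selected object."""
    info = pattern.get(selected)
    if info is None:
        return f"{selected}: not part of this pattern"
    return f"{selected}: {info['element']} (actions: {', '.join(info['actions'])})"

print(describe_selection(SEARCH_PATTERN, "search_bar"))
# prints "search_bar: TextInput (actions: submit_query)"
```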