Patents by Inventor Evgeny Zatepyakin

Evgeny Zatepyakin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11113859
    Abstract: Disclosed herein are a system, a method, and a non-transitory computer readable medium for rendering a three-dimensional (3D) model of an avatar according to an audio stream including a vocal output of a person and image data capturing a face of the person. In one aspect, phonemes of the vocal output are predicted according to the audio stream, and the predicted phonemes of the vocal output are translated into visemes. In one aspect, a plurality of blendshapes and corresponding weights are determined, according to the corresponding image data of the face, to form the 3D model of the avatar of the person. The visemes may be combined with the 3D model of the avatar to form a 3D representation of the avatar, by synchronizing the visemes with the 3D model of the avatar in time.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: September 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Tong Xiao, Sidi Fu, Mengqian Liu, Peihong Guo, Shu Liang, Evgeny Zatepyakin
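    Example sketch (not from the patent): a minimal Python illustration of the pipeline the abstract describes, translating timed phonemes into visemes and synchronizing them with image-derived blendshape frames; the viseme table, data shapes, and function names are assumptions.
    ```python
    # Hypothetical sketch of the phoneme -> viseme -> avatar synchronization flow;
    # the lookup table and data layout are assumptions, not details from the patent.
    from dataclasses import dataclass

    # Toy phoneme-to-viseme lookup (real systems use a much larger table).
    PHONEME_TO_VISEME = {"AA": "open", "M": "closed", "F": "lip_bite", "IY": "wide"}

    @dataclass
    class Frame:
        time_s: float               # timestamp of the video frame
        blend_weights: dict         # blendshape name -> weight from image data
        viseme: str = "neutral"     # viseme derived from the audio stream

    def phonemes_to_visemes(timed_phonemes):
        """Translate (time, phoneme) pairs into (time, viseme) pairs."""
        return [(t, PHONEME_TO_VISEME.get(p, "neutral")) for t, p in timed_phonemes]

    def synchronize(frames, timed_visemes):
        """Attach the temporally closest viseme to each image-derived frame."""
        for frame in frames:
            frame.viseme = min(timed_visemes, key=lambda tv: abs(tv[0] - frame.time_s))[1]
        return frames

    if __name__ == "__main__":
        visemes = phonemes_to_visemes([(0.00, "M"), (0.12, "AA"), (0.25, "IY")])
        frames = [Frame(0.0, {"jawOpen": 0.1}), Frame(0.1, {"jawOpen": 0.6})]
        for f in synchronize(frames, visemes):
            print(f.time_s, f.viseme, f.blend_weights)
    ```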
  • Patent number: 10810779
    Abstract: Exemplary embodiments relate to the application of media effects, such as facial mask overlays, to visual data (such as a video or photo). Publicly available images may be found and mapped to a mask. In the mapping process, a user may type in the name of a celebrity or public figure, and a system may perform a public image search. In some embodiments, candidate images may be filtered in order to remove images unsuitable for use in masks. Typically, only a single forward-facing image is required for mapping. However, multiple images may be used to provide different angles and allow the user to turn their head while the mask is applied. Mask generation may involve: extracting facial features from the image; mapping the facial features to the user's video; blending/recoloring either or both of the image and the person's face; and applying the mask in real time, on the fly.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: October 20, 2020
    Assignee: Facebook, Inc.
    Inventors: Evgeny Zatepyakin, Yauheni Neuhen
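    Example sketch (not from the patent): a minimal Python outline of the mask pipeline the abstract describes, mapping landmarks from a source image onto the viewer's face using the eye centers as anchors and blending/recoloring pixels; the anchor choice, helper names, and values are assumptions.
    ```python
    # Hypothetical outline of the mask-generation steps: warp source-image
    # landmarks onto the detected face, then blend colors per pixel.

    def similarity_map(src_eyes, dst_eyes):
        """Return a function mapping source-image points onto the video frame,
        using the two eye centers as anchors (uniform scale plus translation)."""
        (sx1, sy1), (sx2, sy2) = src_eyes
        (dx1, dy1), (dx2, dy2) = dst_eyes
        src_len = ((sx2 - sx1) ** 2 + (sy2 - sy1) ** 2) ** 0.5
        dst_len = ((dx2 - dx1) ** 2 + (dy2 - dy1) ** 2) ** 0.5
        scale = dst_len / src_len

        def apply(point):
            return (dx1 + (point[0] - sx1) * scale, dy1 + (point[1] - sy1) * scale)
        return apply

    def alpha_blend(mask_rgb, face_rgb, alpha=0.6):
        """Recolor step: blend a mask pixel over the underlying face pixel."""
        return tuple(round(alpha * m + (1 - alpha) * f) for m, f in zip(mask_rgb, face_rgb))

    if __name__ == "__main__":
        warp = similarity_map(src_eyes=[(30, 40), (70, 40)], dst_eyes=[(120, 160), (200, 160)])
        print(warp((50, 60)))                        # mask point mapped onto the frame
        print(alpha_blend((255, 0, 0), (90, 80, 70)))
    ```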
  • Patent number: 10599916
    Abstract: Exemplary embodiments relate to applications for facial recognition technology and facial overlays to provide gesture-based music track generation. Facial detection technology may be used to analyze a video, to detect a face, and to track the face as a whole (and/or individual features of the face). The features may include, e.g., the location of the mouth, the direction of the eyes, whether the user is blinking, the location of the head in three-dimensional space, the movement of the head, etc. Expressions and emotions may also be tracked. Features/expressions/emotions meeting certain conditions may trigger an event, where events may cause a predetermined musical element to play (e.g., drum beat, piano note, guitar chord, etc.). The sum total of the musical elements played may result in the creation of a musical track. The application of events may be balanced based on musical metrics in order to provide a fluent sound.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: March 24, 2020
    Assignee: Facebook, Inc.
    Inventors: Evgeny Zatepyakin, Yauheni Neuhen
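    Example sketch (not from the patent): a minimal Python illustration of the gesture-to-music idea in the abstract, where facial-feature observations that satisfy a condition trigger beat-quantized musical events; the rules, thresholds, and sounds are assumptions.
    ```python
    # Hypothetical mapping from facial-feature observations to musical events.
    # Feature names, thresholds, and sounds are illustrative assumptions.
    RULES = [
        ("mouth_open", lambda v: v > 0.5, "kick_drum"),
        ("blink",      lambda v: v == 1, "hi_hat"),
        ("head_tilt",  lambda v: abs(v) > 15, "piano_chord"),
    ]

    def events_from_features(timed_features, bpm=120):
        """Turn (time, {feature: value}) observations into beat-aligned events."""
        beat = 60.0 / bpm
        events = []
        for t, feats in timed_features:
            for name, condition, sound in RULES:
                if name in feats and condition(feats[name]):
                    quantized = round(t / beat) * beat   # snap to the nearest beat
                    events.append((quantized, sound))
        return sorted(set(events))

    if __name__ == "__main__":
        observations = [
            (0.02, {"mouth_open": 0.7}),
            (0.48, {"blink": 1}),
            (1.03, {"head_tilt": 20.0, "mouth_open": 0.1}),
        ]
        for t, sound in events_from_features(observations):
            print(f"{t:.2f}s -> {sound}")
    ```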
  • Patent number: 10332312
    Abstract: A face tracking system generates a model for extracting a set of facial anchor points on a face within a portion of a face image based on a multiple-level cascade of decision trees. The face tracking system identifies a mesh shape adjusted to an image of a face. For each decision tree, the face tracking system identifies an adjustment vector for the mesh shape relative to the image of the face. For each cascade level, the face tracking system combines the adjustment vectors identified for the decision trees to determine a combined adjustment vector for the cascade level. The face tracking system modifies the adjustment of the mesh shape to the face in the image based on the combined adjustment vector. The face tracking system reduces the model to dictionary atoms and atom weights using a learned dictionary, so the model may be more easily transmitted to devices and stored on devices.
    Type: Grant
    Filed: December 25, 2016
    Date of Patent: June 25, 2019
    Assignee: Facebook, Inc.
    Inventor: Evgeny Zatepyakin
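    Example sketch (not from the patent): a minimal Python illustration of the cascaded-regression and dictionary-compression ideas in the abstract; the tiny cascade and dictionary here are assumptions, not the patented model.
    ```python
    # Hypothetical sketch: each cascade level sums the adjustment vectors proposed
    # by its decision trees and shifts the mesh; the trained model is stored as
    # dictionary atoms plus per-atom weights.  All data below is made up.

    def apply_cascade(mesh, cascade):
        """mesh: list of (x, y) anchor points; cascade: levels of per-tree offsets."""
        for level in cascade:
            # Combine the adjustment vector proposed by every tree at this level.
            combined = [(sum(tree[i][0] for tree in level), sum(tree[i][1] for tree in level))
                        for i in range(len(mesh))]
            mesh = [(x + dx, y + dy) for (x, y), (dx, dy) in zip(mesh, combined)]
        return mesh

    def reconstruct(atoms, weights):
        """Rebuild a model vector from learned dictionary atoms and atom weights."""
        return [sum(w * a[i] for w, a in zip(weights, atoms)) for i in range(len(atoms[0]))]

    if __name__ == "__main__":
        mesh = [(10.0, 10.0), (20.0, 10.0)]
        cascade = [  # two levels, each with two trees proposing per-point offsets
            [[(1.0, 0.0), (0.0, 1.0)], [(0.5, 0.0), (0.0, 0.5)]],
            [[(0.1, 0.1), (0.1, 0.1)], [(0.0, 0.2), (0.2, 0.0)]],
        ]
        print(apply_cascade(mesh, cascade))
        print(reconstruct(atoms=[[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]], weights=[0.5, 0.25]))
    ```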
  • Publication number: 20190180490
    Abstract: Exemplary embodiments relate to the application of media effects, such as facial mask overlays, to visual data (such as a video or photo). Publicly available images may be found and mapped to a mask. In the mapping process, a user may type in the name of a celebrity or public figure, and a system may perform a public image search. In some embodiments, candidate images may be filtered in order to remove images unsuitable for use in masks. Typically, only a single forward-facing image is required for mapping. However, multiple images may be used to provide different angles and allow the user to turn their head while the mask is applied. Mask generation may involve: extracting facial features from the image; mapping the facial features to the user's video; blending/recoloring either or both of the image and the person's face; and applying the mask in real time, on the fly.
    Type: Application
    Filed: December 7, 2017
    Publication date: June 13, 2019
    Inventors: Evgeny Zatepyakin, Yauheni Neuhen
  • Publication number: 20190147229
    Abstract: Exemplary embodiments relate to applications for facial recognition technology and facial overlays to provide gesture-based music track generation. Facial detection technology may be used to analyze a video, to detect a face, and to track the face as a whole (and/or individual features of the face). The features may include, e.g., the location of the mouth, the direction of the eyes, whether the user is blinking, the location of the head in three-dimensional space, the movement of the head, etc. Expressions and emotions may also be tracked. Features/expressions/emotions meeting certain conditions may trigger an event, where events may cause a predetermined musical element to play (e.g., drum beat, piano note, guitar chord, etc.). The sum total of the musical elements played may result in the creation of a musical track. The application of events may be balanced based on musical metrics in order to provide a fluent sound.
    Type: Application
    Filed: November 13, 2017
    Publication date: May 16, 2019
    Inventors: Evgeny Zatepyakin, Yauheni Neuhen
  • Publication number: 20190147841
    Abstract: Exemplary embodiments relate to applications for facial detection technology and facial overlays to provide a karaoke experience. For example, an identifier associated with a celebrity or singer may be mapped to an image or facial overlay, and to a set of predefined music tracks configured for karaoke. In some embodiments, the music tracks may include metadata with lyrics or other karaoke information. The music tracks may also be mapped to media elements, which may be interactive. The karaoke experience may be gamified, such as by performing a sound analysis to determine how close a user's performance is to the lyrics or pitch of the original singer. The song may be performed in a live video, and a leaderboard may be used to track performance across multiple users. The leaderboard score for each user may be partially based on engagement of a user base with the live broadcast.
    Type: Application
    Filed: November 13, 2017
    Publication date: May 16, 2019
    Inventors: Evgeny Zatepyakin, Yauheni Neuhen
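    Example sketch (not from the patent): a minimal Python illustration of the scoring idea in the abstract, comparing a sung pitch track against a reference and blending the result with broadcast engagement for a leaderboard entry; the tolerance and weighting values are assumptions.
    ```python
    # Hypothetical karaoke scoring: measure how close the user's pitch track is
    # to the original singer's, then fold in live-broadcast engagement.

    def pitch_score(user_hz, reference_hz, tolerance_hz=25.0):
        """Fraction of frames where the sung pitch is within tolerance of the reference."""
        hits = sum(abs(u - r) <= tolerance_hz for u, r in zip(user_hz, reference_hz))
        return hits / max(len(reference_hz), 1)

    def leaderboard_score(performance, engagement, engagement_weight=0.3):
        """Blend the karaoke performance score with audience engagement (both in [0, 1])."""
        return (1 - engagement_weight) * performance + engagement_weight * engagement

    if __name__ == "__main__":
        user = [220, 225, 250, 330, 440]   # sung pitch per frame, in Hz
        ref  = [220, 220, 247, 330, 392]   # original singer's pitch per frame
        perf = pitch_score(user, ref)
        print(perf, leaderboard_score(perf, engagement=0.8))
    ```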
  • Patent number: 10019651
    Abstract: A face tracking system generates a model for extracting a set of facial anchor points on a face within a portion of a face image based on a multiple-level cascade of decision trees. The face tracking system identifies a mesh shape adjusted to an image of a face. For each decision tree, the face tracking system identifies an adjustment vector for the mesh shape relative to the image of the face. For each cascade level, the face tracking system combines the adjustment vectors identified for the decision trees to determine a combined adjustment vector for the cascade level. The face tracking system modifies the adjustment of the mesh shape to the face in the image based on the combined adjustment vector. The face tracking system reduces the model to dictionary atoms and atom weights using a learned dictionary, so the model may be more easily transmitted to devices and stored on devices.
    Type: Grant
    Filed: December 25, 2016
    Date of Patent: July 10, 2018
    Assignee: Facebook, Inc.
    Inventor: Evgeny Zatepyakin
  • Publication number: 20180182165
    Abstract: A face tracking system generates a model for extracting a set of facial anchor points on a face within a portion of a face image based on a multiple-level cascade of decision trees. The face tracking system identifies a mesh shape adjusted to an image of a face. For each decision tree, the face tracking system identifies an adjustment vector for the mesh shape relative to the image of the face. For each cascade level, the face tracking system combines the adjustment vectors identified for the decision trees to determine a combined adjustment vector for the cascade level. The face tracking system modifies the adjustment of the mesh shape to the face in the image based on the combined adjustment vector. The face tracking system reduces the model to dictionary atoms and atom weights using a learned dictionary, so the model may be more easily transmitted to devices and stored on devices.
    Type: Application
    Filed: December 25, 2016
    Publication date: June 28, 2018
    Inventor: Evgeny Zatepyakin
  • Publication number: 20180181840
    Abstract: A face tracking system generates a model for extracting a set of facial anchor points on a face within a portion of a face image based on a multiple-level cascade of decision trees. The face tracking system identifies a mesh shape adjusted to an image of a face. For each decision tree, the face tracking system identifies an adjustment vector for the mesh shape relative to the image of the face. For each cascade level, the face tracking system combines the adjustment vectors identified for the decision trees to determine a combined adjustment vector for the cascade level. The face tracking system modifies the adjustment of the mesh shape to the face in the image based on the combined adjustment vector. The face tracking system reduces the model to dictionary atoms and atom weights using a learned dictionary, so the model may be more easily transmitted to devices and stored on devices.
    Type: Application
    Filed: December 25, 2016
    Publication date: June 28, 2018
    Inventor: Evgeny Zatepyakin