Patents by Inventor Sergey Tulyakov

Sergey Tulyakov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220103860
    Abstract: Systems and methods herein describe a video compression system. The described systems and methods access a sequence of image frames from a first computing device, the sequence of image frames comprising a first image frame and a second image frame, detect a first set of keypoints for the first image frame, transmit the first image frame and the first set of keypoints to a second computing device, detect a second set of keypoints for the second image frame, transmit the second set of keypoints to the second computing device, and cause an animated image to be displayed on the second computing device.
    Type: Application
    Filed: September 30, 2021
    Publication date: March 31, 2022
    Inventors: Sergey Demyanov, Andrew Cheng-min Lin, Walton Lin, Aleksei Podkin, Aleksei Stoliar, Sergey Tulyakov
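The keypoint-based compression scheme described in the abstract can be sketched roughly as follows. Everything here is a toy stand-in: `detect_keypoints` is not a real detector, frames are flat lists of pixel values, and the receiver's "animation" is a crude warp of the one full frame it ever receives.

```python
# Sketch: send the full first frame once, then only keypoints per frame.

def detect_keypoints(frame):
    """Toy detector: positions of the 3 brightest pixels (stand-in only)."""
    flat = [(v, i) for i, v in enumerate(frame)]
    flat.sort(reverse=True)
    return [i for _, i in flat[:3]]

def sender(frames):
    """Yield messages: full first frame plus keypoints, then keypoints only."""
    first = frames[0]
    yield {"frame": first, "keypoints": detect_keypoints(first)}
    for frame in frames[1:]:
        yield {"keypoints": detect_keypoints(frame)}

def receiver(messages):
    """Rebuild an 'animated' sequence by moving the reference frame's
    keypoint values onto each new keypoint layout."""
    messages = iter(messages)
    head = next(messages)
    reference, ref_kp = head["frame"], head["keypoints"]
    animated = [list(reference)]
    for msg in messages:
        frame = [0] * len(reference)
        for src, dst in zip(ref_kp, msg["keypoints"]):
            frame[dst] = reference[src]  # crude warp of reference content
        animated.append(frame)
    return animated

frames = [[9, 1, 8, 2, 7], [1, 9, 2, 8, 7]]
out = receiver(sender(frames))
```

The bandwidth saving comes from every frame after the first being only a keypoint list rather than full pixels.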
  • Publication number: 20220058880
    Abstract: A messaging system performs neural network hair rendering for images provided by users of the messaging system. A method of neural network hair rendering includes processing a three-dimensional (3D) model of fake hair and a first real hair image depicting a first person to generate a fake hair structure, and encoding, using a fake hair encoder neural subnetwork, the fake hair structure to generate a coded fake hair structure. The method further includes processing, using a cross-domain structure embedding neural subnetwork, the coded fake hair structure to generate a fake and real hair structure, and encoding, using an appearance encoder neural subnetwork, a second real hair image depicting a second person having a second head to generate an appearance map. The method further includes processing, using a real appearance renderer neural subnetwork, the appearance map and the fake and real hair structure to generate a synthesized real image.
    Type: Application
    Filed: August 20, 2021
    Publication date: February 24, 2022
    Inventors: Artem Bondich, Menglei Chai, Oleksandr Pyshchenko, Jian Ren, Sergey Tulyakov
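The module wiring in the hair-rendering abstract can be illustrated as a dataflow. Each function below is only a placeholder for the named neural subnetwork; the dictionary fields are invented for illustration.

```python
# Placeholder dataflow for the subnetworks named in the abstract.

def fake_hair_structure(model_3d, real_image_1):
    # stand-in for rendering the 3D hair model into image 1's pose
    return {"strands": model_3d["strands"], "pose": real_image_1["pose"]}

def fake_hair_encoder(structure):
    return ("coded", structure["strands"], structure["pose"])

def cross_domain_embedding(coded_structure):
    # maps the coded fake structure into a shared fake-and-real space
    return ("shared",) + coded_structure[1:]

def appearance_encoder(real_image_2):
    return {"color": real_image_2["color"]}

def real_appearance_renderer(appearance, shared_structure):
    return {"color": appearance["color"], "structure": shared_structure}

def render_hair(model_3d, img1, img2):
    structure = fake_hair_structure(model_3d, img1)
    shared = cross_domain_embedding(fake_hair_encoder(structure))
    return real_appearance_renderer(appearance_encoder(img2), shared)

out = render_hair({"strands": 5}, {"pose": "front"}, {"color": "brown"})
```

The point of the pipeline is that structure comes from the fake (3D) hair while appearance comes from a second real image.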
  • Publication number: 20210407163
    Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data, identify driving image data to modify image feature data in the source image data, generate, using an image transformation neural network, modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data, the image transformation neural network being trained to identify, for each image in the source image data, a driving image from the driving image data, the identified driving image being used by the image transformation neural network to modify a corresponding source image in the source image data using motion estimation differences between the identified driving image and the corresponding source image, and store the modified source image data.
    Type: Application
    Filed: June 30, 2021
    Publication date: December 30, 2021
    Inventors: Menglei Chai, Jian Ren, Aliaksandr Siarohin, Sergey Tulyakov, Oliver Woodford
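The core idea of animating a source image by the motion difference between it and a matched driving image can be reduced to a toy per-part example. The part names and the rigid per-part translation are invented; the real system estimates motion with a neural network.

```python
# Toy per-part motion transfer: apply driving-minus-source deltas.

def motion_difference(driving_parts, source_parts):
    return {p: (driving_parts[p][0] - source_parts[p][0],
                driving_parts[p][1] - source_parts[p][1])
            for p in source_parts}

def apply_motion(source_parts, deltas):
    return {p: (x + deltas[p][0], y + deltas[p][1])
            for p, (x, y) in source_parts.items()}

source = {"head": (0, 0), "arm": (1, 2)}
driving = {"head": (0, 1), "arm": (2, 2)}
modified = apply_motion(source, motion_difference(driving, source))
```

In this rigid toy case the modified source lands exactly on the driving pose; the patented method handles full images, not keypoint dictionaries.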
  • Publication number: 20210311618
    Abstract: A system of machine learning schemes can be configured to efficiently perform image processing tasks on a user device, such as a mobile phone. The system can selectively detect and transform individual regions within each frame of a live streaming video. The system can selectively partition and toggle image effects within the live streaming video.
    Type: Application
    Filed: June 22, 2021
    Publication date: October 7, 2021
    Inventors: Theresa Barton, Yanping Chen, Jaewook Chung, Christopher Yale Crutchfield, Aymeric Damien, Sergei Kotcur, Igor Kudriashov, Sergey Tulyakov, Andrew Wan, Emre Yamangil
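The selective per-region processing described in the abstract can be sketched with a trivial stand-in: detect regions in a frame, then apply an "effect" only to regions that are toggled on. The run-of-nonzero-pixels detector and the sign-flip effect are both invented for illustration.

```python
# Toy selective region detection and per-region effect toggling.

def detect_regions(frame):
    """Stand-in detector: a region is any run of nonzero pixels."""
    regions, start = [], None
    for i, v in enumerate(frame + [0]):
        if v and start is None:
            start = i
        elif not v and start is not None:
            regions.append((start, i))
            start = None
    return regions

def apply_effect(frame, regions, enabled):
    out = list(frame)
    for r in regions:
        if enabled.get(r, False):       # effects can be toggled per region
            for i in range(*r):
                out[i] = -out[i]        # placeholder "effect"
    return out

frame = [0, 3, 3, 0, 5]
regions = detect_regions(frame)
styled = apply_effect(frame, regions, {(1, 3): True})
```

Running this per frame of a stream, with the `enabled` map controlled by the user, mirrors the partition-and-toggle behavior the abstract describes.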
  • Publication number: 20210295020
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for synthesizing a realistic image with a new expression of a face in an input image by receiving an input image comprising a face having a first expression; obtaining a target expression for the face; and extracting a texture of the face and a shape of the face. The program and method further provide for generating, based on the extracted texture of the face, a target texture corresponding to the obtained target expression using a first machine learning technique; generating, based on the extracted shape of the face, a target shape corresponding to the obtained target expression using a second machine learning technique; and combining the generated target texture and generated target shape into an output image comprising the face having a second expression corresponding to the obtained target expression.
    Type: Application
    Filed: June 9, 2021
    Publication date: September 23, 2021
    Inventors: Chen Cao, Sergey Tulyakov, Zhenglin Geng
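The split into separately retargeted texture and shape can be shown as a dataflow sketch. The two `retarget_*` functions merely stand in for the two machine learning techniques named in the abstract, and the string concatenation is obviously not real synthesis.

```python
# Dataflow sketch: texture and shape are retargeted separately, then combined.

def extract(face_image):
    return face_image["texture"], face_image["shape"]

def retarget_texture(texture, target_expression):   # "first ML technique"
    return texture + "+" + target_expression

def retarget_shape(shape, target_expression):       # "second ML technique"
    return shape + "+" + target_expression

def synthesize(face_image, target_expression):
    texture, shape = extract(face_image)
    return {"texture": retarget_texture(texture, target_expression),
            "shape": retarget_shape(shape, target_expression)}

out = synthesize({"texture": "tex", "shape": "shp"}, "smile")
```

Decoupling texture from shape lets each model specialize, which is the structural point the abstract makes.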
  • Patent number: 11068141
    Abstract: A system of machine learning schemes can be configured to efficiently perform image processing tasks on a user device, such as a mobile phone. The system can selectively detect and transform individual regions within each frame of a live streaming video. The system can selectively partition and toggle image effects within the live streaming video.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: July 20, 2021
    Assignee: Snap Inc.
    Inventors: Theresa Barton, Yanping Chen, Jaewook Chung, Christopher Yale Crutchfield, Aymeric Damien, Sergei Kotcur, Igor Kudriashov, Sergey Tulyakov, Andrew Wan, Emre Yamangil
  • Patent number: 11055514
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for synthesizing a realistic image with a new expression of a face in an input image by receiving an input image comprising a face having a first expression; obtaining a target expression for the face; and extracting a texture of the face and a shape of the face. The program and method further provide for generating, based on the extracted texture of the face, a target texture corresponding to the obtained target expression using a first machine learning technique; generating, based on the extracted shape of the face, a target shape corresponding to the obtained target expression using a second machine learning technique; and combining the generated target texture and generated target shape into an output image comprising the face having a second expression corresponding to the obtained target expression.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: July 6, 2021
    Assignee: Snap Inc.
    Inventors: Chen Cao, Sergey Tulyakov, Zhenglin Geng
  • Publication number: 20210192198
    Abstract: A landmark detection system can more accurately detect landmarks in images using a detection scheme that penalizes for dispersion parameters, such as variance or scale. The landmark detection system can be trained using both labeled and unlabeled training data in a semi-supervised approach. The landmark detection system can further implement tracking of an object across multiple images using landmark data.
    Type: Application
    Filed: December 30, 2020
    Publication date: June 24, 2021
    Inventors: Sergey Tulyakov, Roman Furko, Aleksei Stoliar
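The "penalize for dispersion" idea can be illustrated with a one-landmark loss: alongside the localization error, the model pays for predicting a large variance. The Gaussian-negative-log-likelihood form and the weight below are assumptions for illustration, not the patented formulation.

```python
# Illustrative landmark loss with a dispersion (variance) penalty.
import math

def landmark_loss(pred_xy, true_xy, pred_var, weight=0.5):
    err = sum((p - t) ** 2 for p, t in zip(pred_xy, true_xy))
    dispersion_penalty = weight * math.log(pred_var)  # grows with variance
    return err / pred_var + dispersion_penalty

sharp = landmark_loss((10.0, 12.0), (10.0, 13.0), pred_var=1.0)
vague = landmark_loss((10.0, 12.0), (10.0, 13.0), pred_var=25.0)
```

Without the log penalty the model could shrink the error term arbitrarily by inflating its predicted variance; the penalty makes a confident (low-variance) prediction preferable here.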
  • Publication number: 20210182624
    Abstract: A compact generative neural network can be distilled from a teacher generative neural network using a training network. The compact network can be trained on the input data and output data of the teacher network. The training network trains the student network using a discrimination layer and one or more types of losses, such as perceptual loss and adversarial loss.
    Type: Application
    Filed: March 2, 2021
    Publication date: June 17, 2021
    Inventors: Sergey Tulyakov, Sergei Korolev, Aleksei Stoliar, Maksim Gusarov, Sergei Kotcur, Christopher Yale Crutchfield, Andrew Wan
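The distillation setup can be sketched in miniature: the compact student is fit on the teacher's input/output pairs. A real system would add the discrimination layer, perceptual loss, and adversarial loss; this toy version uses only a plain reconstruction loss, and both "networks" are single linear units.

```python
# Toy distillation: the student learns to imitate the teacher's outputs.

def teacher(x):                 # stand-in for the large generator
    return 2.0 * x + 1.0

def train_student(samples, lr=0.05, steps=2000):
    w, b = 0.0, 0.0             # compact student: one linear unit
    for _ in range(steps):
        for x in samples:
            y = teacher(x)              # supervision comes from the teacher
            pred = w * x + b
            grad = pred - y             # gradient of 0.5 * (pred - y) ** 2
            w -= lr * grad * x
            b -= lr * grad
    return w, b

w, b = train_student([0.0, 1.0, 2.0])
```

The student never sees ground-truth data, only the teacher's behavior, which is what makes the compact network cheap to obtain.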
  • Patent number: 10963748
    Abstract: A compact generative neural network can be distilled from a teacher generative neural network using a training network. The compact network can be trained on the input data and output data of the teacher network. The training network trains the student network using a discrimination layer and one or more types of losses, such as perceptual loss and adversarial loss.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: March 30, 2021
    Assignee: Snap Inc.
    Inventors: Sergey Tulyakov, Sergei Korolev, Aleksei Stoliar, Maksim Gusarov, Sergei Kotcur, Christopher Yale Crutchfield, Andrew Wan
  • Patent number: 10909357
    Abstract: A landmark detection system can more accurately detect landmarks in images using a detection scheme that penalizes for dispersion parameters, such as variance or scale. The landmark detection system can be trained using both labeled and unlabeled training data in a semi-supervised approach. The landmark detection system can further implement tracking of an object across multiple images using landmark data.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: February 2, 2021
    Assignee: Snap Inc.
    Inventors: Sergey Tulyakov, Roman Furko, Aleksei Stoliar
  • Publication number: 20200204822
    Abstract: A method, computer readable medium, and system are disclosed for action video generation. The method includes the steps of generating, by a recurrent neural network, a sequence of motion vectors from a first set of random variables and receiving, by a generator neural network, the sequence of motion vectors and a content vector sample. The sequence of motion vectors and the content vector sample are sampled by the generator neural network to produce a video clip.
    Type: Application
    Filed: March 6, 2020
    Publication date: June 25, 2020
    Inventors: Ming-Yu Liu, Xiaodong Yang, Jan Kautz, Sergey Tulyakov
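The decomposition in the abstract — a recurrent network maps random variables to motion vectors, and a generator combines each motion vector with a single content sample — can be sketched with trivial stand-ins for both networks.

```python
# Sketch: recurrent motion sequence + fixed content vector -> video clip.
import random

def recurrent_motion(randoms):
    h, motions = 0.0, []
    for z in randoms:           # toy RNN: a running state update
        h = 0.5 * h + z
        motions.append(h)
    return motions

def generator(motion, content):
    return [c + motion for c in content]    # one "frame"

rng = random.Random(0)
randoms = [rng.uniform(-1, 1) for _ in range(4)]
content = [1.0, 2.0]            # content vector sampled once per clip
clip = [generator(m, content) for m in recurrent_motion(randoms)]
```

Because content is sampled once while motion varies per step, the clip's identity stays fixed while its motion changes, which is the separation the abstract describes.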
  • Patent number: 10595039
    Abstract: A method, computer readable medium, and system are disclosed for action video generation. The method includes the steps of generating, by a recurrent neural network, a sequence of motion vectors from a first set of random variables and receiving, by a generator neural network, the sequence of motion vectors and a content vector sample. The sequence of motion vectors and the content vector sample are sampled by the generator neural network to produce a video clip.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: March 17, 2020
    Assignee: NVIDIA Corporation
    Inventors: Ming-Yu Liu, Xiaodong Yang, Jan Kautz, Sergey Tulyakov
  • Patent number: 10335045
    Abstract: Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured in face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, many issues still remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g., in the presence of spontaneous movements, facial expressions, or illumination changes). In contrast to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. The present approach, inspired by recent advances in matrix completion theory, allows us to predict the HR while simultaneously discovering the best regions of the face to use for estimation.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: July 2, 2019
    Assignees: Universita degli Studi di Trento, Fondazione Bruno Kessler, The Research Foundation for the State University of New York, University of Pittsburgh of the Commonwealth System of Higher Education
    Inventors: Niculae Sebe, Xavier Alameda-Pineda, Sergey Tulyakov, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn
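The region-selection idea can be illustrated with a toy scorer: each face region yields a color time series, and regions whose series is most periodic are kept. Scoring by lag-k autocorrelation is a deliberate simplification standing in for the paper's matrix-completion machinery; the region names and series are invented.

```python
# Toy dynamic region selection for pulse estimation.

def autocorr(series, lag):
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i - lag] - mean)
              for i in range(lag, n))
    den = sum((v - mean) ** 2 for v in series)
    return num / den if den else 0.0

def select_regions(region_series, lag=4, keep=1):
    """Keep the regions whose signal repeats most strongly at the given lag."""
    scored = sorted(region_series, key=lambda kv: -autocorr(kv[1], lag))
    return [name for name, _ in scored[:keep]]

cheek = [1, 0, -1, 0] * 4                           # pulse-like, period 4
forehead = [1, 0, -1, 0, -1, 0, 1, 0] * 2           # anti-phase at lag 4
best = select_regions([("cheek", cheek), ("forehead", forehead)])
```

A robust estimator would then compute HR only from the selected regions, ignoring areas corrupted by motion or lighting.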
  • Publication number: 20180288431
    Abstract: A method, computer readable medium, and system are disclosed for action video generation. The method includes the steps of generating, by a recurrent neural network, a sequence of motion vectors from a first set of random variables and receiving, by a generator neural network, the sequence of motion vectors and a content vector sample. The sequence of motion vectors and the content vector sample are sampled by the generator neural network to produce a video clip.
    Type: Application
    Filed: March 28, 2018
    Publication date: October 4, 2018
    Inventors: Ming-Yu Liu, Xiaodong Yang, Jan Kautz, Sergey Tulyakov
  • Publication number: 20170367590
    Abstract: Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured in face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, many issues still remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g., in the presence of spontaneous movements, facial expressions, or illumination changes). In contrast to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. The present approach, inspired by recent advances in matrix completion theory, allows us to predict the HR while simultaneously discovering the best regions of the face to use for estimation.
    Type: Application
    Filed: June 23, 2017
    Publication date: December 28, 2017
    Inventors: Niculae Sebe, Xavier Alameda-Pineda, Sergey Tulyakov, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn
  • Patent number: 8005277
    Abstract: A method and apparatus for obtaining, hashing, storing and using fingerprint data related to fingerprint minutia including the steps of: a) determining minutia points within a fingerprint, b) determining a plurality of sets of proximate determined minutia points, c) subjecting a plurality of representations of the determined sets of minutia points to a hashing function, and d) storing or comparing resulting hashed values for fingerprint matching.
    Type: Grant
    Filed: March 2, 2007
    Date of Patent: August 23, 2011
    Assignee: Research Foundation-State University of NY
    Inventors: Sergey Tulyakov, Faisal Farooq, Sharat Chikkerur, Venu Govindaraju
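The four steps in the abstract — find minutia points, group proximate ones into small sets, hash a representation of each set, and compare hash sets — can be sketched as follows. Using sorted pairwise distances (a translation- and rotation-invariant summary) as the hashed representation is an illustrative choice, not the patented one.

```python
# Sketch of minutiae set hashing and matching by hash intersection.
import hashlib
from itertools import combinations

def proximate_triplets(points, radius=5.0):
    """Yield sorted pairwise distances for each mutually close point trio."""
    for trio in combinations(points, 3):
        dists = sorted(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                       for (ax, ay), (bx, by) in combinations(trio, 2))
        if dists[-1] <= radius:
            yield dists         # invariant to translation and rotation

def fingerprint_hashes(points):
    return {hashlib.sha256(repr([round(d, 1) for d in trio]).encode())
            .hexdigest()
            for trio in proximate_triplets(points)}

enrolled = fingerprint_hashes([(0, 0), (3, 0), (0, 4), (40, 40)])
probe = fingerprint_hashes([(1, 1), (4, 1), (1, 5), (80, 0)])  # shifted copy
match = bool(enrolled & probe)
```

Storing only hashes means the raw minutia coordinates never need to leave the enrollment device, which is the privacy motivation behind hashing fingerprint data.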
  • Publication number: 20070253608
    Abstract: A method and apparatus for obtaining, hashing, storing and using fingerprint data related to fingerprint minutia including the steps of: a) determining minutia points within a fingerprint, b) determining a plurality of sets of proximate determined minutia points, c) subjecting a plurality of representations of the determined sets of minutia points to a hashing function, and d) storing or comparing resulting hashed values for fingerprint matching.
    Type: Application
    Filed: March 2, 2007
    Publication date: November 1, 2007
    Applicant: The Research Foundation of State University of New York STOR Intellectual Property Division
    Inventors: Sergey Tulyakov, Faisal Farooq, Sharat Chikkerur, Venu Govindaraju