Patents by Inventor Eric Erkon Hsin

Eric Erkon Hsin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11430102
    Abstract: A content analyzer determines whether various types of modification have been made to images. The content analyzer computes JPEG ghosts from the images, which are then concatenated with the image channels to generate a feature vector. The feature vector is provided as input to a neural network that determines whether the types of modification have been made to the image. The neural network may include a constrained convolution layer and several unconstrained convolution layers. An image fake model may also be applied to determine whether the image was generated using a computer model or algorithm.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: August 30, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Brian Dolhansky, Cristian Canton Ferrer, Eric Erkon Hsin
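The patent text does not include reference code, but the JPEG-ghost step described in the abstract above can be sketched roughly as follows: the image is recompressed at several JPEG quality factors, the per-pixel difference from the original is taken as a ghost map, and the ghost maps are stacked with the color channels to form the network input. The function names, quality factors, and normalization below are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the JPEG-ghost feature construction described in the
# abstract of US 11,430,102; names and quality factors are illustrative.
import io
import numpy as np
from PIL import Image

def jpeg_ghost_maps(image: Image.Image, qualities=(60, 70, 80, 90)) -> np.ndarray:
    """Recompress the image at several JPEG qualities and return per-quality
    difference maps; regions already compressed at a given quality tend to
    show a low difference (the "ghost")."""
    original = np.asarray(image.convert("RGB"), dtype=np.float32) / 255.0
    maps = []
    for q in qualities:
        buf = io.BytesIO()
        image.convert("RGB").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        recompressed = np.asarray(Image.open(buf), dtype=np.float32) / 255.0
        # Squared difference per pixel, averaged over the color channels.
        maps.append(((original - recompressed) ** 2).mean(axis=2))
    return np.stack(maps, axis=2)  # shape: H x W x len(qualities)

def build_feature_tensor(image: Image.Image) -> np.ndarray:
    """Concatenate the RGB channels with the ghost maps, as the abstract
    describes, to form the input for the classifier network."""
    rgb = np.asarray(image.convert("RGB"), dtype=np.float32) / 255.0
    return np.concatenate([rgb, jpeg_ghost_maps(image)], axis=2)
```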
  • Publication number: 20210141926
    Abstract: In one embodiment, a method includes accessing a first machine-learning model trained to generate a feature representation of an input data, a second machine-learning model trained to generate a desired result based on the feature representation, and a third machine-learning model trained to generate an undesired result based on the feature representation, and training a fourth machine-learning model by generating a secured feature representation by processing a first output of the first machine-learning model using the fourth machine-learning model, generating a second output and a third output by processing the secured feature representation using, respectively, the second and third machine-learning models, and updating the fourth machine-learning model according to an optimization function configured to optimize a correctness of the second output and an incorrectness of the third output.
    Type: Application
    Filed: February 13, 2020
    Publication date: May 13, 2021
    Inventors: Cristian Canton Ferrer, Brian Dolhansky, Hao Guo, Eric Erkon Hsin, Phong Dinh
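A minimal sketch of one training step for the "fourth" model described in this abstract might look like the following, assuming the first (featurizer), second (desired-task), and third (undesired-task) models are frozen and only the securing model is updated; the loss weighting, optimizer setup, and cross-entropy objectives are assumptions rather than details from the filing.

```python
# Hypothetical sketch of one training step for the securing ("fourth") model
# from publication 2021/0141926; the optimizer is assumed to hold only the
# securing model's parameters, and the adversarial weighting is illustrative.
import torch
import torch.nn.functional as F

def train_step(featurizer, desired_model, undesired_model, securing_model,
               optimizer, x, y_desired, y_undesired, adv_weight=1.0):
    with torch.no_grad():
        features = featurizer(x)              # first model: raw feature representation
    secured = securing_model(features)        # fourth model: secured representation
    desired_out = desired_model(secured)      # second model: should still work
    undesired_out = undesired_model(secured)  # third model: should be degraded
    # Reward correctness of the desired output, penalize correctness of the
    # undesired output, per the optimization function in the abstract.
    loss = (F.cross_entropy(desired_out, y_desired)
            - adv_weight * F.cross_entropy(undesired_out, y_undesired))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```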
  • Patent number: 10915663
    Abstract: Systems, methods, and non-transitory computer-readable media can be configured to train a featurizer based at least in part on a set of training data. The featurizer can be applied to at least one input to generate at least one tensor. The at least one tensor obfuscates or excludes at least one feature in the at least one input.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: February 9, 2021
    Assignee: Facebook, Inc.
    Inventors: Cristian Canton Ferrer, Brian Dolhansky, Phong Dinh, Bryan Wu, Zhen Ling Tsai, Eric Erkon Hsin
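At inference time, a featurizer trained this way would presumably be applied so that only the obfuscated tensor, rather than the raw input, leaves the device; a minimal sketch of that step, with an assumed PyTorch interface, is below.

```python
# Hypothetical sketch of applying a trained featurizer from US 10,915,663;
# the featurizer object and its interface are assumptions, not from the patent.
import torch

@torch.no_grad()
def featurize_for_upload(featurizer: torch.nn.Module, raw_input: torch.Tensor) -> torch.Tensor:
    """Return the tensor with sensitive features obfuscated or excluded;
    this tensor, not raw_input, is what would be transmitted."""
    featurizer.eval()
    return featurizer(raw_input)
```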
  • Patent number: 10810725
    Abstract: A content analyzer determines whether various types of modification have been made to images. The content analyzer computes JPEG ghosts from the images, which are then concatenated with the image channels to generate a feature vector. The feature vector is provided as input to a neural network that determines whether the types of modification have been made to the image. The neural network may include a constrained convolution layer and several unconstrained convolution layers. An image fake model may also be applied to determine whether the image was generated using a computer model or algorithm.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: October 20, 2020
    Assignee: Facebook, Inc.
    Inventors: Brian Dolhansky, Cristian Canton Ferrer, Eric Erkon Hsin
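This abstract, like the related grant above, mentions a constrained convolution layer. One common formulation of such a layer in image-forensics networks projects each kernel onto a prediction-error filter (center tap fixed at -1, surrounding taps summing to 1); the PyTorch sketch below uses that formulation as an assumption, since the patent listing does not spell out the exact constraint.

```python
# Hypothetical sketch of a constrained first convolution layer of the kind the
# abstract mentions, using the common prediction-error formulation; the actual
# constraint in the patent may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrainedConv2d(nn.Conv2d):
    """Convolution whose kernels are projected, at every forward pass, onto
    prediction-error filters: center tap -1, surrounding taps summing to 1."""
    def forward(self, x):
        k_h, k_w = self.kernel_size
        center = torch.zeros_like(self.weight)
        center[:, :, k_h // 2, k_w // 2] = 1.0
        w = self.weight * (1.0 - center)                             # zero the center tap
        w = w / w.sum(dim=(2, 3), keepdim=True).clamp_min(1e-8)      # surround sums to 1
        w = w - center                                                # fix the center tap at -1
        return F.conv2d(x, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)
```

A layer like this would sit in front of the several unconstrained convolution layers the abstract refers to.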
  • Patent number: 10275856
    Abstract: In one embodiment, a method includes receiving at least two images captured by one or more cameras, wherein a first image of the at least two images has a subject and a second image of the at least two images comprises a perspective of the geographic location that is different from that of the first image; identifying an object that is common to the at least two images; computing a difference in perspective between the images that is based on a difference in size and shape between the object in the first image and the object in the second image; generating, based on the difference in perspective, an animation of a transition from the first image to the second image, wherein the animation comprises both the first image and the second image, and wherein the animation adds a modified version of the subject to the second image.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: April 30, 2019
    Assignee: Facebook, Inc.
    Inventors: Alexis Hope Gottlieb, Daniel Joshua Steinbock, Siyin Yang, Clark Scheff, Sridhar Rao, Alexander Charles Granieri, Francislav Penov, Upendra Shardanand, Eric Erkon Hsin
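The abstract describes computing the perspective difference from an object common to the two views and animating the transition between them. One assumed way to realize that idea is to match keypoints on the shared content, estimate a homography, and interpolate it across frames; the OpenCV sketch below follows that route, which the patent does not prescribe (it frames the difference in terms of the object's size and shape).

```python
# Hypothetical sketch of the perspective-transition idea in US 10,275,856,
# using ORB matching and a homography; this is one possible formulation only.
import cv2
import numpy as np

def transition_frames(img_a, img_b, steps=30):
    """Estimate a homography between the two views from keypoints matched on
    shared content, then interpolate it to warp img_a toward img_b frame by frame."""
    orb = cv2.ORB_create(1000)
    ka, da = orb.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kb, db = orb.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_b.shape[:2]
    identity = np.eye(3, dtype=np.float64)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        # Linear interpolation of the homography is a crude stand-in for the
        # animated perspective change described in the abstract.
        H_t = (1.0 - t) * identity + t * H
        warped = cv2.warpPerspective(img_a, H_t, (w, h))
        frames.append(cv2.addWeighted(warped, 1.0 - t, img_b, t, 0))
    return frames
```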
  • Publication number: 20190043166
    Abstract: In one embodiment, a method includes receiving at least two images captured by one or more cameras, wherein a first image of the at least two images has a subject and a second image of the at least two images comprises a perspective of the geographic location that is different from that of the first image; identifying an object that is common to the at least two images; computing a difference in perspective between the images that is based on a difference in size and shape between the object in the first image and the object in the second image; generating, based on the difference in perspective, an animation of a transition from the first image to the second image, wherein the animation comprises both the first image and the second image, and wherein the animation adds a modified version of the subject to the second image.
    Type: Application
    Filed: August 3, 2017
    Publication date: February 7, 2019
    Inventors: Alexis Hope Gottlieb, Daniel Joshua Steinbock, Siyin Yang, Clark Scheff, Sridhar Rao, Alexander Charles Granieri, Francislav Penov, Upendra Shardanand, Eric Erkon Hsin
  • Publication number: 20190043241
    Abstract: In one embodiment, a method includes receiving an image from a client system associated with a user of an online social network; detecting that a content item depicted in the image is located within a media space; selecting an animation template from a plurality of animation templates to apply to the image, wherein the selection of the animation template is based on the detected content item or the media space; generating an animation based on the selected animation template and an image of the user; and sending, to the client system, instructions to display the animation on the client system associated with the user.
    Type: Application
    Filed: August 3, 2017
    Publication date: February 7, 2019
    Inventors: Clark Scheff, Daniel Steinbock, Siyin Yang, Alexander Charles Granieri, Sridhar Rao, Upendra Shardanand, Eric Erkon Hsin
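The selection step in this abstract, choosing an animation template based on what was detected in the received image, can be sketched with an assumed template registry as follows; the data structures and the render interface are illustrative, not APIs from the filing.

```python
# Hypothetical sketch of the template-selection step in publication 2019/0043241;
# labels, registry, and rendering callable are assumed interfaces.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class AnimationTemplate:
    name: str
    applies_to: List[str]               # content-item or media-space labels it matches
    render: Callable[[bytes], bytes]    # takes the user's image, returns an animation payload

def select_template(detected_labels: List[str],
                    templates: Dict[str, AnimationTemplate]) -> Optional[AnimationTemplate]:
    """Return the first registered template whose labels overlap the labels
    detected in the received image, mirroring the selection step in the abstract."""
    for template in templates.values():
        if any(label in template.applies_to for label in detected_labels):
            return template
    return None
```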