Patents by Inventor Kihyuk Sohn

Kihyuk Sohn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180307947
    Abstract: A system is provided for unsupervised cross-domain image generation relative to a first and second image domain that each include real images. A first generator generates synthetic images similar to real images in the second domain while including a semantic content of real images in the first domain. A second generator generates synthetic images similar to real images in the first domain while including a semantic content of real images in the second domain. A first discriminator discriminates real images in the first domain against synthetic images generated by the second generator. A second discriminator discriminates real images in the second domain against synthetic images generated by the first generator. The discriminators and generators are deep neural networks and respectively form a generative network and a discriminative network in a cyclic GAN framework configured to increase an error rate of the discriminative network to improve synthetic image quality.
    Type: Application
    Filed: February 27, 2018
    Publication date: October 25, 2018
    Inventors: Wongun Choi, Samuel Schulter, Kihyuk Sohn, Manmohan Chandraker
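    Illustrative sketch: the abstract describes a cyclic GAN with two generators and two discriminators, trained adversarially with a cycle-consistency constraint so that translated images keep their semantic content. Below is a minimal, hedged sketch of such a generator-side objective; the module names (G_ab, G_ba, D_a, D_b), the L1 cycle term, and the weighting are illustrative assumptions, not the patent's claimed implementation.
    ```python
    import torch
    import torch.nn.functional as F

    def cyclic_gan_generator_loss(G_ab, G_ba, D_a, D_b, real_a, real_b, cycle_weight=10.0):
        """Generator-side objective for a two-domain cyclic GAN.

        G_ab/G_ba translate images between domains A and B; D_a/D_b are the
        per-domain discriminators (all assumed to be nn.Module callables).
        """
        fake_b = G_ab(real_a)   # should look like domain B while keeping A's content
        fake_a = G_ba(real_b)   # should look like domain A while keeping B's content

        # Adversarial terms: the generators try to raise the discriminators'
        # error rate by making synthetic images score as "real".
        pred_b = D_b(fake_b)
        pred_a = D_a(fake_a)
        adv = (F.binary_cross_entropy_with_logits(pred_b, torch.ones_like(pred_b))
               + F.binary_cross_entropy_with_logits(pred_a, torch.ones_like(pred_a)))

        # Cycle-consistency terms: A -> B -> A (and B -> A -> B) should
        # reconstruct the input, preserving its semantic content.
        cyc = F.l1_loss(G_ba(fake_b), real_a) + F.l1_loss(G_ab(fake_a), real_b)

        return adv + cycle_weight * cyc
    ```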
  • Publication number: 20180268201
    Abstract: A face recognition system is provided. The system includes a device configured to capture an input image of a subject. The system further includes a processor. The processor estimates, using a 3D Morphable Model (3DMM) conditioned Generative Adversarial Network, 3DMM coefficients for the subject of the input image. The subject varies from an ideal front pose. The processor produces, using an image generator, a synthetic frontal face image of the subject of the input image based on the input image and the 3DMM coefficients. An area spanning the frontal face of the subject is made larger in the synthetic image than in the input image. The processor provides, using a discriminator, a decision indicative of whether the subject of the synthetic image is an actual person. The processor provides, using a face recognition engine, an identity of the subject in the input image based on the synthetic and input images.
    Type: Application
    Filed: February 5, 2018
    Publication date: September 20, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
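    Illustrative sketch: the abstract implies a pipeline that estimates 3DMM coefficients, conditions a generator on them to synthesize a frontal face, and then recognizes the identity from both the input and synthetic images. The sketch below shows only that data flow; the module names (coeff_estimator, frontalizer, recognizer) and the gallery-matching step are hypothetical placeholders, not the patented method.
    ```python
    import torch
    import torch.nn.functional as F

    def recognize_identity(input_image, coeff_estimator, frontalizer, recognizer, gallery):
        """input_image: (1, C, H, W) tensor; gallery: dict of identity -> fused
        feature vector built with the same recognizer pipeline."""
        coeffs = coeff_estimator(input_image)        # estimated 3DMM coefficients
        frontal = frontalizer(input_image, coeffs)   # synthetic frontal face image

        # Fuse features from the original (off-pose) and synthesized (frontal) views.
        feat = torch.cat([recognizer(input_image), recognizer(frontal)], dim=-1)
        feat = F.normalize(feat, dim=-1).squeeze(0)  # fused descriptor

        # Nearest-neighbor match against enrolled gallery features.
        scores = {name: float(torch.dot(feat, F.normalize(g, dim=-1)))
                  for name, g in gallery.items()}
        return max(scores, key=scores.get)
    ```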
  • Publication number: 20180268055
    Abstract: A video retrieval system is provided that includes a server for retrieving video sequences from a remote database responsive to a text specifying a face recognition result as an identity of a subject of an input image. The face recognition result is determined by a processor of the server, which estimates, using a 3DMM conditioned Generative Adversarial Network, 3DMM coefficients for the subject of the input image. The subject varies from an ideal front pose. The processor produces a synthetic frontal face image of the subject of the input image based on the input image and coefficients. An area spanning the frontal face of the subject is made larger in the synthetic than in the input image. The processor provides a decision of whether the synthetic image subject is an actual person and provides the identity of the subject in the input image based on the synthetic and input images.
    Type: Application
    Filed: February 5, 2018
    Publication date: September 20, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20180268265
    Abstract: An object recognition system is provided that includes a device configured to capture a video sequence formed from unlabeled testing video frames. The system includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to a set of domains that include the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the labeled training still image frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, a set of objects in the video sequence. A display device displays the set of recognized objects.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
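    Illustrative sketch: the bridge in this family of abstracts is the degraded image domain, in which labeled still images are synthetically degraded so their labels carry over while their appearance moves toward low-quality video frames. The sketch below shows one plausible degradation (down-sampling plus mild noise); the specific degradations and parameters are assumptions for illustration, not taken from the patent.
    ```python
    import torch
    import torch.nn.functional as F

    def synthetically_degrade(still_image, scale=4, noise_std=0.05):
        """still_image: float tensor of shape (N, C, H, W) with values in [0, 1]."""
        n, c, h, w = still_image.shape
        # Down-sample and up-sample to mimic low-resolution video capture.
        low = F.interpolate(still_image, size=(h // scale, w // scale),
                            mode="bilinear", align_corners=False)
        degraded = F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
        # Add mild sensor-style noise; the label is unchanged, so the degraded
        # copy remains a *labeled* training example in the new domain.
        degraded = (degraded + noise_std * torch.randn_like(degraded)).clamp(0.0, 1.0)
        return degraded
    ```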
  • Publication number: 20180268266
    Abstract: A surveillance system is provided that includes a device configured to capture a video sequence, formed from a set of unlabeled testing video frames, of a target area. The surveillance system further includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to domains including the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, at least one object in the target area. A display device displays the recognized objects.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Publication number: 20180268202
    Abstract: A video surveillance system is provided. The system includes a device configured to capture an input image of a subject located in an area. The system further includes a processor. The processor estimates, using a three-dimensional Morphable Model (3DMM) conditioned Generative Adversarial Network, 3DMM coefficients for the subject of the input image. The subject varies from an ideal front pose. The processor produces, using an image generator, a synthetic frontal face image of the subject of the input image based on the input image and coefficients. An area spanning the frontal face of the subject is made larger in the synthetic than in the input image. The processor provides, using a discriminator, a decision of whether the subject of the synthetic image is an actual person. The processor provides, using a face recognition engine, an identity of the subject in the input image based on the synthetic and input images.
    Type: Application
    Filed: February 5, 2018
    Publication date: September 20, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20180268203
    Abstract: A face recognition system is provided that includes a device configured to capture a video sequence formed from a set of unlabeled testing video frames. The system includes a processor configured to pre-train a face recognition engine formed from reference CNNs on a still image domain that includes labeled training still image frames of faces. The processor adapts the face recognition engine to a video domain to form an adapted engine, by applying non-reference CNNs to domains including the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, identities of persons corresponding to at least one face in the video sequence to obtain a set of identities. A display device displays the set of identities.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Publication number: 20180268222
    Abstract: An action recognition system is provided that includes a device configured to capture a video sequence formed from a set of unlabeled testing video frames. The system further includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted engine, by applying non-reference CNNs to domains that include the still image and video domains and a degraded image domain that includes labeled synthetically degraded versions of the frames in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, an action performed by at least one object in the sequence, and controls a device to perform a response action in response to an action type of the action.
    Type: Application
    Filed: February 6, 2018
    Publication date: September 20, 2018
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
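    Illustrative sketch: beyond recognition, this abstract ends with the system controlling a device according to the recognized action's type. A trivial, hedged dispatch sketch follows; the action types, response actions, and device methods are invented placeholders.
    ```python
    # Hypothetical mapping from recognized action types to device responses.
    RESPONSES = {
        "intrusion": lambda device: device.sound_alarm(),
        "fall": lambda device: device.notify_operator(),
    }

    def respond_to_action(action_type, device):
        """Dispatch a response action for the recognized action type, if any."""
        handler = RESPONSES.get(action_type)
        if handler is not None:
            handler(device)
    ```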
  • Publication number: 20180129869
    Abstract: A computer-implemented method, system, and computer program product are provided for pose-invariant facial recognition. The method includes generating, by a processor using a recognition neural network, a rich feature embedding for identity information and non-identity information for each of one or more images. The method also includes generating, by the processor using a Siamese reconstruction network, one or more pose-invariant features by employing the rich feature embedding for identity information and non-identity information. The method additionally includes identifying, by the processor, a user by employing the one or more pose-invariant features. The method further includes controlling an operation of a processor-based machine to change a state of the processor-based machine, responsive to the identified user in the one or more images.
    Type: Application
    Filed: November 3, 2017
    Publication date: May 10, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker, Xi Peng
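    Illustrative sketch: the abstract splits a rich feature embedding into identity and non-identity information and uses a Siamese reconstruction network to make the identity part pose-invariant. The sketch below assumes a simple split of the embedding and an embedding-level reconstruction loss; the split sizes, decoder, and loss choice are illustrative assumptions rather than the patented design.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseReconstruction(nn.Module):
        def __init__(self, embed_dim=512, id_dim=256):
            super().__init__()
            self.id_dim = id_dim
            self.decoder = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.ReLU(),
                                         nn.Linear(embed_dim, embed_dim))

        def forward(self, emb_a, emb_b):
            """emb_a, emb_b: (N, embed_dim) rich embeddings of two views of the
            same identities (the Siamese pair)."""
            # Split each rich embedding into identity and non-identity factors.
            id_a, _nid_a = emb_a[:, :self.id_dim], emb_a[:, self.id_dim:]
            _id_b, nid_b = emb_b[:, :self.id_dim], emb_b[:, self.id_dim:]
            # Reconstruct view B's embedding from A's identity factor plus B's
            # non-identity factor; if the identity factor is pose-invariant,
            # the swap loses nothing.
            recon_b = self.decoder(torch.cat([id_a, nid_b], dim=1))
            return F.mse_loss(recon_b, emb_b)
    ```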
  • Publication number: 20180130324
    Abstract: A computer-implemented method, system, and computer program product are provided for video security. The method includes monitoring an area with a camera. The method also includes capturing, by the camera, live video to provide a live video stream. The method additionally includes detecting and identifying, by a processor using a recognition neural network feeding into a Siamese reconstruction network, a user in the live video stream by employing one or more pose-invariant features. The method further includes controlling, by the processor, an operation of a processor-based machine to change a state of the processor-based machine, responsive to the identified user in the live video stream.
    Type: Application
    Filed: November 3, 2017
    Publication date: May 10, 2018
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20170228641
    Abstract: A method includes receiving N pairs of training examples and class labels therefor. Each pair includes a respective anchor example, and a respective non-anchor example capable of being a positive or a negative training example. The method further includes extracting features of the pairs by applying a DHCNN, and calculating, for each pair based on the features, a respective similarity measure between the respective anchor and non-anchor example. The method additionally includes calculating a similarity score based on the respective similarity measure for each pair. The score represents similarities between all anchor points and positive training examples in the pairs relative to similarities between all anchor points and negative training examples in the pairs. The method further includes maximizing the similarity score for the anchor example for each pair to pull together the training examples from a same class while pushing apart the training examples from different classes.
    Type: Application
    Filed: December 20, 2016
    Publication date: August 10, 2017
    Inventor: Kihyuk Sohn
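    Illustrative sketch: the abstract describes an N-pair-style metric-learning objective in which each anchor is pulled toward its own positive while the positives of the other N-1 pairs serve as negatives. The softmax cross-entropy formulation below is one common way to express such an objective; treating the DHCNN as a black box that produces the embeddings is an assumption made here for brevity.
    ```python
    import torch
    import torch.nn.functional as F

    def n_pair_loss(anchors, positives):
        """anchors, positives: (N, D) embeddings; row i of each forms the i-th pair."""
        # Similarity of every anchor to every positive; the diagonal holds the
        # matching (same-class) pairs, off-diagonal entries act as negatives.
        logits = anchors @ positives.t()          # (N, N) similarity scores
        targets = torch.arange(anchors.size(0), device=anchors.device)
        # Row-wise cross-entropy maximizes same-class similarity relative to
        # different-class similarities (pull together / push apart).
        return F.cross_entropy(logits, targets)
    ```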