Patents by Inventor Manmohan Chandraker

Manmohan Chandraker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210142046
    Abstract: A computer-implemented method for implementing face recognition includes obtaining a face recognition model trained on labeled face data, separating, using a mixture of probability distributions, a plurality of unlabeled faces corresponding to unlabeled face data into a set of one or more overlapping unlabeled faces that include overlapping identities to those in the labeled face data and a set of one or more disjoint unlabeled faces that include disjoint identities to those in the labeled face data, clustering the one or more disjoint unlabeled faces using a graph convolutional network to generate one or more cluster assignments, generating a clustering uncertainty associated with the one or more cluster assignments, and retraining the face recognition model on the labeled face data and the unlabeled face data to improve face recognition performance by incorporating the clustering uncertainty.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 13, 2021
    Inventors: Xiang Yu, Manmohan Chandraker, Kihyuk Sohn, Aruni RoyChowdhury
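The mixture-based separation step in the abstract above can be illustrated with a small sketch. The abstract does not specify the distributions or names; `lik_in`/`lik_out` (per-face likelihoods under the "overlapping-identity" and "disjoint-identity" mixture components) and `prior_in` are hypothetical stand-ins, and the split is plain Bayes' rule on those likelihoods.

```python
def mixture_posterior(lik_in, lik_out, prior_in=0.5):
    """Posterior probability that an unlabeled face's identity overlaps the
    labeled set, given likelihoods under the two mixture components."""
    num = prior_in * lik_in
    return num / (num + (1.0 - prior_in) * lik_out)

def split_unlabeled(faces, lik_in, lik_out, prior_in=0.5, threshold=0.5):
    """Separate unlabeled faces into overlapping vs. disjoint sets by
    thresholding the mixture posterior."""
    overlapping, disjoint = [], []
    for f in faces:
        p = mixture_posterior(lik_in[f], lik_out[f], prior_in)
        (overlapping if p >= threshold else disjoint).append(f)
    return overlapping, disjoint
```

In the claimed method the disjoint set would then be clustered by a graph convolutional network; this sketch covers only the separation step.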
  • Patent number: 10991145
    Abstract: A system is provided for pose-variant 3D facial attribute generation. A first stage has a hardware processor-based 3D regression network for directly generating a space position map for a 3D shape and a camera perspective matrix from a single input image of a face, and further has a rendering layer for rendering a partial texture map of the single input image based on the space position map and the camera perspective matrix. A second stage has a hardware processor-based two-part stacked Generative Adversarial Network (GAN) including a Texture Completion GAN (TC-GAN) stacked with a 3D Attribute generation GAN (3DA-GAN). The TC-GAN completes the partial texture map to form a complete texture map based on the partial texture map and the space position map. The 3DA-GAN generates a target facial attribute for the single input image based on the complete texture map and the space position map.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: April 27, 2021
    Inventors: Xiang Yu, Feng-Ju Chang, Manmohan Chandraker
  • Publication number: 20210110209
    Abstract: Systems and methods for construction zone segmentation are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes construction zones scenes having various objects. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Publication number: 20210110210
    Abstract: Systems and methods for lane marking and road sign recognition are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes one or more road scenes having lane markings and road signs. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Publication number: 20210110147
    Abstract: Systems and methods for human detection are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes humans in one or more different scenes. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Publication number: 20210110178
    Abstract: Systems and methods for obstacle detection are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes one or more road scenes having obstacles. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Publication number: 20210065391
    Abstract: A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of a surrounding environment displayed on a visualization device.
    Type: Application
    Filed: August 7, 2020
    Publication date: March 4, 2021
    Inventors: Quoc-Huy Tran, Pan Ji, Manmohan Chandraker, Lokender Tiwari
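The point-cloud output described above comes from combining each RGB frame with its predicted depth map. A minimal sketch of the underlying unprojection, assuming a standard pinhole model with hypothetical intrinsics `fx, fy, cx, cy` (the abstract does not give the camera model):

```python
def unproject(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (list of rows) into 3D camera-frame
    points with the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth[v][u]."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # skip invalid or missing depth
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

In the claimed pipeline the CNN-predicted depth plays this role for every frame of the pseudo RGB-D sequence.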
  • Publication number: 20210042937
    Abstract: A computer-implemented method for implementing a self-supervised visual odometry framework using long-term modeling includes, within a pose network of the self-supervised visual odometry framework including a plurality of pose encoders, a convolution long short-term memory (ConvLSTM) module having a first-layer ConvLSTM and a second-layer ConvLSTM, and a pose prediction layer, performing a first stage of training over a first image sequence using photometric loss, depth smoothness loss and pose cycle consistency loss, and performing a second stage of training to finetune the second-layer ConvLSTM over a second image sequence longer than the first image sequence.
    Type: Application
    Filed: July 27, 2020
    Publication date: February 11, 2021
    Inventors: Pan Ji, Quoc-Huy Tran, Manmohan Chandraker, Yuliang Zou
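Of the three losses named in the abstract, the pose cycle consistency loss is easy to sketch: composing the predicted forward and backward relative poses should yield the identity transform. The matrix size and the L1 penalty below are illustrative assumptions, not details from the abstract.

```python
def matmul(a, b):
    """Plain square-matrix product."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cycle_consistency_loss(T_fwd, T_bwd):
    """Sum of absolute deviations of T_fwd @ T_bwd from the identity:
    zero exactly when the two predicted poses are mutual inverses."""
    n = len(T_fwd)
    prod = matmul(T_fwd, T_bwd)
    return sum(abs(prod[i][j] - (1.0 if i == j else 0.0))
               for i in range(n) for j in range(n))
```

The photometric and depth smoothness losses operate on images rather than poses and are omitted here.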
  • Patent number: 10915792
    Abstract: Systems and methods for domain adaptation are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: February 9, 2021
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
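The two-stage sample selection that this abstract (and the related application abstracts above) describes can be sketched as follows; `distance_to_labeled` stands in for the domain discriminator's measure of how far a sample is from the annotated target samples, and `prediction_score` for the task model's confidence. Both names, and the threshold/budget parameters, are assumptions for illustration.

```python
def select_for_annotation(samples, distance_to_labeled, prediction_score,
                          min_distance, budget):
    """Two-stage active selection, loosely following the abstract:
    1) keep unlabeled samples the discriminator places far from the
       existing annotated target samples;
    2) among those, annotate the samples with the lowest prediction
       scores (i.e., the least confident ones), up to the budget."""
    far = [s for s in samples if distance_to_labeled[s] >= min_distance]
    far.sort(key=lambda s: prediction_score[s])
    return far[:budget]
```

The selected samples would then be sent for annotation and folded back into training.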
  • Patent number: 10885383
    Abstract: A method for implementing an unsupervised cross-domain distance metric adaptation framework with a feature transfer network for enhancing facial recognition includes recursively training a feature transfer network and automatic labeling of target domain data using a clustering method, and implementing the feature transfer network and the automatic labeling to perform a facial recognition task.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: January 5, 2021
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Patent number: 10884433
    Abstract: A computer-implemented method, system, and computer program product are provided for a stabilization system utilizing pose estimation in an aerial drone. The method includes receiving, by a pose estimation system, a plurality of images from one or more cameras. The method also includes predicting, by the pose estimation system, a pose from a score map and a combined feature map, the combined feature map correlated from a pair of the plurality of images. The method additionally includes moving, by a propulsion system, the aerial drone responsive to the pose.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: January 5, 2021
    Inventors: Quoc-Huy Tran, Manmohan Chandraker, Hyo Jin Kim
  • Patent number: 10852749
    Abstract: A computer-implemented method, system, and computer program product are provided for pose estimation. The method includes receiving, by a processor, a plurality of images from one or more cameras. The method also includes generating, by the processor with a feature extraction convolutional neural network (CNN), a feature map for each of the plurality of images. The method additionally includes estimating, by the processor with a feature weighting network, a score map from a pair of the feature maps. The method further includes predicting, by the processor with a pose estimation CNN, a pose from the score map and a combined feature map. The method also includes controlling an operation of a processor-based machine to change a state of the processor-based machine, responsive to the pose.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: December 1, 2020
    Inventors: Quoc-Huy Tran, Manmohan Chandraker, Hyo Jin Kim
  • Patent number: 10853654
    Abstract: A computer-implemented method, system, and computer program product are provided for activity recognition. The method includes receiving, by a processor, a plurality of videos, the plurality of videos including labeled videos and unlabeled videos. The method also includes extracting, by the processor with a feature extraction convolutional neural network (CNN), frame features for frames from each of the plurality of videos. The method additionally includes estimating, by the processor with a feature aggregation system, a vector representation for one of the plurality of videos responsive to the frame features. The method further includes classifying, by the processor, an activity from the vector representation. The method also includes controlling an operation of a processor-based machine to react in accordance with the activity.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: December 1, 2020
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Patent number: 10853655
    Abstract: A computer-implemented method, system, and computer program product are provided for activity recognition in a mobile device. The method includes receiving a plurality of unlabeled videos from one or more cameras. The method also includes generating a classified video for each of the plurality of unlabeled videos by classifying an activity in each of the plurality of unlabeled videos. The method additionally includes storing the classified video in a location in a memory designated for videos of the activity in each of the classified videos.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: December 1, 2020
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Patent number: 10853627
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: December 1, 2020
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 10853656
    Abstract: A computer-implemented method, system, and computer program product are provided for activity recognition in a surveillance system. The method includes receiving a plurality of unlabeled videos from one or more cameras. The method also includes classifying an activity in each of the plurality of unlabeled videos. The method additionally includes controlling an operation of a processor-based machine to react in accordance with the activity.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: December 1, 2020
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Publication number: 20200372614
    Abstract: A method for correcting blur effects is presented. The method includes generating a plurality of images from a camera, synthesizing blurred images from sharp image counterparts to generate training data to train a structure-and-motion-aware convolutional neural network (CNN), and predicting a camera motion and a depth map from a single blurred image by employing the structure-and-motion-aware CNN to remove blurring from the single blurred image.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 26, 2020
    Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
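The blur synthesis step above is commonly modeled as temporal averaging of sharp frames along the camera trajectory during the exposure; a minimal sketch under that assumption (the abstract does not specify the blur model):

```python
def synthesize_blur(sharp_frames):
    """Approximate a motion-blurred image as the mean of consecutive sharp
    frames. Frames are equal-sized 2D grayscale grids (lists of rows)."""
    n = len(sharp_frames)
    rows, cols = len(sharp_frames[0]), len(sharp_frames[0][0])
    return [[sum(f[r][c] for f in sharp_frames) / n for c in range(cols)]
            for r in range(rows)]
```

Pairs of (synthesized blurred image, known camera motion and depth) would then serve as training data for the structure-and-motion-aware CNN.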
  • Patent number: 10832084
    Abstract: A method for estimating dense 3D geometric correspondences between two input point clouds by employing a 3D convolutional neural network (CNN) architecture is presented. The method includes, during a training phase, transforming the two input point clouds into truncated distance function voxel grid representations, feeding the truncated distance function voxel grid representations into individual feature extraction layers with tied weights, extracting low-level features from a first feature extraction layer, extracting high-level features from a second feature extraction layer, normalizing the extracted low-level features and high-level features, and applying deep supervision of multiple contrastive losses and multiple hard negative mining modules at the first and second feature extraction layers.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: November 10, 2020
    Assignee: NEC Corporation
    Inventors: Quoc-Huy Tran, Mohammed E. Fathy Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
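The truncated distance function voxelization named in this abstract can be sketched in a few lines; the brute-force nearest-point search and the [0, 1] normalization are illustrative choices, not details from the patent.

```python
import math

def tdf_voxel_grid(points, grid_size, voxel_size, origin, trunc):
    """Truncated distance function (TDF) voxel grid: each voxel stores the
    distance from its center to the nearest input 3D point, truncated at
    `trunc` and normalized to [0, 1] (0 = on the surface, 1 = far away)."""
    grid = []
    for i in range(grid_size):
        plane = []
        for j in range(grid_size):
            row = []
            for k in range(grid_size):
                center = (origin[0] + (i + 0.5) * voxel_size,
                          origin[1] + (j + 0.5) * voxel_size,
                          origin[2] + (k + 0.5) * voxel_size)
                d = min(math.dist(center, p) for p in points)
                row.append(min(d, trunc) / trunc)
            plane.append(row)
        grid.append(plane)
    return grid
```

In the claimed method, two such grids are fed into the twin feature extraction branches of the 3D CNN.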
  • Patent number: 10832440
    Abstract: A computer-implemented method, system, and computer program product are provided for object detection utilizing an online flow guided memory network. The method includes receiving a plurality of videos, each of the plurality of videos including a plurality of frames. The method also includes generating, with a feature extraction network, a frame feature map for a current frame of the plurality of frames. The method additionally includes aggregating a memory feature map from the frame feature map and previous memory feature maps from previous frames on a plurality of time axes, with the plurality of time axes including a first time axis at a first frame increment and a second time axis at a second frame increment. The method further includes predicting, with a task network, an object from the memory feature map. The method also includes controlling an operation of a processor-based machine to react in accordance with the object.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: November 10, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Wongun Choi, Tuan Hung Vu, Manmohan Chandraker
  • Patent number: 10810469
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: October 20, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Zhengqin Li, Manmohan Chandraker