Patents by Inventor Manmohan Chandraker

Manmohan Chandraker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10796134
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: October 6, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
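The claimed pipeline has three stages: a CNN feature extractor, a feature generator that produces discriminative vectors, and a fully connected classifier. A minimal NumPy sketch of that data flow follows; the random weight matrices, dimensions, and function names are hypothetical stand-ins for the trained networks, not the patented models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned weights; the patent's extractor is a
# CNN trained with enlarged intra-class variance of long-tail classes.
W_extract = rng.standard_normal((512, 128))   # feature extractor surrogate
W_generate = rng.standard_normal((128, 128))  # feature generator surrogate
W_classify = rng.standard_normal((128, 10))   # FC classifier, 10 identities

def extract_features(images):
    """Map flattened images to 128-D feature vectors."""
    return images @ W_extract

def generate_discriminative(features):
    """Refine features into discriminative ones (a ReLU projection here)."""
    return np.maximum(features @ W_generate, 0.0)

def classify_identity(disc_features):
    """Pick the arg-max identity from fully connected classifier logits."""
    return (disc_features @ W_classify).argmax(axis=1)

images = rng.standard_normal((4, 512))   # four flattened input images
ids = classify_identity(generate_discriminative(extract_features(images)))
```

The controlled downstream operation would then branch on `ids`, e.g. unlocking a device only for a recognized identity.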
  • Patent number: 10796135
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: October 6, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20200286383
    Abstract: A method is provided in an Advanced Driver-Assistance System (ADAS). The method extracts, from an input video stream including a plurality of images using a multi-task Convolutional Neural Network (CNN), shared features across different perception tasks. The perception tasks include object detection and other perception tasks. The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs. Each respective different branch corresponds to a respective one of the different perception tasks. The method forms a parametric representation of a driving scene as at least one top-view map responsive to the plurality of different perception task outputs.
    Type: Application
    Filed: February 11, 2020
    Publication date: September 10, 2020
    Inventors: Quoc-Huy Tran, Samuel Schulter, Paul Vernaza, Buyu Liu, Pan Ji, Yi-Hsuan Tsai, Manmohan Chandraker
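The key idea in this ADAS application is a single forward pass through shared features, with one branch per perception task. A toy NumPy sketch of that topology is below; the branch names, dimensions, and random weights are illustrative assumptions, not the patented network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for a shared backbone and two task branches.
W_shared = rng.standard_normal((256, 64))
W_detect = rng.standard_normal((64, 5))    # e.g., box offsets + objectness
W_segment = rng.standard_normal((64, 3))   # e.g., 3 semantic classes

def perceive(frame_cells):
    """Single pass: shared features feed every task branch concurrently."""
    shared = np.maximum(frame_cells @ W_shared, 0.0)    # shared features
    detections = shared @ W_detect                      # detection branch
    segmentation = (shared @ W_segment).argmax(axis=1)  # semantic branch
    return detections, segmentation

cells = rng.standard_normal((100, 256))  # 100 spatial cells of one frame
det, seg = perceive(cells)
```

The parametric top-view map would then be assembled from outputs like `det` and `seg` rather than from raw pixels.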
  • Patent number: 10762359
    Abstract: Systems and methods for detecting traffic scenarios include an image capturing device which captures two or more images of an area of a traffic environment, with each image having a different view of vehicles and a road in the traffic environment. A hierarchical feature extractor concurrently extracts features at multiple neural network layers from each of the images, the features including geometric features and semantic features; it estimates correspondences between semantic features for each of the images and refines the estimated correspondences with correspondences between the geometric features of each of the images to generate refined correspondence estimates. A traffic localization module uses the refined correspondence estimates to determine locations of vehicles in the environment in three dimensions and automatically determine a traffic scenario according to the locations of vehicles. A notification device generates a notification of the traffic scenario.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: September 1, 2020
    Assignee: NEC Corporation
    Inventors: Quoc-Huy Tran, Mohammed E. F. Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
  • Patent number: 10740596
    Abstract: A computer-implemented method, system, and computer program product is provided for video security. The method includes monitoring an area with a camera. The method also includes capturing, by the camera, live video to provide a live video stream. The method additionally includes detecting and identifying, by a processor using a recognition neural network feeding into a Siamese reconstruction network, a user in the live video stream by employing one or more pose-invariant features. The method further includes controlling, by the processor, an operation of a processor-based machine to change a state of the processor-based machine, responsive to the identified user in the live video stream.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: August 11, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 10740595
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: August 11, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 10733756
    Abstract: A computer-implemented method, system, and computer program product are provided for object detection utilizing an online flow guided memory network. The method includes receiving, by a processor, a plurality of videos, each of the plurality of videos including a plurality of frames. The method also includes generating, by the processor with a feature extraction network, a frame feature map for a current frame of the plurality of frames. The method additionally includes determining, by the processor, a memory feature map from the frame feature map and a previous memory feature map from a previous frame by warping the previous memory feature map. The method further includes predicting, by the processor with a task network, an object from the memory feature map. The method also includes controlling an operation of a processor-based machine to react in accordance with the object.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: August 4, 2020
    Assignee: NEC Corporation
    Inventors: Wongun Choi, Samuel Schulter, Tuan Hung Vu, Manmohan Chandraker
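The core step in this flow-guided memory network is warping the previous memory feature map into the current frame before fusing it with the new frame features. A simplified NumPy sketch using integer flow and a fixed blend weight follows; the real patent learns both the warp and the update, so treat every name and constant here as a hypothetical stand-in:

```python
import numpy as np

def warp(memory, flow):
    """Shift each pixel of a feature map by an integer (dy, dx) flow field."""
    h, w = memory.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - flow[..., 0], 0, h - 1)
    src_x = np.clip(xs - flow[..., 1], 0, w - 1)
    return memory[src_y, src_x]

def update_memory(frame_feat, prev_memory, flow, alpha=0.5):
    """Blend the warped previous memory with the current frame features."""
    return alpha * warp(prev_memory, flow) + (1 - alpha) * frame_feat

prev = np.arange(16.0).reshape(4, 4)   # previous memory feature map
flow = np.ones((4, 4, 2), dtype=int)   # uniform shift by (1, 1)
frame = np.zeros((4, 4))               # current frame feature map
mem = update_memory(frame, prev, flow)
```

The task network would then predict objects from `mem` instead of from the single-frame features.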
  • Publication number: 20200234467
    Abstract: Systems and methods for camera self-calibration are provided. The method includes receiving real uncalibrated images, and estimating, using a camera self-calibration network, multiple predicted camera parameters corresponding to the real uncalibrated images. Deep supervision is implemented based on a dependence order between the predicted camera parameters to place supervision signals across multiple layers according to the dependence order. The method also includes determining calibrated images using the real uncalibrated images and the predicted camera parameters.
    Type: Application
    Filed: January 7, 2020
    Publication date: July 23, 2020
    Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
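The last step of the abstract, producing calibrated images from predicted parameters, can be illustrated with a first-order radial undistortion in NumPy. This is a deliberate simplification (one distortion coefficient, a first-order inverse); the focal length `f` and coefficient `k1` stand in for what the self-calibration network would predict:

```python
import numpy as np

def undistort_points(pts, f, k1):
    """First-order inverse of the radial model x_d = x_u * (1 + k1 * r^2),
    applied with predicted parameters (focal f, distortion k1)."""
    x = pts / f                            # pixel -> normalized coordinates
    r2 = (x ** 2).sum(axis=1, keepdims=True)
    return (x / (1.0 + k1 * r2)) * f       # undistort, back to pixels

pts = np.array([[100.0, 50.0], [0.0, 0.0]])
out = undistort_points(pts, f=500.0, k1=0.1)
```

With `k1 = 0` the mapping is the identity; a positive `k1` pulls off-center points inward, which is the qualitative effect of undoing barrel distortion.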
  • Patent number: 10706336
    Abstract: An object recognition system is provided that includes a device configured to capture a video sequence formed from unlabeled testing video frames. The system includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to a set of domains that include the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the labeled training still image frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, a set of objects in the video sequence. A display device displays the set of recognized objects.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: July 7, 2020
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Patent number: 10706582
    Abstract: Systems and methods are described for multithreaded navigation assistance using images acquired with a single camera on-board a vehicle, using 2D-3D correspondences for continuous pose estimation and combining the pose estimation with 2D-2D epipolar search to replenish 3D points.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: July 7, 2020
    Assignee: NEC Corporation
    Inventors: Manmohan Chandraker, Shiyu Song
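Pose estimation from 2D-3D correspondences, the first ingredient above, can be demonstrated with the classic Direct Linear Transform: given six or more 3D points and their 2D projections, a linear system recovers the projection matrix up to scale. This is a textbook sketch, not the patented multithreaded system:

```python
import numpy as np

def estimate_projection(X, x):
    """Direct Linear Transform: recover a 3x4 projection matrix P from
    n >= 6 3D points X (n,3) and their 2D projections x (n,2)."""
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)            # null vector, defined up to scale

rng = np.random.default_rng(2)
P_true = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [0.3]])])
X = rng.uniform(-1.0, 1.0, (8, 3)) + np.array([0.0, 0.0, 5.0])  # in front
Xh = np.hstack([X, np.ones((8, 1))])
x = (P_true @ Xh.T).T
x = x[:, :2] / x[:, 2:3]                   # perspective divide

P_est = estimate_projection(X, x)
reproj = (P_est @ Xh.T).T
err = np.abs(reproj[:, :2] / reproj[:, 2:3] - x).max()
```

The 2D-2D epipolar search would then triangulate fresh 3D points against poses estimated this way, replenishing the map as old points leave the view.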
  • Patent number: 10679075
    Abstract: Systems and methods for correspondence estimation and flexible ground modeling include communicating two-dimensional (2D) images of an environment, including a first image and a second image captured by an image capturing device, to a correspondence estimation module. First features, including geometric features and semantic features, are hierarchically extracted from the first image with a first convolutional neural network (CNN) according to activation map weights, and second features, including geometric features and semantic features, are hierarchically extracted from the second image with a second CNN according to the activation map weights. Correspondences between the first features and the second features are estimated, including hierarchical fusing of geometric correspondences and semantic correspondences. A 3-dimensional (3D) model of a terrain is estimated using the estimated correspondences belonging to the terrain surface.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Quoc-Huy Tran, Mohammed E. F. Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
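The hierarchical fusion of semantic and geometric correspondences can be caricatured as coarse semantic matches refined by bounded geometric offsets. The clamping rule below is purely an illustrative assumption, not the patent's learned fusion:

```python
import numpy as np

def fuse_correspondences(sem_matches, geo_offsets, max_adjust=2):
    """Refine coarse semantic match positions with fine geometric offsets,
    clamping the adjustment so geometry cannot override semantics."""
    adjust = np.clip(geo_offsets, -max_adjust, max_adjust)
    return sem_matches + adjust

sem = np.array([10, 20, 30])   # coarse match positions from semantics
geo = np.array([1, -5, 0])     # geometric refinement offsets
refined = fuse_correspondences(sem, geo)
```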
  • Patent number: 10678256
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai
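The final mapping step, placing objects into a bird's eye view using inferred depths, amounts to pinhole back-projection followed by binning onto a ground-plane grid. A minimal NumPy sketch follows, with an assumed focal length, cell size, and grid extent; the inpainting of occluded regions is outside this snippet's scope:

```python
import numpy as np

def birds_eye_map(us, vs, depths, f=100.0, cell=1.0, size=10):
    """Back-project pixels (u, v) with inferred depths to 3D, then mark
    occupancy on a size x size top-view grid over ground (x, z)."""
    xs = us * depths / f           # pinhole back-projection, lateral
    ys = vs * depths / f           # height; discarded in the top view
    zs = depths
    grid = np.zeros((size, size), dtype=int)
    for x, z in zip(xs, zs):
        i, j = int(z // cell), int(x // cell + size / 2)  # center laterally
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] += 1
    return grid

us = np.array([0.0, 50.0, -50.0])      # pixel columns relative to center
depths = np.array([2.0, 4.0, 4.0])     # inferred depths
grid = birds_eye_map(us, np.zeros(3), depths)
```

Each nonzero cell of `grid` marks where an object (visible or inferred behind an occluder) lands in the top-view map.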
  • Patent number: 10678257
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai
  • Publication number: 20200151457
    Abstract: A computer-implemented method is provided for domain adaptation between a source domain and a target domain. The method includes applying, by a hardware processor, an attention network to features extracted from images included in the source and target domains to provide attended features relating to a given task to be domain adapted between the source and target domains. The method further includes applying, by the hardware processor, a deformation network to at least some of the attended features to align the attended features between the source and target domains using warping to provide attended and warped features. The method also includes training, by the hardware processor, a target domain classifier using the images from the source domain. The method additionally includes classifying, by the hardware processor using the trained target domain classifier, at least one image from the target domain.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 14, 2020
    Inventors: Gaurav Sharma, Manmohan Chandraker
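The two mechanisms named in the abstract, attention over features and a deformation (warping) step for alignment, can be sketched at toy scale. Softmax reweighting and an integer index warp below are simplified stand-ins for the learned attention and deformation networks:

```python
import numpy as np

rng = np.random.default_rng(3)

def attend(features, scores):
    """Reweight feature positions by a softmax attention distribution."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return features * w[:, None]

def deform(features, offsets):
    """Warp (re-order) positions to align source and target features."""
    idx = np.clip(np.arange(len(features)) + offsets, 0, len(features) - 1)
    return features[idx]

feats = rng.standard_normal((6, 4))    # 6 positions, 4 channels
att = attend(feats, rng.standard_normal(6))
aligned = deform(att, np.array([1, -1, 0, 0, 2, -2]))
```

A target-domain classifier would then be trained on source images whose features have passed through `attend` and `deform`.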
  • Publication number: 20200151940
    Abstract: A system is provided for pose-variant 3D facial attribute generation. A first stage has a hardware processor based 3D regression network for directly generating a space position map for a 3D shape and a camera perspective matrix from a single input image of a face and further having a rendering layer for rendering a partial texture map of the single input image based on the space position map and the camera perspective matrix. A second stage has a hardware processor based two-part stacked Generative Adversarial Network (GAN) including a Texture Completion GAN (TC-GAN) stacked with a 3D Attribute generation GAN (3DA-GAN). The TC-GAN completes the partial texture map to form a complete texture map based on the partial texture map and the space position map. The 3DA-GAN generates a target facial attribute for the single input image based on the complete texture map and the space position map.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 14, 2020
    Inventors: Xiang Yu, Feng-Ju Chang, Manmohan Chandraker
  • Publication number: 20200143079
    Abstract: A method for protecting visual private data by preventing data reconstruction from latent representations of deep networks is presented. The method includes obtaining latent features from an input image and learning, via an adversarial reconstruction learning framework, privacy-preserving feature representations that maintain utility performance and prevent data reconstruction. The framework simulates a black-box model inversion attack by training a decoder to reconstruct the input image from the latent features, and trains an encoder to maximize the reconstruction error, preventing the decoder from inverting the latent features while minimizing the task loss.
    Type: Application
    Filed: November 5, 2019
    Publication date: May 7, 2020
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Yi-Hsuan Tsai
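The adversarial objective has an intentional sign flip: the simulated attacker (decoder) minimizes reconstruction error, while the encoder maximizes it alongside its task loss. The loss functions below sketch just that structure; `lam` and the mean-squared error are illustrative choices, not the patent's exact formulation:

```python
import numpy as np

def reconstruction_error(x, x_hat):
    return float(((x - x_hat) ** 2).mean())

def decoder_loss(x, x_hat):
    """The simulated black-box attacker tries to reconstruct the input."""
    return reconstruction_error(x, x_hat)

def encoder_loss(task_loss, x, x_hat, lam=1.0):
    """The encoder keeps task loss low while *maximizing* the attacker's
    reconstruction error, hence the minus sign."""
    return task_loss - lam * reconstruction_error(x, x_hat)

x = np.ones((4, 4))                  # private input image
good_rec = np.ones((4, 4)) * 0.9     # attacker reconstructs well
bad_rec = np.zeros((4, 4))           # attacker fails to reconstruct
```

A faithful reconstruction is good for the decoder but bad for the encoder, which is exactly what the two losses encode.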
  • Publication number: 20200134389
    Abstract: A method for correcting rolling shutter (RS) effects is presented. The method includes generating a plurality of images from a camera, synthesizing RS images from global shutter (GS) counterparts to generate training data to train a structure-and-motion-aware convolutional neural network (CNN), and predicting an RS camera motion and an RS depth map from a single RS image by employing the structure-and-motion-aware CNN to remove RS distortions from the single RS image.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 30, 2020
    Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
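Rolling shutter distortion arises because each row is exposed at a slightly later time; given the camera motion, each row can be shifted back by the displacement accrued at its capture time. The NumPy sketch below assumes constant horizontal velocity and integer shifts, a much cruder motion model than the patent's predicted per-image motion and depth:

```python
import numpy as np

def correct_rolling_shutter(img, vx, readout=1.0):
    """Undo horizontal RS skew: row r was captured at t = readout * r / H,
    so shift it back by vx * t pixels (integer shifts, simplified model)."""
    h, w = img.shape
    out = np.zeros_like(img)
    for r in range(h):
        shift = int(round(vx * readout * r / h))
        out[r] = np.roll(img[r], -shift)
    return out

img = np.zeros((4, 8))
for r in range(4):
    img[r, 2 + r] = 1.0   # a vertical edge, skewed diagonal by the shutter
fixed = correct_rolling_shutter(img, vx=4.0)
```

After correction the skewed diagonal becomes the vertical edge it depicts, all rows aligned at the same column.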
  • Patent number: 10635950
    Abstract: A surveillance system is provided that includes a device configured to capture a video sequence, formed from a set of unlabeled testing video frames, of a target area. The surveillance system further includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to domains including the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, at least one object in the target area. A display device displays the recognized objects.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: April 28, 2020
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Publication number: 20200094824
    Abstract: A method is provided for danger prediction. The method includes generating fully-annotated simulated training data for a machine learning model responsive to receiving a set of computer-selected simulator-adjusting parameters. The method further includes training the machine learning model using reinforcement learning on the fully-annotated simulated training data. The method also includes measuring an accuracy of the trained machine learning model relative to learning a discriminative function for a given task. The discriminative function predicts a given label for a given image from the fully-annotated simulated training data. The method additionally includes adjusting the computer-selected simulator-adjusting parameters and repeating said training and measuring steps responsive to the accuracy being below a threshold accuracy.
    Type: Application
    Filed: November 26, 2019
    Publication date: March 26, 2020
    Applicant: NEC Laboratories America, Inc.
    Inventors: Samuel Schulter, Nataniel Ruiz, Manmohan Chandraker
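The abstract describes a closed loop: generate simulated data, train, measure accuracy, and adjust the simulator parameters while accuracy stays below a threshold. The skeleton below captures that control flow; the surrogate accuracy function and the additive parameter update are hypothetical stand-ins for the real training run and the reinforcement-learning update:

```python
def train_and_measure(params):
    """Hypothetical surrogate for 'train the model, measure accuracy':
    accuracy peaks when the simulator parameter hits an ideal value."""
    return max(0.0, 1.0 - abs(params - 0.7))

def tune_simulator(threshold=0.95, step=0.1, max_iters=50):
    """Adjust the simulator parameter and retrain until the measured
    accuracy clears the threshold (the adjust-and-repeat loop)."""
    params, acc = 0.0, 0.0
    for _ in range(max_iters):
        acc = train_and_measure(params)
        if acc >= threshold:
            break
        params += step       # stand-in for the learned parameter update
    return params, acc

params, acc = tune_simulator()
```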
  • Publication number: 20200089966
    Abstract: Systems and methods for recognizing fine-grained objects are provided. The system divides unlabeled training data from a target domain into two or more target subdomains using an attribute annotation. The system ranks the target subdomains based on a similarity to the source domain. The system applies multiple domain discriminators between each of the target subdomains and a mixture of the source domain and preceding target domains. The system recognizes, using the multiple domain discriminators for the target domain, fine-grained objects.
    Type: Application
    Filed: September 11, 2019
    Publication date: March 19, 2020
    Inventors: Yi-Hsuan Tsai, Manmohan Chandraker, Shuyang Dai, Kihyuk Sohn
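The ranking step, ordering target subdomains by similarity to the source, can be sketched by comparing feature means. Distance between means is one simple similarity proxy assumed here; the patent does not commit to this particular measure:

```python
import numpy as np

rng = np.random.default_rng(4)

def rank_subdomains(source, subdomains):
    """Rank target subdomains by feature-mean distance to the source
    (closest first), so adaptation can proceed easy-to-hard."""
    mu = source.mean(axis=0)
    dists = [np.linalg.norm(sd.mean(axis=0) - mu) for sd in subdomains]
    return np.argsort(dists)

source = rng.standard_normal((50, 8))
near = source + 0.1 * rng.standard_normal((50, 8))   # similar subdomain
far = source + 5.0 + rng.standard_normal((50, 8))    # dissimilar subdomain
order = rank_subdomains(source, [far, near])
```

A domain discriminator would then be applied between each subdomain in `order` and the growing mixture of the source plus the subdomains already adapted.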