Patents by Inventor Manmohan Chandraker

Manmohan Chandraker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10678257
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai
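The core geometric step of the abstract above, mapping pixels with inferred depths into a top-down grid, can be sketched without the learned occlusion-inference network. This is a minimal illustration, not the patented pipeline: the pinhole parameters, grid size, and function name are all illustrative, and the depths here are given rather than predicted.

```python
import numpy as np

def bev_map(depth, labels, fx=10.0, cx=2.0, grid=(8, 8), cell=1.0):
    """Project per-pixel depths into a top-down (bird's eye view) grid.

    depth:  (H, W) depth in meters along the camera z-axis
    labels: (H, W) integer class per pixel
    Each pixel (u, v) maps to x = (u - cx) * z / fx laterally and z forward;
    the BEV cell at (z, x) records the class seen there.
    """
    H, W = depth.shape
    bev = np.zeros(grid, dtype=int)
    for v in range(H):
        for u in range(W):
            z = depth[v, u]
            x = (u - cx) * z / fx
            zi = int(z // cell)                 # forward bin
            xi = int(x // cell) + grid[1] // 2  # lateral bin, centered
            if 0 <= zi < grid[0] and 0 <= xi < grid[1]:
                bev[zi, xi] = labels[v, u]
    return bev

# Toy scene: a 4x4 image where every pixel is 3 m away and labeled "road" (1)
depth = np.full((4, 4), 3.0)
labels = np.ones((4, 4), dtype=int)
top_view = bev_map(depth, labels)
```

In the patented system the depths and labels for occluded regions come from the inpainting network; the projection step above is the same either way.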
  • Patent number: 10679075
Abstract: Systems and methods for correspondence estimation and flexible ground modeling include communicating two-dimensional (2D) images of an environment to a correspondence estimation module, including a first image and a second image captured by an image capturing device. First features, including geometric features and semantic features, are hierarchically extracted from the first image with a first convolutional neural network (CNN) according to activation map weights, and second features, including geometric features and semantic features, are hierarchically extracted from the second image with a second CNN according to the activation map weights. Correspondences between the first features and the second features are estimated, including hierarchical fusing of geometric correspondences and semantic correspondences. A 3-dimensional (3D) model of a terrain is estimated using the estimated correspondences belonging to the terrain surface.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Quoc-Huy Tran, Mohammed E. F. Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
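The fusion-and-matching idea above can be illustrated with a toy stand-in: concatenate geometric and semantic descriptors per point and match by nearest neighbor. This is a sketch under stated assumptions only; the patent's descriptors come from trained Siamese CNNs, and the weighting scheme and names here are invented for illustration.

```python
import numpy as np

def fuse_and_match(geo_a, sem_a, geo_b, sem_b, w=0.5):
    """Match points of image A to image B by fusing geometric and semantic
    descriptors (simple weighted concatenation, in place of the CNNs).

    geo_*, sem_*: (N, D) descriptor arrays, one row per pixel/keypoint.
    Returns, for each row of A, the index of its nearest neighbor in B.
    """
    fa = np.hstack([w * geo_a, (1 - w) * sem_a])
    fb = np.hstack([w * geo_b, (1 - w) * sem_b])
    # Pairwise squared distances, then argmin along B
    d = ((fa[:, None, :] - fb[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy descriptors: B is A with rows shuffled, so matching should invert the shuffle
rng = np.random.default_rng(0)
geo = rng.normal(size=(5, 3)); sem = rng.normal(size=(5, 2))
perm = np.array([2, 0, 4, 1, 3])
matches = fuse_and_match(geo, sem, geo[perm], sem[perm])
```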
  • Publication number: 20200151457
    Abstract: A computer-implemented method is provided for domain adaptation between a source domain and a target domain. The method includes applying, by a hardware processor, an attention network to features extracted from images included in the source and target domains to provide attended features relating to a given task to be domain adapted between the source and target domains. The method further includes applying, by the hardware processor, a deformation network to at least some of the attended features to align the attended features between the source and target domains using warping to provide attended and warped features. The method also includes training, by the hardware processor, a target domain classifier using the images from the source domain. The method additionally includes classifying, by the hardware processor using the trained target domain classifier, at least one image from the target domain.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 14, 2020
    Inventors: Gaurav Sharma, Manmohan Chandraker
  • Publication number: 20200151940
Abstract: A system is provided for pose-variant 3D facial attribute generation. A first stage has a hardware processor based 3D regression network for directly generating a space position map for a 3D shape and a camera perspective matrix from a single input image of a face, and further has a rendering layer for rendering a partial texture map of the single input image based on the space position map and the camera perspective matrix. A second stage has a hardware processor based two-part stacked Generative Adversarial Network (GAN) including a Texture Completion GAN (TC-GAN) stacked with a 3D Attribute generation GAN (3DA-GAN). The TC-GAN completes the partial texture map to form a complete texture map based on the partial texture map and the space position map. The 3DA-GAN generates a target facial attribute for the single input image based on the complete texture map and the space position map.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 14, 2020
    Inventors: Xiang Yu, Feng-Ju Chang, Manmohan Chandraker
  • Publication number: 20200143079
Abstract: A method for protecting visual private data by preventing data reconstruction from latent representations of deep networks is presented. The method includes obtaining latent features from an input image and learning, via an adversarial reconstruction learning framework, privacy-preserving feature representations that maintain utility performance and prevent data reconstruction. The framework simulates a black-box model inversion attack by training a decoder to reconstruct the input image from the latent features, while training an encoder to maximize the reconstruction error, so as to prevent the decoder from inverting the latent features while minimizing the task loss.
    Type: Application
    Filed: November 5, 2019
    Publication date: May 7, 2020
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Yi-Hsuan Tsai
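The adversarial objective described above has a simple shape: the simulated attacker (decoder) minimizes reconstruction error, while the encoder minimizes task loss minus that same error. The sketch below shows only the loss bookkeeping; the networks, the weighting `lam`, and all names are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def privacy_losses(image, latent, decode, task_loss, lam=1.0):
    """Losses for the adversarial reconstruction game (illustrative only).

    decode:    simulated inversion attacker mapping latent -> image
    task_loss: utility loss computed on the latent features
    The decoder minimizes reconstruction error; the encoder minimizes
    task loss while *maximizing* reconstruction error (note the sign).
    """
    recon = decode(latent)
    recon_err = float(((recon - image) ** 2).mean())
    decoder_loss = recon_err                    # attacker: invert features
    encoder_loss = task_loss - lam * recon_err  # defender: stay useful, resist inversion
    return encoder_loss, decoder_loss

img = np.ones((4, 4))
z = np.zeros(8)
enc_loss, dec_loss = privacy_losses(img, z, decode=lambda z: np.zeros((4, 4)), task_loss=0.5)
```

The opposing signs on `recon_err` are what make the game adversarial: gradients that help the decoder reconstruct hurt the encoder, and vice versa.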
  • Publication number: 20200134389
Abstract: A method for correcting rolling shutter (RS) effects is presented. The method includes generating a plurality of images from a camera, synthesizing RS images from global shutter (GS) counterparts to generate training data for a structure-and-motion-aware convolutional neural network (CNN), and predicting an RS camera motion and an RS depth map from a single RS image by employing the structure-and-motion-aware CNN to remove RS distortions from the single RS image.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 30, 2020
    Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
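The geometry behind rolling-shutter distortion is that each image row is exposed at a slightly later time, so camera motion skews the image row by row. A minimal sketch of the correction step, assuming the motion is known (in the patent it is predicted by the CNN) and limiting it to constant lateral velocity:

```python
import numpy as np

def correct_rs_shear(coords, vx, t_row):
    """Undo rolling-shutter skew for a constant lateral camera velocity.

    coords: (N, 2) array of (row, col) pixel positions in the RS image.
    Row r is exposed at time r * t_row, so a point has drifted by
    vx * r * t_row columns relative to a global-shutter image.
    """
    out = coords.astype(float).copy()
    out[:, 1] -= vx * out[:, 0] * t_row  # shift each point back by its row's drift
    return out

# A vertical edge at column 10, skewed by vx = 2 px per unit time, t_row = 0.1
rs_pts = np.array([[r, 10 + 2 * r * 0.1] for r in range(5)])
gs_pts = correct_rs_shear(rs_pts, vx=2.0, t_row=0.1)
```

Real RS correction also depends on depth (nearer points drift more in pixels), which is exactly why the patented method predicts a depth map alongside the motion.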
  • Patent number: 10635950
    Abstract: A surveillance system is provided that includes a device configured to capture a video sequence, formed from a set of unlabeled testing video frames, of a target area. The surveillance system further includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to domains including the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, at least one object in the target area. A display device displays the recognized objects.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: April 28, 2020
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
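The "synthetically degraded" domain above can be produced by any image-corruption recipe that mimics surveillance footage. One plausible recipe, shown purely for illustration (the patent does not fix a specific one): block-average downsampling followed by upsampling and Gaussian noise.

```python
import numpy as np

def degrade(img, scale=2, noise=0.05, seed=0):
    """Synthetically degrade a still image so it resembles low-quality video:
    downsample by block-averaging, upsample back, then add Gaussian noise.
    Assumes img values in [0, 1] and dimensions divisible by `scale`.
    """
    H, W = img.shape
    small = img.reshape(H // scale, scale, W // scale, scale).mean(axis=(1, 3))
    up = np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)
    rng = np.random.default_rng(seed)
    return np.clip(up + rng.normal(0, noise, up.shape), 0.0, 1.0)

clean = np.linspace(0, 1, 64).reshape(8, 8)
blurry = degrade(clean)
```

Pairs of clean and degraded frames give the adaptation step labeled data in a domain that sits between still images and real video.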
  • Publication number: 20200094824
    Abstract: A method is provided for danger prediction. The method includes generating fully-annotated simulated training data for a machine learning model responsive to receiving a set of computer-selected simulator-adjusting parameters. The method further includes training the machine learning model using reinforcement learning on the fully-annotated simulated training data. The method also includes measuring an accuracy of the trained machine learning model relative to learning a discriminative function for a given task. The discriminative function predicts a given label for a given image from the fully-annotated simulated training data. The method additionally includes adjusting the computer-selected simulator-adjusting parameters and repeating said training and measuring steps responsive to the accuracy being below a threshold accuracy.
    Type: Application
    Filed: November 26, 2019
    Publication date: March 26, 2020
    Applicant: NEC Laboratories America, Inc.
    Inventors: Samuel Schulter, Nataniel Ruiz, Manmohan Chandraker
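The control loop in the abstract, generate data, train, measure, and re-adjust the simulator parameters until accuracy clears a threshold, can be sketched with stub callbacks. Everything here is a stand-in: the real system trains a full model per round and selects parameters with reinforcement learning, not random re-sampling.

```python
import random

def optimize_simulator(train_and_measure, propose_params, threshold, max_rounds=50, seed=0):
    """Adjust simulator parameters until the trained model's accuracy clears
    a threshold. `train_and_measure` stands in for the full train/evaluate
    cycle on generated data; both callbacks are illustrative stubs.
    """
    rng = random.Random(seed)
    params = propose_params(rng)
    for round_no in range(1, max_rounds + 1):
        acc = train_and_measure(params)
        if acc >= threshold:
            return params, acc, round_no
        params = propose_params(rng)  # re-sample simulator-adjusting parameters
    return params, acc, max_rounds

# Toy stand-in: accuracy peaks when the "traffic density" parameter nears 0.7
measure = lambda p: 1.0 - abs(p["density"] - 0.7)
sample = lambda rng: {"density": rng.random()}
best, acc, rounds = optimize_simulator(measure, sample, threshold=0.9)
```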
  • Publication number: 20200089966
Abstract: Systems and methods for recognizing fine-grained objects are provided. The system divides unlabeled training data from a target domain into two or more target subdomains using an attribute annotation. The system ranks the target subdomains based on their similarity to a source domain. The system applies multiple domain discriminators between each of the target subdomains and a mixture of the source domain and preceding target domains. The system recognizes, using the multiple domain discriminators for the target domain, fine-grained objects.
    Type: Application
    Filed: September 11, 2019
    Publication date: March 19, 2020
    Inventors: Yi-Hsuan Tsai, Manmohan Chandraker, Shuyang Dai, Kihyuk Sohn
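The ranking step above needs some similarity measure between each target subdomain and the source domain. As a sketch only, the distance between feature means can stand in for that measure; the patent does not commit to this choice, and the names here are illustrative.

```python
import numpy as np

def rank_subdomains(source_feats, subdomains):
    """Rank target subdomains by similarity to the source domain, using the
    distance between feature means as a stand-in similarity measure.
    Returns subdomain indices, most similar first.
    """
    mu_s = source_feats.mean(axis=0)
    dists = [np.linalg.norm(sd.mean(axis=0) - mu_s) for sd in subdomains]
    return sorted(range(len(subdomains)), key=lambda i: dists[i])

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, size=(100, 4))
near = rng.normal(0.5, 1.0, size=(100, 4))  # subdomain close to the source
far = rng.normal(5.0, 1.0, size=(100, 4))   # subdomain far from the source
order = rank_subdomains(source, [far, near])
```

The resulting order determines how the multiple domain discriminators are applied: closer subdomains are adapted first and then folded into the source mixture.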
  • Patent number: 10595037
    Abstract: Methods and systems for predicting a trajectory include determining prediction samples for agents in a scene based on a past trajectory. The prediction samples are ranked according to a likelihood score that incorporates interactions between agents and semantic scene context. The prediction samples are iteratively refined using a regression function that accumulates scene context and agent interactions across iterations. A response activity is triggered when the prediction samples satisfy a predetermined condition.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: March 17, 2020
    Assignee: NEC Corporation
    Inventors: Wongun Choi, Paul Vernaza, Manmohan Chandraker, Namhoon Lee
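The sample-rank-refine loop above can be sketched with a toy scoring function. The real system scores samples with learned scene context and agent interactions; here plain smoothness stands in for the score so the loop structure stays visible, and all names and constants are illustrative.

```python
import numpy as np

def smoothness(samples):
    # Higher score = smoother trajectory (smaller second differences over time)
    return -np.abs(np.diff(samples, 2, axis=1)).sum(axis=(1, 2))

def predict_trajectory(past, n_samples=20, n_iters=3, seed=0):
    """Sample candidate futures, score them, and iteratively refine the
    samples toward the best-scoring one."""
    rng = np.random.default_rng(seed)
    vel = past[-1] - past[-2]  # last observed velocity
    # Each sample: 5 future steps of constant velocity plus noise
    steps = vel[None, None, :] + rng.normal(0, 0.5, (n_samples, 5, 2))
    samples = past[-1] + steps.cumsum(axis=1)
    for _ in range(n_iters):
        best = samples[smoothness(samples).argmax()]
        samples = 0.5 * samples + 0.5 * best  # pull all samples toward the best
    return samples[smoothness(samples).argmax()]

past = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # agent moving along +x
future = predict_trajectory(past)
```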
  • Publication number: 20200082221
    Abstract: Systems and methods for domain adaptation are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Application
    Filed: August 8, 2019
    Publication date: March 12, 2020
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
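The two-stage selection in the abstract, first discriminator distance, then prediction confidence, reduces to a small filtering routine. A minimal sketch, assuming scores are already computed; the 0.5 threshold and `k` are illustrative choices, not values from the patent.

```python
def select_for_annotation(disc_scores, pred_scores, k=2):
    """Pick unlabeled target samples to annotate: keep those the domain
    discriminator rates as far from the annotated data (high score), then,
    among those, take the k with the lowest prediction confidence.
    """
    far = [i for i, d in enumerate(disc_scores) if d > 0.5]
    return sorted(far, key=lambda i: pred_scores[i])[:k]

disc = [0.9, 0.2, 0.8, 0.7, 0.1]     # how "far from annotated" each sample looks
conf = [0.95, 0.10, 0.30, 0.20, 0.5]  # classifier confidence per sample
chosen = select_for_annotation(disc, conf, k=2)
```

Samples 3 and 2 are chosen: both look unlike the annotated data, and the classifier is least confident about them, so annotating them adds the most information.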
  • Publication number: 20200065975
Abstract: A method is provided for drone-video-based action recognition. The method learns a transformation for each of target video clips taken from a set of target videos, responsive to original features extracted from the target video clips. The transformation corrects differences between a target drone domain corresponding to the target video clips and a source non-drone domain corresponding to source video clips taken from a set of source videos. The method adapts the target domain to the source domain by applying the transformation to the original features to obtain transformed features for the target video clips. The method converts the original and transformed features of same ones of the target video clips into a single classification feature for each of the target videos. The method classifies a human action in a new target video relative to the set of source videos using the single classification feature for each of the target videos.
    Type: Application
    Filed: July 18, 2019
    Publication date: February 27, 2020
    Inventors: Gaurav Sharma, Manmohan Chandraker, Jinwoo Choi
  • Publication number: 20200065617
Abstract: A method is provided for unsupervised domain adaptation for video classification. The method learns a transformation for each of the target video clips taken from a set of target videos, responsive to original features extracted from the target video clips. The transformation corrects differences between a target domain corresponding to the target video clips and a source domain corresponding to source video clips taken from a set of source videos. The method adapts the target domain to the source domain by applying the transformation to the original features to obtain transformed features for the target video clips. The method converts the original and transformed features of same ones of the target video clips into a single classification feature for each of the target videos. The method classifies a new target video relative to the set of source videos using the single classification feature for each of the target videos.
    Type: Application
    Filed: July 18, 2019
    Publication date: February 27, 2020
    Inventors: Gaurav Sharma, Manmohan Chandraker, Jinwoo Choi
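The transform-then-concatenate pattern shared by the two video adaptation entries above can be illustrated with a simple statistics-matching transformation. The learned transformation in the patents is replaced here by per-dimension mean/std alignment, which is an assumption for the sketch, not the actual method.

```python
import numpy as np

def adapt_clip_features(target_feats, source_feats, eps=1e-8):
    """Transform target-clip features so their statistics match the source
    domain (per-dimension mean/std alignment, a stand-in for the learned
    transformation), then concatenate original and transformed features
    into a single classification feature per clip.
    """
    mu_t, sd_t = target_feats.mean(0), target_feats.std(0) + eps
    mu_s, sd_s = source_feats.mean(0), source_feats.std(0) + eps
    transformed = (target_feats - mu_t) / sd_t * sd_s + mu_s
    return np.hstack([target_feats, transformed])

rng = np.random.default_rng(2)
src = rng.normal(0.0, 1.0, size=(50, 3))
tgt = rng.normal(4.0, 2.0, size=(50, 3))
combined = adapt_clip_features(tgt, src)
```

Keeping both halves lets the classifier see the clip as it was filmed and as it would look in the source domain.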
  • Publication number: 20200058156
    Abstract: A method for estimating dense 3D geometric correspondences between two input point clouds by employing a 3D convolutional neural network (CNN) architecture is presented. The method includes, during a training phase, transforming the two input point clouds into truncated distance function voxel grid representations, feeding the truncated distance function voxel grid representations into individual feature extraction layers with tied weights, extracting low-level features from a first feature extraction layer, extracting high-level features from a second feature extraction layer, normalizing the extracted low-level features and high-level features, and applying deep supervision of multiple contrastive losses and multiple hard negative mining modules at the first and second feature extraction layers.
    Type: Application
    Filed: July 30, 2019
    Publication date: February 20, 2020
    Inventors: Quoc-Huy Tran, Mohammed E. Fathy Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
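The input representation above, a truncated distance function (TDF) voxel grid, is straightforward to compute directly. A minimal sketch, with grid resolution and truncation distance chosen arbitrarily for illustration:

```python
import numpy as np

def tdf_voxel_grid(points, grid=4, trunc=1.5):
    """Convert a point cloud into a truncated distance function voxel grid:
    each voxel stores its truncated, normalized distance to the nearest
    point, so 0 means 'on the surface' and 1 means 'at least trunc away'.
    """
    axes = np.arange(grid) + 0.5  # voxel centers
    zz, yy, xx = np.meshgrid(axes, axes, axes, indexing="ij")
    centers = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3)
    d = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=-1).min(axis=1)
    return (np.minimum(d, trunc) / trunc).reshape(grid, grid, grid)

cloud = np.array([[0.5, 0.5, 0.5]])  # a single surface point in one corner
tdf = tdf_voxel_grid(cloud)
```

These grids are what the two tied-weight 3D CNN branches consume; the contrastive losses and hard negative mining then operate on the features extracted from them.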
  • Publication number: 20200050900
    Abstract: A method for implementing parametric models for scene representation to improve autonomous task performance includes generating an initial map of a scene based on at least one image corresponding to a perspective view of the scene, the initial map including a non-parametric top-view representation of the scene, implementing a parametric model to obtain a scene element representation based on the initial map, the scene element representation providing a description of one or more scene elements of the scene and corresponding to an estimated semantic layout of the scene, identifying one or more predicted locations of the one or more scene elements by performing three-dimensional localization based on the at least one image, and obtaining an overlay for performing an autonomous task by placing the one or more scene elements with the one or more respective predicted locations onto the scene element representation.
    Type: Application
    Filed: July 30, 2019
    Publication date: February 13, 2020
    Inventors: Samuel Schulter, Ziyan Wang, Buyu Liu, Manmohan Chandraker
  • Patent number: 10497257
Abstract: Systems and methods for vehicle surveillance include a camera for capturing target images of vehicles. An object recognition system is in communication with the camera, the object recognition system including a processor for executing a synthesizer module for generating a plurality of viewpoints of a vehicle depicted in a source image, and a domain adaptation module for performing domain adaptation between the viewpoints of the vehicle and the target images to classify vehicles of the target images regardless of the viewpoint represented in the target images. A display is in communication with the object recognition system for displaying each of the target images with labels corresponding to the vehicles of the target images.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: December 3, 2019
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Luan Tran, Xiang Yu, Manmohan Chandraker
  • Patent number: 10497143
Abstract: A system and method are provided for driving assistance. The system includes an image capture device configured to capture a video sequence, relative to an outward view from a vehicle, which includes a set of objects and is formed from a set of image frames. The system includes a processor configured to detect the objects to form a set of object detections, and track the set of object detections over the frames to form tracked detections. The processor is configured to generate for a current frame, responsive to conditions, a set of sparse object proposals for a current location of an object based on: (i) the tracked detections of the object from an immediately previous frame; and (ii) detection proposals for the object derived from the current frame. The processor is configured to perform an action to mitigate a likelihood of potential harm due to a current object location.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: December 3, 2019
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Wongun Choi, Bharat Singh, Manmohan Chandraker
  • Publication number: 20190354807
    Abstract: Systems and methods for domain adaptation for structured output via disentangled representations are provided. The system receives a ground truth of a source domain. The ground truth is used in a task loss function for a first convolutional neural network that predicts at least one output based on inputs from the source domain and a target domain. The system clusters the ground truth of the source domain into a predetermined number of clusters, and predicts, via a second convolutional neural network, a structure of label patches. The structure includes an assignment of each of the at least one output of the first convolutional neural network to the predetermined number of clusters. A cluster loss is computed for the predicted structure of label patches, and an adversarial loss function is applied to the predicted structure of label patches to align the source domain and the target domain on a structural level.
    Type: Application
    Filed: May 1, 2019
    Publication date: November 21, 2019
    Inventors: Yi-Hsuan Tsai, Samuel Schulter, Kihyuk Sohn, Manmohan Chandraker
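The clustering step in the abstract, grouping ground-truth label patches into a fixed number of clusters, can be sketched with plain k-means. The patch size, deterministic initialization, and k are illustrative choices standing in for whatever the patented system uses.

```python
import numpy as np

def cluster_patches(patches, k=2, iters=10):
    """Cluster flattened label patches with plain k-means (standing in for
    the patent's clustering of source-domain ground truth). Returns the
    cluster assignment for each patch. Centers are initialized from the
    first and last patches for determinism.
    """
    X = patches.reshape(len(patches), -1).astype(float)
    centers = X[[0, -1]] if k == 2 else X[np.linspace(0, len(X) - 1, k, dtype=int)]
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return assign

# Two obvious patch types: all-road (0) and all-sidewalk (1) 2x2 label patches
patches = np.array([np.zeros((2, 2)), np.zeros((2, 2)), np.ones((2, 2)), np.ones((2, 2))])
groups = cluster_patches(patches, k=2)
```

The cluster indices become the targets the second CNN predicts, giving the adversarial loss a structural, patch-level signal to align.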
  • Publication number: 20190354801
Abstract: A method for implementing an unsupervised cross-domain distance metric adaptation framework with a feature transfer network for enhancing facial recognition includes recursively training a feature transfer network and automatically labeling target domain data using a clustering method, and implementing the feature transfer network and the automatic labeling to perform a facial recognition task.
    Type: Application
    Filed: May 1, 2019
    Publication date: November 21, 2019
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Publication number: 20190347526
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
    Type: Application
    Filed: May 9, 2018
    Publication date: November 14, 2019
    Inventors: Kalyan Sunkavalli, Zhengqin Li, Manmohan Chandraker
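The rendering layer mentioned above turns predicted material properties back into an image so the prediction can be checked against the input. A minimal sketch using Lambertian shading; the patent's rendering layer handles richer material models, so this only shows the role such a layer plays.

```python
import numpy as np

def render_lambertian(albedo, normals, light_dir):
    """A minimal rendering layer: Lambertian shading
    image = albedo * max(0, n . l), per pixel.

    albedo:  (H, W, 3) surface color
    normals: (H, W, 3) unit surface normals
    """
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    shading = np.clip((normals * l).sum(axis=-1), 0.0, None)
    return albedo * shading[..., None]

# A flat 2x2 surface facing the camera, lit head-on: full albedo comes back
albedo = np.full((2, 2, 3), 0.6)
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0
img = render_lambertian(albedo, normals, light_dir=[0.0, 0.0, 1.0])
```

Because every operation here is differentiable, a loss between the rendered and input images can be backpropagated through the layer into the material-property decoders.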