Patents by Inventor Manmohan Chandraker

Manmohan Chandraker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10853627
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: December 1, 2020
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 10852749
    Abstract: A computer-implemented method, system, and computer program product are provided for pose estimation. The method includes receiving, by a processor, a plurality of images from one or more cameras. The method also includes generating, by the processor with a feature extraction convolutional neural network (CNN), a feature map for each of the plurality of images. The method additionally includes estimating, by the processor with a feature weighting network, a score map from a pair of the feature maps. The method further includes predicting, by the processor with a pose estimation CNN, a pose from the score map and a combined feature map. The method also includes controlling an operation of a processor-based machine to change a state of the processor-based machine, responsive to the pose.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: December 1, 2020
    Inventors: Quoc-Huy Tran, Manmohan Chandraker, Hyo Jin Kim
  • Patent number: 10853654
    Abstract: A computer-implemented method, system, and computer program product are provided for activity recognition. The method includes receiving, by a processor, a plurality of videos, the plurality of videos including labeled videos and unlabeled videos. The method also includes extracting, by the processor with a feature extraction convolutional neural network (CNN), frame features for frames from each of the plurality of videos. The method additionally includes estimating, by the processor with a feature aggregation system, a vector representation for one of the plurality of videos responsive to the frame features. The method further includes classifying, by the processor, an activity from the vector representation. The method also includes controlling an operation of a processor-based machine to react in accordance with the activity.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: December 1, 2020
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Patent number: 10853656
    Abstract: A computer-implemented method, system, and computer program product are provided for activity recognition in a surveillance system. The method includes receiving a plurality of unlabeled videos from one or more cameras. The method also includes classifying an activity in each of the plurality of unlabeled videos. The method additionally includes controlling an operation of a processor-based machine to react in accordance with the activity.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: December 1, 2020
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Patent number: 10853655
    Abstract: A computer-implemented method, system, and computer program product are provided for activity recognition in a mobile device. The method includes receiving a plurality of unlabeled videos from one or more cameras. The method also includes generating a classified video for each of the plurality of unlabeled videos by classifying an activity in each of the plurality of unlabeled videos. The method additionally includes storing the classified video in a location in a memory designated for videos of the activity in each of the classified videos.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: December 1, 2020
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Xiang Yu
  • Publication number: 20200372614
    Abstract: A method for correcting blur effects is presented. The method includes generating a plurality of images from a camera, synthesizing blurred images from sharp image counterparts to generate training data to train a structure-and-motion-aware convolutional neural network (CNN), and predicting a camera motion and a depth map from a single blurred image by employing the structure-and-motion-aware CNN to remove blurring from the single blurred image.
    Type: Application
    Filed: May 6, 2020
    Publication date: November 26, 2020
    Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
  • Patent number: 10832084
    Abstract: A method for estimating dense 3D geometric correspondences between two input point clouds by employing a 3D convolutional neural network (CNN) architecture is presented. The method includes, during a training phase, transforming the two input point clouds into truncated distance function voxel grid representations, feeding the truncated distance function voxel grid representations into individual feature extraction layers with tied weights, extracting low-level features from a first feature extraction layer, extracting high-level features from a second feature extraction layer, normalizing the extracted low-level features and high-level features, and applying deep supervision of multiple contrastive losses and multiple hard negative mining modules at the first and second feature extraction layers.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: November 10, 2020
    Assignee: NEC Corporation
    Inventors: Quoc-Huy Tran, Mohammed E. Fathy Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
  • Patent number: 10832440
    Abstract: A computer-implemented method, system, and computer program product are provided for object detection utilizing an online flow guided memory network. The method includes receiving a plurality of videos, each of the plurality of videos including a plurality of frames. The method also includes generating, with a feature extraction network, a frame feature map for a current frame of the plurality of frames. The method additionally includes aggregating a memory feature map from the frame feature map and previous memory feature maps from previous frames on a plurality of time axes, with the plurality of time axes including a first time axis at a first frame increment and a second time axis at a second frame increment. The method further includes predicting, with a task network, an object from the memory feature map. The method also includes controlling an operation of a processor-based machine to react in accordance with the object.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: November 10, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Wongun Choi, Tuan Hung Vu, Manmohan Chandraker
  • Patent number: 10810469
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: October 20, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Zhengqin Li, Manmohan Chandraker
  • Patent number: 10796134
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: October 6, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 10796135
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: October 6, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
  • Publication number: 20200286383
    Abstract: A method is provided in an Advanced Driver-Assistance System (ADAS). The method extracts, from an input video stream including a plurality of images, using a multi-task Convolutional Neural Network (CNN), shared features across different perception tasks. The perception tasks include object detection and other perception tasks. The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs. Each respective different branch corresponds to a respective one of the different perception tasks. The method forms a parametric representation of a driving scene as at least one top-view map responsive to the plurality of different perception task outputs.
    Type: Application
    Filed: February 11, 2020
    Publication date: September 10, 2020
    Inventors: Quoc-Huy Tran, Samuel Schulter, Paul Vernaza, Buyu Liu, Pan Ji, Yi-Hsuan Tsai, Manmohan Chandraker
  • Patent number: 10762359
    Abstract: Systems and methods for detecting traffic scenarios include an image capturing device which captures two or more images of an area of a traffic environment, with each image having a different view of vehicles and a road in the traffic environment. A hierarchical feature extractor concurrently extracts features at multiple neural network layers from each of the images, the features including geometric features and semantic features; it estimates correspondences between the semantic features of the images and refines those estimates with correspondences between the geometric features to generate refined correspondence estimates. A traffic localization module uses the refined correspondence estimates to determine locations of vehicles in the environment in three dimensions and automatically determine a traffic scenario according to the locations of vehicles. A notification device generates a notification of the traffic scenario.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: September 1, 2020
    Assignee: NEC Corporation
    Inventors: Quoc-Huy Tran, Mohammed E. F. Salem, Muhammad Zeeshan Zia, Paul Vernaza, Manmohan Chandraker
  • Patent number: 10740595
    Abstract: A computer-implemented method, system, and computer program product are provided for facial recognition. The method includes receiving, by a processor device, a plurality of images. The method also includes extracting, by the processor device with a feature extractor utilizing a convolutional neural network (CNN) with an enlarged intra-class variance of long-tail classes, feature vectors for each of the plurality of images. The method additionally includes generating, by the processor device with a feature generator, discriminative feature vectors for each of the feature vectors. The method further includes classifying, by the processor device utilizing a fully connected classifier, an identity from the discriminative feature vector. The method also includes controlling an operation of a processor-based machine to react in accordance with the identity.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: August 11, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Xi Yin, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 10740596
    Abstract: A computer-implemented method, system, and computer program product are provided for video security. The method includes monitoring an area with a camera. The method also includes capturing, by the camera, live video to provide a live video stream. The method additionally includes detecting and identifying, by a processor using a recognition neural network feeding into a Siamese reconstruction network, a user in the live video stream by employing one or more pose-invariant features. The method further includes controlling, by the processor, an operation of a processor-based machine to change a state of the processor-based machine, responsive to the identified user in the live video stream.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: August 11, 2020
    Assignee: NEC Corporation
    Inventors: Xiang Yu, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 10733756
    Abstract: A computer-implemented method, system, and computer program product are provided for object detection utilizing an online flow guided memory network. The method includes receiving, by a processor, a plurality of videos, each of the plurality of videos including a plurality of frames. The method also includes generating, by the processor with a feature extraction network, a frame feature map for a current frame of the plurality of frames. The method additionally includes determining, by the processor, a memory feature map from the frame feature map and a previous memory feature map from a previous frame by warping the previous memory feature map. The method further includes predicting, by the processor with a task network, an object from the memory feature map. The method also includes controlling an operation of a processor-based machine to react in accordance with the object.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: August 4, 2020
    Assignee: NEC Corporation
    Inventors: Wongun Choi, Samuel Schulter, Tuan Hung Vu, Manmohan Chandraker
  • Publication number: 20200234467
    Abstract: Systems and methods for camera self-calibration are provided. The method includes receiving real uncalibrated images, and estimating, using a camera self-calibration network, multiple predicted camera parameters corresponding to the real uncalibrated images. Deep supervision places supervision signals across multiple layers according to a dependence order among the predicted camera parameters. The method also includes determining calibrated images using the real uncalibrated images and the predicted camera parameters.
    Type: Application
    Filed: January 7, 2020
    Publication date: July 23, 2020
    Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
  • Patent number: 10706582
    Abstract: Systems and methods are described for multithreaded navigation assistance from images acquired with a single camera on-board a vehicle, using 2D-3D correspondences for continuous pose estimation and combining the pose estimation with 2D-2D epipolar search to replenish 3D points.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: July 7, 2020
    Assignee: NEC Corporation
    Inventors: Manmohan Chandraker, Shiyu Song
  • Patent number: 10706336
    Abstract: An object recognition system is provided that includes a device configured to capture a video sequence formed from unlabeled testing video frames. The system includes a processor configured to pre-train a recognition engine formed from a reference set of CNNs on a still image domain that includes labeled training still image frames. The processor adapts the recognition engine to a video domain to form an adapted recognition engine, by applying a non-reference set of CNNs to a set of domains that include the still image and video domains and a degraded image domain. The degraded image domain includes labeled synthetically degraded versions of the labeled training still image frames included in the still image domain. The video domain includes random unlabeled training video frames. The processor recognizes, using the adapted engine, a set of objects in the video sequence. A display device displays the set of recognized objects.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: July 7, 2020
    Assignee: NEC Corporation
    Inventors: Kihyuk Sohn, Xiang Yu, Manmohan Chandraker
  • Patent number: 10678256
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai
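
The abstracts above reuse a small set of deep-learning building blocks. The short Python sketches that follow illustrate several of them; each is a minimal reading of the corresponding abstract, with layer sizes, hyperparameters, and helper names chosen for illustration only, and none reproduces the patented implementation.

Patents 10853627, 10796134, 10796135, and 10740595 describe a face-recognition pipeline of feature extractor, feature generator, and fully connected classifier, with the extractor trained so that long-tail identities get an enlarged intra-class variance. A minimal sketch of that pipeline shape, assuming a noise-perturbing feature generator and arbitrary dimensions:

```python
# Illustrative sketch only -- layer sizes and the noise-based "feature generator"
# are assumptions, not the patented method.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Small CNN that maps a face image to a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class FeatureGenerator(nn.Module):
    """Perturbs feature vectors to produce discriminative variants, e.g. to
    enlarge the variance available for under-represented (long-tail) identities."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, feat_dim)

    def forward(self, f):
        noise = 0.1 * torch.randn_like(f)          # assumed augmentation
        return torch.relu(self.fc(f + noise))

extractor, generator = FeatureExtractor(), FeatureGenerator()
classifier = nn.Linear(128, 1000)                  # 1000 identities (assumed)

images = torch.randn(4, 3, 112, 112)               # a batch of face crops
logits = classifier(generator(extractor(images)))
identity = logits.argmax(dim=1)
print(identity.shape)                              # torch.Size([4])
```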
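
Patent 10852749 pairs per-image feature maps, derives a score map with a feature weighting network, and regresses a pose from the score map together with the combined feature map. A minimal sketch, assuming a 6-DoF pose output and a sigmoid score map:

```python
# Illustrative sketch only -- the weighting scheme and pose parameterization are assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # feature extraction CNN
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
weighting = nn.Sequential(                     # feature weighting network -> score map
    nn.Conv2d(128, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1), nn.Sigmoid(),
)
pose_head = nn.Sequential(                     # pose estimation CNN (6-DoF output assumed)
    nn.Conv2d(129, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 6),
)

img_a, img_b = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
feat_a, feat_b = backbone(img_a), backbone(img_b)
pair = torch.cat([feat_a, feat_b], dim=1)      # combined feature map
score = weighting(pair)                        # per-location reliability score map
pose = pose_head(torch.cat([pair * score, score], dim=1))
print(pose.shape)                              # torch.Size([1, 6])
```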
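
Patents 10853654, 10853655, and 10853656 share an activity-recognition pipeline: per-frame CNN features, a feature aggregation step that yields one vector per video, and a classifier. A minimal sketch, assuming mean pooling over frames as the aggregation:

```python
# Illustrative sketch only -- mean pooling over time is an assumed aggregation;
# the patents describe a feature aggregation system without fixing it to this choice.
import torch
import torch.nn as nn

frame_cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 128),
)
activity_classifier = nn.Linear(128, 20)       # 20 activity classes (assumed)

video = torch.randn(16, 3, 112, 112)           # 16 frames of one video
frame_features = frame_cnn(video)              # (16, 128), one vector per frame
video_vector = frame_features.mean(dim=0)      # aggregate frames into one video vector
activity = activity_classifier(video_vector).argmax()
print(int(activity))
```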
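
Application 20200372614 trains its structure-and-motion-aware CNN on blurred images synthesized from sharp counterparts. The sketch below fakes the data-synthesis step with a simple linear camera-shake model; that model is an assumption, since the application ties the synthesis to camera motion and scene depth:

```python
# Illustrative sketch only -- a linear shake is an assumed, simplified motion model.
import numpy as np

def synthesize_blur(sharp, num_steps=9, shift_per_step=(1, 0)):
    """Average copies of a sharp image shifted along a motion path."""
    acc = np.zeros_like(sharp, dtype=np.float64)
    for k in range(num_steps):
        dy, dx = k * shift_per_step[0], k * shift_per_step[1]
        acc += np.roll(np.roll(sharp, dy, axis=0), dx, axis=1)
    return (acc / num_steps).astype(sharp.dtype)

sharp = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
blurred = synthesize_blur(sharp)               # (blurred, sharp) forms one training pair
print(blurred.shape)
```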
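
Patent 10832084 voxelizes two point clouds into truncated distance function grids and pushes both through feature extraction layers with tied weights, normalizing the extracted features and supervising them with contrastive losses and hard negative mining. A minimal sketch of the TDF conversion and the tied-weight encoder, with grid size and truncation chosen arbitrarily:

```python
# Illustrative sketch only -- grid resolution, truncation, and network depth are assumptions.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def tdf_voxel_grid(points, res=16, trunc=0.1):
    """Truncated distance function grid: each voxel stores its (truncated)
    distance to the nearest point, for points normalized to [0, 1]^3."""
    lin = (np.arange(res) + 0.5) / res
    zz, yy, xx = np.meshgrid(lin, lin, lin, indexing="ij")
    centers = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)
    d = np.linalg.norm(centers[:, None, :] - points[None, :, :], axis=-1).min(axis=1)
    return np.minimum(d, trunc).reshape(res, res, res).astype(np.float32)

encoder = nn.Sequential(                        # tied weights: the same module
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),  # is applied to both voxel grids
    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
)

cloud_a, cloud_b = np.random.rand(200, 3), np.random.rand(200, 3)
grids = torch.stack([
    torch.from_numpy(tdf_voxel_grid(cloud_a)),
    torch.from_numpy(tdf_voxel_grid(cloud_b)),
]).unsqueeze(1)                                 # (2, 1, 16, 16, 16)
feats = F.normalize(encoder(grids), dim=1)      # per-voxel descriptors, L2-normalized
# Training would add contrastive losses over matching / non-matching voxel pairs
# with hard negative mining at multiple layers (deep supervision).
print(feats.shape)                              # torch.Size([2, 32, 16, 16, 16])
```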
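
Patents 10832440 and 10733756 aggregate a memory feature map over time for video object detection, the former on two time axes with different frame increments and the latter by warping the previous memory with optical flow. The sketch below substitutes simple exponential moving averages at two strides for the flow-guided aggregation:

```python
# Illustrative sketch only -- moving averages at two frame strides stand in for the
# flow-guided aggregation; the actual flow warping of the memory is omitted.
import torch
import torch.nn as nn

feature_net = nn.Conv2d(3, 32, 3, stride=2, padding=1)   # frame feature extractor
task_net = nn.Conv2d(32, 5, 1)                            # per-location object scores

memory_fast, memory_slow = None, None
SLOW_STRIDE = 4                                           # second time-axis increment (assumed)

frames = torch.randn(12, 3, 64, 64)                       # a short video
for t, frame in enumerate(frames):
    feat = feature_net(frame.unsqueeze(0))
    # In the patented pipeline the previous memories would first be warped toward
    # the current frame using optical flow before aggregation.
    memory_fast = feat if memory_fast is None else 0.5 * memory_fast + 0.5 * feat
    if t % SLOW_STRIDE == 0:
        memory_slow = feat if memory_slow is None else 0.5 * memory_slow + 0.5 * feat
    memory = 0.5 * memory_fast + 0.5 * memory_slow
    scores = task_net(memory)

print(scores.shape)                                       # torch.Size([1, 5, 32, 32])
```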
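
Patent 10810469 extracts material properties from a single image with a shared encoder, a material classifier, and per-property decoders, plus a rendering layer for supervision. A minimal sketch of the encoder/classifier/decoder split, with the property heads assumed to be albedo, normals, and roughness:

```python
# Illustrative sketch only -- the property heads and sizes are assumptions;
# the rendering layer that re-synthesizes model images is omitted.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
material_classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),   # 10 material classes (assumed)
)
def decoder(out_channels):
    return nn.Sequential(
        nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        nn.Conv2d(64, out_channels, 3, padding=1),
    )
albedo_dec, normal_dec, rough_dec = decoder(3), decoder(3), decoder(1)

image = torch.randn(1, 3, 128, 128)
code = encoder(image)
material_class = material_classifier(code).argmax(dim=1)
albedo, normals, roughness = albedo_dec(code), normal_dec(code), rough_dec(code)
# A differentiable rendering layer would turn (albedo, normals, roughness) back into
# a model image so the reconstruction can supervise training.
print(albedo.shape, normals.shape, roughness.shape)
```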
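
Application 20200286383 runs one multi-task CNN pass: a shared backbone feeds several perception heads (object detection among them), whose outputs then parameterize a top-view map. A minimal sketch of the shared-backbone, multiple-head structure; the top-view map construction is omitted, and the two heads shown are assumptions:

```python
# Illustrative sketch only -- detection and segmentation heads stand in for the
# application's set of perception tasks.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # shared features across tasks
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
detection_head = nn.Conv2d(64, 5, 1)           # per-location objectness + box offsets (assumed)
segmentation_head = nn.Conv2d(64, 12, 1)       # 12 semantic classes (assumed)

image = torch.randn(1, 3, 256, 512)
shared = backbone(image)                        # one pass over the backbone
detections = detection_head(shared)             # both heads reuse the shared features
segmentation = segmentation_head(shared)
print(detections.shape, segmentation.shape)
```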
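
Application 20200234467 predicts several camera parameters from uncalibrated images and places supervision at multiple layers following a dependence order among the parameters. The sketch assumes one such order, focal length before radial distortion:

```python
# Illustrative sketch only -- the dependence order (focal length, then distortion)
# and the head/layer sizes are assumptions, not taken from the application.
import torch
import torch.nn as nn

class SelfCalibNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.focal_head = nn.Linear(32, 1)           # supervised early (first in the order)
        self.stage2 = nn.Sequential(nn.Linear(32 + 1, 32), nn.ReLU())
        self.distortion_head = nn.Linear(32, 1)      # supervised later, conditioned on focal

    def forward(self, image):
        h = self.stage1(image)
        focal = self.focal_head(h)
        k1 = self.distortion_head(self.stage2(torch.cat([h, focal], dim=1)))
        return focal, k1                             # losses attach to both outputs

net = SelfCalibNet()
focal, k1 = net(torch.randn(2, 3, 128, 128))
print(focal.shape, k1.shape)                         # torch.Size([2, 1]) twice
```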
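
Patent 10706582 alternates continuous pose estimation from 2D-3D correspondences with 2D-2D epipolar search that replenishes the 3D point set. The sketch below uses OpenCV's PnP and triangulation on synthetic correspondences to show that two-part structure; the multithreading and the feature tracking that would supply real matches are omitted:

```python
# Illustrative sketch only -- random correspondences stand in for real feature tracks.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# Continuous pose estimation from existing 2D-3D correspondences (PnP).
pts3d = np.random.rand(50, 3) + np.array([0, 0, 5.0])    # points in front of the camera
rvec_gt, tvec_gt = np.array([0.05, 0.02, 0.0]), np.array([0.1, 0.0, 0.0])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)

# Replenish 3D points: triangulate fresh 2D-2D matches between the previous
# and current views (here, projections of new synthetic points).
new3d = np.random.rand(20, 3) + np.array([0, 0, 5.0])
prev2d, _ = cv2.projectPoints(new3d, np.zeros(3), np.zeros(3), K, None)
curr2d, _ = cv2.projectPoints(new3d, rvec, tvec, K, None)
R, _ = cv2.Rodrigues(rvec)
P_prev = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_curr = K @ np.hstack([R, tvec.reshape(3, 1)])
hom = cv2.triangulatePoints(P_prev, P_curr, prev2d.reshape(-1, 2).T, curr2d.reshape(-1, 2).T)
replenished = (hom[:3] / hom[3]).T                        # new 3D points for later tracking
print(ok, replenished.shape)
```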
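
Patent 10706336 adapts a still-image recognition engine to video using, among other domains, synthetically degraded versions of the labeled stills. One assumed way to synthesize that degraded domain while keeping the original labels:

```python
# Illustrative sketch only -- downscale/upscale plus noise is one assumed degradation;
# the adaptation training across still, degraded, and video domains is not shown.
import numpy as np

def degrade(still, factor=4, noise_std=10.0):
    """Synthetically degrade a labeled still (the label is unchanged)."""
    small = still[::factor, ::factor]                     # crude downscale
    restored = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    noisy = restored.astype(np.float64) + np.random.normal(0, noise_std, restored.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

still = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
label = 7                                                 # label carries over to the degraded copy
degraded = degrade(still)
print(degraded.shape, label)
```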
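
Patent 10678256 masks foreground objects, infers the occluded semantics and depths behind them, and maps everything into a bird's eye view. The sketch shows only the final back-projection into a top-view grid, assuming per-pixel depth and semantics (including the inferred, previously occluded values) are already available:

```python
# Illustrative sketch only -- the CNN that inpaints occluded semantics and depth
# is omitted; grid extents and camera intrinsics are assumptions.
import numpy as np

H, W = 120, 160
fx = 100.0
cx = W / 2

depth = np.random.uniform(3.0, 30.0, (H, W))      # per-pixel depth (inferred where occluded)
semantics = np.random.randint(0, 5, (H, W))       # per-pixel class (inferred where occluded)

_, us = np.mgrid[0:H, 0:W]
x = (us - cx) * depth / fx                        # back-project pixels to camera space (lateral)
z = depth                                         # forward distance
bev = np.zeros((64, 64), dtype=np.int64)          # bird's eye view over (forward, lateral)
col = np.clip(((x + 16) / 32 * 64).astype(int), 0, 63)
row = np.clip((z / 32 * 64).astype(int), 0, 63)
bev[row, col] = semantics                         # last write wins in this toy rasterization
print(bev.shape)
```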