Patents by Inventor Manmohan Chandraker

Manmohan Chandraker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11610420
    Abstract: Systems and methods for human detection are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes humans in one or more different scenes. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores. (A minimal code sketch of this domain-discriminator-based sample-selection strategy, which is shared with several related patents below, appears after this listing.)
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: March 21, 2023
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Patent number: 11604943
    Abstract: Systems and methods for domain adaptation for structured output via disentangled representations are provided. The system receives a ground truth of a source domain. The ground truth is used in a task loss function for a first convolutional neural network that predicts at least one output based on inputs from the source domain and a target domain. The system clusters the ground truth of the source domain into a predetermined number of clusters, and predicts, via a second convolutional neural network, a structure of label patches. The structure includes an assignment of each of the at least one output of the first convolutional neural network to the predetermined number of clusters. A cluster loss is computed for the predicted structure of label patches, and an adversarial loss function is applied to the predicted structure of label patches to align the source domain and the target domain on a structural level.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: March 14, 2023
    Inventors: Yi-Hsuan Tsai, Samuel Schulter, Kihyuk Sohn, Manmohan Chandraker
  • Patent number: 11604945
    Abstract: Systems and methods for lane marking and road sign recognition are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes one or more road scenes having lane markings and road signs. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: March 14, 2023
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Publication number: 20230073055
    Abstract: A computer-implemented method for rut detection is provided. The method includes detecting, by a rut detection system, areas in a road-scene image that include ruts with pixel-wise probability values, wherein a higher value indicates a better chance of being a rut. The method further includes performing at least one of rut repair and vehicle rut avoidance responsive to the pixel-wise probability values. The detecting step includes performing neural network-based, pixel-wise semantic segmentation with context information on the road-scene image to distinguish rut pixels from non-rut pixels on a road depicted in the road-scene image.
    Type: Application
    Filed: September 6, 2022
    Publication date: March 9, 2023
    Inventors: Yi-Hsuan Tsai, Sparsh Garg, Manmohan Chandraker, Samuel Schulter, Vijay Kumar Baikampady Gopalkrishna
  • Patent number: 11600113
    Abstract: A computer-implemented method for implementing face recognition includes obtaining a face recognition model trained on labeled face data, separating, using a mixture of probability distributions, a plurality of unlabeled faces corresponding to unlabeled face data into a set of one or more overlapping unlabeled faces that include overlapping identities to those in the labeled face data and a set of one or more disjoint unlabeled faces that include disjoint identities to those in the labeled face data, clustering the one or more disjoint unlabeled faces using a graph convolutional network to generate one or more cluster assignments, generating a clustering uncertainty associated with the one or more cluster assignments, and retraining the face recognition model on the labeled face data and the unlabeled face data to improve face recognition performance by incorporating the clustering uncertainty.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: March 7, 2023
    Inventors: Xiang Yu, Manmohan Chandraker, Kihyuk Sohn, Aruni RoyChowdhury
  • Patent number: 11599974
    Abstract: A method for jointly removing rolling shutter (RS) distortions and blur artifacts in a single input RS and blurred image is presented. The method includes generating a plurality of RS blurred images from a camera, synthesizing RS blurred images from a set of global shutter (GS) sharp images, corresponding GS sharp depth maps, and synthesized RS camera motions by employing a structure-and-motion-aware RS distortion and blur rendering module to generate training data to train a single-view joint RS correction and deblurring convolutional neural network (CNN), and predicting an RS rectified and deblurred image from the single input RS and blurred image by employing the single-view joint RS correction and deblurring CNN.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: March 7, 2023
    Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
  • Patent number: 11594041
    Abstract: Systems and methods for obstacle detection are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes one or more road scenes having obstacles. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: February 28, 2023
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Patent number: 11580334
    Abstract: Systems and methods for construction zone segmentation are provided. The system aligns image level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes construction zones scenes having various objects. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples from the target domain. The system selects, based on a prediction score of each of the unlabeled samples, samples with lower prediction scores. The system annotates the samples with the lower prediction scores.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: February 14, 2023
    Inventors: Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Manmohan Chandraker, Jong-Chyi Su
  • Patent number: 11580780
    Abstract: A computer-implemented method for implementing face recognition includes receiving training data including a plurality of augmented images each corresponding to a respective one of a plurality of input images augmented by one of a plurality of variations, splitting a feature embedding generated from the training data into a plurality of sub-embeddings each associated with one of the plurality of variations, associating each of the plurality of sub-embeddings with respective ones of a plurality of confidence values, and applying a plurality of losses including a confidence-aware identification loss and a variation-decorrelation loss to the plurality of sub-embeddings and the plurality of confidence values to improve face recognition performance by learning the plurality of sub-embeddings.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: February 14, 2023
    Inventors: Xiang Yu, Manmohan Chandraker, Kihyuk Sohn, Yichun Shi
  • Patent number: 11518382
    Abstract: A method is provided for danger prediction. The method includes generating fully-annotated simulated training data for a machine learning model responsive to receiving a set of computer-selected simulator-adjusting parameters. The method further includes training the machine learning model using reinforcement learning on the fully-annotated simulated training data. The method also includes measuring an accuracy of the trained machine learning model relative to learning a discriminative function for a given task. The discriminative function predicts a given label for a given image from the fully-annotated simulated training data. The method additionally includes adjusting the computer-selected simulator-adjusting parameters and repeating said training and measuring steps responsive to the accuracy being below a threshold accuracy.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: December 6, 2022
    Inventors: Samuel Schulter, Nataniel Ruiz, Manmohan Chandraker
  • Patent number: 11520923
    Abstract: A method for protecting visual private data by preventing data reconstruction from latent representations of deep networks is presented. The method includes obtaining latent features from an input image and learning, via an adversarial reconstruction learning framework, privacy-preserving feature representations to maintain utility performance and prevent the data reconstruction by simulating a black-box model inversion attack by training a decoder to reconstruct the input image from the latent features and training an encoder to maximize a reconstruction error to prevent the decoder from inverting the latent features while minimizing the task loss. (A minimal sketch of this alternating encoder/decoder training step appears after this listing.)
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: December 6, 2022
    Inventors: Kihyuk Sohn, Manmohan Chandraker, Yi-Hsuan Tsai
  • Patent number: 11468585
    Abstract: A method for improving geometry-based monocular structure from motion (SfM) by exploiting depth maps predicted by convolutional neural networks (CNNs) is presented. The method includes capturing a sequence of RGB images from an unlabeled monocular video stream obtained by a monocular camera, feeding the RGB images into a depth estimation/refinement module, outputting depth maps, feeding the depth maps and the RGB images to a pose estimation/refinement module, the depth maps and the RGB images collectively defining pseudo RGB-D images, outputting camera poses and point clouds, and constructing a 3D map of a surrounding environment displayed on a visualization device.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: October 11, 2022
    Inventors: Quoc-Huy Tran, Pan Ji, Manmohan Chandraker, Lokender Tiwari
  • Patent number: 11462112
    Abstract: A method is provided in an Advanced Driver-Assistance System (ADAS). The method extracts, from an input video stream including a plurality of images using a multi-task Convolutional Neural Network (CNN), shared features across different perception tasks. The perception tasks include object detection and other perception tasks. The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs. Each respective different branch corresponds to a respective one of the different perception tasks. The method forms a parametric representation of a driving scene as at least one top-view map responsive to the plurality of different perception task outputs.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: October 4, 2022
    Inventors: Quoc-Huy Tran, Samuel Schulter, Paul Vernaza, Buyu Liu, Pan Ji, Yi-Hsuan Tsai, Manmohan Chandraker
  • Patent number: 11455813
    Abstract: Systems and methods are provided for producing a road layout model. The method includes capturing digital images having a perspective view, converting each of the digital images into top-down images, and conveying a top-down image of time t to a neural network that performs a feature transform to form a feature map of time t. The method also includes transferring the feature map of the top-down image of time t to a feature transform module to warp the feature map to a time t+1, and conveying a top-down image of time t+1 to form a feature map of time t+1. The method also includes combining the warped feature map of time t with the feature map of time t+1 to form a combined feature map, transferring the combined feature map to a long short-term memory (LSTM) module to generate the road layout model, and displaying the road layout model.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: September 27, 2022
    Inventors: Buyu Liu, Bingbing Zhuang, Samuel Schulter, Manmohan Chandraker
  • Patent number: 11373067
    Abstract: A method for implementing parametric models for scene representation to improve autonomous task performance includes generating an initial map of a scene based on at least one image corresponding to a perspective view of the scene, the initial map including a non-parametric top-view representation of the scene, implementing a parametric model to obtain a scene element representation based on the initial map, the scene element representation providing a description of one or more scene elements of the scene and corresponding to an estimated semantic layout of the scene, identifying one or more predicted locations of the one or more scene elements by performing three-dimensional localization based on the at least one image, and obtaining an overlay for performing an autonomous task by placing the one or more scene elements with the one or more respective predicted locations onto the scene element representation.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: June 28, 2022
    Inventors: Samuel Schulter, Ziyan Wang, Buyu Liu, Manmohan Chandraker
  • Publication number: 20220147735
    Abstract: A method for employing facial information in unsupervised person re-identification is presented. The method includes extracting, by a body feature extractor, body features from a first data stream, extracting, by a head feature extractor, head features from a second data stream, outputting a body descriptor vector from the body feature extractor, outputting a head descriptor vector from the head feature extractor, and concatenating the body descriptor vector and the head descriptor vector to enable a model to generate a descriptor vector. (A minimal sketch of this descriptor concatenation appears after this listing.)
    Type: Application
    Filed: November 5, 2021
    Publication date: May 12, 2022
    Inventors: Yumin Suh, Xiang Yu, Yi-Hsuan Tsai, Masoud Faraki, Manmohan Chandraker
  • Publication number: 20220147746
    Abstract: A computer-implemented method for road layout prediction is provided. The method includes segmenting, by a first processor-based element, an RGB image to output pixel-level semantic segmentation results for the RGB image in a perspective view for both visible and occluded pixels in the perspective view based on contextual clues. The method further includes learning, by a second processor-based element, a mapping from the pixel-level semantic segmentation results for the RGB image in the perspective view to a top view of the RGB image using a road plane assumption. The method also includes generating, by a third processor-based element, an occlusion-aware parametric road layout prediction for road layout related attributes in the top view.
    Type: Application
    Filed: November 8, 2021
    Publication date: May 12, 2022
    Inventors: Buyu Liu, Bingbing Zhuang, Manmohan Chandraker
  • Publication number: 20220147767
    Abstract: A method for training a model for face recognition is provided. The method forward trains a training batch of samples to form a face recognition model w(t), and calculates sample weights for the batch. The method obtains a training batch gradient with respect to model weights thereof and updates, using the gradient, the model w(t) to a face recognition model ŵ(t). The method forwards a validation batch of samples to the face recognition model ŵ(t). The method obtains a validation batch gradient, and updates, using the validation batch gradient and ŵ(t), a sample-level importance weight of samples in the training batch to obtain an updated sample-level importance weight. The method obtains a training batch upgraded gradient based on the updated sample-level importance weight of the training batch samples, and updates, using the upgraded gradient, the model w(t) to a trained model w(t+1) corresponding to a next iteration.
    Type: Application
    Filed: November 8, 2021
    Publication date: May 12, 2022
    Inventors: Xiang Yu, Yi-Hsuan Tsai, Masoud Faraki, Ramin Moslemi, Manmohan Chandraker, Chang Liu
  • Publication number: 20220147765
    Abstract: A method for improving face recognition from unseen domains by learning semantically meaningful representations is presented. The method includes obtaining face images with associated identities from a plurality of datasets, randomly selecting two datasets of the plurality of datasets to train a model, sampling batch face images and their corresponding labels, sampling triplet samples including one anchor face image, a sample face image from a same identity, and a sample face image from a different identity than that of the one anchor face image, performing a forward pass by using the samples of the selected two datasets, finding representations of the face images by using a backbone convolutional neural network (CNN), generating covariances from the representations of the face images and the backbone CNN, the covariances made in different spaces by using positive pairs and negative pairs, and employing the covariances to compute a cross-domain similarity loss function.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 12, 2022
    Inventors: Masoud Faraki, Xiang Yu, Yi-Hsuan Tsai, Yumin Suh, Manmohan Chandraker
  • Publication number: 20220144256
    Abstract: A method for driving path prediction is provided. The method concatenates past trajectory features and lane centerline features in a channel dimension at an agent's respective location in a top view map to obtain concatenated features thereat. The method obtains convolutional features derived from the top view map, the concatenated features, and a single representation of the training scene including the vehicle and agent interactions. The method extracts hypercolumn descriptor vectors which include the convolutional features from the agent's respective location in the top view map. The method obtains primary and auxiliary trajectory predictions from the hypercolumn descriptor vectors. The method generates a respective score for each of the primary and auxiliary trajectory predictions.
    Type: Application
    Filed: November 8, 2021
    Publication date: May 12, 2022
    Inventors: Sriram Nochur Narayanan, Ramin Moslemi, Francesco Pittaluga, Buyu Liu, Manmohan Chandraker
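
Code sketches for selected abstracts

The following sketch illustrates the sample-selection strategy described in the abstracts for patent numbers 11610420, 11604945, 11594041, and 11580334: after adversarial feature alignment, the domain discriminator flags unlabeled target-domain samples that are far from the existing annotated samples, and the ones with the lowest prediction scores are sent for annotation. This is a minimal, hypothetical PyTorch-style sketch; the `task_model.backbone`/`task_model.head` split, the discriminator's score convention, and a data loader that yields sample indices are all assumptions, not details from the patents.

```python
import torch

def select_samples_for_annotation(task_model, domain_discriminator,
                                  unlabeled_loader, budget=100):
    """Pick unlabeled target-domain samples that (a) the discriminator scores as far
    from the annotated distribution and (b) the task model predicts with low confidence.
    Hypothetical sketch, not the patented method verbatim."""
    task_model.eval()
    domain_discriminator.eval()
    scored = []
    with torch.no_grad():
        for images, indices in unlabeled_loader:       # loader assumed to yield sample indices
            feats = task_model.backbone(images)         # image-level features
            # Assumed convention: score near 1.0 means "looks like the annotated set",
            # near 0.0 means "far away from existing annotated samples".
            domain_score = domain_discriminator(feats).squeeze(1)
            # Prediction score: maximum softmax probability of the task head.
            pred_score = task_model.head(feats).softmax(dim=1).max(dim=1).values
            scored.extend(zip(indices.tolist(), domain_score.tolist(), pred_score.tolist()))
    # First keep samples far from the annotated data (lowest domain scores) ...
    far_samples = sorted(scored, key=lambda t: t[1])[: budget * 5]
    # ... then, among those, keep the samples with the lowest prediction scores.
    selected = sorted(far_samples, key=lambda t: t[2])[:budget]
    return [idx for idx, _, _ in selected]              # indices to send for annotation
```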
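Patent number 11520923 describes learning privacy-preserving latent features by simulating a model-inversion attacker: a decoder is trained to reconstruct the input from the latent features, while the encoder is trained to keep task performance but maximize the decoder's reconstruction error. Below is a minimal sketch of one such alternating update, assuming PyTorch, a cross-entropy task loss, an MSE reconstruction loss, and optimizers `enc_opt` (over encoder and task-head parameters) and `dec_opt` (over decoder parameters); these specifics are assumptions, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def adversarial_privacy_step(encoder, task_head, decoder, images, labels,
                             enc_opt, dec_opt, lam=1.0):
    """One hypothetical alternating update of the adversarial reconstruction framework."""
    # 1) Simulated attacker: train the decoder to invert detached latent features.
    with torch.no_grad():
        latents = encoder(images)
    dec_opt.zero_grad()
    dec_loss = F.mse_loss(decoder(latents), images)
    dec_loss.backward()
    dec_opt.step()

    # 2) Defender: train the encoder (and task head) to minimize the task loss
    #    while maximizing the decoder's reconstruction error on its latents.
    enc_opt.zero_grad()
    latents = encoder(images)
    task_loss = F.cross_entropy(task_head(latents), labels)
    recon_err = F.mse_loss(decoder(latents), images)
    (task_loss - lam * recon_err).backward()   # gradients reaching the decoder here are
    enc_opt.step()                             # discarded at the next dec_opt.zero_grad()
    return dec_loss.item(), task_loss.item(), recon_err.item()
```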
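Publication number 20220147735 builds a person descriptor by concatenating a body descriptor vector and a head descriptor vector. The sketch below shows only that concatenation step, with hypothetical `body_extractor` and `head_extractor` modules; the per-part L2 normalization is an added assumption, not something stated in the abstract.

```python
import torch
import torch.nn.functional as F

def build_person_descriptor(body_extractor, head_extractor, body_crop, head_crop):
    """Concatenate body and head descriptor vectors into one re-identification descriptor."""
    with torch.no_grad():
        body_vec = body_extractor(body_crop)   # e.g. shape (N, D_body)
        head_vec = head_extractor(head_crop)   # e.g. shape (N, D_head)
    # Normalize each part so neither dominates the concatenated descriptor (assumption).
    body_vec = F.normalize(body_vec, dim=1)
    head_vec = F.normalize(head_vec, dim=1)
    return torch.cat([body_vec, head_vec], dim=1)  # shape (N, D_body + D_head)
```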