Patents by Inventor Samuel Schulter

Samuel Schulter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11468591
    Abstract: Systems and methods for road typology scene annotation are provided. A method for road typology scene annotation includes receiving an image having a road scene. The image is received from an imaging device. The method populates, using a machine learning model, a set of attribute settings with values representing the road scene. An annotation interface is implemented and configured to adjust values of the attribute settings to correspond with the road scene. Based on the values of the attribute settings, a simulated overhead view of the respective road scene is generated.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: October 11, 2022
    Inventor: Samuel Schulter
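The attribute-driven annotation flow above — populate attribute settings, let an annotator adjust them, render an overhead view from the values — can be sketched in a few lines. The attribute names and the ASCII rendering are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical attribute settings for a road scene; the patent's actual
# attribute schema is not specified here.
@dataclass
class SceneAttributes:
    num_lanes: int = 2
    has_crosswalk: bool = False

def render_overhead(attrs: SceneAttributes, width: int = 21, rows: int = 5) -> str:
    """Render a coarse ASCII 'simulated overhead view' from attribute values."""
    lane_width = width // attrs.num_lanes
    lines = []
    for r in range(rows):
        row = []
        for c in range(width):
            if c % lane_width == 0 and 0 < c < width - 1:
                row.append("|")          # lane boundary
            elif attrs.has_crosswalk and r == rows // 2:
                row.append("=")          # crosswalk stripe
            else:
                row.append(".")          # road surface
        lines.append("".join(row))
    return "\n".join(lines)
```

An annotator (or the machine learning model) would only edit the `SceneAttributes` values; the overhead view is regenerated from them.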
  • Patent number: 11462112
    Abstract: A method is provided in an Advanced Driver-Assistance System (ADAS). The method extracts, from an input video stream including a plurality of images using a multi-task Convolutional Neural Network (CNN), shared features across different perception tasks. The perception tasks include object detection and other perception tasks. The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs. Each respective different branch corresponds to a respective one of the different perception tasks. The method forms a parametric representation of a driving scene as at least one top-view map responsive to the plurality of different perception task outputs.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: October 4, 2022
    Inventors: Quoc-Huy Tran, Samuel Schulter, Paul Vernaza, Buyu Liu, Pan Ji, Yi-Hsuan Tsai, Manmohan Chandraker
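The single-pass multi-task idea — one shared feature extractor feeding several task branches at once — can be sketched as follows. The backbone, head names, and toy computations are hypothetical stand-ins for the multi-task CNN the abstract describes.

```python
def backbone(image):
    # Stand-in for a CNN trunk: shared features, computed once per frame.
    return [sum(image), max(image), min(image)]

HEADS = {
    "object_detection": lambda f: f[0] > 10,    # toy "detector"
    "depth":            lambda f: f[1] - f[2],  # toy "depth range"
    "lane":             lambda f: f[0] / len(f),
}

def perceive(image):
    """One forward pass: the shared features feed every task branch."""
    feats = backbone(image)                     # computed once, reused by all heads
    return {task: head(feats) for task, head in HEADS.items()}

def top_view(outputs):
    # Fuse the per-task outputs into a parametric 'top-view' summary.
    return {"objects_present": outputs["object_detection"],
            "depth_range": outputs["depth"]}
```

The point of the structure is that adding a perception task adds only a head, not a second backbone pass.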
  • Patent number: 11455813
    Abstract: Systems and methods are provided for producing a road layout model. The method includes capturing digital images having a perspective view, converting each of the digital images into top-down images, and conveying a top-down image of time t to a neural network that performs a feature transform to form a feature map of time t. The method also includes transferring the feature map of the top-down image of time t to a feature transform module to warp the feature map to a time t+1, and conveying a top-down image of time t+1 to form a feature map of time t+1. The method also includes combining the warped feature map of time t with the feature map of time t+1 to form a combined feature map, transferring the combined feature map to a long short-term memory (LSTM) module to generate the road layout model, and displaying the road layout model.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: September 27, 2022
    Inventors: Buyu Liu, Bingbing Zhuang, Samuel Schulter, Manmohan Chandraker
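The warp-then-combine recurrence in this abstract can be sketched step by step: a feature map at time t is warped toward t+1, fused with the fresh map at t+1, and fed to a recurrent update. The shift-based warp and averaging fusion below are toy assumptions standing in for the learned modules.

```python
def feature_map(image):
    # Stand-in for the neural feature transform over a top-down image.
    return [float(v) for v in image]

def warp(features, shift=1):
    # Warp the feature map from time t toward t+1 (here: a circular shift).
    return features[shift:] + features[:shift]

def combine(warped_t, feats_t1):
    # Fuse the warped map from time t with the fresh map from time t+1.
    return [(a + b) / 2 for a, b in zip(warped_t, feats_t1)]

def lstm_step(state, combined):
    # Toy recurrent update standing in for the LSTM module.
    return [0.5 * s + 0.5 * c for s, c in zip(state, combined)]
```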
  • Patent number: 11373067
    Abstract: A method for implementing parametric models for scene representation to improve autonomous task performance includes generating an initial map of a scene based on at least one image corresponding to a perspective view of the scene, the initial map including a non-parametric top-view representation of the scene, implementing a parametric model to obtain a scene element representation based on the initial map, the scene element representation providing a description of one or more scene elements of the scene and corresponding to an estimated semantic layout of the scene, identifying one or more predicted locations of the one or more scene elements by performing three-dimensional localization based on the at least one image, and obtaining an overlay for performing an autonomous task by placing the one or more scene elements with the one or more respective predicted locations onto the scene element representation.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: June 28, 2022
    Inventors: Samuel Schulter, Ziyan Wang, Buyu Liu, Manmohan Chandraker
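The contrast between the non-parametric top-view and the parametric model can be made concrete with a toy grid: fit a few road parameters to occupancy cells, then overlay localized scene elements onto that parametric description. Everything below is an illustrative simplification, not the patent's actual model.

```python
def fit_parametric(topview):
    """Fit a simple parametric road model (left edge, width) to a
    non-parametric top-view grid of 0/1 road-occupancy cells."""
    road_cols = [c for c in range(len(topview[0]))
                 if any(row[c] for row in topview)]
    return {"road_left": min(road_cols),
            "road_width": max(road_cols) - min(road_cols) + 1}

def overlay(model, detections):
    # Place 3D-localized scene elements (name, x) onto the parametric road.
    on_road = [(name, x) for name, x in detections
               if model["road_left"] <= x < model["road_left"] + model["road_width"]]
    return dict(model, elements=on_road)
```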
  • Patent number: 11222238
    Abstract: Methods and systems for object detection include training dataset-specific object detectors using respective annotated datasets, each of the annotated datasets including annotations for a respective set of one or more object classes. The annotated datasets are cross-annotated using the dataset-specific object detectors. A unified object detector is trained, using the cross-annotated datasets, to detect all of the object classes of the annotated datasets. Objects are detected in an input image using the unified object detector.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: January 11, 2022
    Inventors: Samuel Schulter, Gaurav Sharma, Yi-Hsuan Tsai, Manmohan Chandraker, Xiangyun Zhao
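The cross-annotation step — run every dataset-specific detector over every dataset so each image ends up labelled for the union of classes — has a compact sketch. The keyword "detectors" below are toy assumptions; only the data flow mirrors the abstract.

```python
def train_detector(images, classes):
    # Stand-in for training on one annotated dataset: a detector that
    # "finds" its own classes by keyword match in a text 'image'.
    def detect(image):
        return [c for c in classes if c in image]
    return detect

def cross_annotate(datasets, detectors):
    """Label every image with every dataset-specific detector, producing
    training data annotated for the union of all object classes."""
    merged = []
    for name, images in datasets.items():
        for img in images:
            labels = set()
            for det in detectors.values():
                labels.update(det(img))
            merged.append((img, sorted(labels)))
    return merged
```

A unified detector trained on `merged` then covers all classes from all source datasets.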
  • Publication number: 20210150751
    Abstract: Methods and systems for occlusion detection include detecting a set of foreground object masks in an image, including a mask of a visible portion of a foreground object and a mask of the foreground object that includes at least one occluded portion, using a machine learning model. A set of background object masks is detected in the image, including a mask of a visible portion of a background object and a mask of the background object that includes at least one occluded portion, using the machine learning model. The set of foreground object masks and the set of background object masks are merged using semantic merging. A computer vision task is performed that accounts for the at least one occluded portion of at least one object of the merged set.
    Type: Application
    Filed: November 12, 2020
    Publication date: May 20, 2021
    Inventors: Buyu Liu, Samuel Schulter, Manmohan Chandraker
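The pairing of visible and amodal (occlusion-inclusive) masks can be illustrated with masks as sets of grid cells: the occluded part of an object is its amodal mask minus its visible mask, and merging keeps both per object. The set representation and merge rule are illustrative assumptions.

```python
def occluded_region(amodal, visible):
    # Cells the model attributes to the object but that are hidden.
    return amodal - visible

def semantic_merge(foreground, background):
    """Merge per-object (visible, amodal) mask pairs into one scene layout,
    so foreground visibility explains background occlusion."""
    scene = {}
    for name, (visible, amodal) in list(background.items()) + list(foreground.items()):
        scene[name] = {"visible": visible,
                       "occluded": occluded_region(amodal, visible)}
    return scene
```

A downstream vision task can then reason about `scene[obj]["occluded"]` rather than only the visible pixels.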
  • Publication number: 20210150281
    Abstract: Systems and methods for adapting semantic segmentation across domains are provided. The method includes inputting a source image into a segmentation network, and inputting a target image into the segmentation network. The method further includes identifying category-wise features for the source image and the target image using category-wise pooling, and discriminating between the category-wise features for the source image and the target image. The method further includes training the segmentation network with a pixel-wise cross-entropy loss on the source image, and a weak image classification loss and an adversarial loss on the target image, and outputting a semantically segmented target image.
    Type: Application
    Filed: November 10, 2020
    Publication date: May 20, 2021
    Inventors: Yi-Hsuan Tsai, Samuel Schulter, Manmohan Chandraker, Sujoy Paul
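The three-part training objective — supervised cross-entropy on the source, weak image-level classification on the target, plus an adversarial term — can be written out directly. The toy per-pixel probability format and the loss weights are assumptions for illustration.

```python
import math

def pixel_ce(pred, label):
    # Supervised pixel-wise cross-entropy on the labelled source image:
    # pred is a list of per-pixel class-probability vectors.
    return -sum(math.log(p[y]) for p, y in zip(pred, label)) / len(label)

def weak_image_loss(pred, image_tags):
    # Weak classification loss on the target: only image-level tags are
    # known, so penalize tagged classes that never score high anywhere.
    return sum(-math.log(max(p[cls] for p in pred)) for cls in image_tags)

def total_loss(src_pred, src_label, tgt_pred, tgt_tags, adv,
               lam_weak=0.1, lam_adv=0.001):
    """Combine the source supervision with the two target-side terms."""
    return (pixel_ce(src_pred, src_label)
            + lam_weak * weak_image_loss(tgt_pred, tgt_tags)
            + lam_adv * adv)
```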
  • Publication number: 20210150275
    Abstract: Methods and systems for object detection include training dataset-specific object detectors using respective annotated datasets, each of the annotated datasets including annotations for a respective set of one or more object classes. The annotated datasets are cross-annotated using the dataset-specific object detectors. A unified object detector is trained, using the cross-annotated datasets, to detect all of the object classes of the annotated datasets. Objects are detected in an input image using the unified object detector.
    Type: Application
    Filed: November 10, 2020
    Publication date: May 20, 2021
    Inventors: Samuel Schulter, Gaurav Sharma, Yi-Hsuan Tsai, Manmohan Chandraker, Xiangyun Zhao
  • Publication number: 20210150203
    Abstract: Systems and methods are provided for producing a road layout model. The method includes capturing digital images having a perspective view, converting each of the digital images into top-down images, and conveying a top-down image of time t to a neural network that performs a feature transform to form a feature map of time t. The method also includes transferring the feature map of the top-down image of time t to a feature transform module to warp the feature map to a time t+1, and conveying a top-down image of time t+1 to form a feature map of time t+1. The method also includes combining the warped feature map of time t with the feature map of time t+1 to form a combined feature map, transferring the combined feature map to a long short-term memory (LSTM) module to generate the road layout model, and displaying the road layout model.
    Type: Application
    Filed: November 12, 2020
    Publication date: May 20, 2021
    Inventors: Buyu Liu, Bingbing Zhuang, Samuel Schulter, Manmohan Chandraker
  • Publication number: 20210064883
    Abstract: A method for performing video domain adaptation for human action recognition is presented. The method includes using annotated source data from a source video and unannotated target data from a target video in an unsupervised domain adaptation setting, identifying and aligning discriminative clips in the source and target videos via an attention mechanism, and learning spatial-background invariant human action representations by employing a self-supervised clip order prediction loss for both the annotated source data and the unannotated target data.
    Type: Application
    Filed: August 20, 2020
    Publication date: March 4, 2021
    Inventors: Gaurav Sharma, Samuel Schulter, Jinwoo Choi
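The self-supervised clip order prediction task is worth a sketch because it needs no human labels: shuffle a video's clips, and the "label" is which permutation was applied. The sample construction below follows that idea; the 0/1 loss is a stand-in for the usual cross-entropy over permutation classes.

```python
import itertools
import random

def clip_order_task(clips, rng):
    """Build one self-supervised sample: shuffle the clips, and the label
    is the index of the permutation that was applied."""
    perms = list(itertools.permutations(range(len(clips))))
    label = rng.randrange(len(perms))
    shuffled = [clips[i] for i in perms[label]]
    return shuffled, label, perms

def order_loss(predicted_label, true_label):
    # 0/1 stand-in for the cross-entropy over permutation classes.
    return 0.0 if predicted_label == true_label else 1.0
```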
  • Publication number: 20200394814
    Abstract: Systems and methods for road typology scene annotation are provided. A method for road typology scene annotation includes receiving an image having a road scene. The image is received from an imaging device. The method populates, using a machine learning model, a set of attribute settings with values representing the road scene. An annotation interface is implemented and configured to adjust values of the attribute settings to correspond with the road scene. Based on the values of the attribute settings, a simulated overhead view of the respective road scene is generated.
    Type: Application
    Filed: June 2, 2020
    Publication date: December 17, 2020
    Inventor: Samuel Schulter
  • Patent number: 10832440
    Abstract: A computer-implemented method, system, and computer program product are provided for object detection utilizing an online flow guided memory network. The method includes receiving a plurality of videos, each of the plurality of videos including a plurality of frames. The method also includes generating, with a feature extraction network, a frame feature map for a current frame of the plurality of frames. The method additionally includes aggregating a memory feature map from the frame feature map and previous memory feature maps from previous frames on a plurality of time axes, with the plurality of time axes including a first time axis at a first frame increment and a second time axis at a second frame increment. The method further includes predicting, with a task network, an object from the memory feature map. The method also includes controlling an operation of a processor-based machine to react in accordance with the object.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: November 10, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Wongun Choi, Tuan Hung Vu, Manmohan Chandraker
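The multi-axis memory aggregation — combining the current frame's features with previous memories sampled at different frame increments — can be sketched with plain lists. The averaging fusion and the stride values are illustrative assumptions.

```python
def aggregate_memory(frame_feats, history, strides=(1, 5)):
    """Aggregate a memory feature map from the current frame features plus
    previous memories sampled along multiple time axes (frame strides)."""
    sources = [frame_feats]
    for s in strides:
        if len(history) >= s:
            sources.append(history[-s])  # memory from s frames back
    n = len(sources)
    return [sum(vals) / n for vals in zip(*sources)]
```

Sampling one axis densely (stride 1) and one sparsely (stride 5) lets the memory carry both recent detail and longer-range context.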
  • Publication number: 20200286383
    Abstract: A method is provided in an Advanced Driver-Assistance System (ADAS). The method extracts, from an input video stream including a plurality of images using a multi-task Convolutional Neural Network (CNN), shared features across different perception tasks. The perception tasks include object detection and other perception tasks. The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs. Each respective different branch corresponds to a respective one of the different perception tasks. The method forms a parametric representation of a driving scene as at least one top-view map responsive to the plurality of different perception task outputs.
    Type: Application
    Filed: February 11, 2020
    Publication date: September 10, 2020
    Inventors: Quoc-Huy Tran, Samuel Schulter, Paul Vernaza, Buyu Liu, Pan Ji, Yi-Hsuan Tsai, Manmohan Chandraker
  • Patent number: 10733756
    Abstract: A computer-implemented method, system, and computer program product are provided for object detection utilizing an online flow guided memory network. The method includes receiving, by a processor, a plurality of videos, each of the plurality of videos including a plurality of frames. The method also includes generating, by the processor with a feature extraction network, a frame feature map for a current frame of the plurality of frames. The method additionally includes determining, by the processor, a memory feature map from the frame feature map and a previous memory feature map from a previous frame by warping the previous memory feature map. The method further includes predicting, by the processor with a task network, an object from the memory feature map. The method also includes controlling an operation of a processor-based machine to react in accordance with the object.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: August 4, 2020
    Assignee: NEC Corporation
    Inventors: Wongun Choi, Samuel Schulter, Tuan Hung Vu, Manmohan Chandraker
  • Patent number: 10678257
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai
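The inference step — predicting semantics in the masked foreground regions from the visible background context — can be shown with a toy semantic grid. The nearest-left fill below is a deliberately trivial stand-in for the learned inpainting the abstract describes.

```python
def inpaint_masked(semantics, mask, fill="road"):
    """Infer semantic labels in masked (foreground) cells from the visible
    background context; here, a trivial carry-forward fill per row."""
    out = []
    for row_sem, row_mask in zip(semantics, mask):
        filled, last = [], fill
        for label, hidden in zip(row_sem, row_mask):
            if hidden:               # cell occluded by a foreground object
                filled.append(last)  # reuse the last visible background label
            else:
                filled.append(label)
                last = label
        out.append(filled)
    return out
```

With inferred semantics (and depths) in hand, every cell can be projected into 3D and rendered as a bird's eye view.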
  • Patent number: 10678256
    Abstract: Systems and methods for generating an occlusion-aware bird's eye view map of a road scene include identifying foreground objects and background objects in an input image to extract foreground features and background features corresponding to the foreground objects and the background objects, respectively. The foreground objects are masked from the input image with a mask. Occluded objects and depths of the occluded objects are inferred by predicting semantic features and depths in masked areas of the masked image according to contextual information related to the background features visible in the masked image. The foreground objects and the background objects are mapped to a three-dimensional space according to locations of each of the foreground objects, the background objects and occluded objects using the inferred depths. A bird's eye view is generated from the three-dimensional space and displayed with a display device.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: June 9, 2020
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Paul Vernaza, Manmohan Chandraker, Menghua Zhai
  • Publication number: 20200094824
    Abstract: A method is provided for danger prediction. The method includes generating fully-annotated simulated training data for a machine learning model responsive to receiving a set of computer-selected simulator-adjusting parameters. The method further includes training the machine learning model using reinforcement learning on the fully-annotated simulated training data. The method also includes measuring an accuracy of the trained machine learning model relative to learning a discriminative function for a given task. The discriminative function predicts a given label for a given image from the fully-annotated simulated training data. The method additionally includes adjusting the computer-selected simulator-adjusting parameters and repeating said training and measuring steps responsive to the accuracy being below a threshold accuracy.
    Type: Application
    Filed: November 26, 2019
    Publication date: March 26, 2020
    Applicant: NEC Laboratories America, Inc.
    Inventors: Samuel Schulter, Nataniel Ruiz, Manmohan Chandraker
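The outer loop of this method — train on simulated data, measure accuracy, adjust the simulator parameters, repeat until a threshold is met — is easy to sketch. The increment-style adjustment below is a toy stand-in for the computer-selected parameter search.

```python
def tune_simulator(train_and_eval, params, threshold, max_rounds=10):
    """Adjust simulator parameters until a model trained on the generated
    data reaches the target accuracy, or the round budget runs out."""
    accuracy = 0.0
    for round_idx in range(max_rounds):
        accuracy = train_and_eval(params)       # train + measure on simulated data
        if accuracy >= threshold:
            return params, accuracy, round_idx
        params = {k: v + 1 for k, v in params.items()}  # toy adjustment step
    return params, accuracy, max_rounds
```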
  • Publication number: 20200050900
    Abstract: A method for implementing parametric models for scene representation to improve autonomous task performance includes generating an initial map of a scene based on at least one image corresponding to a perspective view of the scene, the initial map including a non-parametric top-view representation of the scene, implementing a parametric model to obtain a scene element representation based on the initial map, the scene element representation providing a description of one or more scene elements of the scene and corresponding to an estimated semantic layout of the scene, identifying one or more predicted locations of the one or more scene elements by performing three-dimensional localization based on the at least one image, and obtaining an overlay for performing an autonomous task by placing the one or more scene elements with the one or more respective predicted locations onto the scene element representation.
    Type: Application
    Filed: July 30, 2019
    Publication date: February 13, 2020
    Inventors: Samuel Schulter, Ziyan Wang, Buyu Liu, Manmohan Chandraker
  • Patent number: 10497143
    Abstract: A system and method are provided for driving assistance. The system includes an image capture device configured to capture a video sequence, relative to an outward view from a vehicle, which includes a set of objects and is formed from a set of image frames. The system includes a processor configured to detect the objects to form a set of object detections, and track the set of object detections over the frames to form tracked detections. The processor is configured to generate for a current frame, responsive to conditions, a set of sparse object proposals for a current location of an object based on: (i) the tracked detections of the object from an immediately previous frame; and (ii) detection proposals for the object derived from the current frame. The processor is configured to perform an action to mitigate a likelihood of potential harm due to a current object location.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: December 3, 2019
    Assignee: NEC Corporation
    Inventors: Samuel Schulter, Wongun Choi, Bharat Singh, Manmohan Chandraker
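The proposal-merging step — combine tracked boxes from the previous frame with current-frame detections while keeping each region only once — can be sketched with axis-aligned boxes. The overlap threshold is an assumed value.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def sparse_proposals(tracked_prev, detections, overlap=0.5):
    """Combine tracked boxes from the previous frame with current-frame
    detection proposals, dropping detections that duplicate a track."""
    proposals = list(tracked_prev)
    for det in detections:
        if all(iou(det, p) < overlap for p in proposals):
            proposals.append(det)
    return proposals
```

Keeping the proposal set sparse is what makes per-frame tracking affordable in a driving-assistance loop.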
  • Publication number: 20190354807
    Abstract: Systems and methods for domain adaptation for structured output via disentangled representations are provided. The system receives a ground truth of a source domain. The ground truth is used in a task loss function for a first convolutional neural network that predicts at least one output based on inputs from the source domain and a target domain. The system clusters the ground truth of the source domain into a predetermined number of clusters, and predicts, via a second convolutional neural network, a structure of label patches. The structure includes an assignment of each of the at least one output of the first convolutional neural network to the predetermined number of clusters. A cluster loss is computed for the predicted structure of label patches, and an adversarial loss function is applied to the predicted structure of label patches to align the source domain and the target domain on a structural level.
    Type: Application
    Filed: May 1, 2019
    Publication date: November 21, 2019
    Inventors: Yi-Hsuan Tsai, Samuel Schulter, Kihyuk Sohn, Manmohan Chandraker
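The cluster-assignment idea behind the structured alignment — quantize label patches into a fixed number of clusters, then score predicted patches by whether they land in the same cluster as the ground truth — can be illustrated with a toy mean-value clustering. The clustering rule and patch format are illustrative assumptions, not the patent's second network.

```python
def cluster_id(patch, num_clusters=4):
    """Assign a label patch to one of K clusters (toy rule: by mean value,
    standing in for the learned clustering of ground-truth patches)."""
    mean = sum(patch) / len(patch)
    return min(int(mean * num_clusters), num_clusters - 1)

def cluster_loss(pred_patches, gt_patches, num_clusters=4):
    # Penalize predicted patches whose cluster assignment disagrees with
    # the cluster of the corresponding ground-truth patch.
    wrong = sum(cluster_id(p, num_clusters) != cluster_id(g, num_clusters)
                for p, g in zip(pred_patches, gt_patches))
    return wrong / len(gt_patches)
```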