Patents by Inventor Hans Peter Graf

Hans Peter Graf has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190019037
    Abstract: Systems and methods for improving video understanding tasks based on higher-order object interactions (HOIs) between object features are provided. A plurality of frames of a video are obtained. A coarse-grained feature representation is generated by generating an image feature for each of a plurality of timesteps respectively corresponding to each of the frames and performing attention based on the image features. A fine-grained feature representation is generated by generating an object feature for each of the plurality of timesteps and generating the HOIs between the object features. The coarse-grained and the fine-grained feature representations are concatenated to generate a concatenated feature representation.
    Type: Application
    Filed: May 14, 2018
    Publication date: January 17, 2019
    Inventors: Asim Kadav, Chih-Yao Ma, Iain Melvin, Hans Peter Graf
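As a rough illustration of the coarse/fine pipeline this abstract describes, the toy numpy sketch below attention-pools per-frame image features and concatenates them with a crude pairwise object-interaction feature. The random weights and the elementwise interaction are illustrative stand-ins for the learned modules, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, N = 8, 16, 5                         # timesteps, feature dim, objects per frame

frame_feats = rng.normal(size=(T, D))      # one image feature per timestep
obj_feats = rng.normal(size=(T, N, D))     # N object features per timestep

# Coarse-grained: attention over the per-frame image features.
w = rng.normal(size=D)                     # toy attention query (assumption)
scores = frame_feats @ w
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()
coarse = alpha @ frame_feats               # (D,) attention-pooled video feature

# Fine-grained: pairwise (second-order) object interactions per timestep,
# pooled over pairs and time, standing in for learned higher-order interactions.
pair_feats = obj_feats[:, :, None, :] * obj_feats[:, None, :, :]  # (T, N, N, D)
fine = pair_feats.mean(axis=(0, 1, 2))     # (D,)

# Concatenate the coarse-grained and fine-grained representations.
video_repr = np.concatenate([coarse, fine])
print(video_repr.shape)                    # (32,)
```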
  • Publication number: 20180336468
    Abstract: Systems and methods for pruning a convolutional neural network (CNN) for surveillance with image recognition are described, including extracting convolutional layers from a trained CNN, each convolutional layer including a kernel matrix having at least one filter formed in a corresponding output channel of the kernel matrix, and a feature map set having a feature map corresponding to each filter. An absolute kernel weight is determined for each kernel and summed across each filter to determine a magnitude of each filter. The magnitude of each filter is compared with a threshold, and the filter is removed if its magnitude is below the threshold. A feature map corresponding to each of the removed filters is removed to prune the CNN of filters. The CNN is retrained to generate a pruned CNN having fewer filters to efficiently recognize and predict conditions in an environment being surveilled.
    Type: Application
    Filed: May 15, 2018
    Publication date: November 22, 2018
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf
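The thresholding rule in this abstract can be sketched in a few lines of numpy. The layer shapes and the threshold value below are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
# Kernel matrix of one conv layer: (out_channels, in_channels, kH, kW).
kernels = rng.normal(size=(8, 3, 3, 3))
feature_maps = rng.normal(size=(8, 32, 32))    # one feature map per filter
threshold = 21.0                               # hypothetical magnitude cutoff

# Magnitude of each filter = sum of absolute kernel weights across the filter.
magnitudes = np.abs(kernels).sum(axis=(1, 2, 3))

# Keep only filters whose magnitude reaches the threshold, and drop the
# corresponding feature maps so downstream layers shrink as well.
keep = magnitudes >= threshold
pruned_kernels = kernels[keep]
pruned_maps = feature_maps[keep]
print(pruned_kernels.shape[0], "filters kept of", kernels.shape[0])
```

Retraining the smaller network afterward, as the abstract notes, is what recovers the accuracy lost by pruning.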
  • Publication number: 20180336425
    Abstract: Systems and methods for surveillance are described, including an image capture device configured to be mounted to an autonomous vehicle, the image capture device including an image sensor. A storage device is in communication with a processing system, the storage device including a pruned convolutional neural network (CNN) trained to recognize obstacles in a road according to images captured by the image sensor by training a CNN with a dataset and removing filters from layers of the CNN that are below a significance threshold for image recognition to produce the pruned CNN. A processing device is configured to recognize the obstacles by analyzing the images captured by the image sensor with the pruned CNN and to predict movement of the obstacles such that the autonomous vehicle automatically and proactively avoids the obstacles according to the recognized obstacles and their predicted movement.
    Type: Application
    Filed: May 15, 2018
    Publication date: November 22, 2018
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20180336431
    Abstract: Systems and methods for predicting changes to an environment include a plurality of remote sensors, each remote sensor being configured to capture images of the environment. A processing device is included on each remote sensor, the processing device configured to recognize and predict a change to the environment using a pruned convolutional neural network (CNN) stored on the processing device, the pruned CNN being trained to recognize features in the environment by training a CNN with a dataset and removing filters from layers of the CNN that are below a significance threshold for image recognition to produce the pruned CNN. A transmitter is configured to transmit the recognized and predicted change to a notification device such that an operator is alerted to the change.
    Type: Application
    Filed: May 15, 2018
    Publication date: November 22, 2018
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20180307967
    Abstract: A computer-implemented method executed by a processor for training a neural network to recognize driving scenes from sensor data received from vehicle radar is presented. The computer-implemented method includes extracting substructures from the sensor data received from the vehicle radar to define a graph having a plurality of nodes and a plurality of edges, constructing a neural network for each extracted substructure, combining the outputs of each of the constructed neural networks for each of the plurality of edges into a single vector describing a driving scene of a vehicle, and classifying the single vector into a set of one or more dangerous situations involving the vehicle.
    Type: Application
    Filed: October 17, 2017
    Publication date: October 25, 2018
    Inventors: Hans Peter Graf, Eric Cosatto, Iain Melvin
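The edge-wise pipeline in this abstract (a network per extracted substructure, outputs combined into one scene vector, then classified) can be caricatured with a toy numpy sketch. The shared random linear map, the node features, and the three danger classes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
D, H = 8, 6                      # node feature dim, scene vector dim (assumed)

# Toy graph extracted from radar: three tracked objects, two edges.
nodes = rng.normal(size=(3, D))
edges = [(0, 1), (1, 2)]

# A small network per edge substructure (shared random weights for brevity;
# the patent's per-substructure networks are learned).
W1 = rng.normal(size=(2 * D, H))
def edge_net(u, v):
    return np.maximum(0.0, np.concatenate([nodes[u], nodes[v]]) @ W1)

# Combine the per-edge outputs into a single vector describing the scene,
# then classify that vector into one of several dangerous-situation classes.
scene = sum(edge_net(u, v) for u, v in edges)
W2 = rng.normal(size=(H, 3))     # 3 hypothetical situation classes
label = int((scene @ W2).argmax())
print("scene class:", label)
```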
  • Publication number: 20180082137
    Abstract: A computer-implemented method and system are provided for driving assistance. The system includes an image capture device configured to capture image data relative to an outward view from a motor vehicle. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different driving scenes of a natural driving environment. The processor is further configured to provide a user-perceptible object detection result to a user of the motor vehicle.
    Type: Application
    Filed: August 29, 2017
    Publication date: March 22, 2018
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20180081053
    Abstract: A computer-implemented method and system are provided. The system includes an image capture device configured to capture image data relative to an ambient environment of a user. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different scenes of a natural environment. The processor is further configured to perform a user-perceptible action responsive to a detection and a localization of an object in an intended path of the user.
    Type: Application
    Filed: August 29, 2017
    Publication date: March 22, 2018
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20170337471
    Abstract: Methods and systems for pruning a convolutional neural network (CNN) include calculating a sum of weights for each filter in a layer of the CNN. The filters in the layer are sorted by their respective sums of weights. A set of m filters with the smallest sums of weights is pruned to decrease the computational cost of operating the CNN. The pruned CNN is retrained to repair the accuracy loss that results from pruning the filters.
    Type: Application
    Filed: May 9, 2017
    Publication date: November 23, 2017
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf, Hao Li
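The sort-and-prune step here differs from the threshold variant above in that it removes a fixed count of m filters. A minimal numpy sketch follows; the use of absolute weights (L1 norm) and the shapes are assumptions, since the abstract says only "sum of weights".

```python
import numpy as np

rng = np.random.default_rng(2)
kernels = rng.normal(size=(16, 8, 3, 3))   # (filters, in_channels, kH, kW)
m = 4                                       # number of filters to prune (assumed)

# Sum of absolute weights per filter, then sort the filters by that sum.
sums = np.abs(kernels).sum(axis=(1, 2, 3))
order = np.argsort(sums)                    # smallest sums first

# Remove the m filters with the smallest sums; retraining would follow
# to repair the accuracy loss from pruning.
keep = np.sort(order[m:])                   # preserve original filter order
pruned = kernels[keep]
print(pruned.shape)                         # (12, 8, 3, 3)
```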
  • Publication number: 20170337467
    Abstract: Security systems and methods for detecting intrusion events include one or more sensors configured to monitor an environment. A pruned convolutional neural network (CNN) is configured to process information from the one or more sensors to classify events in the monitored environment. CNN filters having the smallest summed weights have been pruned from the pruned CNN. An alert module is configured to detect an intrusion event in the monitored environment based on event classifications. A control module is configured to perform a security action based on the detection of an intrusion event.
    Type: Application
    Filed: May 9, 2017
    Publication date: November 23, 2017
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf, Hao Li
  • Publication number: 20170337472
    Abstract: Methods and systems for training a neural network include training a neural network based on training data. Weights of a layer of the neural network are multiplied by an attrition factor. A block of weights is pruned from the layer if the block's contribution to the output of the layer is below a threshold.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 23, 2017
    Inventors: Igor Durdanovic, Hans Peter Graf
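The attrition mechanism can be sketched as repeated multiplicative decay followed by a contribution test. The attrition factor, threshold, block layout, and the mean-absolute-weight proxy for "contribution" below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Four blocks of weights in one layer, at deliberately different scales.
scales = np.array([1.0, 0.5, 0.05, 0.01])[:, None]
layer = rng.normal(size=(4, 32)) * scales
attrition = 0.99          # hypothetical per-step attrition factor
threshold = 0.05          # hypothetical contribution threshold

for _ in range(100):
    # (a gradient update on the training data would go here)
    layer *= attrition    # multiply the layer's weights by the attrition factor

# Approximate each block's contribution by its mean absolute weight, and
# prune (zero out) blocks whose contribution falls below the threshold.
contribution = np.abs(layer).mean(axis=1)
layer[contribution < threshold] = 0.0
```

The intuition: weights that training keeps reinforcing survive the decay, while blocks the loss does not care about shrink until they fall under the threshold and can be removed.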
  • Publication number: 20170293837
    Abstract: A computer-implemented method for training a deep neural network to recognize traffic scenes (TSs) from multi-modal sensors and knowledge data is presented. The computer-implemented method includes receiving data from the multi-modal sensors and the knowledge data and extracting feature maps from the multi-modal sensors and the knowledge data by using a traffic participant (TP) extractor to generate a first set of data, using a static objects extractor to generate a second set of data, and using an additional information extractor to generate a third set of data. The computer-implemented method further includes training the deep neural network, with training data, to recognize the TSs from a viewpoint of a vehicle.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 12, 2017
    Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
  • Publication number: 20170293815
    Abstract: A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 12, 2017
    Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
  • Patent number: 9583098
    Abstract: A system and method for generating a video sequence having mouth movements synchronized with speech sounds are disclosed. The system utilizes a database of n-phones as the smallest selectable unit, wherein n is larger than 1 and preferably 3. The system calculates a target cost for each candidate n-phone for a target frame using a phonetic distance, coarticulation parameter, and speech rate. For each n-phone in a target sequence, the system searches for candidate n-phones that are visually similar according to the target cost. The system samples each candidate n-phone to get a same number of frames as in the target sequence and builds a video frame lattice of candidate video frames. The system assigns a joint cost to each pair of adjacent frames and searches the video frame lattice to construct the video sequence by finding the optimal path through the lattice according to the minimum of the sum of the target cost and the joint cost over the sequence.
    Type: Grant
    Filed: October 25, 2007
    Date of Patent: February 28, 2017
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Fu Jie Huang
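The final search described here (minimizing the sum of target and joint costs over the sequence) is a classic dynamic-programming shortest path through the frame lattice. The sketch below uses random stand-in costs and assumed lattice dimensions; in the patent the costs come from phonetic distance, coarticulation, speech rate, and visual similarity of adjacent frames.

```python
import numpy as np

rng = np.random.default_rng(4)
T, C = 6, 4                                   # target frames, candidates per slot
target_cost = rng.uniform(size=(T, C))        # stand-in per-candidate target costs
joint_cost = rng.uniform(size=(T - 1, C, C))  # stand-in adjacent-pair costs

# Forward pass: minimal cumulative cost of reaching each candidate.
best = target_cost[0].copy()
back = np.zeros((T - 1, C), dtype=int)
for t in range(1, T):
    totals = best[:, None] + joint_cost[t - 1]   # (prev, cur)
    back[t - 1] = totals.argmin(axis=0)
    best = totals.min(axis=0) + target_cost[t]

# Backtrack the optimal path through the lattice.
path = [int(best.argmin())]
for t in range(T - 2, -1, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
print("optimal frame sequence:", path)
```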
  • Patent number: 9568611
    Abstract: A system and method for a motorized land vehicle that detects objects obstructing a driver's view of an active road, includes an inertial measurement unit-enabled global position system (GPS/IMU) subsystem for obtaining global position system (GPS) position and heading data of a land vehicle operated by the driver as the vehicle travels along a road, a street map subsystem for obtaining street map data of the GPS position of the vehicle using the GPS position and heading data as the vehicle travels along the road, and a three-dimensional (3D) object detector subsystem for detecting objects ahead of the vehicle and determining a 3D position and 3D size data of each of the detected objects ahead of the vehicle. The street map subsystem merges the street map data, the GPS position and heading data of the vehicle and the 3D position data and 3D size data of the detected objects, to create a real-time two-dimensional (2D) top-view map representation of a traffic scene ahead of the vehicle.
    Type: Grant
    Filed: August 20, 2015
    Date of Patent: February 14, 2017
    Assignee: NEC CORPORATION
    Inventors: Eric Cosatto, Hans Peter Graf
  • Patent number: 9503684
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian reflection, and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: November 22, 2016
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
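An illumination equation with attenuation, Lambertian, and specular terms can be sketched at a single surface point as below. The attenuation coefficients, reflectance constants, and geometry are illustrative assumptions (a generic Phong-style model), not the patent's specific equation.

```python
import numpy as np

def illuminate(normal, light_pos, view_dir, point, kd=0.7, ks=0.3, shininess=16):
    """Toy virtual-illumination equation: attenuation * (diffuse + specular)."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    L = to_light / dist
    N = normal / np.linalg.norm(normal)
    V = view_dir / np.linalg.norm(view_dir)
    # Light attenuation with distance (hypothetical coefficients).
    attenuation = 1.0 / (1.0 + 0.1 * dist + 0.01 * dist**2)
    # Lambertian (diffuse) reflection.
    diffuse = kd * max(0.0, float(N @ L))
    # Specular reflection about the mirror direction.
    R = 2.0 * float(N @ L) * N - L
    specular = ks * max(0.0, float(R @ V)) ** shininess
    return attenuation * (diffuse + specular)

# Synthetic light applied to one point of a tracked head model (toy values).
intensity = illuminate(normal=np.array([0.0, 0.0, 1.0]),
                       light_pos=np.array([0.0, 1.0, 2.0]),
                       view_dir=np.array([0.0, 0.0, 1.0]),
                       point=np.array([0.0, 0.0, 0.0]))
print(round(intensity, 3))
```

In the patented setup this evaluation would run per pixel of the tracked 3D head model, with the virtual light source positioned to flatter the video-conference participant.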
  • Publication number: 20160080690
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian reflection, and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Application
    Filed: November 20, 2015
    Publication date: March 17, 2016
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
  • Publication number: 20160054452
    Abstract: A system and method for a motorized land vehicle that detects objects obstructing a driver's view of an active road, includes an inertial measurement unit-enabled global position system (GPS/IMU) subsystem for obtaining global position system (GPS) position and heading data of a land vehicle operated by the driver as the vehicle travels along a road, a street map subsystem for obtaining street map data of the GPS position of the vehicle using the GPS position and heading data as the vehicle travels along the road, and a three-dimensional (3D) object detector subsystem for detecting objects ahead of the vehicle and determining a 3D position and 3D size data of each of the detected objects ahead of the vehicle. The street map subsystem merges the street map data, the GPS position and heading data of the vehicle and the 3D position data and 3D size data of the detected objects, to create a real-time two-dimensional (2D) top-view map representation of a traffic scene ahead of the vehicle.
    Type: Application
    Filed: August 20, 2015
    Publication date: February 25, 2016
    Inventors: Eric Cosatto, Hans Peter Graf
  • Patent number: 9224106
    Abstract: Systems and methods are disclosed for classifying histological tissues or specimens with two phases. In a first phase, the method includes providing off-line training using a processor during which one or more classifiers are trained based on examples, including: finding a split of features into sets of increasing computational cost, assigning a computational cost to each set; training for each set of features a classifier using training examples; training for each classifier, a utility function that scores the usefulness of extracting the next feature set for a given tissue unit using the training examples.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: December 29, 2015
    Assignee: NEC Laboratories America, Inc.
    Inventors: Eric Cosatto, Pierre-Francois Laquerre, Christopher Malon, Hans-Peter Graf, Iain Melvin
  • Patent number: 9208373
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian reflection, and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: December 8, 2015
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
  • Patent number: 9060685
    Abstract: Disclosed is a computer-implemented method for fully automated tissue diagnosis that trains a region of interest (ROI) classifier in a supervised manner, wherein labels are given only at a tissue level, the training using a multiple-instance learning variant of backpropagation, and trains a tissue classifier that uses the output of the ROI classifier. For a given tissue, the method finds ROIs, extracts feature vectors in each ROI, applies the ROI classifier to each feature vector thereby obtaining a set of probabilities, provides the probabilities to the tissue classifier and outputs a final diagnosis for the whole tissue.
    Type: Grant
    Filed: March 26, 2013
    Date of Patent: June 23, 2015
    Assignee: NEC Laboratories America, Inc.
    Inventors: Eric Cosatto, Pierre-Francois Laquerre, Christopher Malon, Hans-Peter Graf
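The inference path of this last abstract (per-ROI probabilities aggregated into a whole-tissue diagnosis) can be sketched as below. The logistic ROI classifier, the max-pooling aggregation (a common multiple-instance learning choice), and the shapes are illustrative assumptions, not the patent's learned models.

```python
import numpy as np

rng = np.random.default_rng(7)

def roi_classifier(feature_vec, w):
    # Stand-in ROI classifier: logistic model on the ROI feature vector.
    return 1.0 / (1.0 + np.exp(-(feature_vec @ w)))

w = rng.normal(size=16)                     # toy trained weights (assumption)
rois = rng.normal(size=(12, 16))            # one feature vector per found ROI

# Apply the ROI classifier to every ROI to obtain a set of probabilities,
# then let a simple tissue-level rule (here: the maximum ROI probability)
# produce the final diagnosis for the whole tissue.
probs = np.array([roi_classifier(r, w) for r in rois])
diagnosis = int(probs.max() > 0.5)
print("tissue diagnosis:", diagnosis)
```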