Patents by Inventor Peter Graf

Peter Graf has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180336425
    Abstract: Systems and methods for surveillance are described, including an image capture device configured to be mounted to an autonomous vehicle, the image capture device including an image sensor. A storage device is included in communication with the processing system, the storage device including a pruned convolutional neural network (CNN) trained to recognize obstacles in a road from images captured by the image sensor, the pruned CNN being produced by training a CNN with a dataset and removing filters from layers of the CNN that fall below a significance threshold for image recognition. A processing device is configured to recognize the obstacles by analyzing the images captured by the image sensor with the pruned CNN and to predict movement of the obstacles, such that the autonomous vehicle automatically and proactively avoids each obstacle according to the recognized obstacle and its predicted movement.
    Type: Application
    Filed: May 15, 2018
    Publication date: November 22, 2018
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20180307967
    Abstract: A computer-implemented method executed by a processor for training a neural network to recognize driving scenes from sensor data received from vehicle radar is presented. The computer-implemented method includes extracting substructures from the sensor data received from the vehicle radar to define a graph having a plurality of nodes and a plurality of edges, constructing a neural network for each extracted substructure, combining the outputs of each of the constructed neural networks for each of the plurality of edges into a single vector describing a driving scene of a vehicle, and classifying the single vector into a set of one or more dangerous situations involving the vehicle.
    Type: Application
    Filed: October 17, 2017
    Publication date: October 25, 2018
    Inventors: Hans Peter Graf, Eric Cosatto, Iain Melvin
  • Publication number: 20180082137
    Abstract: A computer-implemented method and system are provided for driving assistance. The system includes an image capture device configured to capture image data relative to an outward view from a motor vehicle. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different driving scenes of a natural driving environment. The processor is further configured to provide a user-perceptible object detection result to a user of the motor vehicle.
    Type: Application
    Filed: August 29, 2017
    Publication date: March 22, 2018
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20180081053
    Abstract: A computer-implemented method and system are provided. The system includes an image capture device configured to capture image data relative to an ambient environment of a user. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different scenes of a natural environment. The processor is further configured to perform a user-perceptible action responsive to a detection and a localization of an object in an intended path of the user.
    Type: Application
    Filed: August 29, 2017
    Publication date: March 22, 2018
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20170337467
    Abstract: Security systems and methods for detecting intrusion events include one or more sensors configured to monitor an environment. A pruned convolutional neural network (CNN) is configured to process information from the one or more sensors to classify events in the monitored environment. CNN filters having the smallest summed weights have been pruned from the pruned CNN. An alert module is configured to detect an intrusion event in the monitored environment based on event classifications. A control module is configured to perform a security action based on the detection of an intrusion event.
    Type: Application
    Filed: May 9, 2017
    Publication date: November 23, 2017
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf, Hao Li
  • Publication number: 20170337471
    Abstract: Methods and systems for pruning a convolutional neural network (CNN) include calculating a sum of weights for each filter in a layer of the CNN. The filters in the layer are sorted by their respective sums of weights. A set of m filters with the smallest sums of weights is pruned from the layer to decrease the computational cost of operating the CNN. The pruned CNN is retrained to repair the accuracy loss that results from pruning the filters.
    Type: Application
    Filed: May 9, 2017
    Publication date: November 23, 2017
    Inventors: Asim Kadav, Igor Durdanovic, Hans Peter Graf, Hao Li
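The filter-pruning procedure this abstract describes can be sketched briefly. This is an illustrative reconstruction, not the patented implementation: the use of absolute-value sums, the tensor layout, and all names are assumptions.

```python
import numpy as np

def prune_filters(conv_weights, m):
    """Remove the m filters with the smallest summed absolute weights.

    conv_weights: array of shape (n_filters, channels, kh, kw).
    Returns the pruned weight tensor and the indices of the kept filters.
    """
    # One scalar per output filter: the sum of its absolute weights.
    sums = np.abs(conv_weights).sum(axis=(1, 2, 3))
    # Sort filters by that sum and drop the m smallest.
    order = np.argsort(sums)
    keep = np.sort(order[m:])
    return conv_weights[keep], keep

# Toy layer: 8 filters over 3 input channels with 3x3 kernels.
rng = np.random.default_rng(0)
layer = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_filters(layer, m=3)
```

In a full pipeline the corresponding input channels of the following layer would be removed as well, and the network retrained to recover accuracy, as the abstract notes.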
  • Publication number: 20170337472
    Abstract: Methods and systems for training a neural network include training the neural network based on training data. Weights of a layer of the neural network are multiplied by an attrition factor. A block of weights is pruned from the layer if the block's contribution to the output of the layer is below a threshold.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 23, 2017
    Inventors: Igor Durdanovic, Hans Peter Graf
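The attrition scheme above can be sketched as a single training-time pass. The block granularity, the contribution measure (mean absolute weight), and the factor and threshold values are all assumptions for illustration, not the patent's choices.

```python
import numpy as np

def attrition_step(weights, attrition=0.99, threshold=1e-2, block=4):
    """One attrition pass over a dense layer's weight matrix.

    weights: (out, in) matrix; pruning is applied in blocks of `block` rows.
    Blocks whose contribution (mean absolute weight) falls below
    `threshold` are zeroed out, i.e. pruned.
    """
    weights = weights * attrition  # decay all weights toward zero
    for start in range(0, weights.shape[0], block):
        blk = weights[start:start + block]
        if np.abs(blk).mean() < threshold:
            weights[start:start + block] = 0.0  # prune the whole block
    return weights

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 4))
w[0:4] *= 1e-3            # make the first block's contribution negligible
w = attrition_step(w)
```

Repeated over many training steps, the attrition factor drives weights that the gradient does not actively sustain below the threshold, at which point whole blocks drop out.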
  • Publication number: 20170293837
    Abstract: A computer-implemented method for training a deep neural network to recognize traffic scenes (TSs) from multi-modal sensors and knowledge data is presented. The computer-implemented method includes receiving data from the multi-modal sensors and the knowledge data and extracting feature maps from the multi-modal sensors and the knowledge data by using a traffic participant (TP) extractor to generate a first set of data, using a static objects extractor to generate a second set of data, and using an additional information extractor to generate a third set of data. The computer-implemented method further includes training the deep neural network, with training data, to recognize the TSs from a viewpoint of a vehicle.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 12, 2017
    Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
  • Publication number: 20170293815
    Abstract: A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 12, 2017
    Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
  • Patent number: 9587893
    Abstract: A system and a method for promoting improved air flow through a cooling tower and reduced inner air pressure losses caused by rain in the rain zone of a cooling tower. Aerodynamic modules are mounted on the lower edge of the cooling tower shell in order to deflect the downward-flowing air about the lower edge of the tower shell and into the rain zone. The aerodynamic modules can be modularly mounted, can be replaced, and do not affect the statics of the tower shell. Aerodynamic modules can also be built on the base area to deflect the incoming air over any obstacles. Troughs or dripping elements can also promote flow by reducing the rain falling in an outer area.
    Type: Grant
    Filed: November 2, 2011
    Date of Patent: March 7, 2017
    Inventors: Pery Bogh, Peter Graf, Klemens Fisch
  • Publication number: 20170060525
    Abstract: Disclosed herein are an apparatus, non-transitory computer readable medium, and method for tagging multimedia files. A first multimedia file is merged with a voice file so as to embed the voice file at a position of an image enclosed within the first multimedia file. A second multimedia file comprising the first multimedia file with the embedded voice file is generated.
    Type: Application
    Filed: August 24, 2016
    Publication date: March 2, 2017
    Inventors: Peter GRAF, Michael DELL, Daniel BITRAN
  • Patent number: 9583098
    Abstract: A system and method for generating a video sequence having mouth movements synchronized with speech sounds are disclosed. The system utilizes a database of n-phones as the smallest selectable unit, wherein n is larger than 1 and preferably 3. The system calculates a target cost for each candidate n-phone for a target frame using a phonetic distance, coarticulation parameter, and speech rate. For each n-phone in a target sequence, the system searches for candidate n-phones that are visually similar according to the target cost. The system samples each candidate n-phone to get a same number of frames as in the target sequence and builds a video frame lattice of candidate video frames. The system assigns a joint cost to each pair of adjacent frames and searches the video frame lattice to construct the video sequence by finding the optimal path through the lattice according to the minimum of the sum of the target cost and the joint cost over the sequence.
    Type: Grant
    Filed: October 25, 2007
    Date of Patent: February 28, 2017
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Fu Jie Huang
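The final search step in this abstract, finding the optimal path through the frame lattice by minimizing the sum of target and joint costs, is a classic dynamic-programming (Viterbi-style) search. The sketch below uses toy costs and a two-candidate lattice; the real system's cost definitions (phonetic distance, coarticulation, speech rate) are not reproduced here.

```python
def best_path(target_cost, joint_cost):
    """Viterbi-style search over a frame lattice.

    target_cost[t][i]: cost of candidate frame i at position t.
    joint_cost(a, b): smoothness cost of placing candidate b after a.
    Returns (total cost, chosen candidate index per position).
    """
    n = len(target_cost)
    best = list(target_cost[0])  # best[i]: min cost of paths ending at i
    back = []
    for t in range(1, n):
        prev_best, best, ptr = best, [], []
        for j, tc in enumerate(target_cost[t]):
            costs = [prev_best[i] + joint_cost(i, j)
                     for i in range(len(prev_best))]
            i_min = min(range(len(costs)), key=costs.__getitem__)
            best.append(costs[i_min] + tc)
            ptr.append(i_min)
        back.append(ptr)
    # Trace the optimal sequence backwards through the pointers.
    j = min(range(len(best)), key=best.__getitem__)
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    return min(best), path[::-1]

# Toy lattice: 3 positions, 2 candidates each; switching candidates
# between adjacent frames costs 1, staying on the same candidate costs 0.
tc = [[0, 5], [5, 0], [0, 5]]
cost, path = best_path(tc, lambda a, b: 0 if a == b else 1)
```

The search trades visual fidelity per frame (target cost) against smoothness between frames (joint cost), which is why the optimum here accepts two switch penalties rather than any high-cost frame.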
  • Patent number: 9568611
    Abstract: A system and method for a motorized land vehicle that detects objects obstructing a driver's view of an active road, includes an inertial measurement unit-enabled global position system (GPS/IMU) subsystem for obtaining global position system (GPS) position and heading data of a land vehicle operated by the driver as the vehicle travels along a road, a street map subsystem for obtaining street map data of the GPS position of the vehicle using the GPS position and heading data as the vehicle travels along the road, and a three-dimensional (3D) object detector subsystem for detecting objects ahead of the vehicle and determining 3D position and 3D size data of each of the detected objects ahead of the vehicle. The street map subsystem merges the street map data, the GPS position and heading data of the vehicle, and the 3D position and 3D size data of the detected objects to create a real-time two-dimensional (2D) top-view map representation of a traffic scene ahead of the vehicle.
    Type: Grant
    Filed: August 20, 2015
    Date of Patent: February 14, 2017
    Assignee: NEC CORPORATION
    Inventors: Eric Cosatto, Hans Peter Graf
  • Patent number: 9503684
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model representative of the head of the user is generated. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: November 22, 2016
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
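The virtual illumination equation this family of patents refers to combines light attenuation with Lambertian (diffuse) and specular reflection. A standard Phong-style formulation along those lines looks like the sketch below; the patent's exact equation, coefficients, and attenuation model are not given here, so these are illustrative assumptions.

```python
def illuminate(normal, light_dir, view_dir, distance,
               kd=0.7, ks=0.3, shininess=16, attenuation=0.1):
    """Per-point intensity: attenuated Lambertian plus specular terms.

    All direction vectors are unit 3-tuples pointing away from the surface.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Lambertian (diffuse) term: proportional to the cosine of incidence.
    diffuse = kd * max(dot(normal, light_dir), 0.0)
    # Specular term: reflect the light about the normal, compare to view.
    r = tuple(2 * dot(normal, light_dir) * n - l
              for n, l in zip(normal, light_dir))
    specular = ks * max(dot(r, view_dir), 0.0) ** shininess
    # Distance attenuation of the virtual light source.
    return (diffuse + specular) / (1.0 + attenuation * distance ** 2)

# Light straight overhead with the viewer on the reflection direction
# gives the maximal response; a grazing light contributes nothing.
i = illuminate((0, 0, 1), (0, 0, 1), (0, 0, 1), distance=0.0)
```

Applied per pixel of the tracked head model, such an equation relights the face from a virtual source without changing the physical scene.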
  • Patent number: 9435213
    Abstract: A method is described for improving the sealing between a rotor and a plurality of blades. The rotor has a plurality of generally axially extending profiled recesses into which a ring of blades, which have corresponding blade root profiles, are inserted in a form-fitting and/or frictionally locking manner in a generally axial insertion direction. Between the recesses the rotor has tangential surface sections or circumferential surface sections which extend in the axial direction and circumferential direction and are generally indirectly covered by lower shrouds of circumferentially adjacently arranged blades in the radial direction. At least one of the tangential surface or circumferential surface sections is provided with a step in the radial direction, and a corresponding recess, which adjoins as flush as possible, is provided in the underside of the shroud of the blade which is arranged above it. Corresponding rotors and blades are also described.
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: September 6, 2016
    Assignee: GENERAL ELECTRIC TECHNOLOGY GMBH
    Inventors: Helmar Wunderle, Stefan Schlechtriem, Peter Graf, Silvio Glaser, Beat Von Arx
  • Publication number: 20160080690
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model representative of the head of the user is generated. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Application
    Filed: November 20, 2015
    Publication date: March 17, 2016
    Inventors: Andrea BASSO, Eric COSATTO, David Crawford GIBBON, Hans Peter GRAF, Shan LIU
  • Publication number: 20160054452
    Abstract: A system and method for a motorized land vehicle that detects objects obstructing a driver's view of an active road, includes an inertial measurement unit-enabled global position system (GPS/IMU) subsystem for obtaining global position system (GPS) position and heading data of a land vehicle operated by the driver as the vehicle travels along a road, a street map subsystem for obtaining street map data of the GPS position of the vehicle using the GPS position and heading data as the vehicle travels along the road, and a three-dimensional (3D) object detector subsystem for detecting objects ahead of the vehicle and determining 3D position and 3D size data of each of the detected objects ahead of the vehicle. The street map subsystem merges the street map data, the GPS position and heading data of the vehicle, and the 3D position and 3D size data of the detected objects to create a real-time two-dimensional (2D) top-view map representation of a traffic scene ahead of the vehicle.
    Type: Application
    Filed: August 20, 2015
    Publication date: February 25, 2016
    Inventors: Eric Cosatto, Hans Peter Graf
  • Patent number: 9224106
    Abstract: Systems and methods are disclosed for classifying histological tissues or specimens in two phases. In the first phase, the method provides off-line training using a processor, during which one or more classifiers are trained on examples, including: splitting the features into sets of increasing computational cost and assigning a computational cost to each set; training a classifier for each set of features using training examples; and training, for each classifier, a utility function that scores the usefulness of extracting the next feature set for a given tissue unit, using the training examples.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: December 29, 2015
    Assignee: NEC Laboratories America, Inc.
    Inventors: Eric Cosatto, Pierre-Francois Laquerre, Christopher Malon, Hans-Peter Graf, Iain Melvin
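The runtime side of this two-phase scheme can be sketched as a cost-aware cascade: feature sets are extracted cheapest-first, and a utility rule decides whether paying for the next, costlier set is worthwhile for the tissue unit at hand. The stopping rule below (a fixed confidence margin) stands in for the trained utility function, and all names and values are illustrative assumptions.

```python
def classify_with_budget(extractors, classifiers, margin=0.4):
    """Classify one tissue unit, extracting feature sets lazily.

    extractors: callables yielding feature sets, cheapest first.
    classifiers[i]: maps all features extracted so far to a probability.
    Stops once the decision margin suggests costlier features are
    unlikely to change the outcome.
    Returns (probability, number of feature sets actually extracted).
    """
    features = []
    for i, extract in enumerate(extractors):
        features.extend(extract())  # pay this set's extraction cost
        prob = classifiers[i](features)
        if abs(prob - 0.5) >= margin:
            break  # confident enough; skip the remaining feature sets
    return prob, i + 1

# Toy classifiers over one- and two-element feature lists.
clfs = [lambda f: f[0], lambda f: sum(f) / len(f)]
p1, n1 = classify_with_budget([lambda: [0.9], lambda: [0.8]], clfs)
p2, n2 = classify_with_budget([lambda: [0.6], lambda: [0.8]], clfs)
```

Easy tissue units thus stop after the cheap features, while ambiguous ones trigger the expensive extraction, which is the point of assigning a cost to each set.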
  • Patent number: 9208373
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, lambertian and specular reflection. An image of an object is captured, a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that an three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: December 8, 2015
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
  • Patent number: 9060685
    Abstract: Disclosed is a computer-implemented method for fully automated tissue diagnosis that trains a region of interest (ROI) classifier in a supervised manner, wherein labels are given only at the tissue level, the training using a multiple-instance learning variant of backpropagation, and trains a tissue classifier that uses the output of the ROI classifier. For a given tissue, the method finds ROIs, extracts feature vectors in each ROI, applies the ROI classifier to each feature vector thereby obtaining a set of probabilities, provides the probabilities to the tissue classifier, and outputs a final diagnosis for the whole tissue.
    Type: Grant
    Filed: March 26, 2013
    Date of Patent: June 23, 2015
    Assignee: NEC Laboratories America, Inc.
    Inventors: Eric Cosatto, Pierre-Francois Laquerre, Christopher Malon, Hans-Peter Graf
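The inference path of this abstract, score each ROI and feed the resulting probability set to a tissue-level classifier, can be sketched as follows. The feature extraction, the pooling of the variable-size probability set, and the toy classifiers are assumptions for illustration, not the patented models.

```python
import numpy as np

def diagnose(tissue_rois, roi_classifier, tissue_classifier):
    """Whole-tissue diagnosis from ROI-level probabilities.

    tissue_rois: list of feature vectors, one per region of interest.
    roi_classifier: feature vector -> probability the ROI is abnormal.
    tissue_classifier: summary of ROI probabilities -> final diagnosis.
    """
    probs = np.array([roi_classifier(f) for f in tissue_rois])
    # Summarize the variable-size probability set for the tissue-level
    # classifier; max and mean are common multiple-instance poolings.
    summary = np.array([probs.max(), probs.mean()])
    return tissue_classifier(summary)

# Toy models: the ROI score is a logistic of the first feature, and the
# tissue is called abnormal if any ROI is confidently abnormal.
roi_clf = lambda f: 1.0 / (1.0 + np.exp(-f[0]))
tissue_clf = lambda s: s[0] > 0.9
d = diagnose([np.array([-3.0]), np.array([4.0])], roi_clf, tissue_clf)
```

The max-pooled summary mirrors the multiple-instance assumption behind tissue-level labels: one confidently abnormal region suffices to flag the whole tissue.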