Patents by Inventor IAIN MELVIN

IAIN MELVIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230315783
    Abstract: A classification apparatus according to the present disclosure includes an input unit configured to receive an operation performed by a user, an extraction unit configured to extract moving image data by using a predetermined rule, a display control unit configured to display an icon corresponding to the extracted moving image data on a screen of a display unit, a movement detection unit configured to detect a movement of the icon on the screen caused by the operation performed by the user, and a specifying unit configured to specify a classification of the moving image data corresponding to the icon based on a position of the icon on the screen.
    Type: Application
    Filed: March 1, 2023
    Publication date: October 5, 2023
    Applicant: NEC Corporation
    Inventors: Asako Fujii, Iain Melvin, Yuki Chiba, Masayuki Sakata, Erik Kruus, Chris White
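    A toy sketch (not taken from the filing) of the position-based classification step this abstract describes: the class assigned to a clip is whichever screen region the user drops its icon into. The region names and the 1280x720 layout are assumptions for the example.
```python
# Hypothetical illustration of classifying moving-image data by icon position.
# Region names and screen coordinates are invented for this sketch.

REGIONS = {
    "keep":    (0, 0, 640, 720),      # left half of the screen
    "discard": (640, 0, 1280, 720),   # right half of the screen
}

def classify_icon(icon_x, icon_y):
    """Return the classification whose region contains the icon's position."""
    for label, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= icon_x < x1 and y0 <= icon_y < y1:
            return label
    return None  # icon was dropped outside every defined region

# An icon dragged to (900, 300) lands in the right half -> classified "discard".
print(classify_icon(900, 300))
```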
  • Patent number: 11568247
    Abstract: A computer-implemented method executed by at least one processor for performing mini-batching in deep learning by improving cache utilization is presented. The method includes temporally localizing a candidate clip in a video stream based on a natural language query, encoding a state, via a state processing module, into a joint visual and linguistic representation, feeding the joint visual and linguistic representation into a policy learning module, wherein the policy learning module employs a deep learning network to selectively extract features for select frames for video-text analysis and includes a fully connected linear layer and a long short-term memory (LSTM), outputting a value function from the LSTM, generating an action policy based on the encoded state, wherein the action policy is a probabilistic distribution over a plurality of possible actions given the encoded state, and rewarding policy actions that return clips matching the natural language query.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: January 31, 2023
    Inventors: Asim Kadav, Iain Melvin, Hans Peter Graf, Meera Hahn
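    A minimal sketch of the policy-module shape this abstract outlines: a fully connected layer feeding an LSTM, which outputs a probability distribution over actions plus a value estimate. The dimensions, action count, and wiring are assumptions for illustration, not the patented implementation.
```python
import torch
import torch.nn as nn

class ClipLocalizationPolicy(nn.Module):
    """Policy sketch: fully connected layer -> LSTM -> action distribution
    and value function. Sizes and the number of actions are assumptions."""

    def __init__(self, state_dim=1024, hidden_dim=512, num_actions=5):
        super().__init__()
        self.fc = nn.Linear(state_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, num_actions)
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, state, hidden=None):
        # state: (batch, steps, state_dim) joint visual/linguistic encoding
        x = torch.relu(self.fc(state))
        out, hidden = self.lstm(x, hidden)
        action_probs = torch.softmax(self.policy_head(out), dim=-1)
        value = self.value_head(out)      # value function output from the LSTM
        return action_probs, value, hidden
```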
  • Publication number: 20220327489
    Abstract: Systems and methods for matching job descriptions with job applicants are provided. The method includes allocating each of one or more job applicants' curriculum vitae (CV) into sections; applying max pooled word embedding to each section of the job applicants' CVs; using concatenated max-pooling and average-pooling to compose the section embeddings into an applicant's CV representation; allocating each of one or more job position descriptions into specified sections; applying max pooled word embedding to each section of the job position descriptions; using concatenated max-pooling and average-pooling to compose the section embeddings into a job representation; calculating a cosine similarity between each of the job representations and each of the CV representations to perform job-to-applicant matching; and presenting an ordered list of the one or more job applicants or an ordered list of the one or more job position descriptions to a user.
    Type: Application
    Filed: April 6, 2022
    Publication date: October 13, 2022
    Inventors: Renqiang Min, Iain Melvin, Christopher A White, Christopher Malon, Hans Peter Graf
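    A small illustrative sketch of the matching pipeline this abstract describes: max-pooled word embeddings per section, concatenated max- and average-pooling across sections, and cosine-similarity ranking. Function names and dimensions are assumptions, not the patented implementation.
```python
import numpy as np

def section_embedding(word_vectors):
    """Max-pooled word embedding for one section (rows are word vectors)."""
    return word_vectors.max(axis=0)

def document_embedding(section_embeddings):
    """Concatenate max-pooling and average-pooling over section embeddings."""
    sections = np.stack(section_embeddings)
    return np.concatenate([sections.max(axis=0), sections.mean(axis=0)])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_applicants(job_repr, cv_reprs):
    """Order applicant indices by cosine similarity of their CV to the job."""
    scores = [cosine(job_repr, cv) for cv in cv_reprs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```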
  • Patent number: 11055605
    Abstract: A computer-implemented method executed by a processor for training a neural network to recognize driving scenes from sensor data received from vehicle radar is presented. The computer-implemented method includes extracting substructures from the sensor data received from the vehicle radar to define a graph having a plurality of nodes and a plurality of edges, constructing a neural network for each extracted substructure, combining the outputs of each of the constructed neural networks for each of the plurality of edges into a single vector describing a driving scene of a vehicle, and classifying the single vector into a set of one or more dangerous situations involving the vehicle.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: July 6, 2021
    Inventors: Hans Peter Graf, Eric Cosatto, Iain Melvin
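    A simplified sketch of the scene-classification idea above, assuming (for brevity) one shared network applied to each extracted substructure and a mean combination into a single scene vector; the layer sizes and four-class output are invented for the example.
```python
import torch
import torch.nn as nn

class SceneGraphClassifier(nn.Module):
    """Toy classifier: a small network is applied per extracted substructure
    (edge) of the radar-derived graph, the per-edge outputs are combined into
    one scene vector, and the vector is classified into situation classes."""

    def __init__(self, edge_feat_dim=16, hidden_dim=64, num_classes=4):
        super().__init__()
        self.edge_net = nn.Sequential(
            nn.Linear(edge_feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, edge_features):
        # edge_features: (num_edges, edge_feat_dim) for one driving scene
        edge_out = self.edge_net(edge_features)   # per-substructure outputs
        scene_vec = edge_out.mean(dim=0)          # single vector for the scene
        return self.classifier(scene_vec)         # logits over dangerous situations
```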
  • Publication number: 20200302294
    Abstract: A computer-implemented method executed by at least one processor for performing mini-batching in deep learning by improving cache utilization is presented. The method includes temporally localizing a candidate clip in a video stream based on a natural language query, encoding a state, via a state processing module, into a joint visual and linguistic representation, feeding the joint visual and linguistic representation into a policy learning module, wherein the policy learning module employs a deep learning network to selectively extract features for select frames for video-text analysis and includes a fully connected linear layer and a long short-term memory (LSTM), outputting a value function from the LSTM, generating an action policy based on the encoded state, wherein the action policy is a probabilistic distribution over a plurality of possible actions given the encoded state, and rewarding policy actions that return clips matching the natural language query.
    Type: Application
    Filed: March 16, 2020
    Publication date: September 24, 2020
    Inventors: Asim Kadav, Iain Melvin, Hans Peter Graf, Meera Hahn
  • Patent number: 10503978
    Abstract: Systems and methods for improving video understanding tasks based on higher-order object interactions (HOIs) between object features are provided. A plurality of frames of a video are obtained. A coarse-grained feature representation is generated by generating an image feature for each of a plurality of timesteps respectively corresponding to each of the frames and performing attention based on the image features. A fine-grained feature representation is generated by generating an object feature for each of the plurality of timesteps and generating the HOIs between the object features. The coarse-grained and the fine-grained feature representations are concatenated to generate a concatenated feature representation.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: December 10, 2019
    Assignee: NEC Corporation
    Inventors: Asim Kadav, Chih-Yao Ma, Iain Melvin, Hans Peter Graf
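    An illustrative toy version of the coarse/fine pipeline described above: attention over per-frame image features gives the coarse representation, pooled pairwise object-feature interactions stand in for the higher-order object interactions, and the two are concatenated. The bilinear interaction, pooling choices, and dimensions are stand-ins, not the patented design.
```python
import torch
import torch.nn as nn

class CoarseFineVideoFeatures(nn.Module):
    """Toy coarse/fine feature builder with assumed dimensions."""

    def __init__(self, img_dim=512, obj_dim=256):
        super().__init__()
        self.attn = nn.Linear(img_dim, 1)
        self.interact = nn.Bilinear(obj_dim, obj_dim, obj_dim)

    def forward(self, image_feats, object_feats):
        # image_feats: (T, img_dim); object_feats: (T, N, obj_dim)
        w = torch.softmax(self.attn(image_feats), dim=0)  # attention over timesteps
        coarse = (w * image_feats).sum(dim=0)             # coarse-grained feature
        T, N, D = object_feats.shape
        a = object_feats.unsqueeze(2).expand(T, N, N, D).reshape(-1, D)
        b = object_feats.unsqueeze(1).expand(T, N, N, D).reshape(-1, D)
        fine = self.interact(a, b).mean(dim=0)            # pooled object interactions
        return torch.cat([coarse, fine], dim=-1)          # concatenated representation
```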
  • Patent number: 10495753
    Abstract: A computer-implemented method and system are provided. The system includes an image capture device configured to capture image data relative to an ambient environment of a user. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different scenes of a natural environment. The processor is further configured to perform a user-perceptible action responsive to a detection and a localization of an object in an intended path of the user.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: December 3, 2019
    Assignee: NEC Corporation
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
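    A minimal two-branch fusion sketch in the spirit of this abstract (and the driving-assistance entry below): image and radar inputs are encoded separately, fused, and mapped to an object confidence and a position in map space. The architecture details and the 64-dimensional radar input are assumptions.
```python
import torch
import torch.nn as nn

class ImageRadarLocalizer(nn.Module):
    """Sketch of image/radar fusion for detection and map-space localization."""

    def __init__(self):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.radar_branch = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(16 + 32, 3)   # [confidence, map_x, map_y]

    def forward(self, image, radar):
        # image: (batch, 3, H, W); radar: (batch, 64)
        fused = torch.cat([self.image_branch(image), self.radar_branch(radar)], dim=1)
        out = self.head(fused)
        confidence = torch.sigmoid(out[:, 0])   # object detection score
        position = out[:, 1:]                   # localization in real-world map space
        return confidence, position
```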
  • Patent number: 10330787
    Abstract: A computer-implemented method and system are provided for driving assistance. The system includes an image capture device configured to capture image data relative to an outward view from a motor vehicle. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different driving scenes of a natural driving environment. The processor is further configured to provide a user-perceptible object detection result to a user of the motor vehicle.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: June 25, 2019
    Assignee: NEC Corporation
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
  • Patent number: 10296796
    Abstract: A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: May 21, 2019
    Assignee: NEC Corporation
    Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
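    The comparison step mentioned in this abstract (real-time traffic scenes matched against predetermined scenes, with colored warning LEDs) could look roughly like the sketch below; the scene vectors, scene names, and color mapping are invented for illustration.
```python
import numpy as np

# Predetermined traffic scenes paired with LED warning colors (made-up values).
PREDETERMINED_SCENES = {
    "merging_vehicle":    (np.array([0.9, 0.1, 0.0]), "yellow"),
    "hard_braking_ahead": (np.array([0.1, 0.9, 0.2]), "red"),
    "clear_road":         (np.array([0.0, 0.1, 0.9]), "green"),
}

def predict_situation(realtime_ts):
    """Match a recognized real-time scene vector to the nearest predetermined
    scene and return that scene's name and warning color."""
    name, (_, color) = min(PREDETERMINED_SCENES.items(),
                           key=lambda kv: np.linalg.norm(realtime_ts - kv[1][0]))
    return name, color

print(predict_situation(np.array([0.8, 0.2, 0.1])))  # ('merging_vehicle', 'yellow')
```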
  • Publication number: 20190019037
    Abstract: Systems and methods for improving video understanding tasks based on higher-order object interactions (HOIs) between object features are provided. A plurality of frames of a video are obtained. A coarse-grained feature representation is generated by generating an image feature for each of a plurality of timesteps respectively corresponding to each of the frames and performing attention based on the image features. A fine-grained feature representation is generated by generating an object feature for each of the plurality of timesteps and generating the HOIs between the object features. The coarse-grained and the fine-grained feature representations are concatenated to generate a concatenated feature representation.
    Type: Application
    Filed: May 14, 2018
    Publication date: January 17, 2019
    Inventors: Asim Kadav, Chih-Yao Ma, Iain Melvin, Hans Peter Graf
  • Publication number: 20180307967
    Abstract: A computer-implemented method executed by a processor for training a neural network to recognize driving scenes from sensor data received from vehicle radar is presented. The computer-implemented method includes extracting substructures from the sensor data received from the vehicle radar to define a graph having a plurality of nodes and a plurality of edges, constructing a neural network for each extracted substructure, combining the outputs of each of the constructed neural networks for each of the plurality of edges into a single vector describing a driving scene of a vehicle, and classifying the single vector into a set of one or more dangerous situations involving the vehicle.
    Type: Application
    Filed: October 17, 2017
    Publication date: October 25, 2018
    Inventors: Hans Peter Graf, Eric Cosatto, Iain Melvin
  • Publication number: 20180082137
    Abstract: A computer-implemented method and system are provided for driving assistance. The system includes an image capture device configured to capture image data relative to an outward view from a motor vehicle. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different driving scenes of a natural driving environment. The processor is further configured to provide a user-perceptible object detection result to a user of the motor vehicle.
    Type: Application
    Filed: August 29, 2017
    Publication date: March 22, 2018
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20180081053
    Abstract: A computer-implemented method and system are provided. The system includes an image capture device configured to capture image data relative to an ambient environment of a user. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different scenes of a natural environment. The processor is further configured to perform a user-perceptible action responsive to a detection and a localization of an object in an intended path of the user.
    Type: Application
    Filed: August 29, 2017
    Publication date: March 22, 2018
    Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
  • Publication number: 20170293815
    Abstract: A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 12, 2017
    Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
  • Publication number: 20170293837
    Abstract: A computer-implemented method for training a deep neural network to recognize traffic scenes (TSs) from multi-modal sensors and knowledge data is presented. The computer-implemented method includes receiving data from the multi-modal sensors and the knowledge data and extracting feature maps from the multi-modal sensors and the knowledge data by using a traffic participant (TP) extractor to generate a first set of data, using a static objects extractor to generate a second set of data, and using an additional information extractor. The computer-implemented method further includes training the deep neural network, with training data, to recognize the TSs from a viewpoint of a vehicle.
    Type: Application
    Filed: April 4, 2017
    Publication date: October 12, 2017
    Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
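    A minimal training-step sketch for the multi-extractor setup described above, assuming the three extractors produce fixed-size feature vectors that are concatenated and fed to a scene classifier; the shapes, optimizer, and loss are illustrative choices, not taken from the filing.
```python
import torch
import torch.nn as nn

# Assumed sizes: 128-dim traffic-participant features, 64-dim static-object
# features, 32-dim additional-information features, 10 traffic-scene classes.
model = nn.Sequential(nn.Linear(128 + 64 + 32, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(tp_feats, static_feats, extra_feats, scene_labels):
    """One gradient step on the concatenated multi-modal features.
    scene_labels are integer class indices for the traffic scenes."""
    inputs = torch.cat([tp_feats, static_feats, extra_feats], dim=1)
    loss = loss_fn(model(inputs), scene_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```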
  • Patent number: 9336495
    Abstract: Semantic indexing methods and systems are disclosed. One such method is directed to training a semantic indexing model by employing an expanded query. The query can be expanded by merging the query with documents that are relevant to the query for purposes of compensating for a lack of training data. In accordance with another exemplary aspect, time difference features can be incorporated into a semantic indexing model to account for changes in query distributions over time.
    Type: Grant
    Filed: October 28, 2013
    Date of Patent: May 10, 2016
    Assignee: NEC Corporation
    Inventors: Bing Bai, Christopher Malon, Iain Melvin
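    A toy illustration of the query-expansion idea in this abstract: the query is merged with terms drawn from documents known to be relevant to it. The simple term-frequency expansion used here is a stand-in for the trained semantic indexing model.
```python
from collections import Counter

def expand_query(query_terms, relevant_docs, top_k=5):
    """Merge a query with the most frequent terms from documents known to be
    relevant to it, compensating for scarce training data."""
    counts = Counter()
    for doc in relevant_docs:
        counts.update(doc)
    expansion = [term for term, _ in counts.most_common(top_k)]
    return list(query_terms) + expansion

# A short query is padded with frequent terms from its relevant documents.
docs = [["deep", "semantic", "indexing"], ["semantic", "retrieval", "model"]]
print(expand_query(["indexing"], docs, top_k=2))  # ['indexing', 'semantic', ...]
```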
  • Patent number: 9224106
    Abstract: Systems and methods are disclosed for classifying histological tissues or specimens with two phases. In a first phase, the method includes providing off-line training using a processor during which one or more classifiers are trained based on examples, including: finding a split of features into sets of increasing computational cost and assigning a computational cost to each set; training, for each set of features, a classifier using training examples; and training, for each classifier, a utility function that scores the usefulness of extracting the next feature set for a given tissue unit using the training examples.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: December 29, 2015
    Assignee: NEC Laboratories America, Inc.
    Inventors: Eric Cosatto, Pierre-Francois Laquerre, Christopher Malon, Hans-Peter Graf, Iain Melvin
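    A sketch of what the online (second) phase could look like given the components trained in the first phase: feature sets ordered by increasing cost, one classifier per set, and a utility function that decides whether extracting the next set is worthwhile. The interfaces (`fs.extract`, `fs.cost`, the utility sign convention) are assumptions for the example.
```python
def classify_tissue(unit, feature_sets, classifiers, utilities, budget):
    """Hypothetical online phase: feature sets are ordered by increasing
    computational cost; after classifying with the features extracted so far,
    the utility function trained for that classifier decides whether pulling
    in the next, more expensive feature set is worthwhile."""
    features = {}
    prediction = None
    for i, (fs, clf) in enumerate(zip(feature_sets, classifiers)):
        if budget < fs.cost:                   # cannot afford this feature set
            break
        features.update(fs.extract(unit))      # pay the extraction cost
        budget -= fs.cost
        prediction = clf(features)             # classify the tissue unit
        if utilities[i](features) <= 0:        # next set not expected to help
            break
    return prediction
```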
  • Publication number: 20140180977
    Abstract: Systems and methods are disclosed for classifying histological tissues or specimens with two phases. In a first phase, the method includes providing off-line training using a processor during which one or more classifiers are trained based on examples, including: finding a split of features into sets of increasing computational cost and assigning a computational cost to each set; training, for each set of features, a classifier using training examples; and training, for each classifier, a utility function that scores the usefulness of extracting the next feature set for a given tissue unit using the training examples.
    Type: Application
    Filed: November 12, 2013
    Publication date: June 26, 2014
    Applicant: NEC Laboratories America, Inc.
    Inventors: Eric Cosatto, Pierre-Francois Laquerre, Christopher Malon, Hans-Peter Graf, Iain Melvin
  • Patent number: 8706668
    Abstract: Methods and systems for classifying incomplete data are disclosed. In accordance with one method, pairs of features and values are generated based upon feature measurements on the incomplete data. In addition, a transformation function is applied on the pairs of features and values to generate a set of vectors by mapping each of the pairs to a corresponding vector in an embedding space. Further, a hardware processor applies a prediction function to the set of vectors to generate at least one confidence assessment for at least one class that indicates whether the incomplete data is of the at least one class. The method further includes outputting the at least one confidence assessment.
    Type: Grant
    Filed: June 2, 2011
    Date of Patent: April 22, 2014
    Assignee: NEC Laboratories America, Inc.
    Inventors: Iain Melvin, David Grangier
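    An illustrative reduction of this abstract's idea: observed (feature, value) pairs are mapped into an embedding space and a prediction function scores class membership, so missing features simply contribute nothing. The embedding tables, feature names, and sigmoid scorer are toy stand-ins for the learned transformation and prediction functions.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned transformation: each feature has an embedding,
# scaled by its measured value. Feature names and dimensions are invented.
EMBED_DIM = 8
feature_embeddings = {"age": rng.normal(size=EMBED_DIM),
                      "blood_pressure": rng.normal(size=EMBED_DIM)}
class_weights = rng.normal(size=EMBED_DIM)  # toy prediction-function parameters

def embed_pairs(observed):
    """Map each observed (feature, value) pair into the embedding space;
    missing features simply contribute no vector."""
    return [feature_embeddings[f] * v for f, v in observed.items()
            if f in feature_embeddings]

def class_confidence(observed):
    """Prediction function: confidence that the (incomplete) data is of the class."""
    vectors = embed_pairs(observed)
    pooled = np.mean(vectors, axis=0) if vectors else np.zeros(EMBED_DIM)
    return 1.0 / (1.0 + np.exp(-pooled @ class_weights))

print(class_confidence({"age": 0.6}))  # works although blood_pressure is missing
```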
  • Patent number: 8463025
    Abstract: A cell phone having distributed artificial intelligence services is provided. The cell phone includes a neural network for performing a first pass of object recognition on an image to identify objects of interest therein based on one or more criterion. The cell phone also includes a patch generator for deriving patches from the objects of interest. Each of the patches includes a portion of a respective one of the objects of interest. The cell phone additionally includes a transmitter for transmitting the patches to a server for further processing in place of an entirety of the image to reduce network traffic.
    Type: Grant
    Filed: April 26, 2011
    Date of Patent: June 11, 2013
    Assignee: NEC Laboratories America, Inc.
    Inventors: Iain Melvin, Koray Kavukcuoglu, Akshat Aranya, Bing Bai
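    A sketch of the client-side flow this abstract describes: a first-pass detector runs on the phone, patches are cropped around the objects of interest, and only the patches are transmitted. The `detector` and `send_to_server` interfaces are assumed for the example, not part of the patent text.
```python
def process_frame(image, detector, send_to_server, min_score=0.5):
    """Run a first-pass detector on the device, crop patches around the
    objects of interest, and transmit only the patches instead of the entire
    image. `image` is assumed to be an array indexable as image[y0:y1, x0:x1];
    `detector` is assumed to yield ((x0, y0, x1, y1), score) pairs."""
    patches = []
    for (x0, y0, x1, y1), score in detector(image):  # first-pass recognition
        if score >= min_score:                       # objects meeting the criterion
            patches.append(image[y0:y1, x0:x1])      # portion of the object region
    send_to_server(patches)                          # far less data than the full image
    return patches
```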