Patents by Inventor IAIN MELVIN
IAIN MELVIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240378892
Abstract: Systems and methods for optimizing multi-camera multi-entity artificial intelligence tracking systems. Visual and location information of entities from video feeds received from multiple cameras can be obtained by employing an entity detection model and re-identification model. Likelihood scores that entity detections belong to an entity track can be predicted from the visual and location information. The entity detections predicted into entity tracks can be processed by employing combinatorial optimization of the likelihood scores by identifying assumptions from the likelihood scores, entity detections, and the entity tracks, filtering the assumptions with unsatisfiable problems to obtain a filtered assumptions set, and optimizing an answer set by utilizing the filtered assumptions set and the likelihood scores to maximize an overall score and obtain optimized entity tracks. Multiple entities can be monitored by utilizing the optimized entity tracks.
Type: Application
Filed: May 3, 2024
Publication date: November 14, 2024
Inventors: Iain Melvin, Alexandru Niculescu-Mizil, Deep Patel
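For illustration, the detection-to-track scoring and assignment described above can be pictured with a small sketch. This is only a minimal stand-in that assumes a toy likelihood() combining embedding similarity and location distance, and uses scipy's Hungarian solver in place of the answer-set-based combinatorial optimization the application actually claims:

```python
# Minimal sketch: score detections against existing tracks and pick an
# assignment that maximizes the total likelihood score. The application
# describes an answer-set-style optimization with assumption filtering;
# linear_sum_assignment below is only a simplified stand-in for that step.
import numpy as np
from scipy.optimize import linear_sum_assignment

def likelihood(detection, track):
    """Hypothetical score combining visual similarity and location proximity."""
    visual = float(np.dot(detection["embedding"], track["embedding"]))
    distance = np.linalg.norm(detection["xy"] - track["xy"])
    return visual - 0.1 * distance

def assign_detections_to_tracks(detections, tracks):
    scores = np.array([[likelihood(d, t) for t in tracks] for d in detections])
    # linear_sum_assignment minimizes cost, so negate to maximize the overall score
    rows, cols = linear_sum_assignment(-scores)
    return [(r, c, scores[r, c]) for r, c in zip(rows, cols)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dets = [{"embedding": rng.normal(size=8), "xy": rng.normal(size=2)} for _ in range(3)]
    trks = [{"embedding": rng.normal(size=8), "xy": rng.normal(size=2)} for _ in range(3)]
    print(assign_detections_to_tracks(dets, trks))
```

In the claimed system the assignment is instead expressed as assumptions over an answer set program, filtered for unsatisfiability, and then optimized.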
-
Publication number: 20240338393
Abstract: Systems and methods are provided for analyzing and visualizing document corpuses based on user-defined semantic features, including initializing a Natural Language Inference (NLI) classification model pre-trained on a diverse linguistic dataset and analyzing a corpus of textual documents with semantic features described in natural language by a user. For each semantic feature, a classification process is executed using the NLI model to assess implication strength between sentences in the documents and the semantic feature, the classification process including a confidence scoring mechanism to quantify implication strength. Implication scores can be aggregated for each of the documents to form a composite semantic implication profile, and a dimensionality reduction technique can be applied to the composite semantic implication profiles of each of the documents to generate a two-dimensional semantic space representation.
Type: Application
Filed: April 5, 2024
Publication date: October 10, 2024
Inventors: Christopher Malon, Iain Melvin, Christopher A. White
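A rough sketch of the described pipeline, assuming a placeholder nli_entailment_score() in place of the pre-trained NLI classifier and PCA as one possible (not necessarily the claimed) dimensionality-reduction technique:

```python
# Minimal sketch: score each sentence against each user-described semantic
# feature, aggregate the scores into a per-document profile, then project the
# profiles into a 2-D semantic space. The NLI scorer here is a toy stand-in.
import numpy as np
from sklearn.decomposition import PCA

def nli_entailment_score(sentence: str, feature: str) -> float:
    """Hypothetical stand-in: confidence in [0, 1] that `sentence` implies `feature`."""
    return float(len(set(sentence.lower().split()) & set(feature.lower().split())) > 0)

def document_profile(doc_sentences, features):
    # Aggregate per-sentence implication scores into one value per feature.
    return np.array([np.mean([nli_entailment_score(s, f) for s in doc_sentences])
                     for f in features])

def semantic_map(documents, features):
    profiles = np.stack([document_profile(d, features) for d in documents])
    return PCA(n_components=2).fit_transform(profiles)   # 2-D semantic space

docs = [["The model reduces latency.", "Costs went down."],
        ["Latency increased under load.", "Throughput was stable."],
        ["The team shipped a new UI.", "Users liked the colors."]]
features = ["performance improvement", "user interface change"]
print(semantic_map(docs, features))
```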
-
Publication number: 20240161902
Abstract: Methods and systems for tracking movement include performing person detection in frames from multiple video streams to identify detection images. Visual and location information from the detection images are combined to generate scores for pairs of detection images across the multiple video streams and across frames of respective video streams. A pairwise detection graph is generated using the detection images as nodes and the scores as weighted edges. A current view of the multiple video streams is changed to a next view of the multiple video streams, responsive to a determination that a score between consecutive frames of the current view is below a threshold value and that a score between coincident frames of the current view and the next view is above the threshold value.
Type: Application
Filed: November 9, 2023
Publication date: May 16, 2024
Inventors: Deep Patel, Alexandru Niculescu-Mizil, Iain Melvin, Seonghyeon Moon
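The view-switching rule in the last sentence can be sketched as below; the score values and the single shared threshold are assumptions for illustration only:

```python
# Minimal sketch of the camera hand-off rule: stay on the current view while
# the score between its consecutive frames is high, and switch when that score
# falls below the threshold while the score between coincident frames of the
# current and next views rises above it.
def should_switch_view(score_consecutive: float,
                       score_coincident: float,
                       threshold: float = 0.5) -> bool:
    return score_consecutive < threshold and score_coincident > threshold

def track_view(frame_scores, threshold=0.5):
    """frame_scores: list of (score_within_current_view, score_to_next_view)."""
    view = 0
    views = []
    for within, across in frame_scores:
        if should_switch_view(within, across, threshold):
            view += 1          # hand the track off to the next camera view
        views.append(view)
    return views

print(track_view([(0.9, 0.2), (0.8, 0.3), (0.3, 0.7), (0.9, 0.1)]))
# -> [0, 0, 1, 1]
```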
-
Publication number: 20240161313
Abstract: Methods and systems for tracking movement include performing person detection in frames from multiple video streams to identify detection images. Visual and location information from the detection images are combined to generate scores for pairs of detection images across the multiple video streams and across frames of respective video streams. A pairwise detection graph is generated using the detection images as nodes and the scores as weighted edges. Movement of an individual is tracked based on a constrained answer set programming problem, with constraints determined based on matching scores and logical assumptions. An action responsive to the tracked movement is performed. Tracking of movement of a patient in a healthcare facility can be used to inform treatment decisions by healthcare professionals.
Type: Application
Filed: November 9, 2023
Publication date: May 16, 2024
Inventors: Deep Patel, Alexandru Niculescu-Mizil, Iain Melvin, Seonghyeon Moon
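As a purely illustrative sketch of posing tracking as a constrained answer set program, the fragment below shows the general shape of such an encoding. The rules and facts are invented for illustration, not the application's actual constraints, and would be handed to an ASP solver such as clingo together with the matching scores:

```python
# Illustrative answer-set-programming encoding: choice of one track per
# detection, a constraint forbidding two same-frame detections on one track,
# and a weak constraint rewarding high matching scores.
ASP_PROGRAM = """
% each detection is assigned to exactly one track
1 { assign(D, T) : track(T) } 1 :- detection(D).

% a track cannot hold two detections from the same frame
:- assign(D1, T), assign(D2, T), D1 != D2, frame(D1, F), frame(D2, F).

% prefer assignments with high matching scores
:~ assign(D, T), score(D, T, S). [-S@1, D, T]
"""

# Hypothetical facts derived from pairwise detection scores (integer percent).
FACTS = """
detection(d1). detection(d2). track(t1). track(t2).
frame(d1, 1). frame(d2, 1).
score(d1, t1, 90). score(d1, t2, 20). score(d2, t1, 30). score(d2, t2, 80).
"""

print(ASP_PROGRAM + FACTS)  # feed to an ASP solver to obtain the tracks
```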
-
Publication number: 20230315783
Abstract: A classification apparatus according to the present disclosure includes an input unit configured to receive an operation performed by a user, an extraction unit configured to extract moving image data by using a predetermined rule, a display control unit configured to display an icon corresponding to the extracted moving image data on a screen of a display unit, a movement detection unit configured to detect a movement of the icon on the screen caused by the operation performed by the user, and a specifying unit configured to specify a classification of the moving image data corresponding to the icon based on a position of the icon on the screen.
Type: Application
Filed: March 1, 2023
Publication date: October 5, 2023
Applicant: NEC Corporation
Inventors: Asako FUJII, Iain MELVIN, Yuki CHIBA, Masayuki SAKATA, Erik KRUUS, Chris WHITE
-
Patent number: 11568247
Abstract: A computer-implemented method executed by at least one processor for performing mini-batching in deep learning by improving cache utilization is presented. The method includes temporally localizing a candidate clip in a video stream based on a natural language query, encoding a state, via a state processing module, into a joint visual and linguistic representation, feeding the joint visual and linguistic representation into a policy learning module, wherein the policy learning module employs a deep learning network to selectively extract features for select frames for video-text analysis and includes a fully connected linear layer and a long short-term memory (LSTM), outputting a value function from the LSTM, generating an action policy based on the encoded state, wherein the action policy is a probabilistic distribution over a plurality of possible actions given the encoded state, and rewarding policy actions that return clips matching the natural language query.
Type: Grant
Filed: March 16, 2020
Date of Patent: January 31, 2023
Inventors: Asim Kadav, Iain Melvin, Hans Peter Graf, Meera Hahn
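A minimal PyTorch sketch of the policy-learning module as described: a fully connected layer feeding an LSTM, with heads producing the value function and the action-policy distribution. The dimensions and the size of the action set are assumptions, not values from the patent:

```python
# Minimal sketch of a policy module: FC layer -> LSTM -> value head and
# action-policy head (softmax over candidate actions such as shifting or
# resizing the candidate clip).
import torch
import torch.nn as nn

class ClipPolicy(nn.Module):
    def __init__(self, state_dim=512, hidden_dim=256, num_actions=5):
        super().__init__()
        self.fc = nn.Linear(state_dim, hidden_dim)        # fully connected linear layer
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.value_head = nn.Linear(hidden_dim, 1)        # value function
        self.policy_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, joint_state, hidden=None):
        # joint_state: (batch, steps, state_dim) joint visual+linguistic encoding
        x = torch.relu(self.fc(joint_state))
        out, hidden = self.lstm(x, hidden)
        value = self.value_head(out[:, -1])                            # scalar value estimate
        policy = torch.softmax(self.policy_head(out[:, -1]), dim=-1)   # action distribution
        return policy, value, hidden

if __name__ == "__main__":
    model = ClipPolicy()
    state = torch.randn(2, 4, 512)     # batch of 2 episodes, 4 steps each
    policy, value, _ = model(state)
    print(policy.shape, value.shape)   # torch.Size([2, 5]) torch.Size([2, 1])
```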
-
Publication number: 20220327489
Abstract: Systems and methods for matching job descriptions with job applicants are provided. The method includes allocating each of one or more job applicants' curricula vitae (CVs) into sections; applying max-pooled word embeddings to each section of the job applicants' CVs; using concatenated max-pooling and average-pooling to compose the section embeddings into an applicant's CV representation; allocating each of one or more job position descriptions into specified sections; applying max-pooled word embeddings to each section of the job position descriptions; using concatenated max-pooling and average-pooling to compose the section embeddings into a job representation; calculating a cosine similarity between each of the job representations and each of the CV representations to perform job-to-applicant matching; and presenting an ordered list of the one or more job applicants or an ordered list of the one or more job position descriptions to a user.
Type: Application
Filed: April 6, 2022
Publication date: October 13, 2022
Inventors: Renqiang Min, Iain Melvin, Christopher A. White, Christopher Malon, Hans Peter Graf
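A toy sketch of the matching steps, assuming a stand-in embed() for real word embeddings; the per-section max- and average-pooling, concatenation, and cosine ranking follow the abstract:

```python
# Minimal sketch: pool word embeddings within each section (max + average),
# concatenate the section vectors into one document representation, and rank
# CV/job pairs by cosine similarity. embed() is a toy stand-in for a real
# word-embedding lookup.
import numpy as np

def embed(word, dim=16):
    rng = np.random.default_rng(abs(hash(word)) % (2**32))   # toy embedding lookup
    return rng.normal(size=dim)

def section_vector(words):
    vecs = np.stack([embed(w) for w in words])
    return np.concatenate([vecs.max(axis=0), vecs.mean(axis=0)])   # max + avg pooling

def document_vector(sections):
    # Assumes both CVs and job descriptions use the same ordered section layout.
    return np.concatenate([section_vector(words) for words in sections])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cv = [["python", "deep", "learning"], ["msc", "computer", "science"]]
job = [["machine", "learning", "engineer"], ["bsc", "computer", "science"]]
print(round(cosine(document_vector(cv), document_vector(job)), 3))
```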
-
Patent number: 11055605
Abstract: A computer-implemented method executed by a processor for training a neural network to recognize driving scenes from sensor data received from vehicle radar is presented. The computer-implemented method includes extracting substructures from the sensor data received from the vehicle radar to define a graph having a plurality of nodes and a plurality of edges, constructing a neural network for each extracted substructure, combining the outputs of each of the constructed neural networks for each of the plurality of edges into a single vector describing a driving scene of a vehicle, and classifying the single vector into a set of one or more dangerous situations involving the vehicle.
Type: Grant
Filed: October 17, 2017
Date of Patent: July 6, 2021
Inventors: Hans Peter Graf, Eric Cosatto, Iain Melvin
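A minimal PyTorch sketch of the overall shape: a small network applied to each extracted edge substructure, the per-edge outputs combined into a single scene vector, and a classifier over dangerous situations. Sharing one edge network and using sum-pooling are simplifications of the claimed per-substructure construction:

```python
# Minimal sketch: per-edge substructure network -> combined scene vector ->
# classifier over dangerous-situation categories. Sizes are assumptions.
import torch
import torch.nn as nn

class EdgeNet(nn.Module):
    def __init__(self, in_dim=8, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, edge_features):
        return self.net(edge_features)

class SceneClassifier(nn.Module):
    def __init__(self, edge_dim=8, hidden=32, num_classes=4):
        super().__init__()
        self.edge_net = EdgeNet(edge_dim, hidden)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, edges):
        # edges: (num_edges, edge_dim) substructures extracted from the radar graph
        per_edge = self.edge_net(edges)
        scene_vector = per_edge.sum(dim=0)      # combine per-edge outputs into one vector
        return self.classifier(scene_vector)    # scores over dangerous situations

edges = torch.randn(6, 8)                       # 6 substructures from one driving scene
print(SceneClassifier()(edges).shape)           # torch.Size([4])
```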
-
Publication number: 20200302294
Abstract: A computer-implemented method executed by at least one processor for performing mini-batching in deep learning by improving cache utilization is presented. The method includes temporally localizing a candidate clip in a video stream based on a natural language query, encoding a state, via a state processing module, into a joint visual and linguistic representation, feeding the joint visual and linguistic representation into a policy learning module, wherein the policy learning module employs a deep learning network to selectively extract features for select frames for video-text analysis and includes a fully connected linear layer and a long short-term memory (LSTM), outputting a value function from the LSTM, generating an action policy based on the encoded state, wherein the action policy is a probabilistic distribution over a plurality of possible actions given the encoded state, and rewarding policy actions that return clips matching the natural language query.
Type: Application
Filed: March 16, 2020
Publication date: September 24, 2020
Inventors: Asim Kadav, Iain Melvin, Hans Peter Graf, Meera Hahn
-
Patent number: 10503978
Abstract: Systems and methods for improving video understanding tasks based on higher-order object interactions (HOIs) between object features are provided. A plurality of frames of a video are obtained. A coarse-grained feature representation is generated by generating an image feature for each of a plurality of timesteps respectively corresponding to each of the frames and performing attention based on the image features. A fine-grained feature representation is generated by generating an object feature for each of the plurality of timesteps and generating the HOIs between the object features. The coarse-grained and the fine-grained feature representations are concatenated to generate a concatenated feature representation.
Type: Grant
Filed: May 14, 2018
Date of Patent: December 10, 2019
Assignee: NEC Corporation
Inventors: Asim Kadav, Chih-Yao Ma, Iain Melvin, Hans Peter Graf
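A compact PyTorch sketch of the two streams, assuming simple dot-product attention for the coarse-grained branch and a pairwise MLP over object features for the higher-order interactions; all dimensions are illustrative, not the patented architecture:

```python
# Minimal sketch: coarse-grained stream (per-frame image features attended over
# time) and fine-grained stream (pairwise object-feature interactions), then
# concatenation of the two representations.
import torch
import torch.nn as nn

class HOIVideoFeatures(nn.Module):
    def __init__(self, img_dim=256, obj_dim=128, hidden=128):
        super().__init__()
        self.attn_query = nn.Parameter(torch.randn(img_dim))
        self.pair_mlp = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU())

    def forward(self, image_feats, object_feats):
        # image_feats: (T, img_dim); object_feats: (T, N, obj_dim)
        attn = torch.softmax(image_feats @ self.attn_query, dim=0)    # (T,)
        coarse = (attn.unsqueeze(1) * image_feats).sum(dim=0)         # attended image feature

        T, N, D = object_feats.shape
        a = object_feats.unsqueeze(2).expand(T, N, N, D)
        b = object_feats.unsqueeze(1).expand(T, N, N, D)
        pairs = torch.cat([a, b], dim=-1)                             # all object pairs
        fine = self.pair_mlp(pairs).mean(dim=(0, 1, 2))               # pooled interactions

        return torch.cat([coarse, fine])                              # concatenated feature

feats = HOIVideoFeatures()(torch.randn(4, 256), torch.randn(4, 3, 128))
print(feats.shape)   # torch.Size([384])
```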
-
Patent number: 10495753
Abstract: A computer-implemented method and system are provided. The system includes an image capture device configured to capture image data relative to an ambient environment of a user. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different scenes of a natural environment. The processor is further configured to perform a user-perceptible action responsive to a detection and a localization of an object in an intended path of the user.
Type: Grant
Filed: August 29, 2017
Date of Patent: December 3, 2019
Assignee: NEC Corporation
Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
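A minimal PyTorch sketch of a two-branch network that fuses an image with aligned radar data and predicts an objectness score plus a location in map space, after which a user-perceptible action could be triggered. The architecture and sizes are assumptions, not the patented CNN:

```python
# Minimal sketch: separate image and radar branches, fused features, and a head
# predicting [objectness, x, y] in map space; a high score triggers an alert.
import torch
import torch.nn as nn

class ImageRadarLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.radar_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 3)   # [objectness, x, y] in real-world map space

    def forward(self, image, radar):
        fused = torch.cat([self.img_branch(image), self.radar_branch(radar)], dim=1)
        out = self.head(fused)
        return torch.sigmoid(out[:, 0]), out[:, 1:]

model = ImageRadarLocalizer()
obj, loc = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64))
if obj.item() > 0.5:
    print("alert user: object at", loc.tolist())   # user-perceptible action
```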
-
Patent number: 10330787
Abstract: A computer-implemented method and system are provided for driving assistance. The system includes an image capture device configured to capture image data relative to an outward view from a motor vehicle. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different driving scenes of a natural driving environment. The processor is further configured to provide a user-perceptible object detection result to a user of the motor vehicle.
Type: Grant
Filed: August 29, 2017
Date of Patent: June 25, 2019
Assignee: NEC Corporation
Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
-
Patent number: 10296796
Abstract: A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
Type: Grant
Filed: April 4, 2017
Date of Patent: May 21, 2019
Assignee: NEC Corporation
Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
-
Publication number: 20190019037
Abstract: Systems and methods for improving video understanding tasks based on higher-order object interactions (HOIs) between object features are provided. A plurality of frames of a video are obtained. A coarse-grained feature representation is generated by generating an image feature for each of a plurality of timesteps respectively corresponding to each of the frames and performing attention based on the image features. A fine-grained feature representation is generated by generating an object feature for each of the plurality of timesteps and generating the HOIs between the object features. The coarse-grained and the fine-grained feature representations are concatenated to generate a concatenated feature representation.
Type: Application
Filed: May 14, 2018
Publication date: January 17, 2019
Inventors: Asim Kadav, Chih-Yao Ma, Iain Melvin, Hans Peter Graf
-
Publication number: 20180307967
Abstract: A computer-implemented method executed by a processor for training a neural network to recognize driving scenes from sensor data received from vehicle radar is presented. The computer-implemented method includes extracting substructures from the sensor data received from the vehicle radar to define a graph having a plurality of nodes and a plurality of edges, constructing a neural network for each extracted substructure, combining the outputs of each of the constructed neural networks for each of the plurality of edges into a single vector describing a driving scene of a vehicle, and classifying the single vector into a set of one or more dangerous situations involving the vehicle.
Type: Application
Filed: October 17, 2017
Publication date: October 25, 2018
Inventors: Hans Peter Graf, Eric Cosatto, Iain Melvin
-
Publication number: 20180082137
Abstract: A computer-implemented method and system are provided for driving assistance. The system includes an image capture device configured to capture image data relative to an outward view from a motor vehicle. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different driving scenes of a natural driving environment. The processor is further configured to provide a user-perceptible object detection result to a user of the motor vehicle.
Type: Application
Filed: August 29, 2017
Publication date: March 22, 2018
Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
-
Publication number: 20180081053
Abstract: A computer-implemented method and system are provided. The system includes an image capture device configured to capture image data relative to an ambient environment of a user. The system further includes a processor configured to detect and localize objects, in a real-world map space, from the image data using a trainable object localization Convolutional Neural Network (CNN). The CNN is trained to detect and localize the objects from image and radar pairs that include the image data and radar data for different scenes of a natural environment. The processor is further configured to perform a user-perceptible action responsive to a detection and a localization of an object in an intended path of the user.
Type: Application
Filed: August 29, 2017
Publication date: March 22, 2018
Inventors: Iain Melvin, Eric Cosatto, Igor Durdanovic, Hans Peter Graf
-
Publication number: 20170293815
Abstract: A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
Type: Application
Filed: April 4, 2017
Publication date: October 12, 2017
Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
-
Publication number: 20170293837
Abstract: A computer-implemented method for training a deep neural network to recognize traffic scenes (TSs) from multi-modal sensors and knowledge data is presented. The computer-implemented method includes receiving data from the multi-modal sensors and the knowledge data and extracting feature maps from the multi-modal sensors and the knowledge data by using a traffic participant (TP) extractor to generate a first set of data, using a static objects extractor to generate a second set of data, and using an additional information extractor to generate a third set of data. The computer-implemented method further includes training the deep neural network, with training data, to recognize the TSs from a viewpoint of a vehicle.
Type: Application
Filed: April 4, 2017
Publication date: October 12, 2017
Inventors: Eric Cosatto, Iain Melvin, Hans Peter Graf
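A toy sketch of how the three extractors might rasterize their outputs into feature maps that are stacked as network input; every shape, extractor detail, and normalization constant here is an illustrative assumption:

```python
# Minimal sketch: traffic-participant, static-object, and additional-information
# feature maps stacked into a multi-channel input for the deep network.
import numpy as np

def traffic_participant_map(detections, grid=(64, 64)):
    fmap = np.zeros(grid)
    for x, y in detections:                       # rasterize detected vehicles/pedestrians
        fmap[int(y * grid[0]), int(x * grid[1])] = 1.0
    return fmap

def static_objects_map(lane_mask):
    return lane_mask.astype(np.float32)           # e.g. lanes or signs from knowledge data

def additional_info_map(speed_limit, grid=(64, 64)):
    return np.full(grid, speed_limit / 130.0)     # broadcast scalar knowledge as a channel

def build_input(detections, lane_mask, speed_limit):
    return np.stack([traffic_participant_map(detections),
                     static_objects_map(lane_mask),
                     additional_info_map(speed_limit)])   # (3, 64, 64) network input

x = build_input([(0.2, 0.4), (0.7, 0.1)], np.zeros((64, 64)), 50)
print(x.shape)   # (3, 64, 64)
```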
-
Patent number: 9336495
Abstract: Semantic indexing methods and systems are disclosed. One such method is directed to training a semantic indexing model by employing an expanded query. The query can be expanded by merging the query with documents that are relevant to the query for purposes of compensating for a lack of training data. In accordance with another exemplary aspect, time difference features can be incorporated into a semantic indexing model to account for changes in query distributions over time.
Type: Grant
Filed: October 28, 2013
Date of Patent: May 10, 2016
Assignee: NEC Corporation
Inventors: Bing Bai, Christopher Malon, Iain Melvin
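A small sketch of the query-expansion idea, merging the query with terms from documents known to be relevant and attaching a time-difference feature; the weighting scheme and feature layout are assumptions, not the patented model:

```python
# Minimal sketch: expand a query with frequent terms from relevant documents,
# then build a feature vector that includes a time-difference feature so a
# ranking model could account for drift in query distributions.
from collections import Counter

def expand_query(query_terms, relevant_docs, top_k=5):
    counts = Counter(t for doc in relevant_docs for t in doc)
    expansion = [t for t, _ in counts.most_common(top_k) if t not in query_terms]
    return list(query_terms) + expansion

def features(expanded_query, document, query_time, doc_time):
    overlap = len(set(expanded_query) & set(document))
    time_diff = abs(query_time - doc_time)        # time difference feature (e.g. days)
    return [overlap, time_diff]

query = ["flu", "symptoms"]
relevant = [["flu", "fever", "cough"], ["fever", "headache", "cough"]]
expanded = expand_query(query, relevant)
print(expanded)                                    # ['flu', 'symptoms', 'fever', 'cough', ...]
print(features(expanded, ["fever", "cough", "rest"], query_time=120, doc_time=90))
```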