Patents by Inventor Ajay Divakaran

Ajay Divakaran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170160813
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: October 24, 2016
    Publication date: June 8, 2017
    Applicant: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Patent number: 9563623
    Abstract: Embodiments of the present invention are directed towards methods and apparatus for generating a common operating picture of an event based on the event-specific information extracted from data collected from a plurality of electronic information sources. In some embodiments, a method for generating a common operating picture of an event includes: collecting data, comprising image data and textual data, from a plurality of electronic information sources; extracting information related to an event from the data by applying statistical analysis and semantic analysis, said extracted information comprising image descriptors, visual features, and categorization tags; aligning the extracted information to generate aligned information; recognizing event-specific information for the event based on the aligned information; and generating a common operating picture of the event based on the event-specific information.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: February 7, 2017
    Assignee: SRI International
    Inventors: Harpreet Sawhney, Jayakrishnan Eledath, Ajay Divakaran, Mayank Bansal, Hui Cheng
  • Publication number: 20160379062
    Abstract: A computer-implemented method for determining a vehicle type of a vehicle detected in an image is disclosed. An image having a detected vehicle is received. A number of vehicle models having salient feature points are projected onto the detected vehicle. A first set of features derived from each of the salient feature locations of the vehicle models is compared to a second set of features derived from corresponding salient feature locations of the detected vehicle to form a set of positive match scores (p-scores) and a set of negative match scores (n-scores). The detected vehicle is classified as one of the vehicle models based at least in part on the set of p-scores and the set of n-scores. (A hypothetical sketch of this scoring step appears after this listing.)
    Type: Application
    Filed: October 21, 2014
    Publication date: December 29, 2016
    Inventors: Saad Masood Khan, Hui Cheng, Dennis Lee Matthies, Harpreet Singh Sawhney, Sang-Hack Jung, Chris Broaddus, Bogdan Calin Mihai Matei, Ajay Divakaran
  • Publication number: 20160328384
    Abstract: Technologies to detect persuasive multimedia content by using affective and semantic concepts extracted from the audio-visual content as well as the sentiment of associated comments are disclosed. The multimedia content is analyzed and compared with a persuasiveness model.
    Type: Application
    Filed: October 2, 2015
    Publication date: November 10, 2016
    Inventors: Ajay Divakaran, Behjat Siddiquie, David Chisholm, Elizabeth Shriberg
  • Publication number: 20160154882
    Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
    Type: Application
    Filed: January 25, 2016
    Publication date: June 2, 2016
    Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
  • Publication number: 20160110433
    Abstract: Methods and apparatuses of the present invention generally relate to generating actionable data based on multimodal data from unsynchronized data sources. In an exemplary embodiment, the method comprises receiving multimodal data from one or more unsynchronized data sources; extracting concepts from the multimodal data, the concepts comprising at least one of objects, actions, scenes, and emotions; indexing the concepts for searchability; and generating actionable data based on the concepts. (A minimal sketch of such concept indexing appears after this listing.)
    Type: Application
    Filed: December 18, 2015
    Publication date: April 21, 2016
    Inventors: Harpreet Singh Sawhney, Jayakrishnan Eledath, Ajay Divakaran, Mayank Bansal, Hui Cheng
  • Publication number: 20160071024
    Abstract: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
    Type: Application
    Filed: February 25, 2015
    Publication date: March 10, 2016
    Inventors: Mohamed R. Amer, Behjat Siddiquie, Ajay Divakaran, Colleen Richey, Saad Khan, Harpreet S. Sawhney
  • Publication number: 20160063734
    Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
    Type: Application
    Filed: December 11, 2014
    Publication date: March 3, 2016
    Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
  • Publication number: 20160063692
    Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
    Type: Application
    Filed: December 11, 2014
    Publication date: March 3, 2016
    Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
  • Patent number: 9244924
    Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
    Type: Grant
    Filed: January 9, 2013
    Date of Patent: January 26, 2016
    Assignee: SRI International
    Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
  • Publication number: 20160004911
    Abstract: A computing system for recognizing salient events depicted in a video utilizes learning algorithms to detect audio and visual features of the video. The computing system identifies one or more salient events depicted in the video based on the audio and visual features.
    Type: Application
    Filed: September 4, 2015
    Publication date: January 7, 2016
    Inventors: Hui Cheng, Ajay Divakaran, Elizabeth Shriberg, Harpreet Singh Sawhney, Jingen Liu, Ishani Chakraborty, Omar Javed, David Chisolm, Behjat Siddiquie, Steven S. Weiner
  • Publication number: 20150254231
    Abstract: A computer-implemented method comprising: collecting data from a plurality of information sources; identifying a geographic location associated with the data and forming a corresponding event according to the geographic location; correlating the data and the event with one or more topics based at least partly on the identified geographic location; storing the correlated data and event; and inferring the associated geographic location if the data does not comprise explicit location information, including matching the data against a database of geo-referenced data. (A hypothetical sketch of this location-inference step appears after this listing.)
    Type: Application
    Filed: May 21, 2015
    Publication date: September 10, 2015
    Inventors: Harpreet Sawhney, Jayakrishnan Eledath, Ajay Divakaran, Mayank Bansal, Hui Cheng
  • Patent number: 9053194
    Abstract: A computer-implemented method comprising: collecting data from a plurality of information sources; identifying a geographic location associated with the data and forming a corresponding event according to the geographic location; correlating the data and the event with one or more topics based at least partly on the identified geographic location; storing the correlated data and event; and inferring the associated geographic location if the data does not comprise explicit location information, including matching the data against a database of geo-referenced data.
    Type: Grant
    Filed: May 31, 2012
    Date of Patent: June 9, 2015
    Assignee: SRI International
    Inventors: Harpreet Singh Sawhney, Jayakrishnan Eledath, Ajay Divakaran, Mayank Bansal, Hui Cheng
  • Patent number: 8913783
    Abstract: A computer-implemented method for determining a vehicle type of a vehicle detected in an image is disclosed. An image having a detected vehicle is received. A number of vehicle models having salient feature points are projected onto the detected vehicle. A first set of features derived from each of the salient feature locations of the vehicle models is compared to a second set of features derived from corresponding salient feature locations of the detected vehicle to form a set of positive match scores (p-scores) and a set of negative match scores (n-scores). The detected vehicle is classified as one of the vehicle models based at least in part on the set of p-scores and the set of n-scores.
    Type: Grant
    Filed: October 28, 2010
    Date of Patent: December 16, 2014
    Assignee: SRI International
    Inventors: Saad Masood Khan, Hui Cheng, Dennis Lee Matthies, Harpreet Singh Sawhney, Sang-Hack Jung, Chris Broaddus, Bogdan Calin Mihai Matei, Ajay Divakaran
  • Publication number: 20140347475
    Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
    Type: Application
    Filed: May 23, 2014
    Publication date: November 27, 2014
    Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
  • Patent number: 8860813
    Abstract: A computer-implemented method for matching objects is disclosed. At least two images are received, where a first of the images has a first target object and a second of the images has a second target object. At least one first patch from the first target object and at least one second patch from the second target object are extracted. A distance-based part encoding between each of the at least one first patch and the at least one second patch is constructed based upon a corresponding codebook of image parts, including at least one of part type and pose. A viewpoint of one of the at least one first patch is warped to a viewpoint of the at least one second patch. A parts-level similarity measure based on the view-invariant distance measure for each of the at least one first patch and the at least one second patch is applied to determine whether the first target object and the second target object are the same or different objects. (A hypothetical sketch of this patch-encoding and matching step appears after this listing.)
    Type: Grant
    Filed: December 11, 2012
    Date of Patent: October 14, 2014
    Assignee: SRI International
    Inventors: Sang-Hack Jung, Ajay Divakaran, Harpreet Singh Sawhney
  • Publication number: 20140212854
    Abstract: A multi-modal interaction modeling system can model a number of different aspects of a human interaction across one or more temporal interaction sequences. Some versions of the system can generate assessments of the nature or quality of the interaction or portions thereof, which can be used to, among other things, provide assistance to one or more of the participants in the interaction.
    Type: Application
    Filed: January 31, 2013
    Publication date: July 31, 2014
    Applicant: SRI International
    Inventors: Ajay Divakaran, Behjat Siddiquie, Saad Khan, Jeffrey Lubin, Harpreet S. Sawhney
  • Publication number: 20130282747
    Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
    Type: Application
    Filed: January 9, 2013
    Publication date: October 24, 2013
    Applicant: SRI International
    Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
  • Publication number: 20130260345
    Abstract: A method and system for analyzing at least one food item on a food plate is disclosed. A plurality of images of the food plate is received by an image-capturing device. A description of the at least one food item on the food plate is received by a recognition device. The description is at least one of a voice description and a text description. At least one processor extracts a list of food items from the description; classifies and segments the at least one food item from the list using color and texture features derived from the plurality of images; and estimates the volume of the classified and segmented at least one food item. The processor is also configured to estimate the caloric content of the at least one food item. (A hypothetical sketch of this pipeline appears after this listing.)
    Type: Application
    Filed: March 22, 2013
    Publication date: October 3, 2013
    Applicant: SRI International
    Inventors: Manika Puri, Zhiwei Zhu, Jeffrey Lubin, Tom Pschar, Ajay Divakaran, Harpreet Sawhney
  • Patent number: 8532863
    Abstract: A computer-implemented method for unattended detection of a current terrain to be traversed by a mobile device is disclosed. Visual input of the current terrain is received for a plurality of positions. Audio input corresponding to the current terrain is received for the plurality of positions. The visual input is fused with the audio input using a classifier. The type of the current terrain is classified with the classifier. The classifier may also be employed to predict the type of terrain proximal to the current terrain. The classifier is constructed using an expectation-maximization (EM) method. (A hypothetical sketch of such audio-visual fusion appears after this listing.)
    Type: Grant
    Filed: September 28, 2010
    Date of Patent: September 10, 2013
    Assignee: SRI International
    Inventors: Raia Hadsell, Supun Samarasekera, Ajay Divakaran
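
The following is a minimal, hypothetical sketch of the positive/negative match-score idea described in publication 20160379062 and patent 8913783: features at corresponding salient points of a candidate vehicle model and the detected vehicle are compared, per-point evidence for and against a match is accumulated as p-scores and n-scores, and the model with the best margin wins. The cosine-similarity scoring, the threshold, and all function names are assumptions for illustration, not the patented method.

```python
# Hypothetical sketch of p-score / n-score matching; not the patented method.
import numpy as np

def match_scores(model_feats, detected_feats, threshold=0.5):
    """Compare features at corresponding salient feature locations.

    model_feats, detected_feats: arrays of shape (num_salient_points, feat_dim).
    Returns (p_scores, n_scores): per-point evidence for and against a match.
    """
    # Cosine similarity at each corresponding salient point.
    num = np.sum(model_feats * detected_feats, axis=1)
    den = (np.linalg.norm(model_feats, axis=1) *
           np.linalg.norm(detected_feats, axis=1) + 1e-8)
    sim = num / den
    p_scores = np.clip(sim - threshold, 0.0, None)   # evidence for a match
    n_scores = np.clip(threshold - sim, 0.0, None)   # evidence against a match
    return p_scores, n_scores

def classify_vehicle(detected_feats, model_library):
    """Pick the vehicle model whose aggregate p-score most exceeds its n-score."""
    best_label, best_margin = None, -np.inf
    for label, model_feats in model_library.items():
        p, n = match_scores(model_feats, detected_feats)
        margin = p.sum() - n.sum()
        if margin > best_margin:
            best_label, best_margin = label, margin
    return best_label, best_margin

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    library = {"sedan": rng.normal(size=(12, 32)), "suv": rng.normal(size=(12, 32))}
    detected = library["suv"] + 0.05 * rng.normal(size=(12, 32))
    print(classify_vehicle(detected, library))
```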
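
A minimal sketch, under broad assumptions, of how concepts extracted from multimodal data (publication 20160110433) might be indexed for searchability with a simple inverted index. The stub extractor and the data layout are invented for illustration; a real system would derive objects, actions, scenes, and emotions from the underlying media.

```python
# Hypothetical concept-indexing sketch; extractor and data layout are invented.
from collections import defaultdict

def extract_concepts(item: dict) -> set[str]:
    """Stub extractor: real tags would come from vision, audio, and text models."""
    return set(item.get("tags", []))

def build_index(items: list[dict]) -> dict[str, list[int]]:
    """Inverted index: concept -> ids of items in which it was detected."""
    index = defaultdict(list)
    for item in items:
        for concept in extract_concepts(item):
            index[concept].append(item["id"])
    return index

if __name__ == "__main__":
    items = [{"id": 1, "tags": ["crowd", "cheering"]}, {"id": 2, "tags": ["vehicle"]}]
    print(build_index(items).get("crowd"))
```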
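
A hypothetical sketch of the location-inference step described in patent 9053194 and publication 20150254231: when an item carries no explicit location, its text is matched against a database of geo-referenced entries. The gazetteer contents and the substring-matching rule are illustrative assumptions only.

```python
# Hypothetical location inference; gazetteer entries and matching rule are made up.
GAZETTEER = {  # place name -> (lat, lon); illustrative sample entries
    "menlo park": (37.4530, -122.1817),
    "princeton": (40.3573, -74.6672),
}

def infer_location(item):
    """Use explicit coordinates when present; otherwise match against the gazetteer."""
    if "lat" in item and "lon" in item:
        return item["lat"], item["lon"]
    text = item.get("text", "").lower()
    for place, coords in GAZETTEER.items():
        if place in text:
            return coords
    return None  # leave the event un-localized

if __name__ == "__main__":
    print(infer_location({"text": "Road closure reported near Menlo Park this morning"}))
```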
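
A minimal sketch loosely following patent 8860813: each patch is encoded by its distances to a codebook of image parts, and a parts-level similarity over those encodings decides whether two target objects match. The encoding, the greedy matching, and the decision threshold are assumptions for illustration; the viewpoint-warping step is omitted.

```python
# Hypothetical distance-based part encoding against a codebook; not the patented method.
import numpy as np

def encode_patch(patch_feat, codebook):
    """Encode a patch as its vector of distances to each codebook part."""
    return np.linalg.norm(codebook - patch_feat, axis=1)

def parts_similarity(patches_a, patches_b, codebook):
    """Parts-level similarity between two sets of patch features (higher = more similar)."""
    codes_a = np.array([encode_patch(p, codebook) for p in patches_a])
    codes_b = np.array([encode_patch(p, codebook) for p in patches_b])
    # Greedy matching: each patch in A is paired with its closest encoding in B.
    dists = np.linalg.norm(codes_a[:, None, :] - codes_b[None, :, :], axis=2)
    return float(-dists.min(axis=1).mean())

def same_object(patches_a, patches_b, codebook, threshold=-5.0):
    """Decide same/different by thresholding the similarity (threshold is assumed)."""
    return parts_similarity(patches_a, patches_b, codebook) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    codebook = rng.normal(size=(16, 8))            # codebook of image parts
    obj = [rng.normal(size=8) for _ in range(5)]   # patch features of one object
    print(same_object(obj, [p + 0.01 for p in obj], codebook))
```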
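
A hypothetical pipeline skeleton for the food-plate analysis of publication 20130260345: parse the spoken or typed description into a food list, segment and classify the items in the images, estimate volume, and convert to calories. The calorie table, the constant portion size, and the function bodies are made-up placeholders, not the described system.

```python
# Hypothetical food-plate analysis skeleton; all values and bodies are placeholders.
from dataclasses import dataclass

CALORIES_PER_ML = {"rice": 1.3, "chicken": 1.9, "salad": 0.2}  # illustrative only

@dataclass
class FoodItem:
    name: str
    volume_ml: float = 0.0

    @property
    def calories(self) -> float:
        return self.volume_ml * CALORIES_PER_ML.get(self.name, 1.0)

def extract_food_list(description: str) -> list[FoodItem]:
    """Naive parse of the voice/text description: keep words found in the calorie table."""
    return [FoodItem(w) for w in description.lower().split() if w in CALORIES_PER_ML]

def estimate_volumes(items: list[FoodItem], plate_images) -> list[FoodItem]:
    """Placeholder for color/texture segmentation and volume estimation from images."""
    for item in items:
        item.volume_ml = 150.0  # assumed constant portion; stands in for real estimation
    return items

if __name__ == "__main__":
    items = estimate_volumes(extract_food_list("grilled chicken with rice"), plate_images=[])
    print({i.name: round(i.calories, 1) for i in items})
```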
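
A minimal sketch in the spirit of patent 8532863: visual and audio features for each position are fused by concatenation and scored with per-terrain Gaussian mixtures fitted by expectation-maximization. scikit-learn's GaussianMixture stands in for the EM-trained classifier, and the random features are placeholders for real sensor data.

```python
# Hypothetical audio-visual terrain classification; features and model choice are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fuse(visual_feats, audio_feats):
    """Early fusion: stack visual and audio features for each position."""
    return np.hstack([visual_feats, audio_feats])

def train_terrain_models(labeled_fused_feats, n_components=2):
    """Fit one EM-trained mixture model per terrain type."""
    return {terrain: GaussianMixture(n_components, random_state=0).fit(feats)
            for terrain, feats in labeled_fused_feats.items()}

def classify(models, fused_feat):
    """Pick the terrain whose mixture assigns the highest log-likelihood."""
    scores = {t: m.score_samples(fused_feat.reshape(1, -1))[0] for t, m in models.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = {"gravel": fuse(rng.normal(0, 1, (50, 8)), rng.normal(0, 1, (50, 4))),
            "grass":  fuse(rng.normal(3, 1, (50, 8)), rng.normal(3, 1, (50, 4)))}
    models = train_terrain_models(data)
    print(classify(models, fuse(rng.normal(3, 1, (1, 8)), rng.normal(3, 1, (1, 4)))[0]))
```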