Patents by Inventor Amir Tamrakar

Amir Tamrakar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220301290
    Abstract: This disclosure describes techniques for improving accuracy of machine learning systems in facial recognition. The techniques include generating, from a training image comprising a plurality of pixels and labeled with a plurality of facial landmarks, one or more facial contour heatmaps, wherein each of the one or more facial contour heatmaps depicts an estimate of a location of one or more facial contours within the training image. Techniques further include training a machine learning model to process the one or more facial contour heatmaps to predict the location of the one or more facial contours within the training image, wherein training the machine learning model comprises applying a loss function to minimize a distance between the predicted location of the one or more facial contours within the training image and corresponding contour data generated from facial landmarks of the plurality of facial landmarks with which the training image is labeled.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 22, 2022
    Inventors: Jihua Huang, Amir Tamrakar
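The abstract above describes rendering facial contours as heatmaps and training with a loss that minimizes the distance between predicted and landmark-derived contour locations. A minimal sketch of that idea, with all function names, grid sizes, and the choice of mean-squared-error loss being illustrative assumptions rather than details from the patent:

```python
import math

def contour_heatmap(points, size=8, sigma=1.0):
    """Render landmark points as Gaussian bumps on a size x size grid,
    approximating a heatmap of a facial contour's estimated location."""
    grid = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            for (px, py) in points:
                d2 = (x - px) ** 2 + (y - py) ** 2
                grid[y][x] = max(grid[y][x], math.exp(-d2 / (2 * sigma ** 2)))
    return grid

def mse_loss(pred, target):
    """Mean squared error between two heatmaps of the same shape; one way
    the abstract's distance-minimizing loss could be realized."""
    n = len(pred) * len(pred[0])
    return sum((p - t) ** 2
               for pr, tr in zip(pred, target)
               for p, t in zip(pr, tr)) / n

# Target heatmap built from labeled landmarks along (say) a jawline contour.
target = contour_heatmap([(2, 5), (4, 6), (6, 5)])
# A perfect prediction gives zero loss; a shifted contour does not.
assert mse_loss(target, target) == 0.0
assert mse_loss(contour_heatmap([(2, 4), (4, 5), (6, 4)]), target) > 0.0
```

In a real system the predicted heatmap would come from a trained network and the loss would drive gradient updates; this sketch only shows the target-generation and scoring steps.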
  • Patent number: 11279279
    Abstract: An evaluation engine has two or more modules to assist a driver of a vehicle. A driver drowsiness module analyzes monitored features of the driver to recognize two or more levels of drowsiness, evaluating drowsiness based on the driver's observed body language and facial analysis. While the driver is driving, the module analyzes live multi-modal sensor inputs against at least one of i) a trained artificial intelligence model and ii) a rules-based model to produce a driver drowsiness-level estimation. A driver assistance module provides one or more positive assistance mechanisms to return the driver to at or below a designated level of drowsiness.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: March 22, 2022
    Assignees: SRI International, Toyota Motor Corporation
    Inventors: Amir Tamrakar, Girish Acharya, Makoto Okabe, John James Byrnes
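The rules-based branch of the evaluation engine described above could, for example, fuse a few monitored features into a discrete drowsiness level. A minimal sketch, where the feature names, thresholds, and three-level scale are illustrative assumptions, not values taken from the patent:

```python
def drowsiness_level(eye_closure_ratio, head_nods_per_min):
    """Map two multimodal features to a level: 0 = alert,
    1 = mildly drowsy, 2 = drowsy. Thresholds are hypothetical."""
    score = 0
    if eye_closure_ratio > 0.15:   # eyes closed a notable fraction of the time
        score += 1
    if eye_closure_ratio > 0.30:   # eyes closed a large fraction of the time
        score += 1
    if head_nods_per_min > 3:      # frequent head nodding observed
        score += 1
    return min(score, 2)

assert drowsiness_level(0.05, 0) == 0   # alert
assert drowsiness_level(0.20, 1) == 1   # mildly drowsy
assert drowsiness_level(0.35, 5) == 2   # drowsy
```

The trained-model branch would replace these hand-set rules with a classifier over the same live sensor features.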
  • Publication number: 20210390492
    Abstract: In some examples, a computer-implemented collaboration assessment model identifies actions of each of two or more individuals depicted in video data; identifies, based at least on those actions, first behaviors at a first collaboration assessment level; identifies, likewise, second behaviors at a second collaboration assessment level different from the first; and generates and outputs, based at least on the first and second behaviors, an indication of at least one of an assessment of the collaboration effort of the two or more individuals or respective assessments of their individual contributions to that effort.
    Type: Application
    Filed: June 15, 2021
    Publication date: December 16, 2021
    Inventors: Swati Dhamija, Amir Tamrakar, Nonye M. Alozie, Elizabeth McBride, Ajay Divakaran, Anirudh Som, Sujeong Kim, Bladimir Lopez-Prado
  • Publication number: 20210129748
    Abstract: An evaluation engine has two or more modules to assist a driver of a vehicle. A driver drowsiness module analyzes monitored features of the driver to recognize two or more levels of drowsiness, evaluating drowsiness based on the driver's observed body language and facial analysis. While the driver is driving, the module analyzes live multi-modal sensor inputs against at least one of i) a trained artificial intelligence model and ii) a rules-based model to produce a driver drowsiness-level estimation. A driver assistance module provides one or more positive assistance mechanisms to return the driver to at or below a designated level of drowsiness.
    Type: Application
    Filed: December 19, 2017
    Publication date: May 6, 2021
    Inventors: Amir Tamrakar, Girish Acharya, Makoto Okabe, John James Byrnes
  • Publication number: 20210081056
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: December 1, 2020
    Publication date: March 18, 2021
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Patent number: 10884503
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: January 5, 2021
    Assignee: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Patent number: 10789755
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: September 29, 2020
    Assignee: SRI International
    Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
  • Patent number: 10769459
    Abstract: A method and a system are provided for monitoring driving conditions. The method includes receiving video data comprising video frames from one or more sensors, where the video frames may represent an interior or exterior of a vehicle; detecting and recognizing one or more features from the video data, where each feature is associated with at least one driving condition; extracting the one or more features from the video data; developing intermediate features by associating and aggregating the extracted features; and developing a semantic meaning for the at least one driving condition by utilizing the intermediate features and the extracted features.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: September 8, 2020
    Assignee: SRI International
    Inventors: Amir Tamrakar, Gregory Ho, David Salter, Jihua Huang
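The abstract above outlines a pipeline: per-frame features are extracted, aggregated into intermediate features across frames, and mapped to a semantic meaning. A minimal sketch of that flow, where every feature name, rule, and output label is an illustrative assumption rather than content from the patent:

```python
def extract_features(frame):
    """Per-frame feature detection; here each frame is a dict of raw readings."""
    return {"lane_offset": frame["lane_offset"], "wipers_on": frame["wipers_on"]}

def aggregate(features_per_frame):
    """Intermediate features: associate and aggregate raw features over time."""
    n = len(features_per_frame)
    return {
        "mean_lane_offset": sum(f["lane_offset"] for f in features_per_frame) / n,
        "wiper_fraction": sum(f["wipers_on"] for f in features_per_frame) / n,
    }

def semantic_meaning(inter):
    """Map intermediate features to a semantic driving condition."""
    if inter["wiper_fraction"] > 0.5 and abs(inter["mean_lane_offset"]) > 0.4:
        return "unsteady driving in rain"
    if abs(inter["mean_lane_offset"]) > 0.4:
        return "lane drift"
    return "normal driving"

frames = [{"lane_offset": 0.6, "wipers_on": 1}, {"lane_offset": 0.5, "wipers_on": 1}]
result = semantic_meaning(aggregate([extract_features(f) for f in frames]))
assert result == "unsteady driving in rain"
```

In the patented system the per-frame features would come from video detectors rather than pre-extracted readings; the point here is only the extract-aggregate-interpret structure.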
  • Publication number: 20190304157
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Application
    Filed: December 21, 2018
    Publication date: October 3, 2019
    Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
  • Patent number: 10268900
    Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: April 23, 2019
    Assignee: SRI International
    Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
  • Patent number: 10198509
    Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: February 5, 2019
    Assignee: SRI International
    Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
  • Publication number: 20190034814
    Abstract: Technologies for analyzing multi-task multimodal data to detect multi-task multimodal events using deep multi-task representation learning are disclosed. A combined model with both generative and discriminative aspects is used to share information during both generative and discriminative processes. The technologies can be used to classify data and also to generate data from classification events. The generated data can then be used to morph data into a desired classification event.
    Type: Application
    Filed: March 17, 2017
    Publication date: January 31, 2019
    Inventors: Mohamed R. Amer, Timothy J. Shields, Amir Tamrakar, Max Ehrlich, Timur Almaev
  • Publication number: 20180239975
    Abstract: A method and a system are provided for monitoring driving conditions. The method includes receiving video data comprising video frames from one or more sensors, where the video frames may represent an interior or exterior of a vehicle; detecting and recognizing one or more features from the video data, where each feature is associated with at least one driving condition; extracting the one or more features from the video data; developing intermediate features by associating and aggregating the extracted features; and developing a semantic meaning for the at least one driving condition by utilizing the intermediate features and the extracted features.
    Type: Application
    Filed: August 30, 2016
    Publication date: August 23, 2018
    Inventors: Amir Tamrakar, Gregory Ho, David Salter, Jihua Huang
  • Publication number: 20180189573
    Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
    Type: Application
    Filed: February 27, 2018
    Publication date: July 5, 2018
    Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
  • Patent number: 9904852
    Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: February 27, 2018
    Assignee: SRI International
    Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
  • Publication number: 20170160813
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: October 24, 2016
    Publication date: June 8, 2017
    Applicant: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Publication number: 20160154882
    Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
    Type: Application
    Filed: January 25, 2016
    Publication date: June 2, 2016
    Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
  • Patent number: 9244924
    Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
    Type: Grant
    Filed: January 9, 2013
    Date of Patent: January 26, 2016
    Assignee: SRI International
    Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
  • Publication number: 20140347475
    Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
    Type: Application
    Filed: May 23, 2014
    Publication date: November 27, 2014
    Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
  • Publication number: 20130282747
    Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
    Type: Application
    Filed: January 9, 2013
    Publication date: October 24, 2013
    Applicant: SRI International
    Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed