Patents by Inventor Amir Tamrakar
Amir Tamrakar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220301290
Abstract: This disclosure describes techniques for improving accuracy of machine learning systems in facial recognition. The techniques include generating, from a training image comprising a plurality of pixels and labeled with a plurality of facial landmarks, one or more facial contour heatmaps, wherein each of the one or more facial contour heatmaps depicts an estimate of a location of one or more facial contours within the training image. Techniques further include training a machine learning model to process the one or more facial contour heatmaps to predict the location of the one or more facial contours within the training image, wherein training the machine learning model comprises applying a loss function to minimize a distance between the predicted location of the one or more facial contours within the training image and corresponding contour data generated from facial landmarks of the plurality of facial landmarks with which the training image is labeled.
Type: Application
Filed: March 15, 2022
Publication date: September 22, 2022
Inventors: Jihua Huang, Amir Tamrakar
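The heatmap-plus-loss idea in this abstract can be sketched briefly: render a landmark-derived contour as a Gaussian heatmap, then score a prediction against it. This is a minimal illustration only; the landmark format, heatmap size, and use of a plain mean-squared-error loss are assumptions, not the patent's actual method.

```python
import numpy as np

def contour_heatmap(landmarks, shape=(64, 64), sigma=2.0):
    """Render a heatmap whose peaks trace the contour implied by a
    sequence of (row, col) landmark points (hypothetical format)."""
    h, w = shape
    rows, cols = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape)
    for r, c in landmarks:
        blob = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, blob)
    return heat

def contour_loss(predicted, target):
    """Mean-squared distance between a predicted heatmap and the
    landmark-derived target -- the kind of quantity the abstract
    describes minimizing during training."""
    return float(np.mean((predicted - target) ** 2))

# Toy jawline landmarks (invented for the example).
jaw = [(40, 10), (48, 22), (50, 32), (48, 42), (40, 54)]
target = contour_heatmap(jaw)
loss_perfect = contour_loss(target, target)             # exact match
loss_off = contour_loss(np.zeros_like(target), target)  # empty prediction
```

A perfect prediction drives the loss to zero, while an empty heatmap is penalized in proportion to the target's mass.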
-
Patent number: 11279279
Abstract: An evaluation engine has two or more modules to assist a driver of a vehicle. A driver drowsiness module analyzes monitored features of the driver to recognize two or more levels of drowsiness of the driver of the vehicle. The driver drowsiness module evaluates drowsiness of the driver based on observed body language and facial analysis of the driver. The driver drowsiness module is configured to analyze live multi-modal sensor inputs from sensors against at least one of i) a trained artificial intelligence model and ii) a rules based model while the driver is driving the vehicle to produce an output comprising a driver drowsiness-level estimation. A driver assistance module provides one or more positive assistance mechanisms to the driver to return the driver to be at or above the designated level of drowsiness.
Type: Grant
Filed: December 19, 2017
Date of Patent: March 22, 2022
Assignees: SRI International, Toyota Motor Corporation
Inventors: Amir Tamrakar, Girish Acharya, Makoto Okabe, John James Byrnes
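A "rules based model" of the kind this abstract mentions can be sketched as a small fusion of multi-modal cues into a discrete drowsiness level, paired with a matching assistance mechanism. The feature names, thresholds, levels, and interventions below are illustrative assumptions, not the patent's actual rules.

```python
def drowsiness_level(eye_closure_ratio, head_nod_rate, blink_duration_s):
    """Fuse hypothetical multi-modal driver cues into a discrete level."""
    score = 0
    if eye_closure_ratio > 0.4:    # eyes closed a large fraction of the time
        score += 2
    elif eye_closure_ratio > 0.15:
        score += 1
    if head_nod_rate > 3:          # head nods per minute
        score += 1
    if blink_duration_s > 0.5:     # unusually slow blinks
        score += 1
    levels = ("alert", "mildly drowsy", "drowsy",
              "severely drowsy", "severely drowsy")
    return levels[min(score, 4)]

def assistance(level):
    """Pick a positive assistance mechanism for the estimated level."""
    return {
        "alert": None,
        "mildly drowsy": "suggest conversation or music",
        "drowsy": "ventilation burst and verbal alert",
        "severely drowsy": "urge driver to pull over",
    }[level]
```

In a real system the rule set would run continuously against live sensor streams, alongside (or instead of) a trained model, as the abstract describes.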
-
Publication number: 20210390492
Abstract: In some examples, a computer-implemented collaboration assessment model identifies actions of each of two or more individuals depicted in video data; identifies, based at least on the identified actions of each of the two or more individuals depicted in the video data, first behaviors at a first collaboration assessment level; identifies, based at least on the identified actions of each of the two or more individuals depicted in the video data, second behaviors at a second collaboration assessment level different from the first collaboration assessment level; and generates and outputs, based at least on the first behaviors at the first collaboration assessment level and the second behaviors at the second collaboration assessment level, an indication of at least one of an assessment of a collaboration effort of the two or more individuals or respective assessments of individual contributions of the two or more individuals to the collaboration effort.
Type: Application
Filed: June 15, 2021
Publication date: December 16, 2021
Inventors: Swati Dhamija, Amir Tamrakar, Nonye M. Alozie, Elizabeth McBride, Ajay Divakaran, Anirudh Som, Sujeong Kim, Bladimir Lopez-Prado
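The two-level structure the abstract outlines can be illustrated with a toy pipeline: observed actions map to individual behaviors (one assessment level), which are then pooled into a group-level collaboration indication (a second level). The action vocabulary, behavior labels, and scoring are invented for the example.

```python
# Level 1 (hypothetical): per-person action -> individual behavior.
INDIVIDUAL_BEHAVIOR = {
    "talking": "contributing ideas",
    "writing": "recording work",
    "looking_away": "disengaged",
}
BEHAVIOR_SCORE = {"contributing ideas": 1, "recording work": 1, "disengaged": -1}

def assess(actions_by_person):
    """actions_by_person: dict of person -> list of observed action labels.
    Returns per-person contribution scores and a group-level indication."""
    individual = {
        person: [INDIVIDUAL_BEHAVIOR[a] for a in acts]
        for person, acts in actions_by_person.items()
    }
    contributions = {
        person: sum(BEHAVIOR_SCORE[b] for b in behaviors)
        for person, behaviors in individual.items()
    }
    # Level 2: aggregate individual contributions into a group assessment.
    group_score = sum(contributions.values())
    verdict = "effective collaboration" if group_score > 0 else "needs support"
    return contributions, verdict

contrib, verdict = assess({"ann": ["talking", "writing"],
                           "ben": ["looking_away"]})
```

The real model derives these levels from video rather than symbolic action labels; the sketch only shows how two assessment levels can feed one combined output.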
-
Publication number: 20210129748
Abstract: An evaluation engine has two or more modules to assist a driver of a vehicle. A driver drowsiness module analyzes monitored features of the driver to recognize two or more levels of drowsiness of the driver of the vehicle. The driver drowsiness module evaluates drowsiness of the driver based on observed body language and facial analysis of the driver. The driver drowsiness module is configured to analyze live multi-modal sensor inputs from sensors against at least one of i) a trained artificial intelligence model and ii) a rules based model while the driver is driving the vehicle to produce an output comprising a driver drowsiness-level estimation. A driver assistance module provides one or more positive assistance mechanisms to the driver to return the driver to be at or above the designated level of drowsiness.
Type: Application
Filed: December 19, 2017
Publication date: May 6, 2021
Inventors: Amir Tamrakar, Girish Acharya, Makoto Okabe, John James Byrnes
-
Publication number: 20210081056
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
Type: Application
Filed: December 1, 2020
Publication date: March 18, 2021
Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
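The interplay the abstract describes between semantic information, a context-specific framework, and behavioral models can be sketched with a toy intent resolver. The "banking" framework, its intents and trigger terms, and the history-based tie-breaking are all hypothetical; they stand in for the richer multimodal machinery the patent covers.

```python
# Hypothetical context-specific framework: intent -> trigger terms.
FRAMEWORKS = {
    "banking": {
        "check_balance": {"balance", "account"},
        "send_money": {"transfer", "send"},
    },
}

def current_intent(semantic_terms, framework_name, history=()):
    """Score each intent in the active framework by term overlap, letting
    prior interpretations (a stand-in for behavioral models) nudge
    ambiguous input toward recently seen intents."""
    framework = FRAMEWORKS[framework_name]
    scores = {intent: float(len(set(semantic_terms) & terms))
              for intent, terms in framework.items()}
    for past_intent in history:
        if past_intent in scores:
            scores[past_intent] += 0.5
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Ambiguous input: one term per intent; history breaks the tie.
resolved = current_intent(["account", "transfer"], "banking",
                          history=["send_money"])
```

In the patent's terms, `semantic_terms` plays the role of semantic information extracted from sensory input, and `history` crudely models interpretations of previously provided semantic information.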
-
Patent number: 10884503
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
Type: Grant
Filed: October 24, 2016
Date of Patent: January 5, 2021
Assignee: SRI International
Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
-
Patent number: 10789755
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Grant
Filed: December 21, 2018
Date of Patent: September 29, 2020
Assignee: SRI International
Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
-
Patent number: 10769459
Abstract: A method and a system are provided for monitoring driving conditions. The method includes receiving video data comprising video frames from one or more sensors where the video frames may represent an interior or exterior of a vehicle, detecting and recognizing one or more features from the video data where each feature is associated with at least one driving condition, extracting the one or more features from the video data, developing intermediate features by associating and aggregating the extracted features among the extracted features, and developing a semantic meaning for the at least one driving condition by utilizing the intermediate features and the extracted one or more features.
Type: Grant
Filed: August 30, 2016
Date of Patent: September 8, 2020
Assignee: SRI International
Inventors: Amir Tamrakar, Gregory Ho, David Salter, Jihua Huang
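The staged pipeline in this abstract can be sketched end to end: per-frame features are extracted, aggregated into intermediate features across frames, and then mapped to a semantic driving-condition label. The feature names, the aggregation rule, and the thresholds are illustrative assumptions; a real system would derive the per-frame features from vision models over the video.

```python
def extract_features(frame):
    """Stand-in for per-frame feature detection over interior/exterior
    video; here `frame` is just a dict of pre-detected cues."""
    return {"eyes_closed": frame.get("eyes_closed", False),
            "lane_offset": frame.get("lane_offset", 0.0)}

def intermediate_features(per_frame):
    """Aggregate extracted features across frames."""
    n = len(per_frame)
    return {
        "eyes_closed_frac": sum(f["eyes_closed"] for f in per_frame) / n,
        "mean_abs_lane_offset": sum(abs(f["lane_offset"]) for f in per_frame) / n,
    }

def semantic_meaning(inter):
    """Map intermediate features to a semantic driving condition."""
    if inter["eyes_closed_frac"] > 0.3 and inter["mean_abs_lane_offset"] > 0.5:
        return "drowsy driver drifting out of lane"
    if inter["eyes_closed_frac"] > 0.3:
        return "driver fatigue"
    if inter["mean_abs_lane_offset"] > 0.5:
        return "erratic lane keeping"
    return "normal driving"

frames = [{"eyes_closed": True, "lane_offset": 0.8},
          {"eyes_closed": True, "lane_offset": -0.7},
          {"eyes_closed": False, "lane_offset": 0.6}]
condition = semantic_meaning(intermediate_features(
    [extract_features(f) for f in frames]))
```

The point of the intermediate layer is that the semantic judgment rests on aggregated evidence rather than any single frame.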
-
Publication number: 20190304157
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Application
Filed: December 21, 2018
Publication date: October 3, 2019
Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
-
Patent number: 10268900
Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Type: Grant
Filed: February 27, 2018
Date of Patent: April 23, 2019
Assignee: SRI International
Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
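One idea from this abstract, keeping a track alive through an occlusion and re-associating it when the object reappears, can be shown with a toy nearest-centroid tracker. The association rule, distance threshold, and occlusion budget are invented for the sketch; the patented system operates on real detections and multiple camera views.

```python
import math

class Tracker:
    def __init__(self, max_dist=30.0, max_missed=3):
        self.tracks = {}          # track id -> (last centroid, frames missed)
        self.next_id = 0
        self.max_dist = max_dist
        self.max_missed = max_missed

    def update(self, detections):
        """Greedily match detected centroids to existing tracks; keep
        unmatched tracks alive for a few frames (occlusion handling)."""
        assigned = {}
        unmatched = list(detections)
        for tid, (pos, missed) in list(self.tracks.items()):
            if unmatched:
                nearest = min(unmatched, key=lambda d: math.dist(d, pos))
                if math.dist(nearest, pos) <= self.max_dist:
                    self.tracks[tid] = (nearest, 0)
                    assigned[tid] = nearest
                    unmatched.remove(nearest)
                    continue
            # No match this frame: treat as occluded, up to a budget.
            if missed + 1 > self.max_missed:
                del self.tracks[tid]
            else:
                self.tracks[tid] = (pos, missed + 1)
        for det in unmatched:     # brand-new objects get new ids
            self.tracks[self.next_id] = (det, 0)
            assigned[self.next_id] = det
            self.next_id += 1
        return assigned

tracker = Tracker()
first = tracker.update([(0.0, 0.0)])       # new object, id 0
during_occlusion = tracker.update([])      # object hidden this frame
reacquired = tracker.update([(5.0, 5.0)])  # same id picked back up
```

The same keep-alive-and-reassociate logic, with appearance features instead of raw distance, is one common way trackers bridge gaps between camera fields of view.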
-
Patent number: 10198509
Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
Type: Grant
Filed: January 25, 2016
Date of Patent: February 5, 2019
Assignee: SRI International
Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
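Search over semantic representations, rather than manual tags, can be illustrated with a small example: each video is represented by scores over a concept vocabulary (as a detector bank might emit), and a query retrieves the most similar representation. The vocabulary, scores, and cosine-similarity ranking are assumptions made for the sketch.

```python
CONCEPTS = ["person", "ball", "cake", "crowd", "candles"]

def cosine(u, v):
    """Cosine similarity between two score vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

# Semantic representations: per-video concept scores (no manual tags).
library = {
    "birthday.mp4": [0.9, 0.1, 0.8, 0.6, 0.9],
    "soccer.mp4":   [0.9, 0.9, 0.0, 0.8, 0.0],
}

def search(query_terms):
    """Turn a query into a concept vector and rank the library by it."""
    q = [1.0 if c in query_terms else 0.0 for c in CONCEPTS]
    return max(library, key=lambda name: cosine(q, library[name]))

best = search({"cake", "candles"})
```

Because the representation is derived from the video content itself, the same library supports arbitrary concept queries without anyone having tagged the files.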
-
Publication number: 20190034814
Abstract: Technologies for analyzing multi-task multimodal data to detect multi-task multimodal events using deep multi-task representation learning are disclosed. A combined model with both generative and discriminative aspects is used to share information during both generative and discriminative processes. The technologies can be used to classify data and also to generate data from classification events. The data can then be used to morph data into a desired classification event.
Type: Application
Filed: March 17, 2017
Publication date: January 31, 2019
Inventors: Mohamed R. Amer, Timothy J. Shields, Amir Tamrakar, Max Ehrlich, Timur Almaev
-
Publication number: 20180239975
Abstract: A method and a system are provided for monitoring driving conditions. The method includes receiving video data comprising video frames from one or more sensors where the video frames may represent an interior or exterior of a vehicle, detecting and recognizing one or more features from the video data where each feature is associated with at least one driving condition, extracting the one or more features from the video data, developing intermediate features by associating and aggregating the extracted features among the extracted features, and developing a semantic meaning for the at least one driving condition by utilizing the intermediate features and the extracted one or more features.
Type: Application
Filed: August 30, 2016
Publication date: August 23, 2018
Inventors: Amir Tamrakar, Gregory Ho, David Salter, Jihua Huang
-
Publication number: 20180189573
Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Type: Application
Filed: February 27, 2018
Publication date: July 5, 2018
Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
-
Patent number: 9904852
Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Type: Grant
Filed: May 23, 2014
Date of Patent: February 27, 2018
Assignee: SRI International
Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
-
Publication number: 20170160813
Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
Type: Application
Filed: October 24, 2016
Publication date: June 8, 2017
Applicant: SRI International
Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
-
Publication number: 20160154882
Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
Type: Application
Filed: January 25, 2016
Publication date: June 2, 2016
Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
-
Patent number: 9244924
Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
Type: Grant
Filed: January 9, 2013
Date of Patent: January 26, 2016
Assignee: SRI International
Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed
-
Publication number: 20140347475
Abstract: A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Type: Application
Filed: May 23, 2014
Publication date: November 27, 2014
Inventors: Ajay Divakaran, Qian Yu, Amir Tamrakar, Harpreet Singh Sawhney, Jiejie Zhu, Omar Javed, Jingen Liu, Hui Cheng, Jayakrishnan Eledath
-
Publication number: 20130282747
Abstract: A complex video event classification, search and retrieval system can generate a semantic representation of a video or of segments within the video, based on one or more complex events that are depicted in the video, without the need for manual tagging. The system can use the semantic representations to, among other things, provide enhanced video search and retrieval capabilities.
Type: Application
Filed: January 9, 2013
Publication date: October 24, 2013
Applicant: SRI International
Inventors: Hui Cheng, Harpreet Singh Sawhney, Ajay Divakaran, Qian Yu, Jingen Liu, Amir Tamrakar, Saad Ali, Omar Javed