Patents by Inventor Mohamed R. Amer
Mohamed R. Amer is a named inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11430171
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using generative adversarial networks (GANs) to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Grant
Filed: December 21, 2018
Date of Patent: August 30, 2022
Assignee: SRI International
Inventors: Mohamed R. Amer, Timothy J. Meo, Xiao Lin
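The abstract does not disclose any implementation details, but the GAN-based motion generation it mentions can be illustrated with a rough, hypothetical sketch: a generator conditioned on a text embedding of the scene description emits a sequence of joint positions, and a discriminator scores the result. All names, dimensions, and the single-layer architecture below are illustrative assumptions, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the patent): a text embedding of the
# scene description conditions a generator that emits FRAMES frames of
# 2-D joint positions; a discriminator scores the motion's realism.
TEXT_DIM, LATENT_DIM, FRAMES, JOINTS = 16, 8, 24, 15

# Generator weights: (noise + text embedding) -> motion sequence.
W_g1 = rng.normal(0, 0.1, (LATENT_DIM + TEXT_DIM, 64))
W_g2 = rng.normal(0, 0.1, (64, FRAMES * JOINTS * 2))

def generate(text_emb: np.ndarray) -> np.ndarray:
    """Sample a motion clip conditioned on a scene-description embedding."""
    z = rng.normal(size=LATENT_DIM)                     # latent noise
    h = np.tanh(np.concatenate([z, text_emb]) @ W_g1)   # hidden layer
    return (h @ W_g2).reshape(FRAMES, JOINTS, 2)

# Discriminator weights: motion sequence -> realism probability.
W_d = rng.normal(0, 0.1, (FRAMES * JOINTS * 2, 1))

def discriminate(motion: np.ndarray) -> float:
    """Score how 'real' a motion clip looks, as a probability."""
    logit = motion.reshape(-1) @ W_d
    return float(1.0 / (1.0 + np.exp(-logit[0])))

motion = generate(rng.normal(size=TEXT_DIM))
score = discriminate(motion)
```

In an actual GAN the two networks would be trained adversarially; the sketch only shows the forward data flow from description to animated motion.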
-
Patent number: 11328206
Abstract: Operations of computing devices are managed using one or more deep neural networks (DNNs), which may receive, as DNN inputs, data from sensors, instructions executed by processors, and/or outputs of other DNNs. One or more DNNs, which may be generative, can be applied to the DNN inputs to generate DNN outputs based on relationships between DNN inputs. The DNNs may include DNN parameters learned using one or more computing workloads. The DNN outputs may be, for example, control signals for managing operations of computing devices, predictions for use in generating control signals, warnings indicating an unacceptable state is predicted, and/or inputs to one or more neural networks. The signals enhance performance, efficiency, and/or security of one or more of the computing devices. DNNs can be dynamically trained to personalize operations by updating DNN weights or other parameters.
Type: Grant
Filed: June 16, 2017
Date of Patent: May 10, 2022
Assignee: SRI International
Inventors: Sek M. Chai, David C. Zhang, Mohamed R. Amer, Timothy J. Shields, Aswin Nadamuni Raghavan, Bhaskar Ramamurthy
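The DNN-driven device management described above can be pictured with a minimal sketch, assuming a made-up setup: a small feed-forward network maps a few sensor readings to a throttle control signal plus a warning probability. The sensor set, network shape, and thresholds are all hypothetical; the patent abstract specifies none of them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup (not from the patent): three normalized sensor
# readings (temperature, utilization, power draw) feed one hidden layer,
# which emits two outputs: a throttle control signal and the probability
# that an unacceptable device state is predicted.
SENSORS, HIDDEN = 3, 16
W1 = rng.normal(0, 0.3, (SENSORS, HIDDEN))
W2 = rng.normal(0, 0.3, (HIDDEN, 2))

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def manage(readings: np.ndarray) -> tuple[float, bool]:
    """Map sensor readings to (control signal, warning flag)."""
    h = np.maximum(readings @ W1, 0.0)            # ReLU hidden layer
    throttle, warn_p = sigmoid(h @ W2)
    return float(throttle), bool(warn_p > 0.5)    # signal + warning

# Example: normalized readings for temperature, utilization, power.
throttle, warning = manage(np.array([0.72, 0.90, 0.35]))
```

In the patented system the weights would be learned from computing workloads and could be updated online to personalize behavior; here they are random placeholders.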
-
Patent number: 11210836
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Grant
Filed: December 21, 2018
Date of Patent: December 28, 2021
Assignee: SRI International
Inventors: Mohamed R. Amer, Xiao Lin
-
Patent number: 10825227
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Grant
Filed: December 21, 2018
Date of Patent: November 3, 2020
Assignee: SRI International
Inventors: Mohamed R. Amer, Alex C. Tozzo, Dejan Jovanovic, Timothy J. Meo
-
Patent number: 10789755
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Grant
Filed: December 21, 2018
Date of Patent: September 29, 2020
Assignee: SRI International
Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
-
Publication number: 20190304156
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Application
Filed: December 21, 2018
Publication date: October 3, 2019
Inventors: Mohamed R. Amer, Alex C. Tozzo, Dejan Jovanovic, Timothy J. Meo
-
Publication number: 20190304104
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Application
Filed: December 21, 2018
Publication date: October 3, 2019
Inventors: Mohamed R. Amer, Xiao Lin
-
Publication number: 20190304157
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Application
Filed: December 21, 2018
Publication date: October 3, 2019
Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
-
Publication number: 20190303404
Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
Type: Application
Filed: December 21, 2018
Publication date: October 3, 2019
Inventors: Mohamed R. Amer, Timothy J. Meo, Xiao Lin
-
Publication number: 20190034814
Abstract: Technologies for analyzing multi-task multimodal data to detect multi-task multimodal events using deep multi-task representation learning are disclosed. A combined model with both generative and discriminative aspects is used to share information during both generative and discriminative processes. The technologies can be used to classify data and also to generate data from classification events. The data can then be used to morph data into a desired classification event.
Type: Application
Filed: March 17, 2017
Publication date: January 31, 2019
Inventors: Mohamed R. Amer, Timothy J. Shields, Amir Tamrakar, Max Ehlrich, Timur Almaev
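The combined generative/discriminative model mentioned above can be sketched as a shared encoder feeding two heads: a decoder optimized for reconstruction (generative) and a classifier optimized for cross-entropy (discriminative), so both objectives shape one representation. The dimensions, single-layer architecture, and equal loss weighting below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical dimensions (not from the publication): inputs of size
# IN_DIM are encoded into a shared CODE_DIM representation used by both
# a generative decoder and a discriminative classifier over CLASSES.
IN_DIM, CODE_DIM, CLASSES = 20, 6, 4
W_enc = rng.normal(0, 0.2, (IN_DIM, CODE_DIM))
W_dec = rng.normal(0, 0.2, (CODE_DIM, IN_DIM))
W_cls = rng.normal(0, 0.2, (CODE_DIM, CLASSES))

def losses(x: np.ndarray, label: int) -> tuple[float, float]:
    """Return (reconstruction loss, classification loss) for one sample."""
    code = np.tanh(x @ W_enc)                  # shared representation
    recon = code @ W_dec                       # generative head
    logits = code @ W_cls                      # discriminative head
    probs = np.exp(logits - logits.max())      # stable softmax
    probs /= probs.sum()
    recon_loss = float(np.mean((recon - x) ** 2))
    class_loss = float(-np.log(probs[label] + 1e-12))
    return recon_loss, class_loss

r, c = losses(rng.normal(size=IN_DIM), label=2)
total = r + c   # joint objective shared by both processes
```

Training on this joint objective is what lets the model both classify data and generate data consistent with a chosen class, as the abstract describes.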
-
Patent number: 9875445
Abstract: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
Type: Grant
Filed: February 25, 2015
Date of Patent: January 23, 2018
Assignee: SRI International
Inventors: Mohamed R. Amer, Behjat Siddiquie, Ajay Divakaran, Colleen Richey, Saad Khan, Harpreet S. Sawhney, Timothy J. Shields
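The short-term/long-term event pipeline in this abstract can be illustrated with a minimal hypothetical sketch: per-modality detectors score short-term events in each time window, and a temporal layer over those window scores recognizes a long-term event. The modalities, window counts, and fusion-by-summation scheme are illustrative assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup (not from the patent): audio and video features over
# WINDOWS time windows are scored for EVENTS short-term event types, then
# pooled over time to recognize one long-term event.
WINDOWS, AUDIO_DIM, VIDEO_DIM, EVENTS = 10, 12, 24, 5
W_a = rng.normal(0, 0.2, (AUDIO_DIM, EVENTS))
W_v = rng.normal(0, 0.2, (VIDEO_DIM, EVENTS))
W_t = rng.normal(0, 0.2, (EVENTS, EVENTS))

def recognize(audio: np.ndarray, video: np.ndarray) -> int:
    """Fuse per-window modality scores, then classify the long-term event."""
    # Short-term: score each modality per window, fuse by summation.
    short = np.tanh(audio @ W_a) + np.tanh(video @ W_v)   # (WINDOWS, EVENTS)
    # Long-term: relate windows by pooling over time, then re-score.
    long_term = short.mean(axis=0) @ W_t
    return int(np.argmax(long_term))

event = recognize(rng.normal(size=(WINDOWS, AUDIO_DIM)),
                  rng.normal(size=(WINDOWS, VIDEO_DIM)))
```

The temporal pooling stands in for whatever recurrent or hierarchical mechanism the patented deep architecture actually uses to relate short-term events across time.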
-
Publication number: 20170364792
Abstract: Operations of computing devices are managed using one or more deep neural networks (DNNs), which may receive, as DNN inputs, data from sensors, instructions executed by processors, and/or outputs of other DNNs. One or more DNNs, which may be generative, can be applied to the DNN inputs to generate DNN outputs based on relationships between DNN inputs. The DNNs may include DNN parameters learned using one or more computing workloads. The DNN outputs may be, for example, control signals for managing operations of computing devices, predictions for use in generating control signals, warnings indicating an unacceptable state is predicted, and/or inputs to one or more neural networks. The signals enhance performance, efficiency, and/or security of one or more of the computing devices. DNNs can be dynamically trained to personalize operations by updating DNN weights or other parameters.
Type: Application
Filed: June 16, 2017
Publication date: December 21, 2017
Inventors: Sek M. Chai, David C. Zhang, Mohamed R. Amer, Timothy J. Shields, Aswin Nadamuni Raghavan, Bhaskar Ramamurthy
-
Publication number: 20160071024
Abstract: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
Type: Application
Filed: February 25, 2015
Publication date: March 10, 2016
Inventors: Mohamed R. Amer, Behjat Siddiquie, Ajay Divakaran, Colleen Richey, Saad Khan, Harpreet S. Sawhney