Patents by Inventor Mohamed R. Amer

Mohamed R. Amer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Where an abstract describes a concrete technique, a short, hedged code sketch illustrating the general idea follows the first entry for that technique.

  • Patent number: 11430171
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: August 30, 2022
    Assignee: SRI International
    Inventors: Mohamed R. Amer, Timothy J. Meo, Xiao Lin
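
The abstract above mentions using GANs to generate an animation of motion from an input such as a scene description. As a loose illustration of that general idea only, and not of the patented method, the following PyTorch sketch pairs a conditional generator with a discriminator; the embedding size, noise size, frame count, and joint count are arbitrary assumptions.

```python
# Minimal sketch of a conditional GAN that maps a scene-description embedding
# to a short motion sequence (frames x joint coordinates). All sizes are
# illustrative assumptions, not values taken from the patent.
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, FRAMES, JOINTS = 128, 32, 16, 15  # assumed sizes

class MotionGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM + NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, FRAMES * JOINTS * 2),  # 2-D joint positions per frame
        )

    def forward(self, text_emb, noise):
        out = self.net(torch.cat([text_emb, noise], dim=-1))
        return out.view(-1, FRAMES, JOINTS * 2)

class MotionDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM + FRAMES * JOINTS * 2, 256), nn.ReLU(),
            nn.Linear(256, 1),  # real/fake score for a (description, motion) pair
        )

    def forward(self, text_emb, motion):
        flat = motion.view(motion.size(0), -1)
        return self.net(torch.cat([text_emb, flat], dim=-1))

# One illustrative forward pass: sample noise, condition on the description
# embedding, and score the generated clip with the discriminator.
G, D = MotionGenerator(), MotionDiscriminator()
text_emb = torch.randn(4, TEXT_DIM)   # stand-in for an encoded scene description
noise = torch.randn(4, NOISE_DIM)
fake_motion = G(text_emb, noise)      # (batch, frames, joint coordinates)
score = D(text_emb, fake_motion)      # higher = judged more realistic
```

In a full text-to-animation pipeline the description would come from a learned text encoder and the discriminator would be trained against real motion data; here random tensors stand in for both.
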
  • Patent number: 11328206
    Abstract: Operations of computing devices are managed using one or more deep neural networks (DNNs), which may receive, as DNN inputs, data from sensors, instructions executed by processors, and/or outputs of other DNNs. One or more DNNs, which may be generative, can be applied to the DNN inputs to generate DNN outputs based on relationships between DNN inputs. The DNNs may include DNN parameters learned using one or more computing workloads. The DNN outputs may be, for example, control signals for managing operations of computing devices, predictions for use in generating control signals, warnings indicating that an unacceptable state is predicted, and/or inputs to one or more neural networks. The signals enhance performance, efficiency, and/or security of one or more of the computing devices. DNNs can be dynamically trained to personalize operations by updating DNN weights or other parameters.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: May 10, 2022
    Assignee: SRI International
    Inventors: Sek M. Chai, David C. Zhang, Mohamed R. Amer, Timothy J. Shields, Aswin Nadamuni Raghavan, Bhaskar Ramamurthy
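
The entry above describes DNNs that turn sensor data and other runtime signals into control signals, predictions, and warnings for computing devices. Below is a minimal sketch of that pattern, assuming a fictitious 24-channel sensor vector, a 4-value control output, and a single warning probability; it illustrates the general idea, not the patented system.

```python
# Minimal sketch of a DNN that maps a batch of sensor readings to a control
# signal and a warning probability. Input/output sizes are assumptions.
import torch
import torch.nn as nn

SENSOR_DIM, HIDDEN = 24, 64  # assumed: 24 sensor channels per reading

class DeviceController(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(SENSOR_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        )
        self.control_head = nn.Linear(HIDDEN, 4)   # e.g., clock/voltage/fan settings
        self.warning_head = nn.Linear(HIDDEN, 1)   # probability of an unacceptable state

    def forward(self, sensors):
        h = self.backbone(sensors)
        return self.control_head(h), torch.sigmoid(self.warning_head(h))

model = DeviceController()
reading = torch.randn(1, SENSOR_DIM)   # stand-in for one vector of sensor data
control, warning = model(reading)
if warning.item() > 0.9:               # arbitrary threshold for illustration
    print("predicted unacceptable state; adjusting operation", control.tolist())
```
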
  • Patent number: 11210836
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: December 28, 2021
    Assignee: SRI International
    Inventors: Mohamed R. Amer, Xiao Lin
  • Patent number: 10825227
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: November 3, 2020
    Assignee: SRI International
    Inventors: Mohamed R. Amer, Alex C. Tozzo, Dejan Jovanovic, Timothy J. Meo
  • Patent number: 10789755
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: September 29, 2020
    Assignee: SRI International
    Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
  • Publication number: 20190304156
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Application
    Filed: December 21, 2018
    Publication date: October 3, 2019
    Inventors: Mohamed R. Amer, Alex C. Tozzo, Dejan Jovanovic, Timothy J. Meo
  • Publication number: 20190304104
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Application
    Filed: December 21, 2018
    Publication date: October 3, 2019
    Inventors: Mohamed R. Amer, Xiao Lin
  • Publication number: 20190304157
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Application
    Filed: December 21, 2018
    Publication date: October 3, 2019
    Inventors: Mohamed R. Amer, Timothy J. Meo, Aswin Nadamuni Raghavan, Alex C. Tozzo, Amir Tamrakar, David A. Salter, Kyung-Yoon Kim
  • Publication number: 20190303404
    Abstract: This disclosure describes techniques that include generating, based on a description of a scene, a movie or animation that represents at least one possible version of a story corresponding to the description of the scene. This disclosure also describes techniques for training a machine learning model to generate predefined data structures from textual information, visual information, and/or other information about a story, an event, a scene, or a sequence of events or scenes within a story. This disclosure also describes techniques for using GANs to generate, from input, an animation of motion (e.g., an animation or a video clip). This disclosure also describes techniques for implementing an explainable artificial intelligence system that may provide end users with information (e.g., through a user interface) that enables an understanding of at least some of the decisions made by the AI system.
    Type: Application
    Filed: December 21, 2018
    Publication date: October 3, 2019
    Inventors: Mohamed R. Amer, Timothy J. Meo, Xiao Lin
  • Publication number: 20190034814
    Abstract: Technologies for analyzing multi-task multimodal data to detect multi-task multimodal events using deep multi-task representation learning are disclosed. A combined model with both generative and discriminative aspects is used to share information during both generative and discriminative processes. The technologies can be used to classify data and also to generate data from classification events. The generated data can then be morphed into a desired classification event.
    Type: Application
    Filed: March 17, 2017
    Publication date: January 31, 2019
    Inventors: Mohamed R. Amer, Timothy J. Shields, Amir Tamrakar, Max Ehlrich, Timur Almaev
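
The publication above describes a combined generative and discriminative model for multi-task multimodal data. The sketch below shows one common way such a hybrid can be wired, assuming audio and video feature vectors, a shared encoder, a classification head, and a decoder conditioned on the predicted class; the modalities, dimensions, and losses are assumptions for illustration, not details from the filing.

```python
# Minimal sketch of a hybrid generative/discriminative multimodal model: a
# shared encoder feeds both a classifier (discriminative path) and a decoder
# that reconstructs the inputs (generative path). Sizes are assumptions.
import torch
import torch.nn as nn

AUDIO_DIM, VIDEO_DIM, LATENT, CLASSES = 40, 512, 64, 8  # assumed feature sizes

class HybridMultimodalModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(AUDIO_DIM + VIDEO_DIM, 256), nn.ReLU(),
            nn.Linear(256, LATENT),
        )
        self.classifier = nn.Linear(LATENT, CLASSES)   # discriminative head
        self.decoder = nn.Sequential(                  # generative head
            nn.Linear(LATENT + CLASSES, 256), nn.ReLU(),
            nn.Linear(256, AUDIO_DIM + VIDEO_DIM),
        )

    def forward(self, audio, video):
        x = torch.cat([audio, video], dim=-1)
        z = self.encoder(x)
        logits = self.classifier(z)
        # Condition reconstruction on the predicted class distribution so data
        # can also be generated (or morphed) toward a chosen classification event.
        recon = self.decoder(torch.cat([z, logits.softmax(dim=-1)], dim=-1))
        return logits, recon

model = HybridMultimodalModel()
audio, video = torch.randn(2, AUDIO_DIM), torch.randn(2, VIDEO_DIM)
logits, recon = model(audio, video)
# Joint objective: classification loss plus reconstruction loss.
loss = nn.functional.cross_entropy(logits, torch.tensor([1, 3])) \
     + nn.functional.mse_loss(recon, torch.cat([audio, video], dim=-1))
```
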
  • Patent number: 9875445
    Abstract: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: January 23, 2018
    Assignee: SRI International
    Inventors: Mohamed R. Amer, Behjat Siddiquie, Ajay Divakaran, Colleen Richey, Saad Khan, Harpreet S. Sawhney, Timothy J. Shields
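
The patent above covers deep learning over the temporal components of multimodal data: detecting short-term events, relating them, and recognizing long-term events. As a rough sketch of that two-level structure, and not the patented architecture, the code below scores short-term events per time window and feeds the window embeddings to a GRU for a long-term event label; the two-modality setup and all dimensions are assumed.

```python
# Minimal sketch of a two-stage temporal model: a per-window encoder scores
# short-term multimodal events, and a GRU aggregates window embeddings to
# recognize a long-term event over the whole sequence. Sizes are assumptions.
import torch
import torch.nn as nn

AUDIO_DIM, VIDEO_DIM, HIDDEN, SHORT_EVENTS, LONG_EVENTS = 40, 512, 128, 6, 4

class TemporalEventModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.window_encoder = nn.Sequential(           # short-term, per-window features
            nn.Linear(AUDIO_DIM + VIDEO_DIM, HIDDEN), nn.ReLU(),
        )
        self.short_head = nn.Linear(HIDDEN, SHORT_EVENTS)
        self.temporal = nn.GRU(HIDDEN, HIDDEN, batch_first=True)  # relations over time
        self.long_head = nn.Linear(HIDDEN, LONG_EVENTS)           # long-term event

    def forward(self, audio_seq, video_seq):
        x = torch.cat([audio_seq, video_seq], dim=-1)  # (batch, windows, features)
        h = self.window_encoder(x)
        short_logits = self.short_head(h)              # per-window short-term event scores
        _, last = self.temporal(h)
        long_logits = self.long_head(last.squeeze(0))  # one long-term label per sequence
        return short_logits, long_logits

model = TemporalEventModel()
audio_seq = torch.randn(2, 10, AUDIO_DIM)   # 10 time windows of audio features
video_seq = torch.randn(2, 10, VIDEO_DIM)   # 10 aligned windows of video features
short_logits, long_logits = model(audio_seq, video_seq)
```
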
  • Publication number: 20170364792
    Abstract: Operations of computing devices are managed using one or more deep neural networks (DNNs), which may receive, as DNN inputs, data from sensors, instructions executed by processors, and/or outputs of other DNNs. One or more DNNs, which may be generative, can be applied to the DNN inputs to generate DNN outputs based on relationships between DNN inputs. The DNNs may include DNN parameters learned using one or more computing workloads. The DNN outputs may be, for example, control signals for managing operations of computing devices, predictions for use in generating control signals, warnings indicating that an unacceptable state is predicted, and/or inputs to one or more neural networks. The signals enhance performance, efficiency, and/or security of one or more of the computing devices. DNNs can be dynamically trained to personalize operations by updating DNN weights or other parameters.
    Type: Application
    Filed: June 16, 2017
    Publication date: December 21, 2017
    Inventors: Sek M. Chai, David C. Zhang, Mohamed R. Amer, Timothy J. Shields, Aswin Nadamuni Raghavan, Bhaskar Ramamurthy
  • Publication number: 20160071024
    Abstract: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
    Type: Application
    Filed: February 25, 2015
    Publication date: March 10, 2016
    Inventors: Mohamed R. Amer, Behjat Siddiquie, Ajay Divakaran, Colleen Richey, Saad Khan, Harpreet S. Sawhney