Patents by Inventor David Jangraw

David Jangraw has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11755108
Abstract: The present disclosure relates to systems and methods for providing a hybrid brain-computer-interface (hBCI) that can detect an individual's reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in and/or in response to objects, events, and/or actions in an environment by generating reinforcement signals for improving an AI agent controlling the environment, such as an autonomous vehicle. Although the disclosed subject matter is discussed within the context of an autonomous vehicle virtual reality game in the exemplary embodiments of the present disclosure, the disclosed system can be applied to any other environment in which the human user's sensory input is to be used to influence actions within the environment. Furthermore, the systems and methods disclosed can use neural, physiological, or behavioral signatures to inform deep-reinforcement-learning-based AI systems to enhance user comfort and trust in automation.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: September 12, 2023
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Paul Sajda, Sameer Saproo, Victor Shih, Sonakshi Bose Roy, David Jangraw
  • Publication number: 20190101985
Abstract: The present disclosure relates to systems and methods for providing a hybrid brain-computer-interface (hBCI) that can detect an individual's reinforcement signals (e.g., level of interest, arousal, emotional reactivity, cognitive fatigue, cognitive state, or the like) in and/or in response to objects, events, and/or actions in an environment by generating reinforcement signals for improving an AI agent controlling the environment, such as an autonomous vehicle. Although the disclosed subject matter is discussed within the context of an autonomous vehicle virtual reality game in the exemplary embodiments of the present disclosure, the disclosed system can be applied to any other environment in which the human user's sensory input is to be used to influence actions within the environment. Furthermore, the systems and methods disclosed can use neural, physiological, or behavioral signatures to inform deep-reinforcement-learning-based AI systems to enhance user comfort and trust in automation.
    Type: Application
    Filed: October 2, 2018
    Publication date: April 4, 2019
Applicant: The Trustees of Columbia University in the City of New York
    Inventors: Paul Sajda, Sameer Saproo, Victor Shih, Sonakshi Bose Roy, David Jangraw
  • Patent number: 9665824
    Abstract: Human visual perception is able to recognize a wide range of targets but has limited throughput. Machine vision can process images at a high speed but suffers from inadequate recognition accuracy of general target classes. Systems and methods are provided that combine the strengths of both systems and improve upon existing multimedia processing systems and methods to provide enhanced multimedia labeling, categorization, searching, and navigation.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: May 30, 2017
Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Shih-Fu Chang, Jun Wang, Paul Sajda, Eric Pohlmeyer, Barbara Hanna, David Jangraw
  • Publication number: 20140108302
    Abstract: Human visual perception is able to recognize a wide range of targets but has limited throughput. Machine vision can process images at a high speed but suffers from inadequate recognition accuracy of general target classes. Systems and methods are provided that combine the strengths of both systems and improve upon existing multimedia processing systems and methods to provide enhanced multimedia labeling, categorization, searching, and navigation.
    Type: Application
    Filed: October 22, 2013
    Publication date: April 17, 2014
    Inventors: Shih-Fu Chang, Jun Wang, Paul Sajda, Eric Pohlmeyer, Barbara Hanna, David Jangraw
  • Patent number: 8671069
    Abstract: Human visual perception is able to recognize a wide range of targets but has limited throughput. Machine vision can process images at a high speed but suffers from inadequate recognition accuracy of general target classes. Systems and methods are provided that combine the strengths of both systems and improve upon existing multimedia processing systems and methods to provide enhanced multimedia labeling, categorization, searching, and navigation.
    Type: Grant
    Filed: August 8, 2011
    Date of Patent: March 11, 2014
Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Shih-Fu Chang, Jun Wang, Paul Sajda, Eric Pohlmeyer, Barbara Hanna, David Jangraw
  • Publication number: 20120089552
    Abstract: Human visual perception is able to recognize a wide range of targets but has limited throughput. Machine vision can process images at a high speed but suffers from inadequate recognition accuracy of general target classes. Systems and methods are provided that combine the strengths of both systems and improve upon existing multimedia processing systems and methods to provide enhanced multimedia labeling, categorization, searching, and navigation.
    Type: Application
    Filed: August 8, 2011
    Publication date: April 12, 2012
    Inventors: Shih-Fu Chang, Jun Wang, Paul Sajda, Eric Pohlmeyer, Barbara Hanna, David Jangraw
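The hBCI abstracts above (patent 11755108 and publication 20190101985) describe using neural, physiological, or behavioral signatures as reinforcement signals for a learning agent. The core idea can be illustrated as reward shaping: a decoded human response is blended into the environment reward before an ordinary learning update. This is a minimal, illustrative sketch only; the function names, the three-category decoder, and all constants are hypothetical and are not drawn from the patents.

```python
# Sketch: blending a human-derived reinforcement signal into a tabular
# Q-learning update. ALPHA (learning rate), GAMMA (discount), and BLEND
# (weight on the human signal) are illustrative values.
ALPHA, GAMMA, BLEND = 0.1, 0.9, 0.5

def decode_human_signal(event):
    """Stand-in for an hBCI decoder: maps a detected human response
    (e.g., decoded from neural or physiological data) to a scalar in [-1, 1]."""
    return {"comfortable": 1.0, "neutral": 0.0, "startled": -1.0}[event]

def q_update(q, state, action, env_reward, human_event, next_state, actions):
    """One Q-learning step whose reward is shaped by the human signal."""
    shaped = env_reward + BLEND * decode_human_signal(human_event)
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (shaped + GAMMA * best_next - old)
    return q[(state, action)]
```

In an autonomous-driving setting like the one the abstract mentions, a decoded "startled" response would push the value of the action that preceded it downward, nudging the agent toward behavior the passenger finds comfortable.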
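The multimedia-labeling abstracts above (patents 9665824 and 8671069 and their publications) pair high-throughput but error-prone machine vision with accurate but low-throughput human perception. One simple way to picture that combination is score fusion: every image gets a machine score, and the few images a human has responded to get a strong correction. The function and weighting below are a hypothetical sketch of that general idea, not the method claimed in the patents.

```python
# Sketch: ranking images by fusing dense machine-vision scores with
# sparse human judgments. Images no human has seen keep their machine
# score; flagged images are boosted or suppressed by human_weight.
def fuse_scores(machine_scores, human_flags, human_weight=2.0):
    """Return image ids ranked by fused score, best first.

    machine_scores: {image_id: float}  -- a score for every image
    human_flags:    {image_id: bool}   -- sparse; True = human saw a target
    """
    fused = {}
    for img, m in machine_scores.items():
        flag = human_flags.get(img)  # None where no human response exists
        fused[img] = m if flag is None else m + human_weight * (1.0 if flag else -1.0)
    return sorted(fused, key=fused.get, reverse=True)
```

Under this toy fusion, a single human "target" response outranks even a confident machine false alarm, which is the throughput/accuracy trade the abstracts describe.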