Patents by Inventor Michael John Wolverton

Michael John Wolverton has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11397462
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the retrieval of information and the correlation of the visual features with the stored knowledge.
    Type: Grant
    Filed: October 8, 2015
    Date of Patent: July 26, 2022
    Assignee: SRI International
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
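
    A minimal, hypothetical Python sketch of the kind of pipeline this abstract describes: knowledge entries are matched to detected visual features and filtered by the user's current input before any virtual element is displayed. All names (SceneFeature, KnowledgeEntry, select_overlays) and the keyword-overlap filter are illustrative assumptions, not the patented method.

        from dataclasses import dataclass

        @dataclass
        class SceneFeature:
            label: str    # e.g. "gauge", "valve"
            bbox: tuple   # (x, y, w, h) in video-frame coordinates

        @dataclass
        class KnowledgeEntry:
            topic: str
            text: str

        def select_overlays(features, knowledge, user_query):
            """Correlate stored knowledge with detected visual features,
            filtered by the user's multi-modal input (reduced here to text)."""
            query_terms = set(user_query.lower().split())
            overlays = []
            for feature in features:
                for entry in knowledge:
                    # Keep an entry only if it mentions the detected feature
                    # AND overlaps the user's current query.
                    if (feature.label in entry.topic.lower()
                            and query_terms & set(entry.text.lower().split())):
                        overlays.append({"anchor": feature.bbox, "text": entry.text})
            return overlays

        features = [SceneFeature("gauge", (120, 80, 40, 40)),
                    SceneFeature("valve", (300, 200, 60, 60))]
        knowledge = [KnowledgeEntry("pressure gauge", "Normal gauge reading is 30 to 50 psi"),
                     KnowledgeEntry("shutoff valve", "Turn the valve clockwise to close")]
        # Only gauge-related knowledge is overlaid for a gauge-related query.
        print(select_overlays(features, knowledge, "what is a normal gauge reading"))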
  • Patent number: 10824310
    Abstract: A computing system for virtual personal assistance includes technologies to, among other things, correlate an external representation of an object with a real-world view of the object and display virtual elements on the external representation and/or on the real-world view, in order to provide virtual personal assistance in a multi-step activity, or another activity, that involves the observation or handling of an object and a reference document.
    Type: Grant
    Filed: June 24, 2014
    Date of Patent: November 3, 2020
    Assignee: SRI International
    Inventors: Girish Acharya, Rakesh Kumar, Supun Samarasekera, Louise Yarnall, Michael John Wolverton, Zhiwei Zhu, Ryan Villamil, Vlad Branzoi, James F. Carpenter, Glenn A. Murray
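
    A hypothetical sketch of the "correlate a reference document with the live view" idea from this abstract: annotations authored against a reference diagram are projected onto the camera view through an estimated correspondence. The affine-transform correspondence model and all function names are assumptions for illustration, not the claimed method.

        def make_affine(scale, tx, ty):
            """Return a function mapping diagram coordinates to view coordinates."""
            def apply(point):
                x, y = point
                return (scale * x + tx, scale * y + ty)
            return apply

        def project_annotations(diagram_annotations, diagram_to_view):
            """Place virtual elements authored on the reference document onto the
            real-world view using the estimated correspondence."""
            return [{"label": a["label"], "view_xy": diagram_to_view(a["diagram_xy"])}
                    for a in diagram_annotations]

        # Annotations authored against the reference document (diagram pixels).
        annotations = [{"label": "Step 3: remove this screw", "diagram_xy": (40, 25)},
                       {"label": "Connector J2", "diagram_xy": (90, 60)}]

        # Correspondence assumed to have been estimated elsewhere (e.g. feature matching).
        diagram_to_view = make_affine(scale=2.0, tx=100, ty=50)
        for overlay in project_annotations(annotations, diagram_to_view):
            print(overlay)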
  • Patent number: 10573037
    Abstract: A method and apparatus for training and guiding users, comprising: generating a scene understanding based on video and audio input of a scene of a user performing a task in the scene; correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of the user's current activity; reasoning, based on the task understanding and the user's current state, about a next step for advancing the user toward completing one of the one or more goals of the task understanding; and overlaying the scene with an augmented reality view comprising one or more visual and audio representations of the next step to the user.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: February 25, 2020
    Assignee: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Girish Acharya, Michael John Wolverton, Necip Fazil Ayan, Zhiwei Zhu, Ryan Villamil
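
    A minimal sketch of the guidance loop this abstract outlines: scene understanding is correlated with a knowledge base to pick a task, the next step is reasoned from the user's current state, and an AR cue is rendered. The task model, step names, and completed-steps state are illustrative assumptions, not the patented system.

        TASK_KNOWLEDGE = {
            "replace filter": ["power off unit", "open access panel",
                               "remove old filter", "insert new filter",
                               "close access panel"],
        }

        def task_understanding(scene_labels):
            """Correlate coarse scene understanding (detected labels) with the
            knowledge base to pick the most plausible current task."""
            return "replace filter" if "filter" in scene_labels else None

        def next_step(task, completed_steps):
            """Reason about the next step that advances the user toward the goal."""
            for step in TASK_KNOWLEDGE[task]:
                if step not in completed_steps:
                    return step
            return None  # goal reached

        def render_overlay(step):
            """Stand-in for the AR overlay: emit the visual/audio cue for the step."""
            return f"[AR overlay] Next: {step}"

        scene_labels = {"hands", "filter", "access panel"}
        completed_steps = {"power off unit"}
        task = task_understanding(scene_labels)
        print(render_overlay(next_step(task, completed_steps)))
        # -> [AR overlay] Next: open access panel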
  • Publication number: 20160378861
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the retrieval of information and the correlation of the visual features with the stored knowledge.
    Type: Application
    Filed: October 8, 2015
    Publication date: December 29, 2016
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
  • Publication number: 20140310595
    Abstract: A computing system for virtual personal assistance includes technologies to, among other things, correlate an external representation of an object with a real-world view of the object and display virtual elements on the external representation and/or on the real-world view, in order to provide virtual personal assistance in a multi-step activity, or another activity, that involves the observation or handling of an object and a reference document.
    Type: Application
    Filed: June 24, 2014
    Publication date: October 16, 2014
    Inventors: Girish Acharya, Rakesh Kumar, Supun Samarasekera, Louise Yarnall, Michael John Wolverton, Zhiwei Zhu, Ryan Villamil, Vlad Branzoi, James F. Carpenter, Glenn A. Murray
  • Publication number: 20140176603
    Abstract: A method and apparatus for training and guiding users, comprising: generating a scene understanding based on video and audio input of a scene of a user performing a task in the scene; correlating the scene understanding with a knowledge base to produce a task understanding, comprising one or more goals, of the user's current activity; reasoning, based on the task understanding and the user's current state, about a next step for advancing the user toward completing one of the one or more goals of the task understanding; and overlaying the scene with an augmented reality view comprising one or more visual and audio representations of the next step to the user.
    Type: Application
    Filed: December 20, 2012
    Publication date: June 26, 2014
    Applicant: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Girish Acharya, Michael John Wolverton, Necip Fazil Ayan, Zhiwei Zhu, Ryan Villamil