Patents by Inventor Jonathan T. Steed

Jonathan T. Steed has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130083011
    Abstract: Technology is described for representing a physical location at a previous time period with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. The personal A/V apparatus is identified as being within the physical location, and one or more objects in a display field of view of the near-eye, augmented reality display are automatically identified based on a three dimensional mapping of objects in the physical location. User input, which may be natural user interface (NUI) input, indicates a previous time period, and one or more 3D virtual objects associated with the previous time period are displayed from a user perspective associated with the display field of view. An object may be erased from the display field of view, and a camera effect may be applied when changing between display fields of view.
    Type: Application
    Filed: June 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, John Clavin, Jonathan T. Steed, Jason Scott
  • Patent number: 8334842
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: December 18, 2012
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Jonathan T. Steed, Darren A. Bennett, Amos D. Vance
  • Publication number: 20120165096
    Abstract: A computing system runs an application (e.g., a video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system determines that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game.
    Type: Application
    Filed: March 2, 2012
    Publication date: June 28, 2012
    Applicant: Microsoft Corporation
    Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
  • Publication number: 20110223995
    Abstract: A computing system runs an application (e.g., a video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system determines that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game.
    Type: Application
    Filed: March 12, 2010
    Publication date: September 15, 2011
    Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
  • Publication number: 20110175810
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Jonathan T. Steed, Darren A. Bennett, Amos D. Vance
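
The abstracts for patent 8334842 and publication 20110175810 describe evaluating a tracked person's intent to engage from factors such as location, stance, movement, and voice data, but do not disclose an implementation. The sketch below is one illustrative way such cues could be combined into an engagement decision; every name, weight, and threshold here is invented for illustration and is not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PersonObservation:
    """One frame of tracking data for a person in the depth camera's field of view."""
    distance_from_center: float  # meters from the central interaction area
    facing_camera: bool          # stance cue: oriented toward the depth camera
    speed_toward_center: float   # m/s; positive means approaching the central area
    voice_volume: float          # normalized 0..1

def engagement_score(obs: PersonObservation) -> float:
    """Combine location, stance, movement, and voice cues into a single score."""
    score = 0.0
    # Location: standing nearer the central area contributes more (0..1).
    score += max(0.0, 1.0 - obs.distance_from_center / 3.0)
    # Stance: facing the depth camera indicates willingness to interact.
    if obs.facing_camera:
        score += 1.0
    # Movement: approaching the central area contributes up to 1.0.
    score += max(0.0, min(obs.speed_toward_center, 1.0))
    # Voice: louder speech contributes up to 1.0.
    score += obs.voice_volume
    return score

def intends_to_engage(obs: PersonObservation, threshold: float = 2.0) -> bool:
    """Decide engagement by thresholding the combined score."""
    return engagement_score(obs) >= threshold
```

For example, a person half a meter from the central area, facing the camera, walking slowly inward, and speaking quietly would score above the illustrative threshold and be treated as engaging, while a distant, silent passer-by would not. A real system would, as the abstracts note, also consider location relative to other people and speech-recognized words, not just these four scalar cues.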