Patents by Inventor Brian Mount

Brian Mount has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9161012
    Abstract: Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device.
    Type: Grant
    Filed: November 17, 2011
    Date of Patent: October 13, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark Mihelich, Kevin Geisner, Mike Scavezze, Stephen Latta, Daniel McCulloch, Brian Mount
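The frame-rate split this abstract describes lends itself to a simple scheduler: skeleton frames go out every tick, the heavier surface data only every Nth tick, and the receiver fills the gaps from the latest skeleton. A minimal sketch follows; SURFACE_INTERVAL, make_receiver, the message layout, and the crude rigid-shift estimate via a "root" joint are all invented for illustration, not taken from the patent.

```python
SURFACE_INTERVAL = 4  # surface sent at 1/4 the skeleton frame rate

def send(frames, transmit):
    """frames yields (skeleton, surface); the skeleton goes out every
    frame, the heavier surface mesh only every SURFACE_INTERVAL frames."""
    for i, (skeleton, surface) in enumerate(frames):
        transmit({"kind": "skeleton", "frame": i, "skeleton": skeleton})
        if i % SURFACE_INTERVAL == 0:
            transmit({"kind": "surface", "frame": i,
                      "surface": surface, "skeleton": skeleton})

def make_receiver():
    state = {"surface": None, "ref_root": None}

    def on_message(msg):
        if msg["kind"] == "surface":
            state["surface"] = msg["surface"]
            state["ref_root"] = msg["skeleton"]["root"]
            return msg["surface"]
        if state["surface"] is None:
            return None
        # Estimate the missing surface frame from the newer skeleton:
        # here a crude rigid shift by the root joint's displacement
        # (a real system would skin each vertex to multiple joints).
        dx, dy, dz = (a - b for a, b in
                      zip(msg["skeleton"]["root"], state["ref_root"]))
        return [(x + dx, y + dy, z + dz) for x, y, z in state["surface"]]

    return on_message
```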
  • Publication number: 20150268821
    Abstract: Various embodiments relating to selection of a user interface object displayed on a graphical user interface based on eye gaze are disclosed. In one embodiment, a selection input may be received. A plurality of eye gaze samples at different times within a time window may be evaluated. The time window may be selected based on a time at which the selection input is detected. A user interface object may be selected based on the plurality of eye gaze samples.
    Type: Application
    Filed: March 20, 2014
    Publication date: September 24, 2015
    Inventors: Scott Ramsby, Tony Ambrus, Michael Scavezze, Abby Lin Lee, Brian Mount, Ian Douglas McIntyre, Aaron Mackay Burns, Russ McMackin, Katelyn Elizabeth Doran, Gerhard Schneider, Quentin Simon Charles Miller
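A minimal sketch of the windowed gaze evaluation described above, assuming rectangular object bounds and a simple most-samples-wins rule; the WINDOW length, field layout, and voting rule are illustrative assumptions rather than the patent's method.

```python
# On a selection input at time t_input, look back over a window of
# timestamped gaze samples and pick the object most samples fell on.
WINDOW = 0.30  # seconds of gaze history considered, ending at the input

def select_object(gaze_samples, objects, t_input):
    """gaze_samples: list of (timestamp, x, y); objects: dict mapping
    object id -> (xmin, ymin, xmax, ymax) screen rectangle."""
    votes = {}
    for t, x, y in gaze_samples:
        if t_input - WINDOW <= t <= t_input:
            for obj_id, (x0, y0, x1, y1) in objects.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    votes[obj_id] = votes.get(obj_id, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

Anchoring the window to the detected input time, rather than to the moment the selection is processed, compensates for the lag between looking at an object and committing to select it.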
  • Publication number: 20150237336
    Abstract: A method for displaying virtual imagery on a stereoscopic display system having a display matrix. The virtual imagery presents a surface of individually renderable loci viewable by an eye of the user. The method includes, for each locus of the viewable surface, illuminating a pixel of the display matrix. The illuminated pixel is chosen based on a pupil position of the eye as determined by the stereoscopic display system. For each locus of the viewable surface, a virtual image of the illuminated pixel is formed in a plane in front of the eye. The virtual image is positioned on a straight line passing through the locus, the plane, and the pupil position. In this manner, the virtual image tracks changes in the user's pupil position.
    Type: Application
    Filed: February 19, 2014
    Publication date: August 20, 2015
    Inventors: Roger Sebastian Sylvan, Arthur Tomlin, Daniel Joseph McCulloch, Brian Mount, Tony Ambrus
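The collinearity rule in this abstract reduces to a ray-plane intersection: the virtual image of the lit pixel must sit where the line from the pupil to the surface locus crosses the image plane. A geometry sketch, assuming eye-centred coordinates with z pointing into the scene and an invented pixels-per-unit mapping; none of the names come from the patent.

```python
def pixel_for_locus(locus, pupil, plane_z, px_per_unit=1000.0):
    """locus, pupil: (x, y, z) with z the distance in front of the eye;
    plane_z: depth of the plane where the virtual image is formed."""
    lx, ly, lz = locus
    ex, ey, ez = pupil
    t = (plane_z - ez) / (lz - ez)   # parameter along the pupil->locus ray
    ix = ex + t * (lx - ex)          # intersection with the image plane
    iy = ey + t * (ly - ey)
    # Map plane coordinates to integer pixel indices on the display matrix.
    return round(ix * px_per_unit), round(iy * px_per_unit)

# As the tracked pupil shifts, the same locus maps to a different pixel,
# so the virtual image follows the eye:
print(pixel_for_locus((0.0, 0.1, 2.0), (0.000, 0.0, 0.0), 0.5))  # (0, 25)
print(pixel_for_locus((0.0, 0.1, 2.0), (0.003, 0.0, 0.0), 0.5))  # (2, 25)
```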
  • Patent number: 9092600
    Abstract: Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
    Type: Grant
    Filed: November 5, 2012
    Date of Patent: July 28, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mike Scavezze, Jason Scott, Jonathan Steed, Ian McIntyre, Aaron Krauss, Daniel McCulloch, Stephen Latta, Kevin Geisner, Brian Mount
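The order-matching step at the heart of this scheme admits a one-function sketch. The feature identifiers and the exact-sequence comparison below are assumptions, since the abstract leaves the comparison itself unspecified.

```python
def authenticate(selected_sequence, stored_order):
    """Both arguments are sequences of augmented reality feature ids;
    authentication succeeds only if the user selected the features in
    exactly the predefined order."""
    return list(selected_sequence) == list(stored_order)

stored = ["red_sphere", "cube", "star"]  # predefined for this user
print(authenticate(["red_sphere", "cube", "star"], stored))  # True
print(authenticate(["cube", "red_sphere", "star"], stored))  # False
```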
  • Patent number: 9041739
    Abstract: Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: May 26, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Kevin Geisner, Brian Mount, Daniel McCulloch, Cameron Brown, Jeffrey Alan Kohler, Wei Zhang, Ryan Hastings, Darren Bennett, Ian McIntyre
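One plausible reading of the space-based matching is a greedy pairing over reported room dimensions, so a shared virtual arena fits both players' physical spaces. This sketch assumes each request carries a (width, depth) measurement and an invented tolerance; the patent does not pin down the matching rule.

```python
def match_users(requests, tolerance=0.5):
    """requests: list of (user_id, (width_m, depth_m)). Returns greedy
    pairs of users whose room dimensions differ by <= tolerance metres."""
    pairs, unmatched = [], list(requests)
    while len(unmatched) > 1:
        uid, (w, d) = unmatched.pop(0)
        for i, (other, (w2, d2)) in enumerate(unmatched):
            if abs(w - w2) <= tolerance and abs(d - d2) <= tolerance:
                pairs.append((uid, other))
                unmatched.pop(i)
                break
    return pairs

print(match_users([("a", (3.0, 4.0)), ("b", (5.0, 5.0)), ("c", (3.2, 4.1))]))
# -> [('a', 'c')]: the two users with similarly sized rooms are matched
```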
  • Patent number: 8957858
    Abstract: Systems and methods for multi-platform motion interactivity are provided. The system includes a motion-sensing subsystem, a display subsystem including a display, a logic subsystem, and a data-holding subsystem containing instructions executable by the logic subsystem. The system is configured to display a displayed scene on the display; receive a dynamically-changing motion input from the motion-sensing subsystem that is generated in response to movement of a tracked object; generate, in real time, a dynamically-changing 3D spatial model of the tracked object based on the motion input; and control, based on the movement of the tracked object and using the 3D spatial model, motion within the displayed scene. The system is further configured to receive, from a secondary computing system, a secondary input, and to control the displayed scene in response to the secondary input to visually represent interaction between the motion input and the secondary input.
    Type: Grant
    Filed: May 27, 2011
    Date of Patent: February 17, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dan Osborn, Christopher Willoughby, Brian Mount, Vaibhav Goel, Tim Psiaki, Shawn C. Wright, Christopher Vuchetich
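A rough sketch of the two-input control flow: the motion-sensing input drives a 3D model of the tracked player each frame, while a secondary computing system (for example a companion device) injects events into the same displayed scene. The Scene class and its fields are illustrative stand-ins, not the patented system.

```python
class Scene:
    def __init__(self):
        self.player_pos = (0.0, 0.0, 0.0)
        self.effects = []

    def update_from_motion(self, spatial_model):
        # The 3D spatial model of the tracked object steers the avatar.
        self.player_pos = spatial_model["centroid"]

    def apply_secondary_input(self, event):
        # Secondary-device events interact with the motion-driven scene;
        # here the effect is anchored at the avatar's current position.
        self.effects.append((event["action"], self.player_pos))

scene = Scene()
scene.update_from_motion({"centroid": (1.0, 0.0, 2.0)})  # motion frame
scene.apply_secondary_input({"action": "cast_spell"})    # companion tap
print(scene.effects)  # [('cast_spell', (1.0, 0.0, 2.0))]
```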
  • Publication number: 20150035832
    Abstract: A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
    Type: Application
    Filed: October 22, 2014
    Publication date: February 5, 2015
    Inventors: Ben Sugden, Darren Bennett, Brian Mount, Sebastian Sylvan, Arthur Tomlin, Ryan Hastings, Daniel McCulloch, Kevin Geisner, Robert Crocco
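A toy sketch of the lighting-matched rendering this abstract describes, under a deliberately crude assumption: the physical environment's ambient light is modelled as the mean camera-frame colour, which then tints virtual content. The function names and the averaging model are illustrative only.

```python
def estimate_ambient(frame):
    """frame: iterable of (r, g, b) pixels in 0..255. Returns the mean
    colour as a crude model of the scene's ambient lighting."""
    n = 0
    acc = [0, 0, 0]
    for r, g, b in frame:
        acc[0] += r; acc[1] += g; acc[2] += b
        n += 1
    return tuple(c / (n * 255.0) for c in acc)  # normalised 0..1

def shade(albedo, ambient):
    """Modulate a virtual object's base colour by the estimated ambient."""
    return tuple(a * l for a, l in zip(albedo, ambient))

ambient = estimate_ambient([(200, 180, 150), (180, 160, 130)])
print(shade((1.0, 0.2, 0.2), ambient))  # warm-tinted red virtual object
```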
  • Patent number: 8929612
    Abstract: A system and method are disclosed relating to a pipeline for generating a computer model of a target user, including a hand model of the user's hands, captured by an image sensor in a NUI system. The computer model represents a best estimate of the position of a user's hand or hands and whether the hand or hands are in an open or closed state. The generated hand model may be used by a gaming or other application to determine such things as user gestures and control actions.
    Type: Grant
    Filed: November 18, 2011
    Date of Patent: January 6, 2015
    Assignee: Microsoft Corporation
    Inventors: Anthony Ambrus, Kyungsuk David Lee, Andrew Campbell, David Haley, Brian Mount, Albert Robles, Daniel Osborn, Shawn Wright, Nahil Sharkasi, Dave Hill, Daniel McCulloch, Alexandru Balan
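The open/closed estimate at the end of such a pipeline could be as simple as a fill-ratio heuristic over the hand's depth silhouette: an open hand with spread fingers covers less of its bounding box than a closed fist. The threshold and bounding-box rule below are invented for illustration and are far simpler than the patented pipeline.

```python
def hand_state(points):
    """points: (x, y) pixels labelled as 'hand' in the depth image."""
    xs = [p[0] for p in points]; ys = [p[1] for p in points]
    box_area = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    fill = len(points) / box_area   # fraction of the box covered by hand
    return "closed" if fill > 0.6 else "open"

fist = [(x, y) for x in range(10) for y in range(10)]  # dense square blob
print(hand_state(fist))                                # 'closed'
spread = fist + [(20, 5), (5, 20)]                     # outliers = fingers
print(hand_state(spread))                              # 'open'
```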
  • Patent number: 8894484
    Abstract: A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account to participate in the multiplayer game.
    Type: Grant
    Filed: January 30, 2012
    Date of Patent: November 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Stephen Latta, Kevin Geisner, Brian Mount, Jonathan Steed, Tony Ambrus, Arnulfo Zepeda, Aaron Krauss
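One way to read the association step is as weighted evidence combination across the listed signals: each nearby person is scored by gaze, depth, face, and voice confidence, and the invitation attaches to the best-scoring candidate. The weights, fields, and scoring rule below are invented; nothing here is the patent's actual method.

```python
def pick_invitee(candidates, signals):
    """candidates: list of player ids; signals: dict mapping
    (player_id, signal_name) -> confidence in 0..1."""
    weights = {"gaze": 0.4, "depth": 0.2, "face": 0.25, "voice": 0.15}
    def score(pid):
        return sum(w * signals.get((pid, s), 0.0)
                   for s, w in weights.items())
    return max(candidates, key=score)

signals = {("alice", "gaze"): 0.9, ("alice", "face"): 0.8,
           ("bob", "gaze"): 0.1, ("bob", "voice"): 0.6}
print(pick_invitee(["alice", "bob"], signals))  # 'alice'
```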
  • Patent number: 8897491
    Abstract: A system and method are disclosed relating to a pipeline for generating a computer model of a target user, including a hand model of the user's hands and fingers, captured by an image sensor in a NUI system. The computer model represents a best estimate of the position and orientation of a user's hand or hands. The generated hand model may be used by a gaming or other application to determine such things as user gestures and control actions.
    Type: Grant
    Filed: October 19, 2011
    Date of Patent: November 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Anthony Ambrus, Kyungsuk David Lee, Andrew Campbell, David Haley, Brian Mount, Albert Robles, Daniel Osborn, Shawn Wright, Nahil Sharkasi, Dave Hill, Daniel McCulloch
  • Publication number: 20140333666
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a display system. For example, one disclosed embodiment includes displaying a virtual object via the display system as free-floating, detecting a trigger to display the object as attached to a surface, and, in response to the trigger, displaying the virtual object as attached to the surface via the display system. The method may further include detecting a trigger to detach the virtual object from the surface and, in response to the trigger to detach the virtual object from the surface, detaching the virtual object from the surface and displaying the virtual object as free-floating.
    Type: Application
    Filed: May 13, 2013
    Publication date: November 13, 2014
    Inventors: Adam G. Poulos, Evan Michael Keibler, Arthur Tomlin, Cameron Brown, Daniel McCulloch, Brian Mount, Dan Kroymann, Gregory Lowell Alt
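The attach/detach behaviour maps naturally onto a two-state machine; the trigger names and the VirtualObject class below are illustrative assumptions.

```python
class VirtualObject:
    def __init__(self):
        self.state = "free_floating"
        self.surface = None

    def on_trigger(self, trigger, surface=None):
        # Attach triggers only apply while free-floating, and detach
        # triggers only while attached; anything else is ignored.
        if trigger == "attach" and self.state == "free_floating":
            self.state, self.surface = "attached", surface
        elif trigger == "detach" and self.state == "attached":
            self.state, self.surface = "free_floating", None
        return self.state

obj = VirtualObject()
print(obj.on_trigger("attach", surface="wall_1"))  # 'attached'
print(obj.on_trigger("detach"))                    # 'free_floating'
```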
  • Publication number: 20140333665
    Abstract: Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.
    Type: Application
    Filed: May 10, 2013
    Publication date: November 13, 2014
    Inventors: Roger Sebastian Sylvan, Adam Poulos, Michael Scavezze, Stephen Latta, Arthur Tomlin, Brian Mount, Aaron Krauss
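Each alignment condition puts the eye on the straight line through the real-world target (located from image data) and the displayed virtual marker, so two alignments fix the estimated eye location at the lines' intersection. A 2D sketch with invented example coordinates in the display's frame; a real calibration would work in 3D and average over more alignments.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4, each point an (x, y)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1*y2 - y1*x2) * (x3 - x4) - (x1 - x2) * (x3*y4 - y3*x4)) / denom
    py = ((x1*y2 - y1*x2) * (y3 - y4) - (y1 - y2) * (x3*y4 - y3*x4)) / denom
    return px, py

# Alignment 1: target seen at (0.0, 2.0), marker displayed at (0.0, 0.1).
# Alignment 2: target seen at (2.1, 2.0), marker displayed at (0.1, 0.1).
eye = line_intersection((0.0, 2.0), (0.0, 0.1), (2.1, 2.0), (0.1, 0.1))
print(eye)  # estimated eye location, approximately (0.0, 0.005)
```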
  • Publication number: 20140320389
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Application
    Filed: April 29, 2013
    Publication date: October 30, 2014
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
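A minimal sketch of the profile-driven interpretation, assuming a dictionary profile per object with invented contexts, modes, and inputs:

```python
PROFILES = {  # hypothetical per-object interaction profiles
    "piano": {"living_room": "play_notes", "game": "hit_targets"},
    "table": {"living_room": "place_items", "game": "build_fort"},
}

def interpret(object_id, context, user_input):
    """Select an interaction mode from the object's profile based on the
    current context, then map the raw input to a virtual action."""
    mode = PROFILES[object_id].get(context, "inspect")
    return {"mode": mode, "action": f"{mode}:{user_input}"}

print(interpret("piano", "living_room", "tap_key"))
# {'mode': 'play_notes', 'action': 'play_notes:tap_key'}
```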
  • Patent number: 8872853
    Abstract: A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: October 28, 2014
    Assignee: Microsoft Corporation
    Inventors: Ben Sugden, Darren Bennett, Brian Mount, Sebastian Sylvan, Arthur Tomlin, Ryan Hastings, Daniel McCulloch, Kevin Geisner, Robert Crocco, Jr.
  • Publication number: 20140240351
    Abstract: Embodiments that relate to providing motion amplification to a virtual environment are disclosed. For example, in one disclosed embodiment a mixed reality augmentation program receives from a head-mounted display device motion data that corresponds to motion of a user in a physical environment. The program presents via the display device the virtual environment in motion in a principal direction, with the principal direction motion being amplified by a first multiplier as compared to the motion of the user in a corresponding principal direction. The program also presents the virtual environment in motion in a secondary direction, where the secondary direction motion is amplified by a second multiplier as compared to the motion of the user in a corresponding secondary direction, and the second multiplier is less than the first multiplier.
    Type: Application
    Filed: February 27, 2013
    Publication date: August 28, 2014
    Inventors: Michael Scavezze, Nicholas Gervase Fajt, Arnulfo Zepeda Navratil, Jason Scott, Adam Benjamin Smith-Kipnis, Brian Mount, John Bevis, Cameron Brown, Tony Ambrus, Phillip Charles Heckinger, Dan Kroymann, Matthew G. Kaplan, Aaron Krauss
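The two-multiplier scheme is directly expressible as a vector decomposition: the component of the user's displacement along the principal direction is scaled by the larger multiplier and the orthogonal remainder by the smaller one. The multiplier values below are examples, not the patent's.

```python
def amplify(displacement, principal, m_principal=3.0, m_secondary=1.5):
    """displacement: (dx, dy) metres of physical user motion; principal:
    unit vector of the virtual environment's principal direction."""
    px, py = principal
    along = displacement[0] * px + displacement[1] * py   # dot product
    sx = displacement[0] - along * px                     # secondary part
    sy = displacement[1] - along * py
    return (along * m_principal * px + sx * m_secondary,
            along * m_principal * py + sy * m_secondary)

# Walking 1 m forward and 0.2 m sideways moves the virtual viewpoint
# 3 m forward but only 0.3 m sideways:
print(amplify((1.0, 0.2), (1.0, 0.0)))  # (3.0, 0.3)
```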
  • Publication number: 20140145914
    Abstract: A system and related methods for resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode.
    Type: Application
    Filed: November 29, 2012
    Publication date: May 29, 2014
    Inventors: Stephen Latta, Jedd Anthony Perry, Rod G. Fleck, Jack Clevenger, Frederik Schaffalitzky, Drew Steedly, Daniel McCulloch, Ian McIntyre, Alexandru Balan, Ben Sugden, Ryan Hastings, Brian Mount
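A sketch of the power policy as a two-mode state machine: the sensor runs at full fidelity until the target information is found, then drops to the reduced power mode. The mode names, the detection callback, and the hand-visibility example are assumptions.

```python
class ManagedSensor:
    def __init__(self, name):
        self.name = name
        self.mode = "default"          # full power, full fidelity

    def update(self, user_data, is_target):
        """is_target(user_data) -> True once the sought information
        (e.g. the user's hand entering view) has been detected."""
        if self.mode == "default" and is_target(user_data):
            self.mode = "reduced"      # lower rate/resolution, less power
        return self.mode

sensor = ManagedSensor("depth_camera")
print(sensor.update({"hand_visible": False}, lambda d: d["hand_visible"]))
print(sensor.update({"hand_visible": True}, lambda d: d["hand_visible"]))
# 'default' then 'reduced'
```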
  • Publication number: 20140125668
    Abstract: Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.
    Type: Application
    Filed: November 5, 2012
    Publication date: May 8, 2014
    Inventors: Jonathan Steed, Aaron Krauss, Mike Scavezze, Wei Zhang, Arthur Tomlin, Tony Ambrus, Brian Mount, Stephen Latta, Ryan Hastings
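A sketch of the modular-segment assembly: a virtual structure over a detected physical feature is tiled from fixed-size segments that each carry a pre-computed (baked) global illumination texture, so no illumination needs to be solved at display time. The structures and segment size are invented for illustration.

```python
SEGMENT_WIDTH = 0.5  # metres covered by one pre-baked segment

def build_structure(feature_start, feature_length, baked_segment):
    """Tile pre-computed segments along a physical feature (e.g. a wall
    edge starting at feature_start metres, feature_length metres long)."""
    count = int(feature_length / SEGMENT_WIDTH)
    return [{"x": feature_start + i * SEGMENT_WIDTH,
             "lightmap": baked_segment["lightmap"]}  # reused, not re-lit
            for i in range(count)]

wall = build_structure(0.0, 2.0, {"lightmap": "castle_wall_gi.png"})
print(len(wall), wall[0])  # 4 segments sharing the same baked lighting
```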
  • Publication number: 20140125574
    Abstract: Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
    Type: Application
    Filed: November 5, 2012
    Publication date: May 8, 2014
    Inventors: Mike Scavezze, Jason Scott, Jonathan Steed, Ian McIntyre, Aaron Krauss, Daniel McCulloch, Stephen Latta, Kevin Geisner, Brian Mount
  • Publication number: 20140049558
    Abstract: Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element.
    Type: Application
    Filed: August 14, 2012
    Publication date: February 20, 2014
    Inventors: Aaron Krauss, Stephen Latta, Mike Scavezze, Daniel McCulloch, Brian Mount, Kevin Geisner
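A sketch of the lookup-and-overlay flow: detect a known control device in the acquired image, retrieve what each of its interactive elements does, and draw a label over each element on the see-through display. The per-device function database and the draw_label callback are invented stand-ins, not the patent's API.

```python
FUNCTIONS = {  # hypothetical per-device help database
    "tv_remote": {"red_button": "record", "wheel": "volume"},
}

def augment(detected_device, element_positions, draw_label):
    """element_positions: element name -> (x, y) in display coordinates;
    draw_label(text, xy) renders image data over the real element."""
    for element, func in FUNCTIONS.get(detected_device, {}).items():
        if element in element_positions:
            draw_label(func, element_positions[element])

augment("tv_remote", {"red_button": (120, 80), "wheel": (140, 95)},
        lambda text, xy: print(f"label '{text}' at {xy}"))
```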
  • Publication number: 20130342568
    Abstract: Embodiments related to providing low light scene augmentation are disclosed. One embodiment provides, on a computing device comprising a see-through display device, a method including recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object. The method further includes identifying one or more geometrical features of the physical object and displaying, on the see-through display device, an image augmenting the one or more geometrical features.
    Type: Application
    Filed: June 20, 2012
    Publication date: December 26, 2013
    Inventors: Tony Ambrus, Mike Scavezze, Stephen Latta, Daniel McCulloch, Brian Mount
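A sketch of the geometrical-feature augmentation: find strong intensity edges in the camera image (the outlines of a physical object in a dark room) and render them as bright overlay pixels on the see-through display. A plain gradient threshold stands in for whatever edge detector a real system would use; the threshold and image format are assumptions.

```python
def edge_pixels(image, threshold=40):
    """image: 2D list of grayscale values. Returns (row, col) positions
    whose horizontal or vertical gradient exceeds the threshold."""
    edges = []
    for r in range(1, len(image)):
        for c in range(1, len(image[0])):
            gx = abs(image[r][c] - image[r][c - 1])
            gy = abs(image[r][c] - image[r - 1][c])
            if max(gx, gy) > threshold:
                edges.append((r, c))   # overlay a bright pixel here
    return edges

dark_scene = [[10, 10, 10, 10],
              [10, 10, 90, 90],        # a faint object's corner
              [10, 10, 90, 90]]
print(edge_pixels(dark_scene))  # outline pixels to highlight
```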