Patents by Inventor Brian J. Mount

Brian J. Mount has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150234475
    Abstract: Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies a minimum confidence value for which it is deemed that a particular gesture has occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds based on the plugging and unplugging of sensor inputs from the gesture recognition system.
    Type: Application
    Filed: May 4, 2015
    Publication date: August 20, 2015
    Inventors: Stephen G. Latta, Brian J. Mount, Adam G. Poulos, Jeffrey A. Kohler, Arthur C. Tomlin, Jonathan T. Steed
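The adaptive multi-sensor thresholding this abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the function names, the fixed threshold bump, and the mean-based fusion rule are all assumptions.

```python
def adapt_thresholds(thresholds, compensating_events, bump=0.25):
    """Raise the confidence threshold of any sensor affected by a
    compensating event (e.g. excessive motion), so that sensor's input
    must be more confident before it counts toward the gesture."""
    return [min(1.0, t + bump) if event else t
            for t, event in zip(thresholds, compensating_events)]

def fuse_confidences(confidences, thresholds):
    """Generate a multi-sensor confidence value: here, the mean confidence
    of the sensors whose threshold was satisfied (0.0 if none were)."""
    passed = [c for c, t in zip(confidences, thresholds) if c >= t]
    return sum(passed) / len(passed) if passed else 0.0

# Two sensors report a gesture; the second is experiencing excessive motion,
# so its threshold is raised and its lower-confidence vote is excluded.
thresholds = adapt_thresholds([0.5, 0.5], [False, True])  # [0.5, 0.75]
fused = fuse_confidences([0.8, 0.7], thresholds)          # only sensor 0 passes
```

The same pair of functions also covers the plug/unplug case in the abstract: removing a sensor input is equivalent to raising its threshold above any attainable confidence.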
  • Publication number: 20150212576
    Abstract: Methods for enabling hands-free selection of objects within an augmented reality environment are described. In some embodiments, an object may be selected by an end user of a head-mounted display device (HMD) based on detecting a vestibulo-ocular reflex (VOR) with the end user's eyes while the end user is gazing at the object and performing a particular head movement for selecting the object. The object selected may comprise a real object or a virtual object. The end user may select the object by gazing at the object for a first time period and then performing a particular head movement in which the VOR is detected for one or both of the end user's eyes. In one embodiment, the particular head movement may involve the end user moving their head away from a direction of the object at a particular head speed while gazing at the object.
    Type: Application
    Filed: January 28, 2014
    Publication date: July 30, 2015
    Inventors: Anthony J. Ambrus, Adam G. Poulos, Lewey Alec Geselowitz, Dan Kroymann, Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Mathew J. Lamb, Brian J. Mount
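The selection rule in this abstract (gaze dwell, then a fast head movement during which the vestibulo-ocular reflex keeps the eyes on target) can be sketched as a simple predicate. The parameter names and the dwell/speed thresholds below are illustrative assumptions, not values from the patent.

```python
def is_selected(gaze_on_object_s, head_speed_deg_s, vor_detected,
                min_dwell_s=1.0, min_head_speed_deg_s=20.0):
    """An object counts as selected when the end user has gazed at it for
    a first time period and then performs a sufficiently fast head
    movement while the VOR is detected for one or both eyes."""
    return (gaze_on_object_s >= min_dwell_s
            and head_speed_deg_s >= min_head_speed_deg_s
            and vor_detected)
```

Requiring the VOR, rather than head motion alone, is what makes the selection hands-free yet deliberate: an ordinary glance away breaks gaze, whereas a VOR-stabilized head turn keeps the eyes locked on the object being selected.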
  • Patent number: 9053483
    Abstract: A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: June 9, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Cameron G. Brown, Holly A. Hirzel, Brian J. Mount, Daniel McCulloch
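The comparison step in this abstract, checking an identified food item's nutritional parameters against the user's nutritional preferences, can be sketched as below. The field names (`allergens`, `calories`, `max_calories`) are hypothetical, chosen only for the example.

```python
def recommend(item, prefs):
    """Compare a food item's nutritional parameters to the user's
    preferences; return (recommended, reason)."""
    # Food allergies are a hard exclusion.
    if set(item.get("allergens", [])) & set(prefs.get("allergies", [])):
        return False, "contains allergen"
    # A low-calorie preference becomes a calorie ceiling.
    if item["calories"] > prefs.get("max_calories", float("inf")):
        return False, "too many calories"
    return True, "ok"
```

A non-recommended result would then trigger the substitute-item lookup the abstract mentions, feeding the same predicate until an item passes.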
  • Patent number: 9041622
    Abstract: Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data.
    Type: Grant
    Filed: June 12, 2012
    Date of Patent: May 26, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McCulloch, Arnulfo Zepeda Navratil, Jonathan T. Steed, Ryan L. Hastings, Jason Scott, Brian J. Mount, Holly A. Hirzel, Darren Bennett, Michael J. Scavezze
  • Patent number: 9030408
    Abstract: Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies a minimum confidence value for which it is deemed that a particular gesture has occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds based on the plugging and unplugging of sensor inputs from the gesture recognition system.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: May 12, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Brian J. Mount, Adam G. Poulos, Jeffrey A. Kohler, Arthur C. Tomlin, Jonathan T. Steed
  • Patent number: 8965741
    Abstract: A system for generating and updating a 3D model of a structure as the structure is being constructed or modified is described. The structure may comprise a building or non-building structure such as a bridge, parking garage, or roller coaster. The 3D model may include virtual objects depicting physical components or other construction elements of the structure. Each construction element may be associated with physical location information that may be analyzed over time in order to detect movement of the construction element and to predict when movement of the construction element may cause a code or regulation to be violated. In some cases, a see-through HMD may be utilized by a construction worker while constructing or modifying a structure in order to verify that the placement of a construction element complies with various building codes or regulations in real-time.
    Type: Grant
    Filed: April 24, 2012
    Date of Patent: February 24, 2015
    Assignee: Microsoft Corporation
    Inventors: Daniel J. McCulloch, Ryan L. Hastings, Jason Scott, Holly A. Hirzel, Brian J. Mount
  • Patent number: 8933912
    Abstract: A system and method are disclosed for providing a touch interface for electronic devices. The touch interface can be any surface. As one example, a table top can be used as a touch sensitive interface. In one embodiment, the system determines a touch region of the surface, and correlates that touch region to a display of an electronic device for which input is provided. The system may have a 3D camera that identifies the relative position of a user's hands to the touch region to allow for user input. Note that the user's hands do not occlude the display. The system may render a representation of the user's hand on the display in order for the user to interact with elements on the display screen.
    Type: Grant
    Filed: April 2, 2012
    Date of Patent: January 13, 2015
    Assignee: Microsoft Corporation
    Inventors: Anthony J. Ambrus, Abdulwajid N. Mohamed, Andrew D. Wilson, Brian J. Mount, Jordan D. Andersen
  • Publication number: 20150007114
    Abstract: Technology is described for a web-like hierarchical menu interface that displays a menu in a web-like hierarchical menu display configuration in a near-eye display (NED). The web-like hierarchical menu display configuration links menu levels and menu items within a menu level with flexible spatial dimensions for menu elements. One or more processors executing the interface select a web-like hierarchical menu display configuration based on the available menu space and user head view direction determined from a 3D mapping of the NED field of view data and stored user head comfort rules. Activation parameters in menu item selection criteria are adjusted to be user specific based on user head motion data tracked based on data from one or more sensors when the user wears the NED. Menu display layout may be triggered by changes in head view direction of the user and available menu space about the user's head.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Adam G. Poulos, Anthony J. Ambrus, Cameron G. Brown, Jason Scott, Brian J. Mount, Daniel J. McCulloch, John Bevis, Wei Zhang
  • Publication number: 20150002507
    Abstract: Technology is described for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors on the near-eye display (NED) system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping like a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
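The carving idea in this abstract, marking the space swept by a user's height and width along a walked path as navigable, can be sketched with a toy voxel model. The voxel size, waypoint representation, and function signature are all assumptions for illustration.

```python
def carve(path_xy, user_width, user_height, voxel=0.5):
    """Mark every voxel swept by a user of the given width and height
    walking through the 2D waypoints in path_xy (metres); returns a set
    of (i, j, k) voxel indices representing carved-out, navigable space."""
    carved = set()
    half = user_width / 2.0
    for x, y in path_xy:
        # Horizontal extent of the user's body around this waypoint.
        i_lo, i_hi = int((x - half) // voxel), int((x + half) // voxel)
        j_lo, j_hi = int((y - half) // voxel), int((y + half) // voxel)
        # Vertical extent from the floor up to the user's height.
        k_hi = int(user_height // voxel)
        for i in range(i_lo, i_hi + 1):
            for j in range(j_lo, j_hi + 1):
                for k in range(k_hi + 1):
                    carved.add((i, j, k))
    return carved
```

The resulting voxel set plays the role of the abstract's 3D space carving model, and its indices could be registered against a surface reconstruction mesh of the same environment.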
  • Publication number: 20140306994
    Abstract: Methods for generating and displaying personalized virtual billboards within an augmented reality environment are described. The personalized virtual billboards may facilitate the sharing of personalized information between persons within an environment who have varying degrees of acquaintance (e.g., ranging from close familial relationships to strangers). In some embodiments, a head-mounted display device (HMD) may detect a mobile device associated with a particular person within an environment, acquire a personalized information set corresponding with the particular person, generate a virtual billboard based on the personalized information set, and display the virtual billboard on the HMD. The personalized information set may include information associated with the particular person such as shopping lists and classified advertisements.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Cameron G. Brown, Abby Lee, Brian J. Mount, Daniel J. McCulloch, Michael J. Scavezze, Ryan L. Hastings, John Bevis, Mike Thomas, Ron Amador-Leon
  • Publication number: 20140306993
    Abstract: Methods for positioning virtual objects within an augmented reality environment using snap grid spaces associated with real-world environments, real-world objects, and/or virtual objects within the augmented reality environment are described. A snap grid space may comprise a two-dimensional or three-dimensional virtual space within an augmented reality environment in which one or more virtual objects may be positioned. In some embodiments, a head-mounted display device (HMD) may identify one or more grid spaces within an augmented reality environment, detect a positioning of a virtual object within the augmented reality environment, determine a target grid space of the one or more grid spaces in which to position the virtual object, determine a position of the virtual object within the target grid space, and display the virtual object within the augmented reality environment based on the position of the virtual object within the target grid space.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Poulos, Jason Scott, Matthew Kaplan, Christopher Obeso, Cameron G. Brown, Daniel J. McCulloch, Abby Lee, Brian J. Mount, Ben J. Sugden
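The positioning step in this abstract, determining a virtual object's position within a target snap grid space, can be sketched as snapping a 3D point to the nearest grid cell center. The grid origin and cell size here are illustrative assumptions.

```python
def snap_to_grid(position, grid_origin, cell_size):
    """Snap a 3D position (x, y, z) to the nearest cell of a snap grid
    space anchored at grid_origin with uniform cell_size spacing."""
    return tuple(
        o + round((p - o) / cell_size) * cell_size
        for p, o in zip(position, grid_origin)
    )

# A virtual object dropped at an arbitrary point lands on the grid.
snapped = snap_to_grid((1.2, 0.1, 2.9), (0.0, 0.0, 0.0), 0.5)
```

In the abstract's terms, the HMD would first pick the target grid space (e.g. the one anchored to a real-world table), then apply a snap like this within that space's own coordinate frame.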
  • Patent number: 8752963
    Abstract: The technology provides various embodiments for controlling brightness of a see-through, near-eye, mixed reality display device based on light intensity of what the user is gazing at. The opacity of the display can be altered, such that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust the brightness used to display images, as well as the opacity of the display. A suitable balance between opacity and brightness used to display images may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: June 17, 2014
    Assignee: Microsoft Corporation
    Inventors: Daniel J. McCulloch, Ryan L. Hastings, Kevin A. Geisner, Robert L. Crocco, Alexandru O. Balan, Derek L. Knee, Michael J. Scavezze, Stephen G. Latta, Brian J. Mount
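The opacity control in this abstract can be sketched as a mapping from the ambient light level of the gazed-at scene to a display opacity. The linear mapping and the lux scale are assumptions for illustration only; the patent describes balancing opacity against image brightness and pupil size, which this sketch omits.

```python
def display_opacity(ambient_lux, max_lux=10000.0):
    """Reduce external light more as the gazed-at scene gets brighter:
    map an ambient light level to a display opacity in [0.0, 1.0]."""
    return max(0.0, min(1.0, ambient_lux / max_lux))
```

A fuller implementation would treat this value as one input to a joint optimization with image brightness, clamped by comfort limits derived from the wearer's measured pupil size.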
  • Publication number: 20140160157
    Abstract: Methods for generating and displaying people-triggered holographic reminders are described. In some embodiments, a head-mounted display device (HMD) generates and displays an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD. The particular person may be identified individually or identified as belonging to a particular group (e.g., a member of a group with a particular job title such as programmer or administrator). In some cases, a completion of a reminder may be automatically detected by applying speech recognition techniques (e.g., to identify key words, phrases, or names) to captured audio of a conversation occurring between the end user and the particular person.
    Type: Application
    Filed: December 11, 2012
    Publication date: June 12, 2014
    Inventors: Adam G. Poulos, Holly A. Hirzel, Anthony J. Ambrus, Daniel J. McCulloch, Brian J. Mount, Jonathan T. Steed
  • Publication number: 20140002444
    Abstract: Technology is described for automatically determining placement of one or more interaction zones in an augmented reality environment in which one or more virtual features are added to a real environment. An interaction zone includes at least one virtual feature and is associated with a space within the augmented reality environment with boundaries of the space determined based on the one or more real environment features. A plurality of activation criteria may be available for an interaction zone and at least one may be selected based on at least one real environment feature. The technology also describes controlling activation of an interaction zone within the augmented reality environment. In some examples, at least some behavior of a virtual object is controlled by emergent behavior criteria which defines an action independently from a type of object in the real world environment.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Darren Bennett, Brian J. Mount, Michael J. Scavezze, Daniel J. McCulloch, Anthony J. Ambrus, Jonathan T. Steed, Arthur C. Tomlin, Kevin A. Geisner
  • Publication number: 20130328763
    Abstract: Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies a minimum confidence value for which it is deemed that a particular gesture has occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds based on the plugging and unplugging of sensor inputs from the gesture recognition system.
    Type: Application
    Filed: November 29, 2012
    Publication date: December 12, 2013
    Inventors: Stephen G. Latta, Brian J. Mount, Adam G. Poulos, Jeffrey A. Kohler, Arthur C. Tomlin, Jonathan T. Steed
  • Publication number: 20130328927
    Abstract: A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists and therefore each virtual game may look different depending on the particular real-world environment.
    Type: Application
    Filed: November 29, 2012
    Publication date: December 12, 2013
    Inventors: Brian J. Mount, Jason Scott, Ryan L. Hastings, Darren Bennett, Stephen G. Latta, Daniel J. McCulloch, Kevin A. Geisner, Jonathan T. Steed, Michael J. Scavezze
  • Publication number: 20130328762
    Abstract: Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data.
    Type: Application
    Filed: June 12, 2012
    Publication date: December 12, 2013
    Inventors: Daniel J. McCulloch, Arnulfo Zepeda Navratil, Jonathan T. Steed, Ryan L. Hastings, Jason Scott, Brian J. Mount, Holly A. Hirzel, Darren Bennett, Michael J. Scavezze
  • Publication number: 20130293577
    Abstract: A see-through, near-eye, mixed reality display apparatus for providing translations of real world data for a user. A wearer's location and orientation with the apparatus are determined, and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature, and selected by reference to the gaze of a wearer. The input data is translated for the user relative to user profile information bearing on the accuracy of a translation, and it is determined from the input data whether a linguistic translation, knowledge addition translation, or context translation is useful.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
  • Publication number: 20130293468
    Abstract: A see-through, near-eye, mixed reality display device and system for collaboration amongst various users of other such devices and personal audio/visual devices of more limited capabilities. One or more wearers of a see-through head mounted display apparatus define a collaboration environment. For the collaboration environment, a selection of collaboration data and the scope of the environment are determined. Virtual representations of the collaboration data are rendered in the field of view of the wearer and other device users. The wearer defines which persons in the wearer's field of view are included in the collaboration environment and entitled to share information in it. If allowed, input on the virtual object from other users in the collaboration environment may be received and used to manipulate a change in the virtual object.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
  • Publication number: 20130293530
    Abstract: An augmented reality system that provides augmented product and environment information to a wearer of a see through head mounted display. The augmentation information may include advertising, inventory, pricing and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real world products by a wearer, or allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while the wearer is shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos