Patents by Inventor Adam G. Poulos

Adam G. Poulos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9030408
Abstract: Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies the minimum confidence value at which a particular gesture is deemed to have occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds based on the plugging and unplugging of sensor inputs from the gesture recognition system.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: May 12, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Brian J. Mount, Adam G. Poulos, Jeffrey A. Kohler, Arthur C. Tomlin, Jonathan T. Steed
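The threshold-adjustment scheme the abstract above describes can be sketched in a few lines. Everything here is an illustrative assumption, not taken from the patent: the function names, the rule that at least two sensors must agree, and the fixed 0.2 threshold penalty applied on a compensating event.

```python
def multi_sensor_confidence(confidences, thresholds, min_agreeing=2):
    """Return (confidence, detected). A gesture is deemed detected when
    at least `min_agreeing` sensors meet their confidence thresholds;
    the multi-sensor confidence is the mean of the agreeing sensors."""
    met = [c for c, t in zip(confidences, thresholds) if c >= t]
    if len(met) < min_agreeing:
        return 0.0, False
    return sum(met) / len(met), True

def compensate(thresholds, noisy_sensor, penalty=0.2):
    """On a compensating event (e.g. excessive motion of one sensor
    device), raise that sensor's threshold so its input counts less."""
    adjusted = list(thresholds)
    adjusted[noisy_sensor] = min(1.0, adjusted[noisy_sensor] + penalty)
    return adjusted
```

With thresholds of 0.5 everywhere, sensor readings of (0.6, 0.7, 0.4) satisfy two thresholds and trigger a detection; after `compensate` penalizes the first sensor, only one threshold is met and the gesture is no longer deemed to have occurred.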
  • Publication number: 20150007114
Abstract: Technology is described for a web-like hierarchical menu interface that displays a menu in a web-like hierarchical menu display configuration in a near-eye display (NED). The web-like hierarchical menu display configuration links menu levels and menu items within a menu level with flexible spatial dimensions for menu elements. One or more processors executing the interface select a web-like hierarchical menu display configuration based on the available menu space and user head view direction determined from a 3D mapping of the NED field of view data and stored user head comfort rules. Activation parameters in menu item selection criteria are adjusted to be user specific based on user head motion data tracked by one or more sensors while the user wears the NED. Menu display layout may be triggered by changes in head view direction of the user and available menu space about the user's head.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Adam G. Poulos, Anthony J. Ambrus, Cameron G. Brown, Jason Scott, Brian J. Mount, Daniel J. McCulloch, John Bevis, Wei Zhang
  • Publication number: 20150002507
Abstract: Technology is described for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors on the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved-out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping, such as a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
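The carving idea above reduces to marking every grid cell the user's body sweeps through as navigable. A minimal 2D sketch follows; the cell size, the set-based model, and the use of user width alone (ignoring height) are simplifying assumptions for illustration, not details from the application.

```python
def carve(model, positions, user_width, cell=0.5):
    """Add every grid cell overlapped by the user's body at each
    traversed position to the space-carving model (a set of cells)."""
    half = user_width / 2.0
    for x, y in positions:
        # index range of cells the user's footprint covers at (x, y)
        i0, i1 = int((x - half) // cell), int((x + half) // cell)
        j0, j1 = int((y - half) // cell), int((y + half) // cell)
        for i in range(i0, i1 + 1):
            for j in range(j0, j1 + 1):
                model.add((i, j))
    return model
```

A single sample at the origin with a 1 m wide user carves a 3x3 block of 0.5 m cells; longer walks union their footprints into a navigable corridor.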
  • Publication number: 20140347390
    Abstract: Embodiments are disclosed that relate to placing virtual objects in an augmented reality environment. For example, one disclosed embodiment provides a method comprising receiving sensor data comprising one or more of motion data, location data, and orientation data from one or more sensors located on a head-mounted display device, and based upon the motion data, determining a body-locking direction vector that is based upon an estimated direction in which a body of a user is facing. The method further comprises positioning a displayed virtual object based on the body-locking direction vector.
    Type: Application
    Filed: May 22, 2013
    Publication date: November 27, 2014
    Inventors: Adam G. Poulos, Tony Ambrus, Jeffrey Cole, Ian Douglas McIntyre, Stephen Latta, Peter Tobias Kinnebrew, Nicholas Kamuda, Robert Pengelly, Jeffrey C. Fong, Aaron Woo, Udiyan I. Padmanahan, Andrew Wyman MacDonald, Olivia M. Janik
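One way to estimate the body-locking direction vector the abstract describes is to low-pass filter head-yaw samples, so the estimate follows sustained turns of the body but not brief glances, then place the object at a fixed offset along that direction. The filter choice, the smoothing factor, and the 2-metre offset are all assumptions of this sketch.

```python
import math

def body_lock_yaw(head_yaws, alpha=0.1):
    """Estimate body-facing yaw as an exponential moving average of
    head-yaw samples (radians): quick glances barely move the estimate."""
    est = head_yaws[0]
    for yaw in head_yaws[1:]:
        est += alpha * (yaw - est)
    return est

def place_object(user_pos, body_yaw, distance=2.0):
    """Position the virtual object `distance` metres along the
    body-locking direction vector from the user."""
    x, y = user_pos
    return (x + distance * math.cos(body_yaw),
            y + distance * math.sin(body_yaw))
```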
  • Publication number: 20140333666
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a display system. For example, one disclosed embodiment includes displaying a virtual object via the display system as free-floating, detecting a trigger to display the object as attached to a surface, and, in response to the trigger, displaying the virtual object as attached to the surface via the display system. The method may further include detecting a trigger to detach the virtual object from the surface and, in response to the trigger to detach the virtual object from the surface, detaching the virtual object from the surface and displaying the virtual object as free-floating.
    Type: Application
    Filed: May 13, 2013
    Publication date: November 13, 2014
    Inventors: Adam G. Poulos, Evan Michael Keibler, Arthur Tomlin, Cameron Brown, Daniel McCulloch, Brian Mount, Dan Kroymann, Gregory Lowell Alt
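The attach/detach behavior described above is essentially a two-state machine driven by triggers. A minimal sketch, with hypothetical class and trigger names:

```python
class VirtualObject:
    """A displayed object that is either free-floating or attached
    to a surface, toggled by attach/detach triggers."""

    def __init__(self):
        self.state = "free-floating"
        self.surface = None

    def on_trigger(self, trigger, surface=None):
        # e.g. "attach" when the object is moved near a wall,
        # "detach" when the user pulls it away again
        if trigger == "attach" and self.state == "free-floating":
            self.state, self.surface = "attached", surface
        elif trigger == "detach" and self.state == "attached":
            self.state, self.surface = "free-floating", None
        return self.state
```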
  • Publication number: 20140306993
    Abstract: Methods for positioning virtual objects within an augmented reality environment using snap grid spaces associated with real-world environments, real-world objects, and/or virtual objects within the augmented reality environment are described. A snap grid space may comprise a two-dimensional or three-dimensional virtual space within an augmented reality environment in which one or more virtual objects may be positioned. In some embodiments, a head-mounted display device (HMD) may identify one or more grid spaces within an augmented reality environment, detect a positioning of a virtual object within the augmented reality environment, determine a target grid space of the one or more grid spaces in which to position the virtual object, determine a position of the virtual object within the target grid space, and display the virtual object within the augmented reality environment based on the position of the virtual object within the target grid space.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Poulos, Jason Scott, Matthew Kaplan, Christopher Obeso, Cameron G. Brown, Daniel J. McCulloch, Abby Lee, Brian J. Mount, Ben J. Sugden
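Once a target snap grid space has been chosen, positioning a virtual object within it amounts to rounding each coordinate to the nearest grid point. A minimal sketch, assuming a uniform grid spacing and an origin for the grid space (both hypothetical parameters):

```python
def snap_to_grid(pos, grid_origin, spacing):
    """Snap a 3D position to the nearest point of a snap grid space
    defined by its origin and a uniform point spacing."""
    return tuple(o + spacing * round((p - o) / spacing)
                 for p, o in zip(pos, grid_origin))
```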
  • Publication number: 20140306891
    Abstract: Methods for providing real-time feedback to an end user of a mobile device as they are interacting with or manipulating one or more virtual objects within an augmented reality environment are described. The real-time feedback may comprise visual feedback, audio feedback, and/or haptic feedback. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may determine an object classification associated with a virtual object within an augmented reality environment, detect an object manipulation gesture performed by an end user of the mobile device, detect an interaction with the virtual object based on the object manipulation gesture, determine a magnitude of a virtual force associated with the interaction, and provide real-time feedback to the end user of the mobile device based on the interaction, the magnitude of the virtual force applied to the virtual object, and the object classification associated with the virtual object.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Stephen G. Latta, Adam G. Poulos, Cameron G. Brown, Daniel J. McCulloch, Matthew Kaplan, Arnulfo Zepeda Navratil, Jon Paulovich, Kudo Tsunoda
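The abstract above ties feedback to both the object's classification and the magnitude of the virtual force. A sketch of that mapping, where the classification table, channel names, and gain values are invented for illustration:

```python
def feedback(object_class, force):
    """Map an object classification and a virtual-force magnitude to
    audio and haptic feedback; unknown classes get a default profile."""
    profiles = {"rigid": ("click", 1.0), "soft": ("squish", 0.4)}
    sound, haptic_gain = profiles.get(object_class, ("tap", 0.7))
    # haptic intensity scales with the virtual force, clamped to 1.0
    return {"audio": sound, "haptic": min(1.0, haptic_gain * force)}
```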
  • Publication number: 20140160157
    Abstract: Methods for generating and displaying people-triggered holographic reminders are described. In some embodiments, a head-mounted display device (HMD) generates and displays an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD. The particular person may be identified individually or identified as belonging to a particular group (e.g., a member of a group with a particular job title such as programmer or administrator). In some cases, a completion of a reminder may be automatically detected by applying speech recognition techniques (e.g., to identify key words, phrases, or names) to captured audio of a conversation occurring between the end user and the particular person.
    Type: Application
    Filed: December 11, 2012
    Publication date: June 12, 2014
    Inventors: Adam G. Poulos, Holly A. Hirzel, Anthony J. Ambrus, Daniel J. McCulloch, Brian J. Mount, Jonathan T. Steed
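The person-triggered condition above (in the field of view, or within a given distance) can be sketched as a simple filter. The data shapes and the 3-metre default are assumptions of this example:

```python
def due_reminders(reminders, visible_people, distances, max_dist=3.0):
    """Return reminder texts whose associated person is currently in
    the HMD's field of view or within `max_dist` metres of it."""
    due = []
    for person, text in reminders:
        in_view = person in visible_people
        nearby = distances.get(person, float("inf")) <= max_dist
        if in_view or nearby:
            due.append(text)
    return due
```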
  • Patent number: 8726042
Abstract: Various mechanisms are disclosed for protecting the security of memory in a computing environment. A security layer can have an encryption layer and a hashing layer that can dynamically encrypt and then dynamically hash sensitive information, as it is being loaded to dynamic memory of a computing device. For example, a memory unit that can correspond to a memory page can be processed by the security layer, and header data, code, and protect-worthy data can be secured, while other non-sensitive data can be left alone. Once such information is secured and stored in dynamic memory, it can be accessed at a later time by a processor, decrypted, and hash-checked. Then, it can be loaded back onto the dynamic memory, thereby preventing direct memory access attacks.
    Type: Grant
    Filed: February 29, 2008
    Date of Patent: May 13, 2014
    Assignee: Microsoft Corporation
    Inventors: Sebastian Lange, Dinarte R. Morais, Victor Tan, Adam G. Poulos
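The encrypt-then-hash flow above can be sketched with a toy stream cipher standing in for the real encryption layer (a real implementation would use an authenticated cipher, and nothing here is taken from the patent beyond the overall flow):

```python
import hashlib

def xor_stream(data, key):
    # toy SHA-256-based keystream; illustration only, not secure
    keystream = hashlib.sha256(key).digest()
    while len(keystream) < len(data):
        keystream += hashlib.sha256(keystream).digest()
    return bytes(a ^ b for a, b in zip(data, keystream))

def secure_page(page, key):
    """Encrypt a sensitive memory unit, then hash the ciphertext so
    tampering via direct memory access is detectable on reload."""
    ciphertext = xor_stream(page, key)
    return ciphertext, hashlib.sha256(ciphertext).digest()

def load_page(ciphertext, tag, key):
    """Hash-check the stored ciphertext, then decrypt it for use."""
    if hashlib.sha256(ciphertext).digest() != tag:
        raise ValueError("memory page failed hash check")
    return xor_stream(ciphertext, key)
```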
  • Publication number: 20130342572
    Abstract: A system and method are disclosed for controlling content displayed to a user in a virtual environment. The virtual environment may include virtual controls with which a user may interact using predefined gestures. Interacting with a virtual control may adjust an aspect of the displayed content, including for example one or more of fast forwarding of the content, rewinding of the content, pausing of the content, stopping the content, changing a volume of content, recording the content, changing a brightness of the content, changing a contrast of the content and changing the content from a first still image to a second still image.
    Type: Application
    Filed: June 26, 2012
    Publication date: December 26, 2013
    Inventors: Adam G. Poulos, Stephen G. Latta, Daniel J. McCulloch, Jeffrey Cole
  • Publication number: 20130328925
    Abstract: A system and method are disclosed for interpreting user focus on virtual objects in a mixed reality environment. Using inference, express gestures and heuristic rules, the present system determines which of the virtual objects the user is likely focused on and interacting with. At that point, the present system may emphasize the selected virtual object over other virtual objects, and interact with the selected virtual object in a variety of ways.
    Type: Application
    Filed: June 12, 2012
    Publication date: December 12, 2013
    Inventors: Stephen G. Latta, Adam G. Poulos, Daniel J. McCulloch, Jeffrey Cole, Wei Zhang
  • Publication number: 20130328763
    Abstract: Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies a minimum confidence value for which it is deemed that a particular gesture has occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds based on the plugging and unplugging of sensor inputs from the gesture recognition system.
    Type: Application
    Filed: November 29, 2012
    Publication date: December 12, 2013
    Inventors: Stephen G. Latta, Brian J. Mount, Adam G. Poulos, Jeffrey A. Kohler, Arthur C. Tomlin, Jonathan T. Steed
  • Publication number: 20130326364
    Abstract: A system and method are disclosed for positioning and sizing virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects.
    Type: Application
    Filed: May 31, 2012
    Publication date: December 5, 2013
    Inventors: Stephen G. Latta, Adam G. Poulos, Daniel J. McCulloch, Wei Zhang
  • Publication number: 20130293468
Abstract: A see-through, near-eye, mixed reality display device and system for collaboration amongst various users of other such devices and personal audio/visual devices of more limited capabilities. One or more wearers of a see-through head mounted display apparatus define a collaboration environment. For the collaboration environment, a selection of collaboration data and the scope of the environment are determined. Virtual representations of the collaboration data are rendered in the field of view of the wearer and of other device users. The wearer defines which persons in the wearer's field of view are included in the collaboration environment and are entitled to share information in it. If permitted, input on the virtual object from other users in the collaboration environment may be received and used to manipulate a change in the virtual object.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
  • Publication number: 20130293577
Abstract: A see-through, near-eye, mixed reality display apparatus for providing translations of real-world data for a user. A wearer's location and orientation with the apparatus are determined, and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature, and selected by reference to the gaze of a wearer. The input data is translated for the user using profile information bearing on the accuracy of a translation, and the system determines from the input data whether a linguistic translation, a knowledge-addition translation, or a context translation is useful.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
  • Publication number: 20130293530
    Abstract: An augmented reality system that provides augmented product and environment information to a wearer of a see through head mounted display. The augmentation information may include advertising, inventory, pricing and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real world products by a wearer, or allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while the wearer is shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home.
    Type: Application
    Filed: May 4, 2012
    Publication date: November 7, 2013
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
  • Publication number: 20130147686
Abstract: An audio and/or visual experience of a see-through head-mounted display (HMD) device, e.g., in the form of glasses, can be moved to a target computing device such as a television, cell phone, or computer monitor, allowing the user to seamlessly transition the content to the target computing device. For example, when the user enters a room in the home with a television, a movie which is playing on the HMD device can be transferred to the television and begin playing there without substantially interrupting the flow of the movie. The HMD device can inform the television of a network address for accessing the movie, for instance, and provide a current status in the form of a time stamp or packet identifier. Content can also be transferred in the reverse direction, to the HMD device. A transfer can occur based on location, preconfigured settings, and user commands.
    Type: Application
    Filed: December 12, 2011
    Publication date: June 13, 2013
    Inventors: John Clavin, Ben Sugden, Stephen G. Latta, Benjamin I. Vaught, Michael Scavezze, Jonathan T. Steed, Ryan Hastings, Adam G. Poulos
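The handoff described above (a network address plus a current-status marker) can be sketched as a small message exchange; the message shape and field names are hypothetical:

```python
def handoff_message(content_url, timestamp_s):
    """Build the transfer notice the HMD sends the target device:
    a network address for the content plus the current playback status
    (here a timestamp; a packet identifier would work the same way)."""
    return {"url": content_url, "timestamp": timestamp_s}

def resume(player_state, message):
    """Apply a handoff message on the target device so playback
    continues from where the HMD left off."""
    player_state["source"] = message["url"]
    player_state["position"] = message["timestamp"]
    return player_state
```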
  • Publication number: 20130083008
    Abstract: A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.
    Type: Application
    Filed: March 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L. Hastings, Darren Bennett, Daniel J. McCulloch, John Clavin, Jennifer A. Karr, Adam G. Poulos, Brian J. Mount
  • Publication number: 20090222675
Abstract: Various mechanisms are disclosed for protecting the security of memory in a computing environment. A security layer can have an encryption layer and a hashing layer that can dynamically encrypt and then dynamically hash sensitive information, as it is being loaded to dynamic memory of a computing device. For example, a memory unit that can correspond to a memory page can be processed by the security layer, and header data, code, and protect-worthy data can be secured, while other non-sensitive data can be left alone. Once such information is secured and stored in dynamic memory, it can be accessed at a later time by a processor, decrypted, and hash-checked. Then, it can be loaded back onto the dynamic memory, thereby preventing direct memory access attacks.
    Type: Application
    Filed: February 29, 2008
    Publication date: September 3, 2009
    Applicant: Microsoft Corporation
    Inventors: Sebastian Lange, Dinarte R. Morais, Victor Tan, Adam G. Poulos
  • Publication number: 20090199279
    Abstract: Techniques for migrating content from a first set of conditions to a second set of conditions are disclosed herein. In particular, a content migration certificate is utilized to enable content migration and set forth under what conditions content may be accessed after migration. The content migration certificate may, for example, be stored as a file in a removable storage unit or transferred online once an indication that conditions have changed is received. The change in conditions may involve a new device attempting to access the content file, a new user attempting to access the content, or any other similar conditions. Access to the information in the content migration certificate may be protected by encryption so that only devices and/or users meeting the conditions of the certificate are permitted to transfer content. By accessing the content migration certificate in the prescribed manner, migration of content is enabled in a controlled and easy process.
    Type: Application
    Filed: January 31, 2008
    Publication date: August 6, 2009
    Applicant: Microsoft Corporation
    Inventors: Sebastian Lange, Victor Tan, Adam G. Poulos
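The condition check a content migration certificate sets forth can be sketched as a simple gate; the certificate fields and identifiers below are hypothetical, and a real certificate would also be encrypted and signed as the abstract notes:

```python
def may_access(certificate, device_id, user_id):
    """Return True only if both the requesting device and user meet
    the conditions the content migration certificate sets forth."""
    return (device_id in certificate["allowed_devices"]
            and user_id in certificate["allowed_users"])
```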