Patents by Inventor Adam Poulos

Adam Poulos has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10705602
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: July 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
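The abstract above describes a menu whose available commands depend on the selected object's identity and current state, and which re-derives the menu after a command changes that state. A minimal sketch of that idea follows; the object type, states, and command names are hypothetical, not taken from the patent.

```python
# Hypothetical command table keyed by (object kind, object state).
COMMANDS = {
    ("video_player", "paused"): ["play", "rewind", "close"],
    ("video_player", "playing"): ["pause", "volume", "close"],
}

class SelectedObject:
    def __init__(self, kind, state):
        self.kind = kind
        self.state = state

def commands_for(obj):
    # The group of operable commands is determined by both the object's
    # identification and its current state.
    return COMMANDS[(obj.kind, obj.state)]

def apply_command(obj, command):
    # A command may move the object to a second state, which in turn
    # changes the group of commands presented next.
    transitions = {"play": "playing", "pause": "paused"}
    if command in transitions:
        obj.state = transitions[command]

obj = SelectedObject("video_player", "paused")
first_group = commands_for(obj)   # commands for the first state
apply_command(obj, "play")
second_group = commands_for(obj)  # a different group for the second state
```

The two groups differ precisely because the command transitioned the object between states, mirroring the first/second command-group language in the abstract.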
  • Patent number: 10497175
    Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: December 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
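One way a rendering engine might make a virtual monitor "appear integrated with the physical space" is to pin a quad to a detected wall plane. The sketch below computes such a quad's corners; the plane vectors and monitor dimensions are illustrative values, not anything the patent prescribes.

```python
def monitor_corners(center, right, up, width, height):
    """Four world-space corners of a wall-aligned virtual monitor quad,
    given the wall's in-plane right/up unit vectors (assumed orthonormal)."""
    hw, hh = width / 2.0, height / 2.0
    def corner(sr, su):
        return tuple(c + sr * r + su * u
                     for c, r, u in zip(center, right, up))
    return [corner(-hw, -hh), corner(hw, -hh), corner(hw, hh), corner(-hw, hh)]

# A 1.2 m x 0.7 m monitor centred 1.5 m up a wall 2 m in front of the user.
corners = monitor_corners(center=(0.0, 1.5, 2.0),
                          right=(1.0, 0.0, 0.0),
                          up=(0.0, 1.0, 0.0),
                          width=1.2, height=0.7)
```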
  • Patent number: 10126553
    Abstract: A head-mounted display device may display a holographic element with a portable control device. Image data of a physical environment including the control device may be received and used to generate a three dimensional model of at least a portion of the environment. Using position information of the control device, a holographic element is displayed with the control device. Using the position information, it is determined that the control device is within a predetermined proximity of either a holographic object or a physical object. Based on determining that the control device is within the predetermined proximity, the displayed holographic element is modified.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: November 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Poulos, Lorenz Henric Jentz, Cameron Brown, Anthony Ambrus, Arthur Tomlin, James Dack, Jeffrey Kohler, Eric Scott Rehmeyer, Edward Daniel Parker, Nicolas Denhez, Benjamin Boesel
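The core behavior in this abstract is a proximity test: when the tracked control device comes within a predetermined distance of a holographic or physical object, the element displayed with it is modified. A toy version, assuming a simple distance threshold and a hypothetical "highlighted" appearance:

```python
import math

def element_appearance(control_pos, nearby_objects, threshold_m=0.15):
    """Return a modified appearance for the holographic element displayed
    with the control device when it is within the predetermined proximity
    of any holographic or physical object; otherwise the default."""
    for obj_pos in nearby_objects:
        if math.dist(control_pos, obj_pos) <= threshold_m:
            return "highlighted"
    return "default"
```

In practice the positions would come from the head-mounted display's 3D model of the environment; the 0.15 m threshold here is an arbitrary placeholder.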
  • Publication number: 20180011534
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 11, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
  • Publication number: 20170363867
    Abstract: A head-mounted display device may display a holographic element with a portable control device. Image data of a physical environment including the control device may be received and used to generate a three dimensional model of at least a portion of the environment. Using position information of the control device, a holographic element is displayed with the control device. Using the position information, it is determined that the control device is within a predetermined proximity of either a holographic object or a physical object. Based on determining that the control device is within the predetermined proximity, the displayed holographic element is modified.
    Type: Application
    Filed: June 16, 2016
    Publication date: December 21, 2017
    Inventors: Adam Poulos, Lorenz Henric Jentz, Cameron Brown, Anthony Ambrus, Arthur Tomlin, James Dack, Jeffrey Kohler, Eric Scott Rehmeyer, Edward Daniel Parker, Nicolas Denhez, Benjamin Boesel
  • Patent number: 9791921
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: October 17, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
  • Patent number: 9778814
    Abstract: Disclosed is a method, implemented in a visualization device, to assist a user in placing 3D objects. In certain embodiments the method includes displaying, on a display area of the visualization device, to a user, various virtual 3D objects overlaid on a real-world view of a 3D physical space. The method can further include a holding function, in which a first object, of the various virtual 3D objects, is displayed on the display area so that it appears to move through the 3D physical space in response to input from the user, which may be merely a change in the user's gaze direction. A second object is then identified as a target object for a snap function, based on the detected gaze of the user, the snap function being an operation that causes the first object to move to a location on a surface of the target object.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anthony Ambrus, Marcus Ghaly, Adam Poulos, Michael Thomas, Jon Paulovich
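The hold-and-snap flow above has two pieces: identify a target object from the user's gaze, then move the held object onto the target's surface. A rough sketch under simplifying assumptions (objects reduced to center points, targets snapped to a flat top surface, a hypothetical 10° gaze cone):

```python
import math

def gaze_target(eye, gaze_dir, objects, max_angle_deg=10.0):
    """Pick the object whose center lies closest to the user's gaze ray."""
    best, best_angle = None, max_angle_deg
    g_len = math.sqrt(sum(g * g for g in gaze_dir))
    for name, center in objects.items():
        v = [c - e for c, e in zip(center, eye)]
        v_len = math.sqrt(sum(x * x for x in v))
        cos_a = sum(a * b for a, b in zip(v, gaze_dir)) / (v_len * g_len)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

def snap_to(target_center, target_top_y, held_pos):
    """Snap function: move the held object onto the target's upper surface."""
    return (target_center[0], target_top_y, target_center[2])

objects = {"table": (0.0, 0.0, 2.0), "shelf": (2.0, 0.0, 2.0)}
target = gaze_target(eye=(0.0, 0.0, 0.0), gaze_dir=(0.0, 0.0, 1.0),
                     objects=objects)
snapped = snap_to(objects[target], target_top_y=0.9, held_pos=(0.5, 1.2, 1.0))
```

A real system would raycast against mesh surfaces rather than compare angles to object centers, but the gaze-selects-target, snap-moves-object split matches the abstract's structure.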
  • Publication number: 20160379417
    Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
    Type: Application
    Filed: September 7, 2016
    Publication date: December 29, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
  • Patent number: 9497501
    Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
    Type: Grant
    Filed: December 6, 2011
    Date of Patent: November 15, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
  • Publication number: 20160179336
    Abstract: Disclosed is a method, implemented in a visualization device, to assist a user in placing 3D objects. In certain embodiments the method includes displaying, on a display area of the visualization device, to a user, various virtual 3D objects overlaid on a real-world view of a 3D physical space. The method can further include a holding function, in which a first object, of the various virtual 3D objects, is displayed on the display area so that it appears to move through the 3D physical space in response to input from the user, which may be merely a change in the user's gaze direction. A second object is then identified as a target object for a snap function, based on the detected gaze of the user, the snap function being an operation that causes the first object to move to a location on a surface of the target object.
    Type: Application
    Filed: January 30, 2015
    Publication date: June 23, 2016
    Inventors: Anthony Ambrus, Marcus Ghaly, Adam Poulos, Michael Thomas, Jon Paulovich
  • Patent number: 9239460
    Abstract: Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.
    Type: Grant
    Filed: May 10, 2013
    Date of Patent: January 19, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Roger Sebastian Sylvan, Adam Poulos, Michael Scavezze, Stephen Latta, Arthur Tomlin, Brian Mount, Aaron Krauss
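The geometric intuition behind this calibration is that at an alignment condition the eye, the displayed virtual marker, and the real-world target are collinear; two such sight lines intersect at the eye. The sketch below works in a 2D cross-section with made-up coordinates, purely to illustrate the idea:

```python
def sight_line(target, marker):
    """At an alignment condition the eye lies on the line running from the
    real-world target through the displayed virtual marker."""
    d = (marker[0] - target[0], marker[1] - target[1])
    return marker, d

def intersect_lines(p1, d1, p2, d2):
    """Intersection of the lines p1 + t*d1 and p2 + u*d2 (assumed non-parallel)."""
    det = d1[1] * d2[0] - d1[0] * d2[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (ry * d2[0] - rx * d2[1]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def estimate_eye(alignment_a, alignment_b):
    """Two alignment conditions give two sight lines; their intersection
    is the estimated eye location relative to the head-mounted display."""
    return intersect_lines(*sight_line(*alignment_a), *sight_line(*alignment_b))

# Both (target, marker) pairs below were constructed so the sight lines
# pass through an eye located at (0.03, 0.0).
eye = estimate_eye(((0.03, 2.0), (0.03, 0.5)),
                   ((1.03, 2.0), (0.53, 1.0)))
```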
  • Publication number: 20150312561
    Abstract: A right near-eye display displays a right-eye virtual object, and a left near-eye display displays a left-eye virtual object. A first texture derived from a first image of a scene as viewed from a first perspective is overlaid on the right-eye virtual object and a second texture derived from a second image of the scene as viewed from a second perspective is overlaid on the left-eye virtual object. The right-eye virtual object and the left-eye virtual object cooperatively create an appearance of a pseudo 3D video perceivable by a user viewing the right and left near-eye displays.
    Type: Application
    Filed: June 12, 2015
    Publication date: October 29, 2015
    Inventors: Jonathan Ross Hoof, Soren Hannibal Nielsen, Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
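The pseudo-3D effect described above relies on the right-eye and left-eye virtual objects showing the same scene point at horizontally offset positions. The usual pinhole relation, disparity = focal length x interpupillary distance / depth, captures why nearer points get larger offsets; the IPD and focal-length values below are typical placeholders, not figures from the application.

```python
def pixel_disparity(depth_m, ipd_m=0.063, focal_px=1000.0):
    """Horizontal offset, in pixels, between where the two per-eye textures
    place the same scene point; larger disparity reads as closer."""
    return focal_px * ipd_m / depth_m
```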
  • Publication number: 20140333665
    Abstract: Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.
    Type: Application
    Filed: May 10, 2013
    Publication date: November 13, 2014
    Inventors: Roger Sebastian Sylvan, Adam Poulos, Michael Scavezze, Stephen Latta, Arthur Tomlin, Brian Mount, Aaron Krauss
  • Publication number: 20140237366
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
    Type: Application
    Filed: February 19, 2013
    Publication date: August 21, 2014
    Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
  • Publication number: 20130311944
    Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
    Type: Application
    Filed: July 29, 2013
    Publication date: November 21, 2013
    Applicant: Microsoft Corporation
    Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
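As the abstract puts it, a handle "defines what actions a user may perform on the object" and affordances guide the user toward them. A minimal data-structure sketch, with hypothetical object and action names:

```python
class Handle:
    """An on-screen handle that restricts which actions a user may perform
    on its attached object, e.g. scrolling a navigation menu."""
    def __init__(self, obj_name, actions):
        self.obj_name = obj_name
        self.actions = set(actions)

    def can(self, action):
        # The handle gatekeeps interaction: only listed actions are allowed.
        return action in self.actions

    def affordance_hint(self):
        # An affordance guides the user toward the permitted interactions.
        return "Try: " + ", ".join(sorted(self.actions))

menu = Handle("navigation_menu", {"scroll_up", "scroll_down"})
```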
  • Patent number: 8499257
    Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
    Type: Grant
    Filed: February 9, 2010
    Date of Patent: July 30, 2013
    Assignee: Microsoft Corporation
    Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
  • Publication number: 20130141421
    Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
    Type: Application
    Filed: December 6, 2011
    Publication date: June 6, 2013
    Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
  • Patent number: 8457353
    Abstract: Gesture modifiers are provided for modifying and enhancing the control of a user-interface such as that provided by an operating system or application of a general computing system or multimedia console. Symbolic gesture movements are performed by a user in mid-air. A capture device generates depth images and a three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. Skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters. Detection of a viable gesture can trigger one or more user-interface actions or controls. Gesture modifiers are provided to modify the user-interface action triggered by detection of a gesture and/or to aid in the identification of gestures.
    Type: Grant
    Filed: May 18, 2010
    Date of Patent: June 4, 2013
    Assignee: Microsoft Corporation
    Inventors: Brendan Reville, Ali Vassigh, Christian Klein, Adam Poulos, Jordan Andersen, Zack Fleischman
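The key idea in this abstract is that a modifier changes which user-interface action a detected gesture triggers. A toy dispatch table illustrates it; the gesture, modifier, and action names are invented for the example:

```python
# Hypothetical base gesture-to-action mapping.
BASE_ACTIONS = {"swipe_left": "next_page", "push": "select"}

# A gesture modifier re-maps the action the same gesture triggers.
MODIFIED_ACTIONS = {("swipe_left", "off_hand_raised"): "jump_to_end"}

def resolve_action(gesture, modifier=None):
    """Return the UI action for a detected gesture, letting an active
    modifier override the base mapping."""
    if modifier and (gesture, modifier) in MODIFIED_ACTIONS:
        return MODIFIED_ACTIONS[(gesture, modifier)]
    return BASE_ACTIONS.get(gesture)
```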
  • Publication number: 20110289456
    Abstract: Gesture modifiers are provided for modifying and enhancing the control of a user-interface such as that provided by an operating system or application of a general computing system or multimedia console. Symbolic gesture movements are performed by a user in mid-air. A capture device generates depth images and a three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. Skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters. Detection of a viable gesture can trigger one or more user-interface actions or controls. Gesture modifiers are provided to modify the user-interface action triggered by detection of a gesture and/or to aid in the identification of gestures.
    Type: Application
    Filed: May 18, 2010
    Publication date: November 24, 2011
    Applicant: Microsoft Corporation
    Inventors: Brendan Reville, Ali Vassigh, Christian Klein, Adam Poulos, Jordan Andersen, Zack Fleischman
  • Publication number: 20110289455
    Abstract: Symbolic gestures and associated recognition technology are provided for controlling a system user-interface, such as that provided by the operating system of a general computing system or multimedia console. The symbolic gesture movements in mid-air are performed by a user with or without the aid of an input device. A capture device is provided to generate depth images for three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. The skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters that set forth parameters for determining when a target's movement indicates a viable gesture. When a gesture is detected, one or more pre-defined user-interface control actions are performed.
    Type: Application
    Filed: May 18, 2010
    Publication date: November 24, 2011
    Applicant: Microsoft Corporation
    Inventors: Brendan Reville, Ali Vassigh, Arjun Dayal, Christian Klein, Adam Poulos, Andy Mattingly
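A gesture filter, as the abstract describes it, "sets forth parameters for determining when a target's movement indicates a viable gesture." The sketch below checks one skeletal joint against distance and duration parameters; the thresholds, sample rate, and the idea of tracking only the wrist's x-coordinate are simplifications for illustration.

```python
def swipe_right_filter(wrist_xs, min_distance_m=0.3, max_duration_s=1.0,
                       sample_rate_hz=30.0):
    """Gesture filter: True when the tracked wrist x-positions (metres,
    one sample per frame) travel far enough right, quickly enough, to
    count as a viable right-swipe gesture."""
    duration_s = len(wrist_xs) / sample_rate_hz
    if duration_s > max_duration_s:
        return False  # too slow: movement does not indicate a viable gesture
    return wrist_xs[-1] - wrist_xs[0] >= min_distance_m
```

In a full pipeline, skeletal-mapping data for every tracked joint would be run through a bank of such filters each frame, and a passing filter would trigger its pre-defined user-interface control action.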