Patents by Inventor Adam Poulos
Adam Poulos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10705602
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
Type: Grant
Filed: September 25, 2017
Date of Patent: July 7, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
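The abstract above describes a command menu whose contents depend on the selected object's current state, with commands that transition the object to a new state and thereby a new command group. A minimal Python sketch of that idea; the specific states and command names here are hypothetical illustrations, not taken from the patent:

```python
class SelectableObject:
    """A selected object whose operable commands depend on its state."""

    # Command groups keyed by object state (hypothetical examples).
    COMMANDS_BY_STATE = {
        "stopped": ["play", "move", "resize"],
        "playing": ["pause", "stop", "volume"],
    }

    # State transitions triggered by commands.
    TRANSITIONS = {
        ("stopped", "play"): "playing",
        ("playing", "pause"): "stopped",
        ("playing", "stop"): "stopped",
    }

    def __init__(self, state="stopped"):
        self.state = state

    def available_commands(self):
        """The group of commands currently operable in this state."""
        return self.COMMANDS_BY_STATE[self.state]

    def apply(self, command):
        """Apply a command, move to the second state, and return the
        second (different) group of commands to present to the user."""
        if command not in self.available_commands():
            raise ValueError(f"{command!r} is not operable in state {self.state!r}")
        self.state = self.TRANSITIONS.get((self.state, command), self.state)
        return self.available_commands()


obj = SelectableObject()
first_group = obj.available_commands()   # presented after selection
second_group = obj.apply("play")         # presented after the state change
```

The lookup-table structure makes the key property of the claim visible: the command group is a pure function of the object's state.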
-
Patent number: 10497175
Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
Type: Grant
Filed: September 7, 2016
Date of Patent: December 3, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
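For a virtual monitor to "appear to be integrated with the physical space," it must be rendered from a world-fixed anchor rather than a head-fixed one, which means re-expressing the anchor in head-relative coordinates each frame. A simplified 2D top-down sketch of that transform, assuming a pose given as position plus yaw (the patent itself does not specify this representation):

```python
import math

def world_to_head(point, head_pos, head_yaw):
    """Express a world-anchored point in head-relative coordinates.

    2D top-down sketch: head_yaw is in radians, counterclockwise.
    Rendering the monitor at this head-relative position each frame
    keeps it visually fixed in the physical space as the wearer moves.
    """
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * dx - s * dy, s * dx + c * dy)


# A virtual monitor anchored 2 m ahead of the room origin at (2, 0):
straight_ahead = world_to_head((2.0, 0.0), (0.0, 0.0), 0.0)
```

A real engine would use a full 6-DoF pose and 4×4 matrices, but the principle is the same: the anchor is constant in world space and only its head-relative rendering changes.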
-
Patent number: 10126553
Abstract: A head-mounted display device may display a holographic element with a portable control device. Image data of a physical environment including the control device may be received and used to generate a three dimensional model of at least a portion of the environment. Using position information of the control device, a holographic element is displayed with the control device. Using the position information, it is determined that the control device is within a predetermined proximity of either a holographic object or a physical object. Based on determining that the control device is within the predetermined proximity, the displayed holographic element is modified.
Type: Grant
Filed: June 16, 2016
Date of Patent: November 13, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Adam Poulos, Lorenz Henric Jentz, Cameron Brown, Anthony Ambrus, Arthur Tomlin, James Dack, Jeffrey Kohler, Eric Scott Rehmeyer, Edward Daniel Parker, Nicolas Denhez, Benjamin Boesel
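The core check in this abstract — modify the displayed holographic element once the control device comes within a predetermined proximity of a tracked object — reduces to a distance threshold. A minimal sketch; the 0.3 m threshold and the style names are invented for illustration:

```python
import math

def within_proximity(control_pos, object_pos, threshold):
    """True when the control device is within the predetermined
    proximity of a holographic or physical object (positions in metres)."""
    return math.dist(control_pos, object_pos) <= threshold

def holographic_element_style(control_pos, nearby_objects, threshold=0.3):
    """Return a modified style for the holographic element shown with the
    control device when it is close to any tracked object (hypothetical
    styles standing in for whatever modification the display applies)."""
    if any(within_proximity(control_pos, p, threshold) for p in nearby_objects):
        return "highlighted"
    return "default"
```

In practice the object positions would come from the 3D environment model the abstract describes, rebuilt from the incoming image data.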
-
Publication number: 20180011534
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
Type: Application
Filed: September 25, 2017
Publication date: January 11, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
-
Publication number: 20170363867
Abstract: A head-mounted display device may display a holographic element with a portable control device. Image data of a physical environment including the control device may be received and used to generate a three dimensional model of at least a portion of the environment. Using position information of the control device, a holographic element is displayed with the control device. Using the position information, it is determined that the control device is within a predetermined proximity of either a holographic object or a physical object. Based on determining that the control device is within the predetermined proximity, the displayed holographic element is modified.
Type: Application
Filed: June 16, 2016
Publication date: December 21, 2017
Inventors: Adam Poulos, Lorenz Henric Jentz, Cameron Brown, Anthony Ambrus, Arthur Tomlin, James Dack, Jeffrey Kohler, Eric Scott Rehmeyer, Edward Daniel Parker, Nicolas Denhez, Benjamin Boesel
-
Patent number: 9791921
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
Type: Grant
Filed: February 19, 2013
Date of Patent: October 17, 2017
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
-
Patent number: 9778814
Abstract: Disclosed is a method, implemented in a visualization device, to assist a user in placing 3D objects. In certain embodiments the method includes displaying, on a display area of the visualization device, to a user, various virtual 3D objects overlaid on a real-world view of a 3D physical space. The method can further include a holding function, in which a first object, of the various virtual 3D objects, is displayed on the display area so that it appears to move through the 3D physical space in response to input from the user, which may be merely a change in the user's gaze direction. A second object is then identified as a target object for a snap function, based on the detected gaze of the user, the snap function being an operation that causes the first object to move to a location on a surface of the target object.
Type: Grant
Filed: January 30, 2015
Date of Patent: October 3, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anthony Ambrus, Marcus Ghaly, Adam Poulos, Michael Thomas, Jon Paulovich
-
Publication number: 20160379417
Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
Type: Application
Filed: September 7, 2016
Publication date: December 29, 2016
Applicant: Microsoft Technology Licensing, LLC
Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
-
Patent number: 9497501
Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
Type: Grant
Filed: December 6, 2011
Date of Patent: November 15, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
-
Publication number: 20160179336
Abstract: Disclosed is a method, implemented in a visualization device, to assist a user in placing 3D objects. In certain embodiments the method includes displaying, on a display area of the visualization device, to a user, various virtual 3D objects overlaid on a real-world view of a 3D physical space. The method can further include a holding function, in which a first object, of the various virtual 3D objects, is displayed on the display area so that it appears to move through the 3D physical space in response to input from the user, which may be merely a change in the user's gaze direction. A second object is then identified as a target object for a snap function, based on the detected gaze of the user, the snap function being an operation that causes the first object to move to a location on a surface of the target object.
Type: Application
Filed: January 30, 2015
Publication date: June 23, 2016
Inventors: Anthony Ambrus, Marcus Ghaly, Adam Poulos, Michael Thomas, Jon Paulovich
-
Patent number: 9239460
Abstract: Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.
Type: Grant
Filed: May 10, 2013
Date of Patent: January 19, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Roger Sebastian Sylvan, Adam Poulos, Michael Scavezze, Stephen Latta, Arthur Tomlin, Brian Mount, Aaron Krauss
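The geometric intuition behind this calibration: at the alignment condition the eye, the on-display virtual marker, and the real-world target are collinear, so the eye location can be estimated by extending the target-to-marker line past the display. A minimal sketch of that back-projection plus a blending update, with `eye_depth` (display-to-eye distance) and the blend `weight` as invented parameters:

```python
import math

def estimate_eye_location(target, marker, eye_depth):
    """Estimate the eye location in head-mounted-display coordinates.

    At alignment the eye, the marker on the display, and the real-world
    target lie on one line, so extend the unit vector from target to
    marker past the marker by eye_depth (hypothetical display-to-eye
    distance).
    """
    v = [m - t for t, m in zip(target, marker)]
    n = math.sqrt(sum(c * c for c in v))
    u = [c / n for c in v]
    return tuple(m + eye_depth * c for m, c in zip(marker, u))

def calibrate(predetermined, estimated, weight=1.0):
    """Move the stored predetermined eye location toward the new
    estimate; weight=1.0 replaces it, smaller weights blend gradually."""
    return tuple(p + weight * (e - p) for p, e in zip(predetermined, estimated))
```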
-
Publication number: 20150312561
Abstract: A right near-eye display displays a right-eye virtual object, and a left near-eye display displays a left-eye virtual object. A first texture derived from a first image of a scene as viewed from a first perspective is overlaid on the right-eye virtual object and a second texture derived from a second image of the scene as viewed from a second perspective is overlaid on the left-eye virtual object. The right-eye virtual object and the left-eye virtual object cooperatively create an appearance of a pseudo 3D video perceivable by a user viewing the right and left near-eye displays.
Type: Application
Filed: June 12, 2015
Publication date: October 29, 2015
Inventors: Jonathan Ross Hoof, Soren Hannibal Nielsen, Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
-
Publication number: 20140333665
Abstract: Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.
Type: Application
Filed: May 10, 2013
Publication date: November 13, 2014
Inventors: Roger Sebastian Sylvan, Adam Poulos, Michael Scavezze, Stephen Latta, Arthur Tomlin, Brian Mount, Aaron Krauss
-
Publication number: 20140237366
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
Type: Application
Filed: February 19, 2013
Publication date: August 21, 2014
Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
-
Publication number: 20130311944
Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
Type: Application
Filed: July 29, 2013
Publication date: November 21, 2013
Applicant: MICROSOFT CORPORATION
Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
-
Patent number: 8499257
Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
Type: Grant
Filed: February 9, 2010
Date of Patent: July 30, 2013
Assignee: Microsoft Corporation
Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
-
Publication number: 20130141421
Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
Type: Application
Filed: December 6, 2011
Publication date: June 6, 2013
Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
-
Patent number: 8457353
Abstract: Gesture modifiers are provided for modifying and enhancing the control of a user-interface such as that provided by an operating system or application of a general computing system or multimedia console. Symbolic gesture movements are performed by a user in mid-air. A capture device generates depth images and a three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. Skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters. Detection of a viable gesture can trigger one or more user-interface actions or controls. Gesture modifiers are provided to modify the user-interface action triggered by detection of a gesture and/or to aid in the identification of gestures.
Type: Grant
Filed: May 18, 2010
Date of Patent: June 4, 2013
Assignee: Microsoft Corporation
Inventors: Brendan Reville, Ali Vassigh, Christian Klein, Adam Poulos, Jordan Andersen, Zack Fleischman
-
Publication number: 20110289456
Abstract: Gesture modifiers are provided for modifying and enhancing the control of a user-interface such as that provided by an operating system or application of a general computing system or multimedia console. Symbolic gesture movements are performed by a user in mid-air. A capture device generates depth images and a three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. Skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters. Detection of a viable gesture can trigger one or more user-interface actions or controls. Gesture modifiers are provided to modify the user-interface action triggered by detection of a gesture and/or to aid in the identification of gestures.
Type: Application
Filed: May 18, 2010
Publication date: November 24, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Brendan Reville, Ali Vassigh, Christian Klein, Adam Poulos, Jordan Andersen, Zack Fleischman
-
Publication number: 20110289455
Abstract: Symbolic gestures and associated recognition technology are provided for controlling a system user-interface, such as that provided by the operating system of a general computing system or multimedia console. The symbolic gesture movements in mid-air are performed by a user with or without the aid of an input device. A capture device is provided to generate depth images for three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. The skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters that set forth parameters for determining when a target's movement indicates a viable gesture. When a gesture is detected, one or more pre-defined user-interface control actions are performed.
Type: Application
Filed: May 18, 2010
Publication date: November 24, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Brendan Reville, Ali Vassigh, Arjun Dayal, Christian Klein, Adam Poulos, Andy Mattingly
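The gesture filters described in this family of abstracts can be pictured as small parameter sets checked against tracked joint positions over time. A hypothetical horizontal-swipe filter in Python; the distance and time thresholds, and the reduction of skeletal data to a single hand coordinate, are invented for illustration:

```python
class GestureFilter:
    """A gesture filter: parameters that set forth when a tracked joint's
    movement indicates a viable gesture (here, a horizontal swipe)."""

    def __init__(self, min_distance=0.4, max_seconds=1.0):
        self.min_distance = min_distance   # metres the hand must travel
        self.max_seconds = max_seconds     # within this time window

    def matches(self, samples):
        """samples: chronological (time_s, x_m) positions of the hand joint,
        as extracted from skeletal mapping data."""
        if len(samples) < 2:
            return False
        (t0, x0), (t1, x1) = samples[0], samples[-1]
        return (t1 - t0) <= self.max_seconds and abs(x1 - x0) >= self.min_distance


swipe = GestureFilter()
detected = swipe.matches([(0.0, 0.1), (0.2, 0.3), (0.4, 0.6)])  # 0.5 m in 0.4 s
if detected:
    pass  # trigger the pre-defined user-interface control action here
```

A production recognizer would evaluate many such filters per frame against the full skeleton; the filter-with-parameters structure is the piece the abstract names.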