Patents by Inventor Michael Scavezze
Michael Scavezze has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10674142
Abstract: Sensor fusion is utilized in an electronic device such as a head mounted display (HMD) device that has a sensor package equipped with different sensors so that information that is supplemental to captured 2D images of objects or scenes in a real world environment may be utilized to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used for construction of a 3D model in, for example, virtual world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
Type: Grant
Filed: April 9, 2019
Date of Patent: June 2, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
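The fusion-and-rejection idea in this abstract can be illustrated with a toy sketch. Everything here is a hypothetical simplification (the function name, a 1D translation standing in for the stereo-pair transform, and the threshold value are all assumptions, not the patented method):

```python
def fuse_stereo_transform(stereo_estimate, pose_estimate,
                          stereo_weight=0.5, max_disagreement=0.5):
    """Blend a transform derived from an image stereo-pair with a
    supplemental sensor estimate (e.g. device world pose); return None
    to discard a scan whose two estimates disagree too strongly."""
    disagreement = abs(stereo_estimate - pose_estimate)
    if disagreement > max_disagreement:
        return None  # erroneous data: would corrupt the 3D reconstruction
    return stereo_weight * stereo_estimate + (1 - stereo_weight) * pose_estimate

# Consistent estimates are blended; wildly inconsistent ones are rejected.
assert fuse_stereo_transform(1.0, 9.0) is None
fused = fuse_stereo_transform(1.0, 1.2)
assert fused is not None and abs(fused - 1.1) < 1e-9
```

A real implementation would fuse full 6-DoF transforms and weight each sensor by its noise characteristics; the structure (blend when consistent, discard when not) is the point here.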
-
Patent number: 10613642
Abstract: Embodiments are disclosed herein that relate to tuning gesture recognition characteristics for a device configured to receive gesture-based user inputs. For example, one disclosed embodiment provides a head-mounted display device including a plurality of sensors, a display configured to present a user interface, a logic machine, and a storage machine that holds instructions executable by the logic machine to detect a gesture based upon information received from a first sensor of the plurality of sensors, perform an action in response to detecting the gesture, and determine whether the gesture matches an intended gesture input. The instructions are further executable to update a gesture parameter that defines the intended gesture input if it is determined that the gesture detected does not match the intended gesture input.
Type: Grant
Filed: March 12, 2014
Date of Patent: April 7, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Adam G. Poulos, John Bevis, Jeremy Lee, Daniel Joseph McCulloch, Nicholas Gervase Fajt
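The update loop described here (detect, compare against the intended input, adjust the gesture parameter on mismatch) can be sketched minimally. The class, the single-threshold "gesture parameter," and the step size are illustrative assumptions only:

```python
class GestureTuner:
    """Toy model of mismatch-driven tuning: one gesture parameter (a
    detection threshold on a scalar sensor value) is nudged whenever the
    detected gesture disagrees with the gesture the user intended."""

    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold
        self.step = step

    def detect(self, sensor_value):
        return sensor_value >= self.threshold

    def report_intent(self, sensor_value, intended):
        """Update the parameter if detection disagreed with intent."""
        detected = self.detect(sensor_value)
        if detected and not intended:
            self.threshold += self.step   # false positive: desensitize
        elif intended and not detected:
            self.threshold -= self.step   # missed gesture: sensitize
```

A device-scale version would tune many parameters per gesture (extent, speed, pose), but each follows this same corrective pattern.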
-
Patent number: 10510190
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
Type: Grant
Filed: September 1, 2017
Date of Patent: December 17, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
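The profile-query and context-driven mode-selection flow reads naturally as a small lookup pipeline. The profile contents, object names, context labels, and action strings below are all invented for illustration; only the shape of the flow comes from the abstract:

```python
# Hypothetical per-object profile: which interaction modes each identified
# physical object supports, keyed by the current mixed-reality context.
PROFILES = {
    "keyboard": {"productivity": "text_entry", "game": "instrument"},
}

def select_interaction_mode(object_id, context):
    """Query the object's profile and pick a mode for this context."""
    modes = PROFILES.get(object_id, {})
    return modes.get(context, "default")

def interpret_input(object_id, context, user_input):
    """Map the same raw user input to a different virtual action
    depending on the selected interaction mode."""
    mode = select_interaction_mode(object_id, context)
    actions = {"text_entry": f"type:{user_input}",
               "instrument": f"play:{user_input}"}
    return actions.get(mode, "none")
```

For example, a tap directed at the same physical keyboard types a character in a productivity context but plays a note in a game context.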
-
Publication number: 20190379885
Abstract: Sensor fusion is utilized in an electronic device such as a head mounted display (HMD) device that has a sensor package equipped with different sensors so that information that is supplemental to captured 2D images of objects or scenes in a real world environment may be utilized to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used for construction of a 3D model in, for example, virtual world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
Type: Application
Filed: April 9, 2019
Publication date: December 12, 2019
Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
-
Patent number: 10482663
Abstract: A method includes determining a current pose of an augmented reality device in a physical space, and visually presenting, via a display of the augmented reality device, an augmented-reality view of the physical space including a predetermined pose cue indicating a predetermined pose in the physical space and a current pose cue indicating the current pose in the physical space.
Type: Grant
Filed: October 27, 2016
Date of Patent: November 19, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marcus Ghaly, Andrew Jackson, Jeff Smith, Michael Scavezze, Ronald Amador-Leon, Cameron Brown, Charlene Jeune
-
Patent number: 10257505
Abstract: Sensor fusion is utilized in an electronic device such as a head mounted display (HMD) device that has a sensor package equipped with different sensors so that information that is supplemental to captured 2D images of objects or scenes in a real world environment may be utilized to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used for construction of a 3D model in, for example, virtual world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
Type: Grant
Filed: February 8, 2016
Date of Patent: April 9, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
-
Publication number: 20190102953
Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
Type: Application
Filed: November 16, 2018
Publication date: April 4, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
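The dimensional comparison described here (object size vs. field-of-view size, then modify the object to fit) can be sketched as a uniform downscale. The function name and the treatment of the FOV as a simple 3D box are assumptions; a real system would account for viewing distance and perspective:

```python
def fit_virtual_object(obj_dims, fov_dims):
    """Compare a 3D virtual object's dimensions to the display FOV's
    dimensions and uniformly scale the object down so it fits; objects
    that already fit are left unmodified (scale capped at 1.0)."""
    scale = min(min(f / o for f, o in zip(fov_dims, obj_dims)), 1.0)
    return tuple(d * scale for d in obj_dims)

# An oversized object is shrunk to fit; a small one is untouched.
assert fit_virtual_object((2.0, 1.0, 1.0), (1.0, 1.0, 1.0)) == (1.0, 0.5, 0.5)
assert fit_virtual_object((0.5, 0.5, 0.5), (1.0, 1.0, 1.0)) == (0.5, 0.5, 0.5)
```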
-
Patent number: 10176641
Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
Type: Grant
Filed: October 20, 2016
Date of Patent: January 8, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
-
Patent number: 10007352
Abstract: Disclosed are techniques for performing undo operations on holographic objects in an immersive 3D visual environment. A display system allows the user to undo a given user operation performed on a particular selected holographic object without affecting any other holographic objects, based on a user's gaze and/or other user input. The technique can be implemented in conjunction with a scrollable visual “timeline” in which multiple past states of the display environment are displayed to the user and are selectable by the user as the target state of the revert operation. Also disclosed is a technique for partially undoing a single continuous user action in a holographic display system.
Type: Grant
Filed: August 21, 2015
Date of Patent: June 26, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Gabriel Poulos, Johanna Dy Lynn, Michael Scavezze, Daniel Joseph McCulloch
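The key property claimed here (reverting one gaze-selected holographic object without affecting any other) implies per-object history rather than a single global undo stack. A minimal sketch of that data structure, with illustrative names and string states standing in for holographic object state:

```python
from collections import defaultdict

class PerObjectUndo:
    """Each holographic object keeps its own history, so undoing the
    gaze-selected object leaves every other object untouched."""

    def __init__(self):
        self.state = {}                    # object id -> current state
        self.history = defaultdict(list)   # object id -> prior states

    def apply(self, obj, new_state):
        """Record the old state, then apply the new one."""
        self.history[obj].append(self.state.get(obj))
        self.state[obj] = new_state

    def undo(self, obj):
        """Revert only this object to its most recent prior state."""
        if self.history[obj]:
            self.state[obj] = self.history[obj].pop()
```

Because each object's list holds every prior state, the same structure could back the scrollable "timeline" the abstract mentions: the user would pick any entry in `history[obj]` as the revert target.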
-
Publication number: 20180012412
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
Type: Application
Filed: September 1, 2017
Publication date: January 11, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
-
Patent number: 9804753
Abstract: Various embodiments relating to selection of a user interface object displayed on a graphical user interface based on eye gaze are disclosed. In one embodiment, a selection input may be received. A plurality of eye gaze samples at different times within a time window may be evaluated. The time window may be selected based on a time at which the selection input is detected. A user interface object may be selected based on the plurality of eye gaze samples.
Type: Grant
Filed: March 20, 2014
Date of Patent: October 31, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Scott Ramsby, Tony Ambrus, Michael Scavezze, Abby Lin Lee, Brian Mount, Ian Douglas McIntyre, Aaron Mackay Burns, Russ McMackin, Katelyn Elizabeth Doran, Gerhard Schneider, Quentin Simon Charles Miller
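The windowed evaluation this abstract describes (collect gaze samples, restrict to a window anchored at the selection input, pick a target from the samples) can be sketched simply. The window length and the majority-vote rule are assumed details, not the patented method:

```python
from collections import Counter

def select_target(gaze_samples, selection_time, window=0.3):
    """gaze_samples: list of (timestamp, object_id) pairs. Evaluate only
    the samples inside a window ending at the selection input, and pick
    the object gazed at most often within it; None if no samples fall
    in the window."""
    in_window = [obj for t, obj in gaze_samples
                 if selection_time - window <= t <= selection_time]
    if not in_window:
        return None
    return Counter(in_window).most_common(1)[0][0]

# The stale sample at t=0.0 is ignored; recent samples decide the target.
samples = [(0.0, "a"), (0.8, "b"), (0.9, "b"), (1.0, "a")]
assert select_target(samples, 1.0) == "b"
```

Anchoring the window at the selection input compensates for the lag between where the eyes look and when the hand (or voice) confirms the selection.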
-
Publication number: 20170287221
Abstract: A method includes determining a current pose of an augmented reality device in a physical space, and visually presenting, via a display of the augmented reality device, an augmented-reality view of the physical space including a predetermined pose cue indicating a predetermined pose in the physical space and a current pose cue indicating the current pose in the physical space.
Type: Application
Filed: October 27, 2016
Publication date: October 5, 2017
Inventors: Marcus Ghaly, Andrew Jackson, Jeff Smith, Michael Scavezze, Ronald Amador-Leon, Cameron Brown, Charlene Jeune
-
Publication number: 20170270715
Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
Type: Application
Filed: October 20, 2016
Publication date: September 21, 2017
Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
-
Patent number: 9761057
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
Type: Grant
Filed: November 21, 2016
Date of Patent: September 12, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
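The core check behind this abstract (is an object inside the user's field of view, and if not, which way does its positional indicator point) reduces to a bearing comparison. A flat 2D simplification with assumed names and an assumed 60-degree FOV:

```python
def fov_indicator(view_dir_deg, object_dir_deg, fov_deg=60.0):
    """If an object's bearing falls outside the user's field of view,
    return which way to turn toward it; None means it is visible.
    Angles are compass-style bearings in degrees; the wraparound-safe
    delta keeps the result in (-180, 180]."""
    delta = (object_dir_deg - view_dir_deg + 180.0) % 360.0 - 180.0
    if abs(delta) <= fov_deg / 2:
        return None
    return "right" if delta > 0 else "left"

assert fov_indicator(0.0, 10.0) is None          # within the FOV
assert fov_indicator(0.0, 90.0) == "right"       # off to the right
assert fov_indicator(0.0, -120.0) == "left"      # behind, to the left
```

A full system would extend this to elevation and render the cue (an arrow, a glow at the display edge, a sound) rather than a string.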
-
Patent number: 9754420
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
Type: Grant
Filed: September 12, 2016
Date of Patent: September 5, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
-
Publication number: 20170230641
Abstract: Sensor fusion is utilized in an electronic device such as a head mounted display (HMD) device that has a sensor package equipped with different sensors so that information that is supplemental to captured 2D images of objects or scenes in a real world environment may be utilized to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used for construction of a 3D model in, for example, virtual world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
Type: Application
Filed: February 8, 2016
Publication date: August 10, 2017
Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
-
Patent number: 9652892
Abstract: Various embodiments relating to creating a virtual shadow of an object in an image displayed with a see-through display are provided. In one embodiment, an image of a virtual object may be displayed with the see-through display. The virtual object may appear in front of a real-world background when viewed through the see-through display. A relative brightness of the real-world background around a virtual shadow of the virtual object may be increased when viewed through the see-through display. The virtual shadow may appear to result from a spotlight that is fixed relative to a vantage point of the see-through display.
Type: Grant
Filed: October 29, 2013
Date of Patent: May 16, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur Tomlin, Tony Ambrus, Ron Amador-Leon, Nicholas Gervase Fajt, Ryan Hastings, Matthew G. Kaplan, Michael Scavezze, Daniel McCulloch
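The trick in this abstract is worth unpacking: an additive see-through display cannot darken the real world, so instead of drawing a dark shadow it brightens the background *around* the shadow region, and the untouched region reads as darker by contrast. A 1D grayscale sketch of that inversion, with assumed names and an assumed boost factor:

```python
def apply_virtual_shadow(background, shadow_mask, boost=1.5):
    """Brighten background samples outside the shadow region and leave
    samples inside it untouched, so the shadow appears by relative
    contrast. Brightness values are clamped to 1.0."""
    return [b if in_shadow else min(b * boost, 1.0)
            for b, in_shadow in zip(background, shadow_mask)]

bg = [0.4, 0.4, 0.4]
mask = [False, True, False]   # middle sample lies in the virtual shadow
out = apply_virtual_shadow(bg, mask)
assert out[1] == 0.4                  # shadow region untouched
assert out[0] > 0.4 and out[2] > 0.4  # surroundings brightened
```

A renderer would apply this per pixel with a soft falloff at the shadow edge, with the shadow region projected from the spotlight fixed to the display's vantage point.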
-
Publication number: 20170069143
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
Type: Application
Filed: November 21, 2016
Publication date: March 9, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
-
Publication number: 20170052595
Abstract: Disclosed are techniques for performing undo operations on holographic objects in an immersive 3D visual environment. A display system allows the user to undo a given user operation performed on a particular selected holographic object without affecting any other holographic objects, based on a user's gaze and/or other user input. The technique can be implemented in conjunction with a scrollable visual “timeline” in which multiple past states of the display environment are displayed to the user and are selectable by the user as the target state of the revert operation. Also disclosed is a technique for partially undoing a single continuous user action in a holographic display system.
Type: Application
Filed: August 21, 2015
Publication date: February 23, 2017
Inventors: Adam Gabriel Poulos, Johanna Dy Lynn, Michael Scavezze, Daniel Joseph McCulloch
-
Publication number: 20170004655
Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
Type: Application
Filed: September 12, 2016
Publication date: January 5, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger