Patents by Inventor Michael Scavezze

Michael Scavezze has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10674142
    Abstract: Sensor fusion is utilized in an electronic device, such as a head-mounted display (HMD) device, that has a sensor package equipped with different sensors. The fusion allows information supplemental to the captured 2D images of objects or scenes in a real-world environment to be used to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used to construct a 3D model in, for example, virtual-world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: June 2, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
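    The sensor-fusion idea above amounts to checking a vision-derived stereo-pair transform against what the HMD's other sensors (world location, world rotation, depth) predict, and discarding the visual estimate when the two disagree. The Python sketch below is a minimal illustration of that gating step only; the function names, pose representation, and thresholds are assumptions, not the patented method.
    ```python
    # Illustrative sketch only: gating a vision-derived stereo-pair transform with
    # pose data from other HMD sensors. Names and thresholds are hypothetical.
    import numpy as np

    def rotation_angle_deg(R_a: np.ndarray, R_b: np.ndarray) -> float:
        """Angle (degrees) of the relative rotation between two 3x3 rotation matrices."""
        R_rel = R_a.T @ R_b
        cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        return float(np.degrees(np.arccos(cos_theta)))

    def fuse_stereo_transform(visual_R, visual_t, sensor_R, sensor_t,
                              max_angle_deg=5.0, max_translation_m=0.05):
        """Accept the vision-derived transform only if it agrees with the
        sensor-predicted transform; otherwise discard it as erroneous and
        fall back to the sensor prediction."""
        angle_err = rotation_angle_deg(visual_R, sensor_R)
        trans_err = float(np.linalg.norm(visual_t - sensor_t))
        if angle_err <= max_angle_deg and trans_err <= max_translation_m:
            return visual_R, visual_t, "accepted"
        return sensor_R, sensor_t, "discarded_visual_estimate"

    if __name__ == "__main__":
        # Sensor-predicted pose change between the two frames of a stereo pair.
        sensor_R, sensor_t = np.eye(3), np.array([0.10, 0.0, 0.0])
        # A wildly inconsistent vision estimate (e.g., from a failed feature match).
        bad_R, bad_t = np.eye(3), np.array([0.90, 0.0, 0.0])
        print(fuse_stereo_transform(bad_R, bad_t, sensor_R, sensor_t)[2])
    ```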
  • Patent number: 10613642
    Abstract: Embodiments are disclosed herein that relate to tuning gesture recognition characteristics for a device configured to receive gesture-based user inputs. For example, one disclosed embodiment provides a head-mounted display device including a plurality of sensors, a display configured to present a user interface, a logic machine, and a storage machine that holds instructions executable by the logic machine to detect a gesture based upon information received from a first sensor of the plurality of sensors, perform an action in response to detecting the gesture, and determine whether the gesture matches an intended gesture input. The instructions are further executable to update a gesture parameter that defines the intended gesture input if it is determined that the gesture detected does not match the intended gesture input.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: April 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Adam G. Poulos, John Bevis, Jeremy Lee, Daniel Joseph McCulloch, Nicholas Gervase Fajt
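    The loop described above is: detect a gesture from sensor data, perform the action, then update a gesture parameter when the detection turns out not to match the user's intended input. Below is a minimal sketch of such a feedback loop; the pinch-strength parameter, step size, and update rule are hypothetical, not taken from the patent.
    ```python
    # Illustrative sketch only: adjusting a gesture-detection parameter when a
    # detected gesture does not match what the user intended. All values are assumptions.
    class GestureTuner:
        def __init__(self, pinch_threshold: float = 0.5, step: float = 0.05):
            # Hypothetical parameter: how strongly fingers must close to count as a "pinch".
            self.pinch_threshold = pinch_threshold
            self.step = step

        def detect(self, pinch_strength: float) -> bool:
            return pinch_strength >= self.pinch_threshold

        def report_outcome(self, detected: bool, intended: bool) -> None:
            """Update the parameter based on whether the detection matched intent."""
            if detected and not intended:
                # False positive: require a stronger pinch next time.
                self.pinch_threshold = min(1.0, self.pinch_threshold + self.step)
            elif intended and not detected:
                # Missed gesture: be more permissive next time.
                self.pinch_threshold = max(0.0, self.pinch_threshold - self.step)

    tuner = GestureTuner()
    fired = tuner.detect(pinch_strength=0.55)             # gesture detected, action performed
    tuner.report_outcome(detected=fired, intended=False)  # user did not mean it
    print(round(tuner.pinch_threshold, 2))                # threshold tightened to 0.55
    ```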
  • Patent number: 10510190
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: December 17, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
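    The pipeline above is: identify the physical object, determine an interaction context from the mixed-reality environment, query the object's profile for its interaction modes, programmatically pick a mode from the context, then interpret user input as a virtual action under that mode. A minimal sketch of that flow follows; the example object, profile contents, contexts, and mode-selection rule are invented for illustration.
    ```python
    # Illustrative sketch only: selecting an interaction mode for a recognized
    # physical object from its profile, given the current mixed-reality context,
    # and mapping a user input to a virtual action. All names are hypothetical.
    OBJECT_PROFILES = {
        # Hypothetical profile for a recognized physical piano.
        "piano": {
            "modes": {"quiet_room": "play_audio", "noisy_room": "visual_only"},
            "default_mode": "visual_only",
        },
    }

    def select_interaction_mode(object_id: str, context: str) -> str:
        """Programmatically pick an interaction mode from the object's profile."""
        profile = OBJECT_PROFILES[object_id]
        return profile["modes"].get(context, profile["default_mode"])

    def interpret_input(mode: str, user_input: str) -> str:
        """Map a user input to a virtual action under the selected mode."""
        if user_input != "tap":
            return "no_action"
        return "highlight_key_and_play_note" if mode == "play_audio" else "highlight_key"

    mode = select_interaction_mode("piano", context="quiet_room")
    print(interpret_input(mode, user_input="tap"))  # highlight_key_and_play_note
    ```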
  • Publication number: 20190379885
    Abstract: Sensor fusion is utilized in an electronic device, such as a head-mounted display (HMD) device, that has a sensor package equipped with different sensors. The fusion allows information supplemental to the captured 2D images of objects or scenes in a real-world environment to be used to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used to construct a 3D model in, for example, virtual-world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
    Type: Application
    Filed: April 9, 2019
    Publication date: December 12, 2019
    Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
  • Patent number: 10482663
    Abstract: A method includes determining a current pose of an augmented reality device in a physical space, and visually presenting, via a display of the augmented reality device, an augmented-reality view of the physical space including a predetermined pose cue indicating a predetermined pose in the physical space and a current pose cue indicating the current pose in the physical space.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: November 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marcus Ghaly, Andrew Jackson, Jeff Smith, Michael Scavezze, Ronald Amador-Leon, Cameron Brown, Charlene Jeune
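    Showing both a predetermined pose cue and a current pose cue implies continually comparing the device's current pose against the target pose so the user can see how far off they are. The sketch below shows one way such a comparison could be computed; the pose representation (position plus yaw) and the tolerances are assumptions.
    ```python
    # Illustrative sketch only: comparing the device's current pose to a
    # predetermined target pose so both cues can be drawn and alignment reported.
    import numpy as np

    def pose_alignment(current_pos, current_yaw_deg, target_pos, target_yaw_deg,
                       pos_tol_m=0.10, yaw_tol_deg=5.0):
        """Return the offsets to visualize and whether the target pose is reached."""
        pos_err = float(np.linalg.norm(np.asarray(target_pos) - np.asarray(current_pos)))
        yaw_err = abs((target_yaw_deg - current_yaw_deg + 180.0) % 360.0 - 180.0)
        return {
            "position_error_m": pos_err,
            "yaw_error_deg": yaw_err,
            "aligned": pos_err <= pos_tol_m and yaw_err <= yaw_tol_deg,
        }

    print(pose_alignment(current_pos=[0.0, 1.6, 0.0], current_yaw_deg=92.0,
                         target_pos=[0.05, 1.6, 0.0], target_yaw_deg=90.0))
    ```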
  • Patent number: 10257505
    Abstract: Sensor fusion is utilized in an electronic device, such as a head-mounted display (HMD) device, that has a sensor package equipped with different sensors. The fusion allows information supplemental to the captured 2D images of objects or scenes in a real-world environment to be used to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used to construct a 3D model in, for example, virtual-world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
    Type: Grant
    Filed: February 8, 2016
    Date of Patent: April 9, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
  • Publication number: 20190102953
    Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
    Type: Application
    Filed: November 16, 2018
    Publication date: April 4, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
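    The method above compares the virtual object's dimensional information to that of the display's field of view and modifies the object so it fits. The sketch below works the comparison through as a simple scale-to-fit calculation at a fixed placement distance; the FOV angles, distance, and margin are assumptions, not figures from the application.
    ```python
    # Illustrative sketch only: comparing a virtual object's dimensions to the
    # dimensions of the display's field of view at a chosen placement distance
    # and scaling the object down if it would not fit. All numbers are assumptions.
    import math

    def fov_extent_m(fov_deg: float, distance_m: float) -> float:
        """Linear extent covered by an angular field of view at a given distance."""
        return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

    def fit_object_to_fov(obj_width_m, obj_height_m, h_fov_deg=30.0, v_fov_deg=17.5,
                          distance_m=2.0, margin=0.9):
        """Return a uniform scale factor (<= 1.0) so the object fits in view."""
        max_w = fov_extent_m(h_fov_deg, distance_m) * margin
        max_h = fov_extent_m(v_fov_deg, distance_m) * margin
        return min(1.0, max_w / obj_width_m, max_h / obj_height_m)

    # A 3 m-wide virtual couch viewed on a narrow-FOV headset gets scaled down.
    print(round(fit_object_to_fov(obj_width_m=3.0, obj_height_m=1.0), 3))
    ```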
  • Patent number: 10176641
    Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: January 8, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
  • Patent number: 10007352
    Abstract: Disclosed are techniques for performing undo operations on holographic objects in an immersive 3D visual environment. A display system allows the user to undo a given user operation performed on a particular selected holographic object without affecting any other holographic objects, based on a user's gaze and/or other user input. The technique can be implemented in conjunction with a scrollable visual “timeline” in which multiple past states of the display environment are displayed to the user and are selectable by the user as the target state of the revert operation. Also disclosed is a technique for partially undoing a single continuous user action in a holographic display system.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: June 26, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Gabriel Poulos, Johanna Dy Lynn, Michael Scavezze, Daniel Joseph McCulloch
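    The technique above keeps enough per-object history that an operation on one gaze-selected holographic object can be undone, via a scrollable timeline of past states, without affecting other objects. The sketch below models that with a per-object state timeline and a revert-to-selected-state operation; the data model and method names are assumptions.
    ```python
    # Illustrative sketch only: a per-object history of states so an operation on
    # one holographic object can be undone without touching other objects.
    class HologramHistory:
        def __init__(self, initial_state: dict):
            self._timeline = [dict(initial_state)]  # past states, oldest first

        def apply(self, **changes) -> None:
            """Record a new state produced by a user operation."""
            new_state = dict(self._timeline[-1])
            new_state.update(changes)
            self._timeline.append(new_state)

        def timeline(self):
            """States the user could scroll through and pick as a revert target."""
            return list(self._timeline)

        def revert_to(self, index: int) -> dict:
            """Revert to a selected past state, discarding later states."""
            del self._timeline[index + 1:]
            return dict(self._timeline[-1])

    cube = HologramHistory({"position": (0, 0, 2), "scale": 1.0})
    cube.apply(scale=2.0)            # resize, committed
    cube.apply(position=(1, 0, 2))   # drag
    print(cube.revert_to(1))         # undo the drag only: position restored, scale kept
    ```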
  • Publication number: 20180012412
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Application
    Filed: September 1, 2017
    Publication date: January 11, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Patent number: 9804753
    Abstract: Various embodiments relating to selection of a user interface object displayed on a graphical user interface based on eye gaze are disclosed. In one embodiment, a selection input may be received. A plurality of eye gaze samples at different times within a time window may be evaluated. The time window may be selected based on a time at which the selection input is detected. A user interface object may be selected based on the plurality of eye gaze samples.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: October 31, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Scott Ramsby, Tony Ambrus, Michael Scavezze, Abby Lin Lee, Brian Mount, Ian Douglas McIntyre, Aaron Mackay Burns, Russ McMackin, Katelyn Elizabeth Doran, Gerhard Schneider, Quentin Simon Charles Miller
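    Rather than selecting whatever the gaze happens to hit at the instant of the selection input, the approach above evaluates a plurality of gaze samples within a time window chosen around that input. The sketch below illustrates one plausible windowing-and-voting scheme; the window sizes and the majority vote are assumptions.
    ```python
    # Illustrative sketch only: picking a UI object using the eye-gaze samples that
    # fall inside a time window anchored on the selection input, rather than the
    # single sample at the click instant. Windowing and voting are assumptions.
    from collections import Counter

    def select_object(gaze_samples, selection_time, window_before=0.25, window_after=0.05):
        """gaze_samples: list of (timestamp_s, object_id_or_None) pairs."""
        window = [obj for t, obj in gaze_samples
                  if selection_time - window_before <= t <= selection_time + window_after
                  and obj is not None]
        if not window:
            return None
        # Choose the object the gaze dwelt on most within the window.
        return Counter(window).most_common(1)[0][0]

    samples = [(0.80, "button_a"), (0.85, "button_a"), (0.90, "button_a"),
               (0.95, None), (1.00, "button_b")]  # gaze flicked away just before the click
    print(select_object(samples, selection_time=1.00))  # button_a
    ```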
  • Publication number: 20170287221
    Abstract: A method includes determining a current pose of an augmented reality device in a physical space, and visually presenting, via a display of the augmented reality device, an augmented-reality view of the physical space including a predetermined pose cue indicating a predetermined pose in the physical space and a current pose cue indicating the current pose in the physical space.
    Type: Application
    Filed: October 27, 2016
    Publication date: October 5, 2017
    Inventors: Marcus Ghaly, Andrew Jackson, Jeff Smith, Michael Scavezze, Ronald Amador-Leon, Cameron Brown, Charlene Jeune
  • Publication number: 20170270715
    Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
    Type: Application
    Filed: October 20, 2016
    Publication date: September 21, 2017
    Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
  • Patent number: 9761057
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
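    Indicating positional information for objects outside the field of view generally means computing where each object lies relative to the current view direction and rendering a cue, such as an arrow at the display edge, pointing toward it. The sketch below computes such a bearing in a simplified 2-D, top-down model; the geometry and the FOV value are assumptions.
    ```python
    # Illustrative sketch only: for an object outside the user's field of view,
    # compute which direction an edge-of-display indicator should point.
    import math

    def off_screen_indicator(user_pos, view_yaw_deg, object_pos, h_fov_deg=60.0):
        """Return None if the object is in view, else the signed bearing (degrees,
        positive = to the user's right) to show on an edge indicator."""
        dx = object_pos[0] - user_pos[0]
        dz = object_pos[1] - user_pos[1]
        bearing = math.degrees(math.atan2(dx, dz))            # world-space bearing
        relative = (bearing - view_yaw_deg + 180.0) % 360.0 - 180.0
        if abs(relative) <= h_fov_deg / 2.0:
            return None                                        # visible: no indicator
        return relative

    # Object behind and to the left of where the user is looking.
    print(off_screen_indicator(user_pos=(0, 0), view_yaw_deg=0.0, object_pos=(-1, -3)))
    ```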
  • Patent number: 9754420
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: September 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Publication number: 20170230641
    Abstract: Sensor fusion is utilized in an electronic device, such as a head-mounted display (HMD) device, that has a sensor package equipped with different sensors. The fusion allows information supplemental to the captured 2D images of objects or scenes in a real-world environment to be used to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used to construct a 3D model in, for example, virtual-world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
    Type: Application
    Filed: February 8, 2016
    Publication date: August 10, 2017
    Inventors: Michael Scavezze, Arthur Tomlin, Rui Cai, Zhiwei Li
  • Patent number: 9652892
    Abstract: Various embodiments relating to creating a virtual shadow of an object in an image displayed with a see-through display are provided. In one embodiment, an image of a virtual object may be displayed with the see-through display. The virtual object may appear in front of a real-world background when viewed through the see-through display. A relative brightness of the real-world background around a virtual shadow of the virtual object may be increased when viewed through the see-through display. The virtual shadow may appear to result from a spotlight that is fixed relative to a vantage point of the see-through display.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: May 16, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur Tomlin, Tony Ambrus, Ron Amador-Leon, Nicholas Gervase Fajt, Ryan Hastings, Matthew G. Kaplan, Michael Scavezze, Daniel McCulloch
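    A see-through display can only add light to the real-world background, so the abstract above describes creating the impression of a shadow by increasing the relative brightness of the background around the shadow region rather than darkening the shadow itself. The sketch below illustrates that inversion on a toy per-pixel light map; the image model and halo gain are assumptions.
    ```python
    # Illustrative sketch only: a see-through display cannot darken the real world,
    # so a "shadow" is faked by brightening the background around the shadow
    # region and leaving the shadow footprint untouched.
    import numpy as np

    def apply_virtual_shadow(added_light: np.ndarray, shadow_mask: np.ndarray,
                             halo_gain: float = 0.25) -> np.ndarray:
        """added_light: per-pixel light the display adds over the real background.
        shadow_mask: True where the virtual shadow should appear darker."""
        out = added_light.copy()
        out[~shadow_mask] += halo_gain        # brighten everything except the shadow
        return np.clip(out, 0.0, 1.0)

    added = np.zeros((4, 4))                  # display initially adds no light
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True                     # 2x2 shadow footprint under a virtual object
    print(apply_virtual_shadow(added, mask))
    ```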
  • Publication number: 20170069143
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
    Type: Application
    Filed: November 21, 2016
    Publication date: March 9, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
  • Publication number: 20170052595
    Abstract: Disclosed are techniques for performing undo operations on holographic objects in an immersive 3D visual environment. A display system allows the user to undo a given user operation performed on a particular selected holographic object without affecting any other holographic objects, based on a user's gaze and/or other user input. The technique can be implemented in conjunction with a scrollable visual “timeline” in which multiple past states of the display environment are displayed to the user and are selectable by the user as the target state of the revert operation. Also disclosed is a technique for partially undoing a single continuous user action in a holographic display system.
    Type: Application
    Filed: August 21, 2015
    Publication date: February 23, 2017
    Inventors: Adam Gabriel Poulos, Johanna Dy Lynn, Michael Scavezze, Daniel Joseph McCulloch
  • Publication number: 20170004655
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Application
    Filed: September 12, 2016
    Publication date: January 5, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger