Patents by Inventor Phillip Charles Heckinger

Phillip Charles Heckinger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11024014
    Abstract: A computing device is provided, which includes an input device, a display device, and a processor configured to, at a rendering stage of a rendering pipeline, render visual scene data to a frame buffer and generate a signed distance field of edges of vector graphic data, and, at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene, receive post-rendering user input via the input device that updates the user perspective, reproject the rendered visual scene data in the frame buffer based on the updated user perspective, reproject data of the signed distance field based on the updated user perspective, evaluate the signed distance field to generate reprojected vector graphic data, generate a composite image including the reprojected rendered visual scene data and the reprojected vector graphic data, and display the composite image on the display device.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: June 1, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Roger Sebastian Kevin Sylvan, Phillip Charles Heckinger, Arthur Tomlin, Nikolai Michael Faaland
  • Patent number: 10510190
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: December 17, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Publication number: 20180012412
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Application
    Filed: September 1, 2017
    Publication date: January 11, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Publication number: 20170372457
    Abstract: A computing device is provided, which includes an input device, a display device, and a processor configured to, at a rendering stage of a rendering pipeline, render visual scene data to a frame buffer and generate a signed distance field of edges of vector graphic data, and, at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene, receive post-rendering user input via the input device that updates the user perspective, reproject the rendered visual scene data in the frame buffer based on the updated user perspective, reproject data of the signed distance field based on the updated user perspective, evaluate the signed distance field to generate reprojected vector graphic data, generate a composite image including the reprojected rendered visual scene data and the reprojected vector graphic data, and display the composite image on the display device.
    Type: Application
    Filed: June 28, 2016
    Publication date: December 28, 2017
    Inventors: Roger Sebastian Kevin Sylvan, Phillip Charles Heckinger, Arthur Tomlin, Nikolai Michael Faaland
  • Patent number: 9754420
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: September 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Publication number: 20170004655
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Application
    Filed: September 12, 2016
    Publication date: January 5, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Patent number: 9443354
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Grant
    Filed: April 29, 2013
    Date of Patent: September 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Publication number: 20140320389
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. An interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Application
    Filed: April 29, 2013
    Publication date: October 30, 2014
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Publication number: 20140240351
    Abstract: Embodiments that relate to providing motion amplification to a virtual environment are disclosed. For example, in one disclosed embodiment a mixed reality augmentation program receives from a head-mounted display device motion data that corresponds to motion of a user in a physical environment. The program presents via the display device the virtual environment in motion in a principal direction, with the principal direction motion being amplified by a first multiplier as compared to the motion of the user in a corresponding principal direction. The program also presents the virtual environment in motion in a secondary direction, where the secondary direction motion is amplified by a second multiplier as compared to the motion of the user in a corresponding secondary direction, and the second multiplier is less than the first multiplier.
    Type: Application
    Filed: February 27, 2013
    Publication date: August 28, 2014
    Inventors: Michael Scavezze, Nicholas Gervase Fajt, Arnulfo Zepeda Navratil, Jason Scott, Adam Benjamin Smith-Kipnis, Brian Mount, John Bevis, Cameron Brown, Tony Ambrus, Phillip Charles Heckinger, Dan Kroymann, Matthew G. Kaplan, Aaron Krauss
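
The late-stage vector-graphic reprojection claimed in patent 11024014 (and publication 20170372457) can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the patented implementation: the function names are invented, a simple 2-D shift stands in for the perspective correction derived from post-render head motion, and a circle's analytic signed distance stands in for a field generated from arbitrary vector edges.

```python
import math

def circle_sdf(x, y, cx, cy, r):
    """Signed distance to a circle edge: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def reproject(x, y, dx, dy):
    """Hypothetical late-stage reprojection: a plain 2-D shift standing in
    for the correction derived from post-rendering user input."""
    return x - dx, y - dy

def coverage(d, edge_width=1.0):
    """Map a signed distance to an anti-aliased coverage value in [0, 1]."""
    return max(0.0, min(1.0, 0.5 - d / edge_width))

def render_reprojected_edge(width, height, dx, dy):
    # Evaluate the SDF at reprojected sample positions, so the vector edge
    # stays sharp under the updated user perspective instead of being
    # resampled from an already-rasterized (and therefore blurrier) buffer.
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            sx, sy = reproject(x + 0.5, y + 0.5, dx, dy)
            d = circle_sdf(sx, sy, width / 2, height / 2, width / 4)
            row.append(coverage(d))
        img.append(row)
    return img
```

The point of evaluating the distance field per output pixel is that the edge is reconstructed analytically at the final perspective, which is why text and UI lines survive reprojection without the smearing a pure frame-buffer warp would introduce.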
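
The profile-driven interaction-mode selection described in the 9443354 family of abstracts can likewise be sketched. The profile structure, the context and mode names, and the input-to-action table below are all hypothetical placeholders chosen for illustration; the abstract specifies only that a queried profile plus the interaction context determine the mode, and the mode determines how input maps to a virtual action.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectProfile:
    """Hypothetical profile for a physical object: maps each interaction
    context to an interaction mode supported by that object."""
    name: str
    modes: dict = field(default_factory=dict)

def select_interaction_mode(profile, context):
    # Programmatically pick the mode for the current context, falling
    # back to a default mode when the context is unrecognized.
    return profile.modes.get(context, profile.modes.get("default"))

def interpret_input(mode, user_input):
    # Map a raw user input to a virtual action under the selected mode;
    # the same gesture can mean different things in different modes.
    actions = {
        ("keyboard", "tap"): "type_character",
        ("drum", "tap"): "play_sound",
    }
    return actions.get((mode, user_input), "no_op")
```

For example, a tabletop object profiled with `{"office": "keyboard", "game": "drum", "default": "keyboard"}` would turn the same tap gesture into typing in an office context but into a drum hit in a game context.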
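
The asymmetric motion amplification in publication 20140240351 reduces to a per-axis scaling rule: the principal direction of travel is amplified by a first multiplier and every secondary direction by a smaller second multiplier. The function below is a minimal sketch of that rule; the signature and the representation of motion as an (x, y, z) delta are assumptions, not taken from the filing.

```python
def amplify_motion(delta, principal_axis, primary_mult, secondary_mult):
    """Scale a per-frame user motion delta axis by axis: the principal
    axis gets the larger multiplier, all other axes the smaller one."""
    if secondary_mult >= primary_mult:
        raise ValueError("second multiplier must be less than the first")
    return tuple(
        d * (primary_mult if axis == principal_axis else secondary_mult)
        for axis, d in enumerate(delta)
    )
```

Amplifying the principal direction more than lateral drift lets a user cover large virtual distances in a small physical space while keeping sideways motion close to one-to-one.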