Patents by Inventor Roger Sebastian Kevin Sylvan

Roger Sebastian Kevin Sylvan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11024014
    Abstract: A computing device is provided, which includes an input device, a display device, and a processor configured to, at a rendering stage of a rendering pipeline, render visual scene data to a frame buffer and generate a signed distance field of edges of vector graphic data, and, at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene, receive post-rendering user input via the input device that updates the user perspective, reproject the rendered visual scene data in the frame buffer based on the updated user perspective, reproject data of the signed distance field based on the updated user perspective, evaluate the signed distance field to generate reprojected vector graphic data, generate a composite image including the reprojected rendered visual scene data and the reprojected vector graphic data, and display the composite image on the display device.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: June 1, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Roger Sebastian Kevin Sylvan, Phillip Charles Heckinger, Arthur Tomlin, Nikolai Michael Faaland
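    The late-stage SDF evaluation this abstract describes can be sketched briefly. This is a minimal illustration, not the patented implementation: a 2-D sample-grid shift stands in for the full perspective reprojection, and the circle SDF and all function names are hypothetical.
    ```python
    import numpy as np

    def circle_sdf(points, center, radius):
        """Signed distance from each 2-D point to a circle's edge (negative inside)."""
        return np.linalg.norm(points - center, axis=-1) - radius

    def reproject_and_evaluate(width, height, center, radius, perspective_shift):
        """Re-evaluate the SDF at sample positions shifted by post-rendering user
        input, so the vector edge stays sharp instead of being resampled from the
        stale frame buffer. (Hypothetical sketch: a 2-D shift stands in for a full
        perspective update.)"""
        ys, xs = np.mgrid[0:height, 0:width]
        samples = np.stack([xs, ys], axis=-1).astype(float)
        samples += np.asarray(perspective_shift, dtype=float)  # updated perspective
        d = circle_sdf(samples, np.asarray(center, dtype=float), radius)
        # Map distance to coverage for an anti-aliased, reprojected vector edge.
        return np.clip(0.5 - d, 0.0, 1.0)

    coverage = reproject_and_evaluate(64, 64, center=(32, 32), radius=10,
                                      perspective_shift=(1.5, -0.75))
    print(coverage.shape, float(coverage.max()))
    ```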
  • Patent number: 10545900
    Abstract: In various embodiments, methods and systems are provided for detecting a physical configuration of a device based on sensor data from one or more configuration sensors. The physical configuration includes a position of a first display region of the device with respect to a second display region of the device, where the position is physically adjustable. A configuration profile is selected from a plurality of configuration profiles based on the detected physical configuration of the device. Each configuration profile is a representation of at least one respective physical configuration of the device. An interaction mode corresponding to the selected configuration profile is activated, where the interaction mode includes a set of mode input/output (I/O) features available while the interaction mode is active. Device interfaces of the device are managed using at least some mode I/O features in the set of mode I/O features based on the activating of the interaction mode.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: January 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aaron Mackay Burns, Riccardo Giraldi, Christian Klein, Roger Sebastian Kevin Sylvan, John Benjamin George Hesketh, Scott G. Wade
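    A minimal sketch of the profile-selection flow in this abstract, assuming a single hinge-angle sensor as the configuration sensor; the profile names, angle ranges, and I/O feature sets are invented for illustration.
    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationProfile:
        """Represents one physical configuration (here a hinge-angle range) and
        the interaction mode's I/O features that become available with it."""
        name: str
        min_angle: float          # inclusive hinge angle, degrees
        max_angle: float          # exclusive
        io_features: set = field(default_factory=set)

    PROFILES = [
        ConfigurationProfile("closed", 0, 10, {"notifications"}),
        ConfigurationProfile("laptop", 10, 180, {"keyboard", "touch", "display_a"}),
        ConfigurationProfile("flat",   180, 200, {"dual_canvas", "pen"}),
        ConfigurationProfile("tent",   200, 360, {"media_controls", "display_b"}),
    ]

    def select_profile(hinge_angle_deg: float) -> ConfigurationProfile:
        """Map raw configuration-sensor data to the matching profile."""
        for profile in PROFILES:
            if profile.min_angle <= hinge_angle_deg < profile.max_angle:
                return profile
        raise ValueError(f"no profile for angle {hinge_angle_deg}")

    active = select_profile(115.0)  # activate the matching interaction mode
    print(active.name, sorted(active.io_features))
    ```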
  • Publication number: 20180348518
    Abstract: Tracking a user's head position detects a change to a new head position and, in response, a remote camera is instructed to move to a next camera position. A camera image frame, having an indication of camera position, is received from the camera. If the camera position does not align with the next camera position, an assembled image frame is formed, using image data from past views, and rendered to appear to the user as if the camera had moved in 1:1 alignment with the user's head to the next camera position.
    Type: Application
    Filed: June 5, 2017
    Publication date: December 6, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Alexandre Da Veiga, Roger Sebastian Kevin Sylvan, Kenneth Liam Kiemele, Nikolai Michael Faaland, Aaron Mackay Burns
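    The lag-hiding behavior in this abstract can be sketched as follows, with a single pan angle standing in for a full 6-DoF pose and a nearest-pose lookup standing in for real view assembly; all names here are hypothetical.
    ```python
    from collections import deque

    def render_remote_view(head_pose, frame_pose, frame, past_views, tolerance=0.01):
        """If the camera has not yet reached the pose matching the user's head,
        assemble a stand-in frame from previously captured views so the view
        appears to track the head 1:1; otherwise show (and store) the live frame."""
        if abs(head_pose - frame_pose) <= tolerance:
            past_views.append((frame_pose, frame))
            return frame
        # Camera is lagging: pick the stored view closest to the desired pose.
        nearest_pose, nearest_frame = min(past_views,
                                          key=lambda pv: abs(pv[0] - head_pose))
        return nearest_frame

    # Poses are simplified to one pan angle; a real system would use 6-DoF transforms.
    past = deque([(0.0, "frame@0.0")], maxlen=128)
    print(render_remote_view(head_pose=0.3, frame_pose=0.1,
                             frame="frame@0.1", past_views=past))
    ```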
  • Patent number: 10139631
    Abstract: Tracking a user's head position detects a change to a new head position and, in response, a remote camera is instructed to move to a next camera position. A camera image frame, having an indication of camera position, is received from the camera. If the camera position does not align with the next camera position, an assembled image frame is formed, using image data from past views, and rendered to appear to the user as if the camera had moved in 1:1 alignment with the user's head to the next camera position.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: November 27, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexandre Da Veiga, Roger Sebastian Kevin Sylvan, Kenneth Liam Kiemele, Nikolai Michael Faaland, Aaron Mackay Burns
  • Patent number: 10078367
    Abstract: Embodiments are described herein for determining a stabilization plane to reduce errors that occur when a homographic transformation is applied to a scene including 3D geometry and/or multiple non-coplanar planes. Such embodiments can be used, e.g., when displaying an image on a head mounted display (HMD) device, but are not limited thereto. In an embodiment, a rendered image is generated, a gaze location of a user is determined, and a stabilization plane, associated with a homographic transformation, is determined based on the determined gaze location. This can involve determining, based on the user's gaze location, variables of the homographic transformation that define the stabilization plane. The homographic transformation is applied to the rendered image to thereby generate an updated image, and at least a portion of the updated image is then displayed.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: September 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Roger Sebastian Kevin Sylvan, Quentin Simon Charles Miller, Alex Aben-Athar Kipman
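    The gaze-dependent stabilization plane can be illustrated with the standard plane-induced homography H = K(R + t·nᵀ/d)K⁻¹, where d is the depth of the plane placed at the gaze point. This is a textbook formulation chosen for illustration, not necessarily the patent's exact parameterization; the intrinsics and pose values below are made up.
    ```python
    import numpy as np

    def stabilization_homography(K, R, t, gaze_depth, n=np.array([0.0, 0.0, 1.0])):
        """Homography induced by the plane at the user's gaze depth:
        H = K (R + t n^T / d) K^-1. Reprojection error is zero exactly on the
        stabilization plane and grows for geometry away from it."""
        H = K @ (R + np.outer(t, n) / gaze_depth) @ np.linalg.inv(K)
        return H / H[2, 2]

    K = np.array([[800.0, 0.0, 320.0],      # hypothetical display intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)                           # no rotation between render and display pose
    t = np.array([0.002, 0.0, 0.0])         # 2 mm of late head translation
    H = stabilization_homography(K, R, t, gaze_depth=1.5)  # gaze resolved at 1.5 m
    print(np.round(H, 5))
    ```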
  • Publication number: 20180089131
    Abstract: In various embodiments, methods and systems are provided for detecting a physical configuration of a device based on sensor data from one or more configuration sensors. The physical configuration includes a position of a first display region of the device with respect to a second display region of the device, where the position is physically adjustable. A configuration profile is selected from a plurality of configuration profiles based on the detected physical configuration of the device. Each configuration profile is a representation of at least one respective physical configuration of the device. An interaction mode corresponding to the selected configuration profile is activated, where the interaction mode includes a set of mode input/output (I/O) features available while the interaction mode is active. Device interfaces of the device are managed using at least some mode I/O features in the set of mode I/O features based on the activating of the interaction mode.
    Type: Application
    Filed: September 23, 2016
    Publication date: March 29, 2018
    Inventors: Aaron Mackay Burns, Riccardo Giraldi, Christian Klein, Roger Sebastian Kevin Sylvan, John Benjamin George Hesketh, Scott G. Wade
  • Publication number: 20170372457
    Abstract: A computing device is provided, which includes an input device, a display device, and a processor configured to, at a rendering stage of a rendering pipeline, render visual scene data to a frame buffer and generate a signed distance field of edges of vector graphic data, and, at a reprojection stage of the rendering pipeline prior to displaying the rendered visual scene, receive post-rendering user input via the input device that updates the user perspective, reproject the rendered visual scene data in the frame buffer based on the updated user perspective, reproject data of the signed distance field based on the updated user perspective, evaluate the signed distance field to generate reprojected vector graphic data, generate a composite image including the reprojected rendered visual scene data and the reprojected vector graphic data, and display the composite image on the display device.
    Type: Application
    Filed: June 28, 2016
    Publication date: December 28, 2017
    Inventors: Roger Sebastian Kevin Sylvan, Phillip Charles Heckinger, Arthur Tomlin, Nikolai Michael Faaland
  • Patent number: 9846968
    Abstract: A system and method are disclosed for capturing views of a mixed reality environment from various perspectives that can be displayed on a monitor. The system includes one or more physical cameras at user-defined positions within the mixed reality environment. The system renders virtual objects in the mixed reality environment from the perspective of the one or more cameras. Real and virtual objects from the mixed reality environment may then be displayed from the perspective of the one or more cameras on one or more external 2D monitors for viewing by others.
    Type: Grant
    Filed: June 2, 2015
    Date of Patent: December 19, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles Tomlin, Evan Michael Keibler, Nicholas Gervase Fajt, Brian J. Mount, Gregory Lowell Alt, Jorge Tosar, Jonathan Michael Lyons, Anthony J. Ambrus, Cameron Quinn Egbert, Will Guyman, Jeff W. McGlynn, Jeremy Hance, Roger Sebastian-Kevin Sylvan, Alexander Georg Pfaffe, Dan Kroymann, Erik Andrew Saltwell, Chris Word
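    A minimal sketch of the final compositing step implied by this abstract: virtual objects rendered from the physical camera's pose are overlaid on that camera's real frame for an external monitor. The mask-based blend and all names are assumptions, not the patented pipeline.
    ```python
    import numpy as np

    def composite_third_person_view(real_frame, virtual_layer, virtual_mask):
        """Overlay virtual objects, rendered from the physical camera's pose,
        onto that camera's real image for display on an external 2-D monitor."""
        mask = virtual_mask[..., None].astype(float)  # per-pixel virtual coverage
        blended = real_frame * (1.0 - mask) + virtual_layer * mask
        return blended.astype(real_frame.dtype)

    h, w = 4, 6
    real = np.full((h, w, 3), 100, dtype=np.uint8)      # camera image
    virtual = np.full((h, w, 3), 255, dtype=np.uint8)   # rendered virtual objects
    mask = np.zeros((h, w), dtype=bool)
    mask[1:3, 2:4] = True                               # where virtual content lands
    print(composite_third_person_view(real, virtual, mask)[1, 2])
    ```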
  • Patent number: 9779512
    Abstract: Methods for automatically generating a texture exemplar that may be used for rendering virtual objects that appear to be made from the texture exemplar are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object within an environment, acquire a three-dimensional model of the real-world object, determine a portion of the real-world object from which a texture exemplar is to be generated, capture one or more images of the portion of the real-world object, determine an orientation of the real-world object, and generate the texture exemplar using the one or more images, the three-dimensional model, and the orientation of the real-world object. The HMD may then render and display images of a virtual object such that the virtual object appears to be made from a virtual material associated with the texture exemplar.
    Type: Grant
    Filed: January 29, 2015
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Dan Kroymann, Cameron G. Brown, Nicholas Gervase Fajt
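    One step of the described flow, resampling the captured image region into a square, orientation-corrected exemplar, might look like the nearest-neighbor sketch below; the quad corners would come from projecting a face of the 3-D model using the object's orientation, and everything here is hypothetical.
    ```python
    import numpy as np

    def extract_texture_exemplar(image, surface_quad, size=32):
        """Resample the image region covering one face of the object's 3-D model
        into a square texture exemplar (nearest-neighbor, bilinear patch lookup)."""
        quad = np.asarray(surface_quad, dtype=float)  # corners: tl, tr, br, bl (x, y)
        exemplar = np.zeros((size, size) + image.shape[2:], dtype=image.dtype)
        for v in range(size):
            for u in range(size):
                s, t = u / (size - 1), v / (size - 1)
                top = quad[0] * (1 - s) + quad[1] * s
                bottom = quad[3] * (1 - s) + quad[2] * s
                x, y = top * (1 - t) + bottom * t
                exemplar[v, u] = image[int(round(y)), int(round(x))]
        return exemplar

    img = (np.random.default_rng(0).random((120, 160, 3)) * 255).astype(np.uint8)
    patch = extract_texture_exemplar(img, [(40, 20), (110, 30), (100, 90), (30, 80)])
    print(patch.shape)
    ```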
  • Publication number: 20170177082
    Abstract: Embodiments are described herein for determining a stabilization plane to reduce errors that occur when a homographic transformation is applied to a scene including 3D geometry and/or multiple non-coplanar planes. Such embodiments can be used, e.g., when displaying an image on a head mounted display (HMD) device, but are not limited thereto. In an embodiment, a rendered image is generated, a gaze location of a user is determined, and a stabilization plane, associated with a homographic transformation, is determined based on the determined gaze location. This can involve determining, based on the user's gaze location, variables of the homographic transformation that define the stabilization plane. The homographic transformation is applied to the rendered image to thereby generate an updated image, and at least a portion of the updated image is then displayed.
    Type: Application
    Filed: March 9, 2017
    Publication date: June 22, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Roger Sebastian Kevin Sylvan, Quentin Simon Charles Miller, Alex Aben-Athar Kipman
  • Patent number: 9652893
    Abstract: Embodiments are described herein for determining a stabilization plane to reduce errors that occur when a homographic transformation is applied to a scene including 3D geometry and/or multiple non-coplanar planes. Such embodiments can be used, e.g., when displaying an image on a head mounted display (HMD) device, but are not limited thereto. In an embodiment, a rendered image is generated, a gaze location of a user is determined, and a stabilization plane, associated with a homographic transformation, is determined based on the determined gaze location. This can involve determining, based on the user's gaze location, variables of the homographic transformation that define the stabilization plane. The homographic transformation is applied to the rendered image to thereby generate an updated image, and at least a portion of the updated image is then displayed.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: May 16, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Roger Sebastian Kevin Sylvan, Quentin Simon Charles Miller, Alex Aben-Athar Kipman
  • Patent number: 9552060
    Abstract: Methods for enabling hands-free selection of objects within an augmented reality environment are described. In some embodiments, an object may be selected by an end user of a head-mounted display device (HMD) based on detecting a vestibulo-ocular reflex (VOR) with the end user's eyes while the end user is gazing at the object and performing a particular head movement for selecting the object. The object selected may comprise a real object or a virtual object. The end user may select the object by gazing at the object for a first time period and then performing a particular head movement in which the VOR is detected for one or both of the end user's eyes. In one embodiment, the particular head movement may involve the end user moving their head away from a direction of the object at a particular head speed while gazing at the object.
    Type: Grant
    Filed: January 28, 2014
    Date of Patent: January 24, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anthony J. Ambrus, Adam G. Poulos, Lewey Alec Geselowitz, Dan Kroymann, Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Mathew J. Lamb, Brian J. Mount
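    A rough sketch of the selection trigger this abstract describes: a dwell on the object followed by a fast head movement during which gaze stays locked on the object (the vestibulo-ocular reflex signature). The thresholds, sample format, and detection logic are invented for illustration.
    ```python
    def detect_vor_selection(samples, dwell_s=1.0, head_speed_thresh=30.0,
                             gaze_error_thresh=2.0):
        """samples: list of (t_seconds, head_speed_deg_s, gaze_offset_deg), where
        gaze_offset is the angle between the gaze ray and the candidate object.
        Selection fires when a dwell on the object is followed by a fast head
        movement while gaze stays on the object, i.e. the eyes counter-rotate."""
        dwell_start = None
        for t, head_speed, gaze_offset in samples:
            if gaze_offset >= gaze_error_thresh:
                dwell_start = None        # gaze left the object: reset the dwell
                continue
            if dwell_start is None:
                dwell_start = t
            # Dwell complete and head moving fast while gaze holds: VOR detected.
            if t - dwell_start >= dwell_s and head_speed > head_speed_thresh:
                return True
        return False

    trace = [(0.0, 2.0, 0.5), (0.5, 3.0, 0.4), (1.1, 45.0, 0.8)]
    print(detect_vor_selection(trace))  # True: dwell, then fast head turn, gaze locked
    ```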
  • Publication number: 20160225164
    Abstract: Methods for automatically generating a texture exemplar that may be used for rendering virtual objects that appear to be made from the texture exemplar are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object within an environment, acquire a three-dimensional model of the real-world object, determine a portion of the real-world object from which a texture exemplar is to be generated, capture one or more images of the portion of the real-world object, determine an orientation of the real-world object, and generate the texture exemplar using the one or more images, the three-dimensional model, and the orientation of the real-world object. The HMD may then render and display images of a virtual object such that the virtual object appears to be made from a virtual material associated with the texture exemplar.
    Type: Application
    Filed: January 29, 2015
    Publication date: August 4, 2016
    Inventors: Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Dan Kroymann, Cameron G. Brown, Nicholas Gervase Fajt
  • Publication number: 20160210783
    Abstract: A system and method are disclosed for capturing views of a mixed reality environment from various perspectives that can be displayed on a monitor. The system includes one or more physical cameras at user-defined positions within the mixed reality environment. The system renders virtual objects in the mixed reality environment from the perspective of the one or more cameras. Real and virtual objects from the mixed reality environment may then be displayed from the perspective of the one or more cameras on one or more external 2D monitors for viewing by others.
    Type: Application
    Filed: June 2, 2015
    Publication date: July 21, 2016
    Inventors: Arthur Charles Tomlin, Evan Michael Keibler, Nicholas Gervase Fajt, Brian J. Mount, Gregory Lowell Alt, Jorge Tosar, Jonathan Michael Lyons, Anthony J. Ambrus, Cameron Quinn Egbert, Will Guyman, Jeff W. McGlynn, Jeremy Hance, Roger Sebastian-Kevin Sylvan, Alexander Georg Pfaffe, Dan Kroymann, Erik Andrew Saltwell, Chris Word
  • Publication number: 20160131902
    Abstract: A method of automatically calibrating a head mounted display for a user is disclosed. The method includes automatically calculating an inter-pupillary distance value for the user, comparing the automatically calculated inter-pupillary distance value to a previously determined inter-pupillary distance value, determining whether the automatically calculated inter-pupillary distance value matches the previously determined inter-pupillary distance value, and automatically calibrating the head mounted display using calibration data associated with the matching previously determined inter-pupillary distance value.
    Type: Application
    Filed: November 12, 2014
    Publication date: May 12, 2016
    Inventors: Anthony J. Ambrus, Stephen G. Latta, Roger Sebastian-Kevin Sylvan, Adam G. Poulos
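    A minimal sketch of the matching step in this abstract, assuming stored calibrations keyed by previously determined IPD values and a hypothetical millimeter tolerance; none of these names come from the patent.
    ```python
    def calibrate_from_ipd(measured_ipd_mm, stored_calibrations, tolerance_mm=0.5):
        """Compare the automatically calculated IPD against previously determined
        values; on a match, reuse that user's stored calibration data."""
        for stored_ipd, calibration in stored_calibrations.items():
            if abs(measured_ipd_mm - stored_ipd) <= tolerance_mm:
                return calibration
        return None  # no match: fall back to a full calibration pass

    stored = {63.2: {"render_offset_mm": 31.6},
              58.9: {"render_offset_mm": 29.45}}
    print(calibrate_from_ipd(63.0, stored))  # within tolerance of the 63.2 entry
    ```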
  • Patent number: 9183676
    Abstract: Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three-dimensional space position data of the objects. At least one effect on at least one physical property of the real object, such as a change in surface shape, is determined based on physical properties of the real object and physical interaction characteristics of the collision. Simulation image data is generated and displayed by the augmented reality display, simulating the effect on the real object. Virtual objects under control of different executing applications can also interact with one another in collisions.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: November 10, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McCulloch, Stephen G. Latta, Brian J. Mount, Kevin A. Geisner, Roger Sebastian Kevin Sylvan, Arnulfo Zepeda Navratil, Jason Scott, Jonathan T. Steed, Ben J. Sugden, Britta Silke Hummel, Kyungsuk David Lee, Mark J. Finocchio, Alex Aben-Athar Kipman, Jeffrey N. Margolis
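    A toy sketch of the real/virtual collision handling described above, using bounding spheres and a made-up stiffness-to-dent model; the patent does not specify these structures.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Body:
        name: str
        center: tuple       # (x, y, z) position from spatial tracking
        radius: float       # bounding-sphere approximation
        stiffness: float    # physical property: 0 = soft, 1 = rigid

    def collision_effect(real: Body, virtual: Body, impact_speed: float):
        """Detect a sphere-sphere collision and estimate a surface dent depth on
        the real object from its stiffness, to drive the simulated overlay."""
        dist = sum((a - b) ** 2 for a, b in zip(real.center, virtual.center)) ** 0.5
        overlap = real.radius + virtual.radius - dist
        if overlap <= 0:
            return None  # no collision
        dent_depth = overlap * (1.0 - real.stiffness) * min(impact_speed, 1.0)
        return {"contact_overlap": overlap, "dent_depth": dent_depth}

    couch = Body("couch", (0.0, 0.0, 0.0), 0.5, stiffness=0.2)
    ball = Body("virtual_ball", (0.0, 0.6, 0.0), 0.15, stiffness=0.9)
    print(collision_effect(couch, ball, impact_speed=0.8))
    ```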
  • Publication number: 20150310665
    Abstract: Embodiments are described herein for determining a stabilization plane to reduce errors that occur when a homographic transformation is applied to a scene including 3D geometry and/or multiple non-coplanar planes. Such embodiments can be used, e.g., when displaying an image on a head mounted display (HMD) device, but are not limited thereto. In an embodiment, a rendered image is generated, a gaze location of a user is determined, and a stabilization plane, associated with a homographic transformation, is determined based on the determined gaze location. This can involve determining, based on the user's gaze location, variables of the homographic transformation that define the stabilization plane. The homographic transformation is applied to the rendered image to thereby generate an updated image, and at least a portion of the updated image is then displayed.
    Type: Application
    Filed: April 29, 2014
    Publication date: October 29, 2015
    Inventors: Ashraf Ayman Michail, Roger Sebastian Kevin Sylvan, Quentin Simon Charles Miller, Alex Aben-Athar Kipman
  • Publication number: 20150212576
    Abstract: Methods for enabling hands-free selection of objects within an augmented reality environment are described. In some embodiments, an object may be selected by an end user of a head-mounted display device (HMD) based on detecting a vestibulo-ocular reflex (VOR) with the end user's eyes while the end user is gazing at the object and performing a particular head movement for selecting the object. The object selected may comprise a real object or a virtual object. The end user may select the object by gazing at the object for a first time period and then performing a particular head movement in which the VOR is detected for one or both of the end user's eyes. In one embodiment, the particular head movement may involve the end user moving their head away from a direction of the object at a particular head speed while gazing at the object.
    Type: Application
    Filed: January 28, 2014
    Publication date: July 30, 2015
    Inventors: Anthony J. Ambrus, Adam G. Poulos, Lewey Alec Geselowitz, Dan Kroymann, Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Mathew J. Lamb, Brian J. Mount
  • Publication number: 20130286004
    Abstract: Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three-dimensional space position data of the objects. At least one effect on at least one physical property of the real object, such as a change in surface shape, is determined based on physical properties of the real object and physical interaction characteristics of the collision. Simulation image data is generated and displayed by the augmented reality display, simulating the effect on the real object. Virtual objects under control of different executing applications can also interact with one another in collisions.
    Type: Application
    Filed: April 27, 2012
    Publication date: October 31, 2013
    Inventors: Daniel J. McCulloch, Stephen G. Latta, Brian J. Mount, Kevin A. Geisner, Roger Sebastian Kevin Sylvan, Arnulfo Zepeda Navratil, Jason Scott, Jonathan T. Steed, Ben J. Sugden, Britta Silke Hummel, Kyungsuk David Lee, Mark J. Finocchio, Alex Aben-Athar Kipman, Jeffrey N. Margolis