Patents by Inventor Cheyne Rory Quin Mathey-Owens

Cheyne Rory Quin Mathey-Owens has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10268266
    Abstract: A user may select or interact with objects in a scene using gaze tracking and movement tracking. In some examples, the scene may comprise a virtual reality scene or a mixed reality scene. A user may move an input object in an environment while facing in the direction of the movement of the input object. A computing device may use sensors to obtain movement data corresponding to the movement of the input object, and gaze tracking data indicating a location of the eyes of the user. One or more modules of the computing device may use the movement data and gaze tracking data to determine a three-dimensional selection space in the scene. In some examples, objects included in the three-dimensional selection space may be selected or otherwise interacted with.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: April 23, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cheyne Rory Quin Mathey-Owens, Arthur Tomlin, Marcus Ghaly
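The abstract above (patent 10268266) describes deriving a three-dimensional selection space from movement and gaze data. The sketch below is a hypothetical reading of that idea, not the patented method: it models the selection space as a simple cone along a tracked direction and returns the objects that fall inside it. Every function name, threshold, and parameter is invented for illustration.

```python
import math

def in_selection_cone(apex, direction, half_angle_deg, max_depth, point):
    """True if `point` lies inside a cone-shaped 3-D selection space.

    `apex` is the cone origin (e.g. the user's hand or head position),
    `direction` a unit vector along the tracked movement/gaze direction.
    """
    v = [p - a for p, a in zip(point, apex)]
    depth = sum(vi * di for vi, di in zip(v, direction))  # projection onto axis
    if depth <= 0 or depth > max_depth:
        return False
    dist = math.sqrt(sum(vi * vi for vi in v))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, depth / dist))))
    return angle <= half_angle_deg

def select_objects(apex, direction, objects, half_angle_deg=15.0, max_depth=5.0):
    """Return the names of scene objects whose positions fall in the cone."""
    return [name for name, pos in objects.items()
            if in_selection_cone(apex, direction, half_angle_deg, max_depth, pos)]
```

With the cone pointed along +z from the origin, an object slightly off-axis at depth 2 is selected, while one beyond `max_depth` is not.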
  • Publication number: 20190102953
    Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
    Type: Application
    Filed: November 16, 2018
    Publication date: April 4, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
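The abstract above (publication 20190102953) describes comparing a virtual object's dimensions to the display's field of view and modifying the object based on that comparison. A minimal, hypothetical sketch of one such modification is to scale the object down uniformly until it fits; the function name and the fitting rule are assumptions, not the claimed method.

```python
def fit_object_to_fov(obj_size, fov_size):
    """Uniformly scale an object's (w, h, d) so it fits inside the FOV box.

    Objects already smaller than the field of view are left unchanged
    (scale factor capped at 1.0).
    """
    scale = min(1.0, *(f / o for o, f in zip(obj_size, fov_size)))
    return tuple(o * scale for o in obj_size)

# A 4 m-wide model shown on a device whose view box is only 2 m wide
# is shrunk uniformly so its proportions are preserved.
modified = fit_object_to_fov((4.0, 1.0, 1.0), (2.0, 2.0, 2.0))
```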
  • Patent number: 10176641
    Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: January 8, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
  • Patent number: 10068376
    Abstract: Examples are disclosed herein that relate to the display of mixed reality imagery. One example provides a mixed reality computing device comprising an image sensor, a display device, a storage device comprising instructions, and a processor. The instructions are executable to receive an image of a physical environment, store the image, render a three-dimensional virtual model, form a mixed reality thumbnail image by compositing a view of the three-dimensional virtual model and the image, and display the mixed reality thumbnail image. The instructions are further executable to receive a user input updating the three-dimensional virtual model, render the updated three-dimensional virtual model, update the mixed reality thumbnail image by compositing a view of the updated three-dimensional virtual model and the image of the physical environment, and display the mixed reality thumbnail image including the updated three-dimensional virtual model composited with the image of the physical environment.
    Type: Grant
    Filed: January 11, 2016
    Date of Patent: September 4, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeff Smith, Cameron Graeme Brown, Marcus Ghaly, Andrei A. Borodin, Jonathan Gill, Cheyne Rory Quin Mathey-Owens, Andrew Jackson
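The abstract above (patent 10068376) describes forming a mixed reality thumbnail by compositing a rendered virtual model over a captured image, and re-compositing after the model is edited. The sketch below illustrates that flow with a plain alpha-over blend on pixel tuples; it is an assumed simplification, not the patented pipeline.

```python
def composite_thumbnail(environment, model_layer):
    """Alpha-over composite of a rendered model layer onto the captured image.

    Both inputs are flat lists of (r, g, b, a) pixels with components in 0..1;
    the environment image is treated as fully opaque.
    """
    out = []
    for env_px, mdl_px in zip(environment, model_layer):
        a = mdl_px[3]
        out.append(tuple(a * m + (1 - a) * e
                         for m, e in zip(mdl_px[:3], env_px[:3])) + (1.0,))
    return out

def update_thumbnail(environment, render_model, model_state):
    """Re-render the (possibly edited) model, then composite it over the
    stored environment image, as the abstract describes."""
    return composite_thumbnail(environment, render_model(model_state))
```

A half-transparent white model pixel over a black environment pixel blends to mid-gray, and editing the model state changes only the re-rendered layer, never the stored capture.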
  • Patent number: 10037626
    Abstract: Motion and/or rotation of an input mechanism can be tracked and/or analyzed to determine limits on a user's range of motion and/or a user's range of rotation in three-dimensional space. The user's range of motion and/or the user's range of rotation in three-dimensional space may be limited by a personal restriction for the user (e.g., a broken arm). The user's range of motion and/or the user's range of rotation in three-dimensional space may additionally or alternatively be limited by an environmental restriction (e.g., a physical object in a room). Accordingly, the techniques described herein can take steps to accommodate the personal restriction and/or the environmental restriction, thereby optimizing user interactions involving the input mechanism and a virtual object.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 31, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Gabriel Poulos, Cameron Graeme Brown, Andrew Austin Jackson, Cheyne Rory Quin Mathey-Owens, Michael Robert Thomas, Arthur Tomlin
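The abstract above (patent 10037626) describes accommodating a restricted range of motion so the user can still interact with a virtual object. One plausible accommodation, sketched below as a hypothetical illustration rather than the patented technique, is to remap the user's measured (possibly limited) physical range onto the full virtual interaction range.

```python
def clamp_to_range(value, observed_min, observed_max):
    """Clamp a tracked position to the user's measured range of motion."""
    return max(observed_min, min(observed_max, value))

def remap_to_virtual(value, observed_min, observed_max, virtual_min, virtual_max):
    """Map the restricted physical range onto the full virtual range.

    A user who can only move a hand 40 cm (e.g. due to injury or a nearby
    wall) still reaches every point of the virtual interaction space.
    """
    span = observed_max - observed_min
    t = (clamp_to_range(value, observed_min, observed_max) - observed_min) / span
    return virtual_min + t * (virtual_max - virtual_min)
```

With an observed reach of 0.0-0.4 m, a hand at 0.3 m lands three-quarters of the way across a 0-1 virtual axis, and positions beyond the restriction clamp to the edge instead of being lost.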
  • Patent number: 9972134
    Abstract: Techniques described herein dynamically adapt an amount of smoothing that is applied to signals of a device (e.g., positions and/or orientations of an input mechanism, positions and/or orientations of an output mechanism) based on a determined distance between an object and the device, or based on a determined distance between the object and another device (e.g., a head-mounted device). The object can comprise one of a virtual object presented on a display of the head-mounted device or a real-world object within a view of the user. The object can be considered a “target” object based on a determination that a user is focusing on, or targeting, the object. For example, the head-mounted device or other devices can sense data associated with an eye gaze of a user and can determine, based on the sensed data, that the user is looking at the target object.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: May 15, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cheyne Rory Quin Mathey-Owens, Andrew Austin Jackson
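The abstract above (patent 9972134) describes adapting the amount of signal smoothing to the distance between a target object and a device. The sketch below shows the general shape of such a scheme as an exponential filter whose blend factor varies with distance; the interpolation rule and all parameter values are invented for illustration, not taken from the patent.

```python
def adaptive_alpha(distance, near=0.5, far=3.0, alpha_near=0.9, alpha_far=0.2):
    """Blend factor for exponential smoothing, interpolated by distance.

    A lower alpha means heavier smoothing; here smoothing increases for
    far-away targets, where small hand jitter maps to large apparent motion.
    """
    t = max(0.0, min(1.0, (distance - near) / (far - near)))
    return alpha_near + t * (alpha_far - alpha_near)

def smooth(prev, raw, distance):
    """One step of distance-adaptive exponential smoothing of a 3-D signal."""
    a = adaptive_alpha(distance)
    return tuple(a * r + (1 - a) * p for p, r in zip(prev, raw))
```

At the near threshold the raw signal dominates (alpha 0.9), so close-range interaction stays responsive; at the far threshold the filter leans on history (alpha 0.2), damping jitter.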
  • Publication number: 20180005443
    Abstract: Motion and/or rotation of an input mechanism can be tracked and/or analyzed to determine limits on a user's range of motion and/or a user's range of rotation in three-dimensional space. The user's range of motion and/or the user's range of rotation in three-dimensional space may be limited by a personal restriction for the user (e.g., a broken arm). The user's range of motion and/or the user's range of rotation in three-dimensional space may additionally or alternatively be limited by an environmental restriction (e.g., a physical object in a room). Accordingly, the techniques described herein can take steps to accommodate the personal restriction and/or the environmental restriction, thereby optimizing user interactions involving the input mechanism and a virtual object.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Adam Gabriel Poulos, Cameron Graeme Brown, Andrew Austin Jackson, Cheyne Rory Quin Mathey-Owens, Michael Robert Thomas, Arthur Tomlin
  • Publication number: 20180004283
    Abstract: A user may select or interact with objects in a scene using gaze tracking and movement tracking. In some examples, the scene may comprise a virtual reality scene or a mixed reality scene. A user may move an input object in an environment while facing in the direction of the movement of the input object. A computing device may use sensors to obtain movement data corresponding to the movement of the input object, and gaze tracking data indicating a location of the eyes of the user. One or more modules of the computing device may use the movement data and gaze tracking data to determine a three-dimensional selection space in the scene. In some examples, objects included in the three-dimensional selection space may be selected or otherwise interacted with.
    Type: Application
    Filed: June 29, 2016
    Publication date: January 4, 2018
    Inventors: Cheyne Rory Quin Mathey-Owens, Arthur Tomlin, Marcus Ghaly
  • Publication number: 20180005438
    Abstract: Techniques described herein dynamically adapt an amount of smoothing that is applied to signals of a device (e.g., positions and/or orientations of an input mechanism, positions and/or orientations of an output mechanism) based on a determined distance between an object and the device, or based on a determined distance between the object and another device (e.g., a head-mounted device). The object can comprise one of a virtual object presented on a display of the head-mounted device or a real-world object within a view of the user. The object can be considered a “target” object based on a determination that a user is focusing on, or targeting, the object. For example, the head-mounted device or other devices can sense data associated with an eye gaze of a user and can determine, based on the sensed data, that the user is looking at the target object.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Cheyne Rory Quin Mathey-Owens, Andrew Austin Jackson
  • Publication number: 20170270715
    Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
    Type: Application
    Filed: October 20, 2016
    Publication date: September 21, 2017
    Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
  • Publication number: 20170200312
    Abstract: Examples are disclosed herein that relate to the display of mixed reality imagery. One example provides a mixed reality computing device comprising an image sensor, a display device, a storage device comprising instructions, and a processor. The instructions are executable to receive an image of a physical environment, store the image, render a three-dimensional virtual model, form a mixed reality thumbnail image by compositing a view of the three-dimensional virtual model and the image, and display the mixed reality thumbnail image. The instructions are further executable to receive a user input updating the three-dimensional virtual model, render the updated three-dimensional virtual model, update the mixed reality thumbnail image by compositing a view of the updated three-dimensional virtual model and the image of the physical environment, and display the mixed reality thumbnail image including the updated three-dimensional virtual model composited with the image of the physical environment.
    Type: Application
    Filed: January 11, 2016
    Publication date: July 13, 2017
    Inventors: Jeff Smith, Cameron Graeme Brown, Marcus Ghaly, Andrei A. Borodin, Jonathan Gill, Cheyne Rory Quin Mathey-Owens, Andrew Jackson