Patents by Inventor Thomas Matthew GABLE

Thomas Matthew GABLE has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches of several of the techniques described below appear after the listing.

  • Publication number: 20240329751
    Abstract: This document relates to employing tongue gestures to control a computing device, and training machine learning models to detect tongue gestures. One example relates to a method or technique that can include receiving one or more motion signals from an inertial sensor. The method or technique can also include detecting a tongue gesture based at least on the one or more motion signals, and outputting the tongue gesture.
    Type: Application
    Filed: May 21, 2024
    Publication date: October 3, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Raymond Michael WINTERS, IV, Tan GEMICIOGLU, Thomas Matthew GABLE, Yu-Te WANG, Ivan Jelev TASHEV
  • Publication number: 20240118785
    Abstract: The techniques disclosed herein enable systems to translate three-dimensional experiences into user-accessible experiences, improving accessibility for users with disabilities. This is accomplished by extracting components from a three-dimensional environment, such as user avatars and furniture. The components are organized into component groups based on shared attributes. The component groups are subsequently organized into a flow hierarchy. The flow hierarchy is then presented to the user in an accessibility environment that enables interoperability with various accessibility tools such as screen readers, simplified keyboard inputs, and the like. Selecting a component group, and subsequently a component, through the accessibility environment invokes the corresponding functionality within the three-dimensional environment. In this way, users with disabilities are empowered to fully interact with three-dimensional experiences.
    Type: Application
    Filed: March 30, 2023
    Publication date: April 11, 2024
    Inventors: Brett D. HUMPHREY, Kian Chai NG, Thomas Matthew GABLE, Amichai CHARNOFF, Martin GRAYSON, Rita Faia MARQUES, Cecily Peregrine Borgatti MORRISON, Harshadha BALASUBRAMANIAN
  • Publication number: 20240085985
    Abstract: This document relates to employing tongue gestures to control a computing device, and training machine learning models to detect tongue gestures. One example relates to a method or technique that can include receiving one or more motion signals from an inertial sensor. The method or technique can also include detecting a tongue gesture based at least on the one or more motion signals, and outputting the tongue gesture.
    Type: Application
    Filed: December 6, 2022
    Publication date: March 14, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Raymond Michael WINTERS, IV, Tan GEMICIOGLU, Thomas Matthew GABLE, Yu-Te WANG, Ivan Jelev TASHEV
  • Publication number: 20220253199
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
  • Publication number: 20210383594
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: August 23, 2021
    Publication date: December 9, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
  • Publication number: 20200225830
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: March 25, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
  • Publication number: 20200226814
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
  • Publication number: 20200225758
    Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
    Type: Application
    Filed: March 26, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Chuan QIN, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Joshua Kyle NEFF, Jamie Bryant KIRSCHENBAUM, Neil Richard KRONLAGE
  • Publication number: 20200012341
    Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
    Type: Application
    Filed: July 9, 2018
    Publication date: January 9, 2020
    Inventors: Sophie STELLMACH, Sheng Kai TANG, Casey Leon MEEKHOF, Julia SCHWARZ, Nahil Tawfik SHARKASI, Thomas Matthew GABLE
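
Illustrative code sketches

The sketches below are not taken from the patents; each is a minimal illustration, in Python, of the kind of technique the corresponding abstract describes, with assumed names, parameters, and thresholds.

Publications 20240329751 and 20240085985 describe detecting tongue gestures from the motion signals of an inertial sensor. A minimal sketch, assuming a simple sliding-window energy detector over gyroscope magnitudes; the window length, threshold, gesture label, and class name are illustrative assumptions, not details from the patents. A trained machine-learning model, as the abstracts contemplate, would replace the threshold test.

```python
import math
from collections import deque
from typing import Deque, Optional, Tuple

# Hypothetical sliding-window detector: flags a "tongue tap" when the
# short-term energy of the gyroscope signal exceeds a threshold.
class TongueGestureDetector:
    def __init__(self, window_size: int = 20, energy_threshold: float = 0.5):
        self.window: Deque[float] = deque(maxlen=window_size)
        self.energy_threshold = energy_threshold

    def push_sample(self, gyro: Tuple[float, float, float]) -> Optional[str]:
        """Feed one (x, y, z) angular-velocity sample; return a gesture label
        if the windowed signal energy crosses the threshold."""
        magnitude = math.sqrt(sum(axis * axis for axis in gyro))
        self.window.append(magnitude)
        if len(self.window) < self.window.maxlen:
            return None  # not enough samples yet
        energy = sum(m * m for m in self.window) / len(self.window)
        if energy > self.energy_threshold:
            self.window.clear()  # debounce: start a fresh window
            return "tongue_tap"
        return None

if __name__ == "__main__":
    detector = TongueGestureDetector()
    # Quiet samples followed by a burst of motion, standing in for a gesture.
    stream = [(0.01, 0.0, 0.02)] * 25 + [(0.9, 0.4, 0.7)] * 25
    for sample in stream:
        gesture = detector.push_sample(sample)
        if gesture:
            print("detected:", gesture)
```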
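Publication 20240118785 describes extracting components from a three-dimensional environment, grouping them by shared attributes, and arranging the groups into a flow hierarchy that accessibility tools can traverse. A minimal sketch of that pipeline; the component fields, the grouping key, and the text rendering are assumptions for illustration.

```python
from dataclasses import dataclass
from itertools import groupby
from typing import Callable, Dict, List

# Hypothetical stand-in for a component extracted from a 3D scene.
@dataclass
class Component:
    name: str
    kind: str                     # e.g. "avatar", "furniture" (grouping attribute)
    activate: Callable[[], None]  # invokes the in-scene functionality

def build_flow_hierarchy(components: List[Component]) -> Dict[str, List[Component]]:
    """Group components by a shared attribute (here: kind) into an ordered
    hierarchy that accessibility tools can walk top-down."""
    ordered = sorted(components, key=lambda c: (c.kind, c.name))
    return {kind: list(group) for kind, group in groupby(ordered, key=lambda c: c.kind)}

def read_hierarchy(hierarchy: Dict[str, List[Component]]) -> str:
    """Render the hierarchy as linear text, the form a screen reader consumes."""
    lines = []
    for kind, members in hierarchy.items():
        lines.append(f"Group: {kind} ({len(members)} items)")
        lines.extend(f"  - {member.name}" for member in members)
    return "\n".join(lines)

if __name__ == "__main__":
    scene = [
        Component("Alice", "avatar", lambda: print("focus Alice")),
        Component("Chair", "furniture", lambda: print("sit on Chair")),
        Component("Bob", "avatar", lambda: print("focus Bob")),
    ]
    hierarchy = build_flow_hierarchy(scene)
    print(read_hierarchy(hierarchy))
    # Selecting a component through the accessibility view invokes its
    # in-scene behavior, mirroring the selection flow in the abstract.
    hierarchy["avatar"][0].activate()
```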
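Publications 20220253199 and 20200225830 share an abstract describing a switch between far and near interaction modes based on whether a virtual object's control points lie beyond a threshold distance from the user, with a trigger input in far mode spawning a virtual interaction object within reach. A minimal sketch of that mode logic; the vector math and the placement rule for the near-mode proxy are illustrative assumptions.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
THRESHOLD = 1.0  # assumed reach distance in meters

def distance(a: Vec3, b: Vec3) -> float:
    return math.dist(a, b)

def choose_interaction_mode(user: Vec3, control_points: List[Vec3]) -> str:
    """Far mode is invoked when any control point lies beyond the threshold."""
    if any(distance(user, p) > THRESHOLD for p in control_points):
        return "far"
    return "near"

def on_trigger(user: Vec3, target_center: Vec3) -> Vec3:
    """In far mode, a trigger input spawns a near-interaction proxy object
    within reach: here, placed along the line to the target at 80% of the
    threshold distance (the placement rule is an assumption)."""
    t = 0.8 * THRESHOLD / distance(user, target_center)
    return tuple(u + t * (c - u) for u, c in zip(user, target_center))

if __name__ == "__main__":
    user = (0.0, 1.6, 0.0)
    cube_corners = [(2.0, 1.0, 3.0), (2.5, 1.0, 3.0)]  # control points
    mode = choose_interaction_mode(user, cube_corners)
    print("mode:", mode)  # -> far
    if mode == "far":
        proxy = on_trigger(user, (2.25, 1.0, 3.0))
        print("proxy at:", proxy, "within reach:", distance(user, proxy) <= THRESHOLD)
```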
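Publications 20210383594 and 20200226814 likewise share an abstract: a ray is cast from the user's hand along a direction derived from an inferred arm-joint position, and the user is notified when the ray intersects a virtual object's control points. A minimal sketch, assuming the shoulder is inferred as a fixed offset from the head pose and control points are tested as small spheres; the offsets and radius are illustrative.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def norm(v: Vec3) -> Vec3:
    m = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / m, v[1] / m, v[2] / m)

def infer_shoulder(head: Vec3) -> Vec3:
    # Assumed anthropometric offset: the shoulder sits below and beside the head.
    return (head[0] + 0.15, head[1] - 0.25, head[2])

def ray_hits_point(origin: Vec3, direction: Vec3, point: Vec3,
                   radius: float = 0.05) -> bool:
    """Sphere test: true when the control point lies within `radius` of the ray."""
    to_point = sub(point, origin)
    t = sum(tp * d for tp, d in zip(to_point, direction))
    if t < 0:
        return False  # control point is behind the hand
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    return math.dist(closest, point) <= radius

def target_object(head: Vec3, hand: Vec3, control_points: List[Vec3]) -> bool:
    shoulder = infer_shoulder(head)
    direction = norm(sub(hand, shoulder))  # ray runs from shoulder through hand
    return any(ray_hits_point(hand, direction, p) for p in control_points)

if __name__ == "__main__":
    head, hand = (0.0, 1.7, 0.0), (0.3, 1.3, 0.4)
    # Control point placed farther along the same shoulder-to-hand line.
    if target_object(head, hand, [(0.45, 1.15, 0.8)]):
        print("virtual object is being targeted")  # the targeting indication
```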
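Publication 20200225758 describes a two-stage gesture input: a first-stage gesture is recognized from hand-tracking parameters, an affordance cues the second stage, and a graphical user interface element is displayed once the second-stage gesture is recognized. A minimal sketch as a small state machine; the specific gestures (palm up, then pinch) and the parameter names are assumptions, not the patent's criteria.

```python
from dataclasses import dataclass

# Hypothetical per-frame hand-tracking parameters.
@dataclass
class HandFrame:
    palm_up: bool          # first-stage criterion (assumed)
    pinch_strength: float  # second-stage criterion, 0..1 (assumed)

class TwoStageGestureMachine:
    """IDLE -> CUED (affordance shown) -> COMPLETE (UI element shown)."""

    def __init__(self, pinch_threshold: float = 0.8):
        self.state = "IDLE"
        self.pinch_threshold = pinch_threshold

    def update(self, frame: HandFrame) -> None:
        if self.state == "IDLE" and frame.palm_up:
            self.state = "CUED"
            print("affordance: showing hint for second-stage gesture")
        elif self.state == "CUED":
            if not frame.palm_up:
                self.state = "IDLE"  # user abandoned the gesture
            elif frame.pinch_strength >= self.pinch_threshold:
                self.state = "COMPLETE"
                print("displaying graphical user interface element")

if __name__ == "__main__":
    machine = TwoStageGestureMachine()
    frames = [
        HandFrame(palm_up=False, pinch_strength=0.0),
        HandFrame(palm_up=True, pinch_strength=0.1),  # stage one recognized
        HandFrame(palm_up=True, pinch_strength=0.9),  # stage two recognized
    ]
    for frame in frames:
        machine.update(frame)
```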
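Publication 20200012341 describes snapping the presented location of a controller ray to the user's gaze location when the ray's distal point falls within a snap threshold distance of it. A minimal sketch of that snap rule; the threshold value and the pure-function framing are assumptions.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def presented_ray_location(input_ray_end: Vec3, gaze: Vec3,
                           snap_threshold: float = 0.2) -> Vec3:
    """Return the gaze location when the input ray's distal point falls
    within the snap threshold; otherwise present the raw ray location."""
    if math.dist(input_ray_end, gaze) <= snap_threshold:
        return gaze           # snapped: the ray appears to land where the user looks
    return input_ray_end      # unsnapped: the ray follows the input device

if __name__ == "__main__":
    gaze = (1.0, 1.5, 2.0)
    near_ray = (1.1, 1.5, 2.05)  # within threshold -> snaps to gaze
    far_ray = (2.0, 0.5, 2.0)    # outside threshold -> unchanged
    print(presented_ray_location(near_ray, gaze))  # -> (1.0, 1.5, 2.0)
    print(presented_ray_location(far_ray, gaze))   # -> (2.0, 0.5, 2.0)
```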