Patents by Inventor Thomas Matthew GABLE
Thomas Matthew GABLE has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240329751
Abstract: This document relates to employing tongue gestures to control a computing device, and training machine learning models to detect tongue gestures. One example relates to a method or technique that can include receiving one or more motion signals from an inertial sensor. The method or technique can also include detecting a tongue gesture based at least on the one or more motion signals, and outputting the tongue gesture.
Type: Application
Filed: May 21, 2024
Publication date: October 3, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Raymond Michael WINTERS, IV, Tan GEMICIOGLU, Thomas Matthew GABLE, Yu-Te WANG, Ivan Jelev TASHEV
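The detection step described in the abstract above can be illustrated with a toy sketch. Everything here is hypothetical, not drawn from the patent: the function name, the single-axis motion signal, and the peak threshold are all illustrative stand-ins for a trained model operating on real inertial data.

```python
def detect_tongue_gesture(samples, threshold=1.5):
    """Toy detector: report a gesture when the peak absolute
    inertial-sensor reading exceeds a threshold (hypothetical value);
    a real system would run a trained classifier over the signal."""
    peak = max(abs(s) for s in samples)
    return "tongue_click" if peak >= threshold else None
```

A quiet signal yields no gesture, while a sharp spike (as a tongue tap might produce) is flagged and could then be output as a control event.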
-
Patent number: 12019808
Abstract: This document relates to employing tongue gestures to control a computing device, and training machine learning models to detect tongue gestures. One example relates to a method or technique that can include receiving one or more motion signals from an inertial sensor. The method or technique can also include detecting a tongue gesture based at least on the one or more motion signals, and outputting the tongue gesture.
Type: Grant
Filed: December 6, 2022
Date of Patent: June 25, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Raymond Michael Winters, IV, Tan Gemicioglu, Thomas Matthew Gable, Yu-Te Wang, Ivan Jelev Tashev
-
Publication number: 20240118785
Abstract: The techniques disclosed herein enable systems to translate three-dimensional experiences into user accessible experiences to improve accessibility for users with disabilities. This is accomplished by extracting components from a three-dimensional environment such as user avatars and furniture. The components are organized into component groups based on shared attributes. The component groups are subsequently organized into a flow hierarchy. The flow hierarchy is then presented to the user in an accessibility environment that enables interoperability with various accessibility tools such as screen readers, simplified keyboard inputs, and the like. Selecting a component group, and subsequently a component, through the accessibility environment accordingly invokes functionality within the three-dimensional environment. In this way, users with disabilities are empowered to fully interact with three-dimensional experiences.
Type: Application
Filed: March 30, 2023
Publication date: April 11, 2024
Inventors: Brett D. HUMPHREY, Kian Chai NG, Thomas Matthew GABLE, Amichai CHARNOFF, Martin GRAYSON, Rita Faia MARQUES, Cecily Peregrine Borgatti MORRISON, Harshadha BALASUBRAMANIAN
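The grouping step in the abstract above (components organized into groups by shared attributes, then into a traversable hierarchy) can be sketched as follows. The component representation, a `(name, category)` tuple, is a hypothetical simplification; the patent does not specify this data format.

```python
from collections import defaultdict

def build_flow_hierarchy(components):
    """Group extracted scene components by a shared attribute (their
    category), yielding an ordered hierarchy that an accessibility
    tool such as a screen reader could walk group by group."""
    groups = defaultdict(list)
    for name, category in components:
        groups[category].append(name)
    # Sort groups and members so the traversal order is stable.
    return {cat: sorted(names) for cat, names in sorted(groups.items())}
```

Selecting a group in such a structure, and then a member within it, would map back to the corresponding object in the 3D scene.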
-
Publication number: 20240085985
Abstract: This document relates to employing tongue gestures to control a computing device, and training machine learning models to detect tongue gestures. One example relates to a method or technique that can include receiving one or more motion signals from an inertial sensor. The method or technique can also include detecting a tongue gesture based at least on the one or more motion signals, and outputting the tongue gesture.
Type: Application
Filed: December 6, 2022
Publication date: March 14, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Raymond Michael WINTERS, IV, Tan GEMICIOGLU, Thomas Matthew GABLE, Yu-Te WANG, Ivan Jelev TASHEV
-
Patent number: 11703994
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Grant
Filed: April 28, 2022
Date of Patent: July 18, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
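The mode decision in the abstract above (far interaction when any control point lies beyond a threshold distance from the user) reduces to a simple distance test. This is a minimal sketch; the one-meter threshold and the point format are illustrative assumptions, not values from the patent.

```python
import math

def interaction_mode(user_pos, control_points, threshold=1.0):
    """Return 'far' when any control point of the virtual object lies
    beyond the threshold distance from the user, else 'near'.
    Positions are (x, y, z) tuples; threshold is illustrative."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    if any(dist(user_pos, cp) > threshold for cp in control_points):
        return "far"
    return "near"
```

In far mode, a trigger input would then spawn a virtual interaction object back inside the threshold, within the user's direct reach.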
-
Patent number: 11620000
Abstract: The techniques disclosed herein provide systems that can control the invocation of precision input mode. A system can initially utilize a first input device, such as a head-mounted display device monitoring the eye gaze direction of a user to control the location of an input target. When one or more predetermined input gestures are detected, the system can then invoke a precision mode that transitions the control of the input target from the first input device to a second input device. The second device can include another input device utilizing different input modalities, such as a sensor detecting one or more hand gestures of the user. The predetermined input gestures can include a fixation input gesture, voice commands, or other gestures that may include the use of a user's hands or head. By controlling the invocation of precision input mode using specific gestures, a system can mitigate device coordination issues.
Type: Grant
Filed: March 31, 2022
Date of Patent: April 4, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Sophie Stellmach, Julia Schwarz, Erian Vazquez, Kristian Jose Davila, Thomas Matthew Gable, Adam Behringer
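The hand-off described above (gaze drives the input target until a predetermined gesture invokes precision mode on a second device) is essentially a small state machine. A minimal sketch, with hypothetical event names standing in for the patent's fixation gesture and mode-exit condition:

```python
class PrecisionModeController:
    """Sketch of the mode hand-off: gaze controls the input target
    until a 'fixation' event switches control to hand input, and a
    'release' event returns control to gaze. Event names are
    illustrative, not from the patent."""
    def __init__(self):
        self.mode = "gaze"

    def handle_event(self, event):
        if self.mode == "gaze" and event == "fixation":
            self.mode = "hand"      # invoke precision input mode
        elif self.mode == "hand" and event == "release":
            self.mode = "gaze"      # return to coarse gaze control
        return self.mode
```

Gating the transition on an explicit gesture, rather than running both devices at once, is what lets the system sidestep coordination conflicts between the two input modalities.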
-
Patent number: 11461955
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Grant
Filed: August 23, 2021
Date of Patent: October 4, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
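The geometry in the abstract above (a ray cast from the hand along the direction implied by the inferred arm joint and the hand position, tested against control points) can be sketched as a ray-sphere intersection. Modeling each control point as a small sphere is a hypothetical simplification; the radius and data format are not from the patent.

```python
import math

def cast_hand_ray(joint_pos, hand_pos, control_points, radius=0.1):
    """Cast a ray from the hand along the joint->hand direction and
    return the control points (modeled as spheres of the given radius,
    an illustrative simplification) that the ray passes through."""
    d = [h - j for h, j in zip(hand_pos, joint_pos)]
    norm = math.sqrt(sum(c * c for c in d))
    d = [c / norm for c in d]                          # unit direction
    hits = []
    for cp in control_points:
        v = [c - h for c, h in zip(cp, hand_pos)]
        t = sum(a * b for a, b in zip(v, d))           # projection onto ray
        if t < 0:
            continue                                   # behind the hand
        closest = [h + t * a for h, a in zip(hand_pos, d)]
        gap = math.sqrt(sum((c - q) ** 2 for c, q in zip(cp, closest)))
        if gap <= radius:
            hits.append(cp)
    return hits
```

Any hit would trigger the targeting indication described in the abstract, telling the user which virtual object the ray is selecting.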
-
Publication number: 20220253199
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Application
Filed: April 28, 2022
Publication date: August 11, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
-
Patent number: 11320957
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Grant
Filed: March 25, 2019
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
-
Patent number: 11294472
Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
Type: Grant
Filed: March 26, 2019
Date of Patent: April 5, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage
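The two-stage flow in the abstract above (first-stage gesture, affordance cue, second-stage gesture, then UI element) can be sketched with plain predicates standing in for the patent's parameter-based criteria. The frame format and event names here are hypothetical:

```python
def recognize_two_stage(frames, first_stage, second_stage):
    """Toy two-stage recognizer: scan hand-tracking frames for one
    satisfying the first-stage criteria, cue an affordance, then look
    for a later frame satisfying the second-stage criteria. The
    criteria are plain predicates, a stand-in for parameter checks
    on real hand-tracking data."""
    events = []
    stage = 1
    for frame in frames:
        if stage == 1 and first_stage(frame):
            events.append("affordance_shown")      # cue the second stage
            stage = 2
        elif stage == 2 and second_stage(frame):
            events.append("ui_element_displayed")  # show the GUI element
            break
    return events
```

The affordance between stages is the key design point: it tells the user mid-gesture what completing the motion will do, before any UI is committed.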
-
Publication number: 20210383594
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Application
Filed: August 23, 2021
Publication date: December 9, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
-
Patent number: 11107265
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Grant
Filed: March 11, 2019
Date of Patent: August 31, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
-
Patent number: 10890967
Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
Type: Grant
Filed: July 9, 2018
Date of Patent: January 12, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sophie Stellmach, Sheng Kai Tang, Casey Leon Meekhof, Julia Schwarz, Nahil Tawfik Sharkasi, Thomas Matthew Gable
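The snapping rule in the abstract above comes down to one distance comparison. A minimal sketch, assuming (x, y, z) point tuples and an illustrative threshold value not taken from the patent:

```python
import math

def presented_ray_location(input_loc, gaze_loc, snap_threshold=0.15):
    """Snap the presented ray endpoint to the gaze location when the
    input-ray endpoint is within the snap threshold of it; otherwise
    present the raw input-ray endpoint. Threshold is illustrative."""
    gap = math.sqrt(sum((a - b) ** 2 for a, b in zip(input_loc, gaze_loc)))
    return gaze_loc if gap <= snap_threshold else input_loc
```

The effect is that the fast but coarse input ray inherits the precision of the user's gaze whenever the two roughly agree on a target.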
-
Publication number: 20200226814
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Application
Filed: March 11, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
-
Publication number: 20200225758
Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
Type: Application
Filed: March 26, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Chuan QIN, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Joshua Kyle NEFF, Jamie Bryant KIRSCHENBAUM, Neil Richard KRONLAGE
-
Publication number: 20200225830
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Application
Filed: March 25, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
-
Publication number: 20200012341
Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
Type: Application
Filed: July 9, 2018
Publication date: January 9, 2020
Inventors: Sophie STELLMACH, Sheng Kai TANG, Casey Leon MEEKHOF, Julia SCHWARZ, Nahil Tawfik SHARKASI, Thomas Matthew GABLE