Patents by Inventor Casey Leon Meekhof
Casey Leon Meekhof has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230418390
Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
Type: Application. Filed: September 8, 2023. Publication date: December 28, 2023. Applicant: Microsoft Technology Licensing, LLC. Inventors: Julia Schwarz, Bugra Tekin, Sophie Stellmach, Erian Vazquez, Casey Leon Meekhof, Fabian Gobel.
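The two-network pipeline this abstract describes can be sketched as follows. This is a minimal, runnable illustration of the control flow only: the trained neural networks are replaced by hypothetical placeholder heuristics, and all field names and threshold values are illustrative, not taken from the patent.

```python
def likelihood_network(frame_window):
    """Stage 1 stand-in: estimate the likelihood of an upcoming gesture.
    Placeholder heuristic: fraction of frames with fast fingertip motion."""
    moving = sum(1 for f in frame_window if f["fingertip_speed"] > 0.5)
    return moving / len(frame_window)

def recognition_network(frame_window, detection_threshold):
    """Stage 2 stand-in: decide whether a gesture is being performed now."""
    score = max(f["pinch_score"] for f in frame_window)
    return score >= detection_threshold

def evaluate_gesture(frame_window, base_threshold=0.8, relaxed_threshold=0.6):
    # The first stage's likelihood output adjusts the second stage's
    # recognition parameters: when a gesture is expected, the detection
    # threshold for the predetermined window is relaxed.
    likelihood = likelihood_network(frame_window)
    threshold = relaxed_threshold if likelihood > 0.5 else base_threshold
    return recognition_network(frame_window, threshold)
```

The key idea the sketch preserves is that the first network does not detect gestures itself; it only biases how strict the second network's detection is for the window.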
-
Patent number: 11768544
Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
Type: Grant. Filed: February 1, 2022. Date of Patent: September 26, 2023. Assignee: Microsoft Technology Licensing, LLC. Inventors: Julia Schwarz, Bugra Tekin, Sophie Stellmach, Erian Vazquez, Casey Leon Meekhof, Fabian Gobel.
-
Publication number: 20230244316
Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
Type: Application. Filed: February 1, 2022. Publication date: August 3, 2023. Applicant: Microsoft Technology Licensing, LLC. Inventors: Julia Schwarz, Bugra Tekin, Sophie Stellmach, Erian Vazquez, Casey Leon Meekhof, Fabian Gobel.
-
Patent number: 11703994
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Grant. Filed: April 28, 2022. Date of Patent: July 18, 2023. Assignee: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok.
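The mode decision in this abstract reduces to a distance test over the virtual object's control points. A minimal sketch, with an illustrative threshold value and hypothetical names not taken from the patent:

```python
import math

def interaction_mode(user_pos, control_points, threshold=1.5):
    """Invoke the far interaction mode when any control point of the
    virtual object lies beyond the threshold distance from the user;
    otherwise the near interaction mode applies."""
    if any(math.dist(user_pos, p) > threshold for p in control_points):
        return "far"
    return "near"
```

In the patented flow, a trigger input received while in "far" mode then causes a virtual interaction object to be displayed within the threshold distance, returning interaction to "near" mode.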
-
Patent number: 11461955
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Grant. Filed: August 23, 2021. Date of Patent: October 4, 2022. Assignee: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum.
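The geometry described in this abstract — a ray cast from the hand, anchored by an inferred arm joint, tested against a virtual object's control points — can be sketched as below. The shoulder-through-hand direction and the sphere-radius hit test are illustrative assumptions, not details from the patent.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def hand_ray(arm_joint, hand):
    """Cast a ray from the hand, directed along the line from the
    inferred arm joint (e.g. the shoulder) through the hand."""
    direction = normalize(tuple(h - j for h, j in zip(hand, arm_joint)))
    return hand, direction

def ray_hits_point(origin, direction, point, radius=0.05):
    """Sphere test: does the ray pass within `radius` of a control point?"""
    to_point = tuple(p - o for p, o in zip(point, origin))
    t = sum(a * b for a, b in zip(to_point, direction))  # projection onto ray
    if t < 0:
        return False  # control point is behind the ray origin
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    return math.dist(closest, point) <= radius
```

A hit on any control point would then drive the targeting indication shown to the user.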
-
Publication number: 20220300071
Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. A system presents a mixed-reality environment to a user with a mixed-reality display device, displays a control ray as a hologram of a line extending away from the user control within the mixed-reality environment, and obtains a control ray activation variable associated with a user control. The control ray activation variable includes a velocity or acceleration of the user control. After displaying the control ray within the mixed-reality environment, and in response to determining that the control ray activation variable exceeds a predetermined threshold, the system selectively disables display of the control ray within the mixed-reality environment.
Type: Application. Filed: June 8, 2022. Publication date: September 22, 2022. Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach.
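The disabling rule in this abstract is a single threshold test on the activation variable. A minimal sketch, with an illustrative threshold value (the patent does not specify one):

```python
def update_control_ray(displayed, control_velocity, velocity_threshold=2.0):
    """Selectively disable display of the control ray when the user
    control's velocity (the activation variable) exceeds the threshold;
    otherwise leave the current display state unchanged."""
    if displayed and control_velocity > velocity_threshold:
        return False
    return displayed
```

Acceleration could substitute for velocity as the activation variable, per the abstract, without changing the structure of the check.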
-
Patent number: 11423585
Abstract: Examples that relate to virtual controls in a mixed reality experience are described. One example provides a method comprising, via a mixed reality display device, displaying mixed reality content including a representation of a virtual control, and receiving sensor data indicating motion of a user digit. The method further comprises, based at least in part on the sensor data, determining a velocity of the user digit, and responsive to determining that the velocity of the user digit relative to a surface corresponding to the virtual control satisfies a velocity-based selection condition, triggering the virtual control.
Type: Grant. Filed: November 30, 2020. Date of Patent: August 23, 2022. Assignee: Microsoft Technology Licensing, LLC. Inventors: Casey Leon Meekhof, Kyle Nicholas San, Julia Schwarz.
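One plausible form of the velocity-based selection condition in this abstract is "the digit reaches the control surface with at least a minimum approach speed". The sketch below assumes the digit's motion is sampled as signed distances to the surface; the sampling rate and speed threshold are illustrative, not from the patent.

```python
def approach_speed(distances_to_surface, dt):
    """Finite-difference speed of the digit along the surface normal,
    from the last two samples. Positive means approaching the surface."""
    return (distances_to_surface[-2] - distances_to_surface[-1]) / dt

def triggers_control(distances_to_surface, dt=1 / 60, min_press_speed=0.3):
    """Trigger the virtual control when the digit reaches the surface
    (distance <= 0) while satisfying the velocity-based condition."""
    reached = distances_to_surface[-1] <= 0.0
    return reached and approach_speed(distances_to_surface, dt) >= min_press_speed
```

Conditioning on velocity rather than position alone helps reject slow drift through a hologram that the user did not intend as a press.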
-
Publication number: 20220253199
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Application. Filed: April 28, 2022. Publication date: August 11, 2022. Applicant: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok.
-
Patent number: 11397463
Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
Type: Grant. Filed: March 8, 2019. Date of Patent: July 26, 2022. Assignee: Microsoft Technology Licensing, LLC. Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach.
-
Publication number: 20220172405
Abstract: Examples that relate to virtual controls in a mixed reality experience are described. One example provides a method comprising, via a mixed reality display device, displaying mixed reality content including a representation of a virtual control, and receiving sensor data indicating motion of a user digit. The method further comprises, based at least in part on the sensor data, determining a velocity of the user digit, and responsive to determining that the velocity of the user digit relative to a surface corresponding to the virtual control satisfies a velocity-based selection condition, triggering the virtual control.
Type: Application. Filed: November 30, 2020. Publication date: June 2, 2022. Applicant: Microsoft Technology Licensing, LLC. Inventors: Casey Leon Meekhof, Kyle Nicholas San, Julia Schwarz.
-
Patent number: 11320911
Abstract: Systems and methods are provided for detecting user-object interaction in mixed-reality environments. A mixed-reality system detects a controller gesture with an associated controller orientation in the mixed-reality environment. The mixed-reality system then determines an interaction region for the controller gesture and identifies one or more virtual objects within the interaction region. The virtual objects each have an associated orientation affinity. Subsequently, the mixed-reality system determines an orientation similarity score between the controller orientation and the orientation affinity for each virtual object within the interaction region. In response to determining that at least one orientation similarity score exceeds a predetermined threshold, the mixed-reality system executes an interaction between the controller and the virtual object that has the greatest orientation similarity score.
Type: Grant. Filed: March 8, 2019. Date of Patent: May 3, 2022. Assignee: Microsoft Technology Licensing, LLC. Inventors: Julia Schwarz, Jason Michael Ray, Casey Leon Meekhof.
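The selection step in this abstract — score every candidate, then act only if the best score beats a threshold — can be sketched directly. Representing orientations as vectors and the similarity score as cosine similarity is an assumption for illustration; the patent does not prescribe a particular metric or threshold.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def pick_target(controller_orientation, objects, threshold=0.7):
    """Score each object's orientation affinity against the controller
    orientation; interact with the highest-scoring object only when its
    score exceeds the predetermined threshold."""
    scored = [(cosine_similarity(controller_orientation, o["affinity"]), o)
              for o in objects]
    best_score, best = max(scored, key=lambda pair: pair[0])
    return best if best_score > threshold else None
```

Returning `None` when no score clears the threshold corresponds to the system declining to execute an interaction.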
-
Patent number: 11320957
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Grant. Filed: March 25, 2019. Date of Patent: May 3, 2022. Assignee: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok.
-
Publication number: 20220121281
Abstract: A method for improving user interaction with a virtual environment includes measuring a first position of a user's gaze relative to the virtual environment, receiving a system engagement input, presenting a guidance cursor at the first position, receiving a target engagement input and decoupling the guidance cursor from the user's gaze, receiving a movement input, and translating the guidance cursor based on the movement input.
Type: Application. Filed: December 30, 2021. Publication date: April 21, 2022. Inventors: Sophie Stellmach, Casey Leon Meekhof, James R. Tichenor, David Bruce Lindsay.
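The couple-then-decouple behavior in this abstract maps naturally onto a small state object: the cursor tracks gaze until the target engagement input arrives, after which movement inputs translate it directly. The class and input names below are illustrative, not from the patent.

```python
class GuidanceCursor:
    """Sketch of the decoupled guidance cursor: follows the user's gaze
    until the target engagement input decouples it; afterwards, movement
    inputs translate it directly."""

    def __init__(self, gaze_position):
        self.position = gaze_position
        self.coupled = True

    def on_target_engagement(self):
        self.coupled = False  # decouple the cursor from gaze

    def update(self, gaze_position, movement_delta=(0.0, 0.0)):
        if self.coupled:
            self.position = gaze_position  # track gaze while coupled
        else:
            self.position = tuple(p + d for p, d
                                  in zip(self.position, movement_delta))
```

Decoupling lets coarse gaze place the cursor quickly while a steadier input refines it, avoiding jitter from continuous gaze tracking during fine adjustment.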
-
Patent number: 11294472
Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
Type: Grant. Filed: March 26, 2019. Date of Patent: April 5, 2022. Assignee: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage.
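The two-stage flow in this abstract is a small state machine: first-stage criteria met → show the affordance; second-stage criteria met → show the UI element. The particular gesture criteria below (palm-up, then pinch) are hypothetical examples, not taken from the patent.

```python
class TwoStageGestureMachine:
    """Sketch of the two-stage gesture recognizer described above."""

    def __init__(self):
        self.state = "idle"
        self.affordance_shown = False
        self.ui_shown = False

    def update(self, hand_params):
        if self.state == "idle" and hand_params.get("palm_up"):
            self.state = "first_stage"    # first-stage criteria satisfied
            self.affordance_shown = True  # cue the second-stage gesture
        elif self.state == "first_stage" and hand_params.get("pinch"):
            self.state = "second_stage"   # second-stage criteria satisfied
            self.ui_shown = True          # display the GUI element
```

Gating the UI on two sequential gestures, with an affordance between them, reduces accidental activations compared with a single-gesture trigger.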
-
Publication number: 20210383594
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Application. Filed: August 23, 2021. Publication date: December 9, 2021. Applicant: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum.
-
Patent number: 11107265
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Grant. Filed: March 11, 2019. Date of Patent: August 31, 2021. Assignee: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum.
-
Patent number: 10969937
Abstract: Systems and methods are provided for controlling the position of an interactive movable menu in a mixed-reality environment. In some instances, a mixed-reality display device presents a mixed-reality environment to a user. The mixed-reality device then detects a first gesture associated with a user controller while presenting the mixed-reality environment and, in response to the first gesture, triggers a display of an interactive movable menu within the mixed-reality environment as a tethered hologram that is dynamically moved within the mixed-reality environment relative to and corresponding with movement of the user controller within the mixed-reality environment. Then, in response to a second detected gesture, the mixed-reality device selectively locks a display of the interactive movable menu at a fixed position that is not tethered to the user controller.
Type: Grant. Filed: March 11, 2019. Date of Patent: April 6, 2021. Assignee: Microsoft Technology Licensing, LLC. Inventors: Julia Schwarz, Casey Leon Meekhof, Alon Farchy, Sheng Kai Tang, Nicholas F. Kamuda.
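The tethered-versus-locked behavior in this abstract comes down to which reference frame supplies the menu's position each frame. A minimal sketch; the offset vector and mode names are illustrative assumptions:

```python
def menu_position(mode, controller_position, locked_position,
                  offset=(0.0, 0.1, 0.0)):
    """While tethered, the menu follows the user controller at a fixed
    offset; after the lock gesture, it stays at the locked world-space
    position regardless of controller movement."""
    if mode == "tethered":
        return tuple(c + o for c, o in zip(controller_position, offset))
    return locked_position
```

The lock gesture would capture the menu's current world position as `locked_position` and switch `mode` to `"locked"`.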
-
Patent number: 10890967
Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the input ray location.
Type: Grant. Filed: July 9, 2018. Date of Patent: January 12, 2021. Assignee: Microsoft Technology Licensing, LLC. Inventors: Sophie Stellmach, Sheng Kai Tang, Casey Leon Meekhof, Julia Schwarz, Nahil Tawfik Sharkasi, Thomas Matthew Gable.
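The snapping rule in this abstract is a single proximity test between the input ray's distal point and the gaze location. A minimal sketch; the threshold value is illustrative:

```python
import math

def presented_ray_location(input_ray_location, gaze_location,
                           snap_threshold=0.2):
    """Snap the presented ray location to the gaze location when the
    input ray's distal point falls within the snap threshold distance
    of the gaze location; otherwise present the raw ray location."""
    if math.dist(input_ray_location, gaze_location) <= snap_threshold:
        return gaze_location
    return input_ray_location
```

Snapping exploits the fact that gaze is fast but imprecise while a hand ray is precise but effortful: near the gaze target, the gaze wins.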
-
Patent number: 10831265
Abstract: A method for improving user interaction with a virtual environment includes measuring a first position of the user's gaze relative to a virtual element, selecting the virtual element in the virtual environment at an origin when the user's gaze overlaps the virtual element, measuring a second position of the user's gaze relative to the virtual element, presenting a visual placeholder at the second position of the user's gaze when the second position of the user's gaze is beyond a threshold distance from the origin, and moving the visual placeholder relative to a destination using a secondary input device.
Type: Grant. Filed: April 20, 2018. Date of Patent: November 10, 2020. Assignee: Microsoft Technology Licensing, LLC. Inventors: Sophie Stellmach, Casey Leon Meekhof, Julia Schwarz.
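The placeholder logic in this abstract is another gaze-distance threshold test: once the gaze has moved far enough from the selection origin, a visual placeholder appears at the gaze position (to then be refined by the secondary input device). The threshold value and return shape below are illustrative:

```python
import math

def placeholder_state(origin, gaze_position, threshold=0.3):
    """Present a visual placeholder at the current gaze position once
    the gaze has moved beyond the threshold distance from the origin
    of the selected virtual element."""
    if math.dist(origin, gaze_position) > threshold:
        return {"visible": True, "position": gaze_position}
    return {"visible": False, "position": origin}
```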
-
Publication number: 20200225758
Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
Type: Application. Filed: March 26, 2019. Publication date: July 16, 2020. Applicant: Microsoft Technology Licensing, LLC. Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage.