Patents by Inventor Casey Leon Meekhof

Casey Leon Meekhof has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230418390
    Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
    Type: Application
    Filed: September 8, 2023
    Publication date: December 28, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Bugra TEKIN, Sophie STELLMACH, Erian VAZQUEZ, Casey Leon MEEKHOF, Fabian GOBEL
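A minimal Python sketch of the two-network pipeline this abstract describes, assuming hypothetical interfaces for the trained models (`intent_model.predict`, `gesture_model.adjust_threshold`, `gesture_model.predict`) and an arbitrary 30-frame window; it illustrates the data flow only, not the claimed implementation.

```python
from collections import deque

WINDOW = 30  # hypothetical size of the predetermined window of data frames

class GestureEvaluator:
    def __init__(self, intent_model, gesture_model):
        # intent_model: first network, trained to predict upcoming gesture interactions
        # gesture_model: second network, trained to detect gestures currently performed
        self.intent_model = intent_model
        self.gesture_model = gesture_model
        self.frames = deque(maxlen=WINDOW)

    def on_frame(self, hand_tracking_data):
        """Feed one frame of input data (including hand tracking) through the pipeline."""
        self.frames.append(hand_tracking_data)
        if len(self.frames) < WINDOW:
            return None  # not enough frames yet to evaluate a full window

        window = list(self.frames)

        # First network: likelihood that the user will perform a gesture
        # interaction during this window of frames.
        likelihood = self.intent_model.predict(window)

        # Second network: adjust its recognition parameters (here a single
        # detection threshold) based on the indicated likelihood, then
        # evaluate the same window with the adjusted parameters.
        threshold = self.gesture_model.adjust_threshold(likelihood)
        scores = self.gesture_model.predict(window)   # {gesture_name: score}

        # Output a signal per gesture indicating whether it is being performed.
        return {gesture: score >= threshold for gesture, score in scores.items()}
```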
  • Patent number: 11768544
    Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: September 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Bugra Tekin, Sophie Stellmach, Erian Vazquez, Casey Leon Meekhof, Fabian Gobel
  • Publication number: 20230244316
    Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
    Type: Application
    Filed: February 1, 2022
    Publication date: August 3, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Bugra TEKIN, Sophie STELLMACH, Erian VAZQUEZ, Casey Leon MEEKHOF, Fabian GOBEL
  • Patent number: 11703994
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: July 18, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
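A small sketch of the near/far mode decision this abstract describes, assuming a hypothetical 1-metre threshold and illustrative function names; distances are plain Euclidean distances between 3-D points.

```python
import math

FAR_THRESHOLD_M = 1.0  # hypothetical near/far boundary, metres

def choose_interaction_mode(user_position, control_points):
    """Invoke the far interaction mode when any control point of the virtual
    object lies beyond the threshold distance from the user."""
    if any(math.dist(user_position, p) > FAR_THRESHOLD_M for p in control_points):
        return "far"
    return "near"

def on_trigger_input(mode, spawn_interaction_object):
    """In the far mode, a trigger input invokes the near mode and places a
    virtual interaction object within the threshold distance of the user."""
    if mode == "far":
        spawn_interaction_object(max_distance=FAR_THRESHOLD_M)
        return "near"
    return mode
```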
  • Patent number: 11461955
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: October 4, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
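A rough Python sketch of the ray cast this abstract describes: the arm joint is inferred as a fixed offset from the head-mounted display position (the offset values and the hit radius are illustrative assumptions, not values from the patent).

```python
import numpy as np

def cast_hand_ray(hmd_position, hand_position, shoulder_offset=(0.15, -0.25, 0.0)):
    """Cast a ray from the hand, oriented by an arm joint whose position is
    inferred from the head-mounted display position."""
    joint = np.asarray(hmd_position, dtype=float) + np.asarray(shoulder_offset)
    origin = np.asarray(hand_position, dtype=float)
    direction = origin - joint
    direction /= np.linalg.norm(direction)
    return origin, direction                            # ray origin and unit direction

def targeted_objects(origin, direction, control_points, radius=0.05):
    """List the virtual objects whose control points the ray passes within
    `radius` of, so the user can be given a targeting indication."""
    hits = []
    for name, point in control_points.items():
        to_point = np.asarray(point, dtype=float) - origin
        along = float(np.dot(to_point, direction))      # projection onto the ray
        if along < 0:
            continue                                    # control point is behind the hand
        closest = origin + along * direction
        if np.linalg.norm(np.asarray(point, dtype=float) - closest) <= radius:
            hits.append(name)
    return hits
```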
  • Publication number: 20220300071
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. A system presents a mixed-reality environment to a user with a mixed-reality display device, displays a control ray as a hologram of a line extending away from a user control within the mixed-reality environment, and obtains a control ray activation variable associated with the user control. The control ray activation variable includes a velocity or acceleration of the user control. After displaying the control ray within the mixed-reality environment, and in response to determining that the control ray activation variable exceeds a predetermined threshold, the system selectively disables display of the control ray within the mixed-reality environment.

    Type: Application
    Filed: June 8, 2022
    Publication date: September 22, 2022
    Inventors: Julia SCHWARZ, Sheng Kai TANG, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Sophie STELLMACH
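A brief sketch of the disabling rule in the abstract above, assuming hypothetical speed and acceleration thresholds and a per-frame velocity sample for the user control.

```python
MAX_SPEED = 1.5   # hypothetical hand-speed threshold, m/s
MAX_ACCEL = 8.0   # hypothetical acceleration threshold, m/s^2

class ControlRay:
    def __init__(self):
        self.visible = True
        self._last_velocity = 0.0

    def update(self, velocity, dt):
        """Hide the control ray when the activation variable (the control's
        velocity, or the acceleration derived from it) exceeds its threshold."""
        acceleration = abs(velocity - self._last_velocity) / dt if dt > 0 else 0.0
        self._last_velocity = velocity
        if velocity > MAX_SPEED or acceleration > MAX_ACCEL:
            self.visible = False   # selectively disable display of the ray
        return self.visible
```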
  • Patent number: 11423585
    Abstract: Examples that relate to virtual controls in a mixed reality experience are described. One example provides a method comprising, via a mixed reality display device, displaying mixed reality content including a representation of a virtual control, and receiving sensor data indicating motion of a user digit. The method further comprises, based at least in part on the sensor data, determining a velocity of the user digit, and responsive to determining that the velocity of the user digit relative to a surface corresponding to the virtual control satisfies a velocity-based selection condition, triggering the virtual control.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: August 23, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Casey Leon Meekhof, Kyle Nicholas San, Julia Schwarz
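A minimal sketch of a velocity-based selection condition like the one this abstract describes, assuming the digit's signed distance to the control surface is sampled each frame; the threshold value and function names are illustrative.

```python
PRESS_SPEED = 0.03  # hypothetical minimum approach speed toward the surface, m/s

def control_triggered(prev_distance, curr_distance, dt, contact=0.0):
    """Trigger the virtual control only when the digit crosses the control's
    surface while its velocity toward that surface satisfies the condition."""
    if dt <= 0:
        return False
    approach_speed = (prev_distance - curr_distance) / dt  # positive when moving toward the surface
    crossed_surface = prev_distance > contact >= curr_distance
    return crossed_surface and approach_speed >= PRESS_SPEED
```

For example, at 60 Hz a digit moving from 2 cm in front of the surface to 0.5 cm behind it in one frame crosses the surface at roughly 1.5 m/s and would trigger the control, while a digit resting on the surface would not.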
  • Publication number: 20220253199
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
  • Patent number: 11397463
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: July 26, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach
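A very small sketch of the gesture-driven enable/disable behaviour described above; the gesture names are purely illustrative placeholders, not gestures named in the patent.

```python
class RayToggler:
    """Track whether a control ray should be rendered for a user control,
    based on recognized enabling and disabling gestures."""
    def __init__(self):
        self.ray_enabled = False

    def on_gesture(self, gesture):
        if gesture == "point":     # e.g. an extended index finger enables the ray
            self.ray_enabled = True
        elif gesture == "fist":    # e.g. a closed fist disables it
            self.ray_enabled = False
        return self.ray_enabled
```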
  • Publication number: 20220172405
    Abstract: Examples that relate to virtual controls in a mixed reality experience are described. One example provides a method comprising, via a mixed reality display device, displaying mixed reality content including a representation of a virtual control, and receiving sensor data indicating motion of a user digit. The method further comprises, based at least in part on the sensor data, determining a velocity of the user digit, and responsive to determining that the velocity of the user digit relative to a surface corresponding to the virtual control satisfies a velocity-based selection condition, triggering the virtual control.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Casey Leon MEEKHOF, Kyle Nicholas SAN, Julia SCHWARZ
  • Patent number: 11320911
    Abstract: Systems and methods are provided for detecting user-object interaction in mixed-reality environments. A mixed-reality system detects a controller gesture with an associated controller orientation in the mixed-reality environment. The mixed-reality system then determines an interaction region for the controller gesture and identifies one or more virtual objects within the interaction region. The virtual objects each have an associated orientation affinity. Subsequently, the mixed-reality system determines an orientation similarity score between the controller orientation and the orientation affinity for each virtual object within the interaction region. In response to determining that at least one orientation similarity score exceeds a predetermined threshold, the mixed-reality system executes an interaction between the controller and the virtual object that has the greatest orientation similarity score.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Jason Michael Ray, Casey Leon Meekhof
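A sketch of one plausible orientation similarity score for the selection rule this abstract describes, using the cosine of the angle between the controller orientation and each object's orientation affinity; the threshold and data layout are assumptions.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.7  # hypothetical minimum similarity for an interaction

def orientation_similarity(controller_dir, affinity_dir):
    """Cosine similarity between the controller orientation and an object's
    orientation affinity (both given as 3-D direction vectors)."""
    a = np.asarray(controller_dir, dtype=float)
    b = np.asarray(affinity_dir, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_interaction_target(controller_dir, objects_in_region):
    """Return the in-region object with the greatest similarity score, provided
    at least one score exceeds the predetermined threshold."""
    scored = [(orientation_similarity(controller_dir, obj["affinity"]), obj)
              for obj in objects_in_region]
    if not scored:
        return None
    best_score, best_obj = max(scored, key=lambda s: s[0])
    return best_obj if best_score > SIMILARITY_THRESHOLD else None
```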
  • Patent number: 11320957
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
  • Publication number: 20220121281
    Abstract: A method for improving user interaction with a virtual environment includes measuring a first position of a user's gaze relative to the virtual environment, receiving a system engagement input, presenting a guidance cursor at the first position, receiving a target engagement input and decoupling the guidance cursor from the user's gaze, receiving a movement input, and translating the guidance cursor based on the movement input.
    Type: Application
    Filed: December 30, 2021
    Publication date: April 21, 2022
    Inventors: Sophie STELLMACH, Casey Leon MEEKHOF, James R. TICHENOR, David Bruce LINDSAY
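A simple state-machine sketch of the guidance-cursor behaviour in the abstract above, assuming 2-D cursor coordinates and illustrative event names for the system engagement, target engagement, and movement inputs.

```python
class GuidanceCursor:
    """Cursor that is placed at the gaze, follows it while engaged, and is
    translated manually once decoupled."""
    def __init__(self):
        self.position = None
        self.engaged = False     # system engagement input received
        self.decoupled = False   # target engagement input received

    def on_system_engagement(self, gaze_position):
        self.engaged = True
        self.position = gaze_position        # present the cursor at the gaze position

    def on_gaze(self, gaze_position):
        if self.engaged and not self.decoupled:
            self.position = gaze_position    # while coupled, follow the measured gaze

    def on_target_engagement(self):
        self.decoupled = True                # decouple the cursor from the gaze

    def on_movement_input(self, delta):
        if self.decoupled:
            x, y = self.position
            dx, dy = delta
            self.position = (x + dx, y + dy)  # translate based on the movement input
```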
  • Patent number: 11294472
    Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: April 5, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage
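A short sketch of the two-stage recognition flow this abstract describes; the stage criteria (palm facing the user, pinch distance) and callback names are hypothetical examples of parameters derived from hand tracking data.

```python
class TwoStageGestureRecognizer:
    """Recognize a first-stage gesture, cue the second stage with an affordance,
    then display a UI element when the second-stage gesture completes."""
    def __init__(self, show_affordance, show_menu):
        self.stage = 0
        self.show_affordance = show_affordance
        self.show_menu = show_menu

    def on_hand_frame(self, params):
        # params: features derived from hand tracking data, e.g. palm direction
        # and pinch distance (names are illustrative).
        if self.stage == 0 and params["palm_facing_user"]:
            self.stage = 1
            self.show_affordance()   # cue the second-stage gesture
        elif self.stage == 1 and params["pinch_distance"] < 0.01:
            self.stage = 2
            self.show_menu()         # display the graphical user interface element
```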
  • Publication number: 20210383594
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: August 23, 2021
    Publication date: December 9, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
  • Patent number: 11107265
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: August 31, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
  • Patent number: 10969937
    Abstract: Systems and methods are provided for controlling the position of an interactive movable menu in a mixed-reality environment. In some instances, a mixed-reality display device presents a mixed-reality environment to a user. The mixed-reality device then detects a first gesture associated with a user controller while presenting the mixed-reality environment and, in response to the first gesture, triggers a display of an interactive movable menu within the mixed-reality environment as a tethered hologram that is dynamically moved within the mixed-reality environment relative to and corresponding with movement of the user controller within the mixed-reality environment. Then, in response to a second detected gesture, the mixed-reality device selectively locks a display of the interactive movable menu at a fixed position that is not tethered to the user controller.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: April 6, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Casey Leon Meekhof, Alon Farchy, Sheng Kai Tang, Nicholas F. Kamuda
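A sketch of the tether-then-lock behaviour this abstract describes, with illustrative gesture names standing in for the first and second detected gestures.

```python
class MovableMenu:
    """Menu hologram that follows the user controller until a second gesture
    locks it at a fixed, untethered position."""
    def __init__(self):
        self.visible = False
        self.tethered = False
        self.position = None

    def on_gesture(self, gesture, controller_position):
        if gesture == "palm_up":       # first gesture: show the menu tethered to the controller
            self.visible, self.tethered = True, True
            self.position = controller_position
        elif gesture == "pin":         # second gesture: lock the menu in place
            self.tethered = False

    def on_controller_moved(self, controller_position):
        if self.visible and self.tethered:
            self.position = controller_position   # move with the user controller
```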
  • Patent number: 10890967
    Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: January 12, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sophie Stellmach, Sheng Kai Tang, Casey Leon Meekhof, Julia Schwarz, Nahil Tawfik Sharkasi, Thomas Matthew Gable
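A compact sketch of the snapping rule in the abstract above, assuming a hypothetical 15 cm snap radius and 3-D point coordinates.

```python
import math

SNAP_THRESHOLD_M = 0.15  # hypothetical snap radius around the gaze location

def presented_ray_location(input_ray_end, gaze_location):
    """Snap the presented ray location to the gaze location when the input
    ray's distal point falls within the snap threshold distance of it."""
    if math.dist(input_ray_end, gaze_location) <= SNAP_THRESHOLD_M:
        return gaze_location   # snapped: the ray appears to land on the gaze target
    return input_ray_end       # otherwise present the ray where it actually points
```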
  • Patent number: 10831265
    Abstract: A method for improving user interaction with a virtual environment includes measuring a first position of a user's gaze relative to a virtual element, selecting the virtual element in the virtual environment at an origin when the user's gaze overlaps the virtual element, measuring a second position of the user's gaze relative to the virtual element, presenting a visual placeholder at the second position of the user's gaze when the second position of the user's gaze is beyond a threshold distance from the origin, and moving the visual placeholder relative to a destination using a secondary input device.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: November 10, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sophie Stellmach, Casey Leon Meekhof, Julia Schwarz
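A sketch of the gaze-selection-plus-placeholder flow this abstract describes, assuming a hypothetical distance threshold, 3-D gaze coordinates, and an illustrative secondary-input callback.

```python
import math

PLACEHOLDER_THRESHOLD_M = 0.3  # hypothetical distance before the placeholder appears

class GazePlaceholder:
    """Select an element under the gaze, then show a movable placeholder once
    the gaze travels beyond the threshold distance from the selection origin."""
    def __init__(self):
        self.origin = None        # gaze position at which the element was selected
        self.placeholder = None

    def select(self, gaze_position):
        self.origin = gaze_position

    def on_gaze(self, gaze_position):
        if self.origin is not None and math.dist(gaze_position, self.origin) > PLACEHOLDER_THRESHOLD_M:
            self.placeholder = gaze_position   # present the placeholder at the second gaze position

    def on_secondary_input(self, delta):
        if self.placeholder is not None:
            # move the placeholder toward a destination using the secondary input device
            self.placeholder = tuple(p + d for p, d in zip(self.placeholder, delta))
```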
  • Publication number: 20200225736
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
    Type: Application
    Filed: March 8, 2019
    Publication date: July 16, 2020
    Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach