Patents by Inventor Sheng-Kai Tang

Sheng-Kai Tang has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO); several granted patents below are paired with their earlier published applications, so some abstracts appear twice.

  • Patent number: 11960790
    Abstract: A computer-implemented method includes detecting user interaction with mixed reality displayed content in a mixed reality system. User focus is determined as a function of the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine which objects in the displayed content the user is looking at, and determining a context of a user dialog during the voice engagement.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: April 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Jonathan Kyle Palmer, Anthony James Ambrus, Mathew J. Lamb, Sheng Kai Tang, Sophie Stellmach
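    The abstract above sketches a loop in which gaze-derived focus extends a voice-engagement window. Below is a minimal Python sketch of that idea; the class name, thresholds, and the simple dwell-based focus test are assumptions, since the patent does not specify an implementation.

    ```python
    import time

    class VoiceEngagementTimer:
        """Toy model: extend a voice-listening window while gaze dwells on content."""

        BASE_WINDOW_S = 5.0    # default listening window (assumed value)
        MAX_WINDOW_S = 15.0    # cap on cumulative extensions (assumed value)
        FOCUS_BONUS_S = 2.0    # extension granted per focused update (assumed value)

        def __init__(self) -> None:
            self.deadline = time.monotonic() + self.BASE_WINDOW_S

        def update(self, gazed_object: str | None, dwell_s: float) -> None:
            # Treat sustained dwell on any displayed object as user focus; a real
            # spatial intent model would also weigh the context of the dialog.
            if gazed_object is not None and dwell_s > 0.5:
                self.deadline = min(self.deadline + self.FOCUS_BONUS_S,
                                    time.monotonic() + self.MAX_WINDOW_S)

        def engaged(self) -> bool:
            # Voice engagement continues while the (possibly extended) window is open.
            return time.monotonic() < self.deadline
    ```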
  • Patent number: 11755122
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, performed on a display device, comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: September 12, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
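    The abstract above describes a pipeline from a recognized hand gesture to a locally presented emoji plus an instruction to peer displays. A minimal sketch follows; the gesture labels, the emoji mapping, and the DisplayPeer stand-in are assumptions for illustration.

    ```python
    from dataclasses import dataclass

    # Assumed gesture vocabulary and mapping; the patent fixes neither.
    GESTURE_TO_EMOJI = {"thumbs_up": "👍", "heart_hands": "❤", "wave": "👋"}

    @dataclass
    class DisplayPeer:
        name: str

        def show_emoji(self, emoji: str) -> None:
            # Stand-in for the instruction sent to another display device.
            print(f"[{self.name}] {emoji}")

    def on_gesture_recognized(gesture: str, local: DisplayPeer,
                              peers: list[DisplayPeer]) -> None:
        emoji = GESTURE_TO_EMOJI.get(gesture)
        if emoji is None:
            return                   # gesture has no emoji assigned
        local.show_emoji(emoji)      # present the emoji locally
        for peer in peers:           # instruct other displays to present it
            peer.show_emoji(emoji)
    ```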
  • Patent number: 11703994
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: July 18, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
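    The abstract above switches between far and near interaction depending on whether control points fall outside a threshold distance. A minimal sketch, with an assumed arm's-reach threshold and an assumed proxy-object offset:

    ```python
    import math

    THRESHOLD_M = 0.6   # assumed arm's-reach threshold, in meters

    def choose_mode(user_pos, control_points) -> str:
        # Far mode when any control point lies beyond the reach threshold.
        far = any(math.dist(user_pos, p) > THRESHOLD_M for p in control_points)
        return "far" if far else "near"

    def on_trigger(mode: str, user_pos):
        # In far mode, a trigger input invokes near mode by spawning a proxy
        # "virtual interaction object" within the threshold distance.
        if mode == "far":
            proxy_pos = (user_pos[0], user_pos[1], user_pos[2] + 0.3)  # assumed offset
            return "near", proxy_pos
        return mode, None
    ```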
  • Patent number: 11656689
    Abstract: A method for single-handed microgesture input comprises receiving hand tracking data for a hand of a user. A set of microgesture targets comprising software functions is assigned to positions along a length of a first finger. The received hand tracking data is analyzed by a gesture recognition machine. A location of a thumbtip of the hand of the user is determined relative to the positions along the first finger. Responsive to determining that the thumbtip is within a threshold distance of the first finger at a first position along the length of the first finger, a corresponding first microgesture target is designated for selection. Selection of the first microgesture target is enabled based on a duration the thumbtip is at the first position. Responsive to detecting a confirmation action, the corresponding microgesture target is executed.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: May 23, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Noe Moreno Barragan, Michael Harley Notter, Sheng Kai Tang, Joshua Kyle Neff
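    The microgesture abstract above maps software functions to positions along a finger and selects by thumbtip proximity plus dwell. A minimal sketch; the target layout, contact threshold, and dwell time are assumed values:

    ```python
    # Targets keyed by normalized position along the finger (0 = base, 1 = tip).
    TARGETS = {0.2: "copy", 0.5: "paste", 0.8: "undo"}   # assumed layout
    TOUCH_DISTANCE_M = 0.015   # assumed thumbtip-to-finger contact threshold
    DWELL_S = 0.4              # assumed dwell before selection is enabled

    def designate_target(thumb_to_finger_m: float, finger_position: float):
        # Designate the nearest assigned target once the thumbtip touches the finger.
        if thumb_to_finger_m > TOUCH_DISTANCE_M:
            return None
        return TARGETS[min(TARGETS, key=lambda pos: abs(pos - finger_position))]

    def maybe_execute(target, dwell_s: float, confirmed: bool) -> None:
        # Selection is enabled by dwell; execution waits for a confirmation action.
        if target is not None and dwell_s >= DWELL_S and confirmed:
            print(f"executing {target}")   # stand-in for invoking the function
    ```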
  • Patent number: 11630509
    Abstract: This disclosure relates to displaying a user interface for a computing device based upon a user intent determined via a spatial intent model. One example provides a computing device comprising a see-through display, a logic subsystem, and a storage subsystem. The storage subsystem comprises instructions executable by the logic subsystem to receive, via an eye-tracking sensor, eye tracking samples each corresponding to a gaze direction of a user; based at least on the eye tracking samples, determine a time-dependent attention value for a location in a field of view of the see-through display; based at least on the time-dependent attention value for the location, determine an intent of the user to interact with a user interface associated with the location that is at least partially hidden from a current view; and, in response to determining the intent, display the user interface via the see-through display.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: April 18, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Anthony James Ambrus, Sheng Kai Tang, Keiichi Matsuda, Aleksandar Josic
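    The abstract above accumulates eye-tracking samples into a time-dependent attention value per location. One common way to realize such a value is a leaky integrator, sketched below; the decay rate, per-sample gain, and intent threshold are assumptions, not taken from the patent:

    ```python
    import math

    DECAY_PER_S = 1.5    # assumed exponential decay rate of attention
    SAMPLE_GAIN = 0.2    # assumed credit per gaze sample on the location
    INTENT_LEVEL = 0.8   # assumed attention level that signals intent

    class AttentionAccumulator:
        """Leaky integrator: gaze hits raise attention, elapsed time decays it."""

        def __init__(self) -> None:
            self.value = 0.0
            self.last_t = 0.0

        def add_sample(self, t: float, hit: bool) -> None:
            # Decay since the previous sample, then credit a hit on this location.
            self.value *= math.exp(-DECAY_PER_S * (t - self.last_t))
            self.last_t = t
            if hit:
                self.value = min(self.value + SAMPLE_GAIN, 1.0)

        def intends_to_interact(self) -> bool:
            # Crossing the threshold triggers display of the hidden user interface.
            return self.value >= INTENT_LEVEL
    ```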
  • Publication number: 20220382510
    Abstract: A computer-implemented method includes detecting user interaction with mixed reality displayed content in a mixed reality system. User focus is determined as a function of the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine which objects in the displayed content the user is looking at, and determining a context of a user dialog during the voice engagement.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 1, 2022
    Inventors: Austin S. LEE, Jonathan Kyle PALMER, Anthony James AMBRUS, Mathew J. LAMB, Sheng Kai TANG, Sophie STELLMACH
  • Patent number: 11461955
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: October 4, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
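    The abstract above casts a ray from the hand along a direction anchored by an inferred arm joint, which keeps the ray stable against small wrist rotations. A minimal sketch; the shoulder offsets are assumed values, not taken from the patent:

    ```python
    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v) if n > 0 else (0.0, 0.0, 1.0)

    def infer_shoulder(hmd_pos, right: bool = True):
        # Crude inference from the headset position: the shoulder sits a fixed
        # offset below and to the side of the headset (assumed offsets).
        dx = 0.17 if right else -0.17
        return (hmd_pos[0] + dx, hmd_pos[1] - 0.25, hmd_pos[2])

    def cast_hand_ray(hmd_pos, hand_pos):
        # Ray originates at the hand and points along the shoulder-to-hand line.
        shoulder = infer_shoulder(hmd_pos)
        direction = normalize(tuple(h - s for h, s in zip(hand_pos, shoulder)))
        return hand_pos, direction   # origin and unit direction of the ray
    ```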
  • Publication number: 20220300071
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. A system presents a mixed-reality environment to a user with a mixed-reality display device, obtains a control ray activation variable associated with a user control, and displays a control ray as a hologram of a line extending away from the user control within the mixed-reality environment. The control ray activation variable includes a velocity or acceleration of the user control. After displaying the control ray within the mixed-reality environment, and in response to determining that the control ray activation variable exceeds a predetermined threshold, the system selectively disables display of the control ray within the mixed-reality environment.
    Type: Application
    Filed: June 8, 2022
    Publication date: September 22, 2022
    Inventors: Julia SCHWARZ, Sheng Kai TANG, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Sophie STELLMACH
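    This publication gates the ray on the motion of the user control: fast hand movement usually is not a pointing gesture, so the ray is hidden above a threshold. A minimal sketch with an assumed speed threshold:

    ```python
    MAX_SPEED_M_S = 1.2   # assumed speed above which the ray is suppressed

    class ControlRay:
        def __init__(self) -> None:
            self.visible = True
            self.prev_pos = None
            self.prev_t = 0.0

        def update(self, pos, t: float) -> None:
            if self.prev_pos is not None and t > self.prev_t:
                dt = t - self.prev_t
                dist = sum((a - b) ** 2 for a, b in zip(pos, self.prev_pos)) ** 0.5
                # Hide the ray while the control moves faster than the threshold,
                # rather than letting it sweep uncontrollably across the scene.
                self.visible = (dist / dt) <= MAX_SPEED_M_S
            self.prev_pos, self.prev_t = pos, t
    ```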
  • Publication number: 20220283646
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, performed on a display device, comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Michael Harley NOTTER, Jenny KAM, Sheng Kai TANG, Kenneth Mitchell JAKUBZAK, Adam Edwin BEHRINGER, Amy Mun HONG, Joshua Kyle NEFF, Sophie STELLMACH, Mathew J. LAMB, Nicholas Ferianc KAMUDA
  • Publication number: 20220253199
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
  • Patent number: 11397463
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: July 26, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach
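    In this grant the toggle is gesture-driven rather than motion-driven. A minimal sketch; the particular enable and disable gestures are assumptions, since the patent leaves them open:

    ```python
    # Assumed gesture labels, for illustration only.
    ENABLE_GESTURES = {"point"}
    DISABLE_GESTURES = {"fist", "open_palm"}

    def update_ray_visibility(current: bool, gesture: str | None) -> bool:
        if gesture in ENABLE_GESTURES:
            return True      # render the ray extending away from the user control
        if gesture in DISABLE_GESTURES:
            return False     # stop rendering the ray
        return current       # other input leaves the state unchanged
    ```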
  • Publication number: 20220187907
    Abstract: This disclosure relates to displaying a user interface for a computing device based upon a user intent determined via a spatial intent model. One example provides a computing device comprising a see-through display, a logic subsystem, and a storage subsystem. The storage subsystem comprises instructions executable by the logic subsystem to receive, via an eye-tracking sensor, eye tracking samples each corresponding to a gaze direction of a user; based at least on the eye tracking samples, determine a time-dependent attention value for a location in a field of view of the see-through display; based at least on the time-dependent attention value for the location, determine an intent of the user to interact with a user interface associated with the location that is at least partially hidden from a current view; and, in response to determining the intent, display the user interface via the see-through display.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 16, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Austin S. LEE, Anthony James AMBRUS, Sheng Kai TANG, Keiichi MATSUDA, Aleksandar JOSIC
  • Publication number: 20220171469
    Abstract: A method for single-handed microgesture input comprises receiving hand tracking data for a hand of a user. A set of microgesture targets comprising software functions is assigned to positions along a length of a first finger. The received hand tracking data is analyzed by a gesture recognition machine. A location of a thumbtip of the hand of the user is determined relative to the positions along the first finger. Responsive to determining that the thumbtip is within a threshold distance of the first finger at a first position along the length of the first finger, a corresponding first microgesture target is designated for selection. Selection of the first microgesture target is enabled based on a duration the thumbtip is at the first position. Responsive to detecting a confirmation action, the corresponding microgesture target is executed.
    Type: Application
    Filed: January 13, 2022
    Publication date: June 2, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Noe Moreno BARRAGAN, Michael Harley NOTTER, Sheng Kai TANG, Joshua Kyle NEFF
  • Patent number: 11340707
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, performed on a display device, comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
  • Patent number: 11320957
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
  • Patent number: 11294472
    Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: April 5, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage
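    The two-stage design above arms an interaction with a first gesture, cues the second stage with an affordance, and commits with the second gesture. A minimal sketch; the palm-up and pinch criteria are illustrative stand-ins for the patent's gesture criteria:

    ```python
    class TwoStageGestureRecognizer:
        """Stage one shows an affordance cueing stage two; stage two shows the UI."""

        def __init__(self, show_affordance, show_ui) -> None:
            self.stage = 0
            self.show_affordance = show_affordance
            self.show_ui = show_ui

        def on_frame(self, params: dict) -> None:
            if self.stage == 0 and params.get("palm_up"):
                self.stage = 1
                self.show_affordance()   # cue the user toward the second stage
            elif self.stage == 1 and params.get("pinch"):
                self.stage = 0
                self.show_ui()           # display the graphical UI element

    r = TwoStageGestureRecognizer(lambda: print("cue"), lambda: print("menu"))
    r.on_frame({"palm_up": True})   # first stage recognized: affordance appears
    r.on_frame({"pinch": True})     # second stage recognized: UI is displayed
    ```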
  • Patent number: 11249556
    Abstract: A method for single-handed microgesture input comprises receiving hand tracking data for a hand of a user. A set of microgesture targets comprising software functions is assigned to positions along a length of a first finger. A visual affordance including indicators for two or more assigned microgesture targets is provided to the user. The received hand tracking data is analyzed by a gesture recognition machine. A location of a thumbtip of the hand of the user is determined relative to the positions along the first finger. Responsive to determining that the thumbtip is within a threshold distance of the first finger at a first position along the length of the first finger, an indicator for a corresponding first microgesture target is augmented, and then further augmented based on a duration the thumbtip is at the first position. Responsive to detecting a confirmation action, the corresponding microgesture target is executed.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: February 15, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Noe Moreno Barragan, Michael Harley Notter, Sheng Kai Tang, Joshua Kyle Neff
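    Beyond the microgesture selection sketched earlier, this grant adds a visual affordance whose indicator is augmented in stages as the thumbtip dwells. A small sketch of those indicator states, with an assumed dwell threshold:

    ```python
    def indicator_state(at_position: bool, dwell_s: float) -> str:
        # Assumed three-state augmentation of a microgesture target's indicator.
        if not at_position:
            return "idle"    # indicator shown, thumbtip elsewhere
        if dwell_s < 0.4:    # assumed dwell threshold
            return "hover"   # augmented: thumbtip within range of the position
        return "armed"       # further augmented: selection is enabled
    ```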
  • Publication number: 20210383594
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: August 23, 2021
    Publication date: December 9, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
  • Publication number: 20210373672
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, performed on a display device, comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Michael Harley NOTTER, Jenny KAM, Sheng Kai TANG, Kenneth Mitchell JAKUBZAK, Adam Edwin BEHRINGER, Amy Mun HONG, Joshua Kyle NEFF, Sophie STELLMACH, Mathew J. LAMB, Nicholas Ferianc KAMUDA
  • Patent number: 11132845
    Abstract: A method for object recognition includes, at a computing device, receiving an image of a real-world object. An identity of the real-world object is recognized using an object recognition model trained on a plurality of computer-generated training images. A digital augmentation model corresponding to the real-world object is retrieved, the digital augmentation model including a set of augmentation-specific instructions. A pose of the digital augmentation model is aligned with a pose of the real-world object. An augmentation is provided, the augmentation associated with the real-world object and specified by the augmentation-specific instructions.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: September 28, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harpreet Singh Sawhney, Andrey Konin, Bilha-Catherine W. Githinji, Amol Ashok Ambardekar, William Douglas Guyman, Muhammad Zeeshan Zia, Ning Xu, Sheng Kai Tang, Pedro Urbina Escos
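    The final abstract retrieves a digital augmentation model for a recognized object, aligns its pose with the real object, and runs its augmentation-specific instructions. A minimal sketch; the registry, the recognizer stub, and the example object are assumptions for illustration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class AugmentationModel:
        name: str
        instructions: list[str]   # augmentation-specific instructions

    # Assumed registry keyed by recognized object identity.
    REGISTRY = {
        "espresso_machine": AugmentationModel(
            "espresso_machine", ["highlight water tank", "animate portafilter"]),
    }

    def recognize(image) -> str:
        # Stand-in for a model trained on computer-generated training images.
        return "espresso_machine"

    def augment(image, object_pose) -> None:
        identity = recognize(image)
        model = REGISTRY.get(identity)
        if model is None:
            return
        aligned_pose = object_pose    # real systems solve model-to-object alignment
        for step in model.instructions:
            print(f"{identity} @ {aligned_pose}: {step}")   # apply each augmentation
    ```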