Patents by Inventor Sheng-Kai Tang

Sheng-Kai Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11107265
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: August 31, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
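To make the targeting mechanism above concrete, the following Python sketch casts a ray from the hand along the inferred joint-to-hand direction and tests whether it passes near a control point of a virtual object. The coordinates, radius, and function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def cast_hand_ray(joint_pos, hand_pos):
    """Return a ray (origin, unit direction) from the hand, oriented
    along the line from the inferred arm joint to the hand."""
    joint = np.asarray(joint_pos, dtype=float)
    hand = np.asarray(hand_pos, dtype=float)
    direction = hand - joint
    return hand, direction / np.linalg.norm(direction)

def ray_targets_control_point(origin, direction, point, radius=0.02):
    """True if the ray passes within `radius` meters of a control point."""
    to_point = np.asarray(point, dtype=float) - origin
    t = float(np.dot(to_point, direction))   # distance along the ray to closest approach
    if t < 0:                                # control point is behind the hand
        return False
    closest = origin + t * direction
    return float(np.linalg.norm(closest - np.asarray(point, dtype=float))) <= radius

# Illustrative values: an arm joint inferred from the headset pose, a hand
# position from the depth camera, and one control point of a virtual object.
origin, direction = cast_hand_ray(joint_pos=[0.2, 1.4, 0.0], hand_pos=[0.4, 1.2, 0.5])
if ray_targets_control_point(origin, direction, point=[0.9, 0.7, 1.75]):
    print("virtual object targeted")         # e.g. highlight the hologram
```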
  • Patent number: 10969937
    Abstract: Systems and methods are provided for controlling the position of an interactive movable menu in a mixed-reality environment. In some instances, a mixed-reality display device presents a mixed-reality environment to a user. The mixed-reality device then detects a first gesture associated with a user controller while presenting the mixed-reality environment and, in response to the first gesture, triggers a display of an interactive movable menu within the mixed-reality environment as a tethered hologram that is dynamically moved within the mixed-reality environment relative to and corresponding with movement of the user controller within the mixed-reality environment. Then, in response to a second detected gesture, the mixed-reality device selectively locks a display of the interactive movable menu at a fixed position that is not tethered to the user controller.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: April 6, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Casey Leon Meekhof, Alon Farchy, Sheng Kai Tang, Nicholas F. Kamuda
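A rough illustration of the tether-then-lock behavior described above: the menu pose tracks the user controller until a second gesture pins it in world space. The class, gesture names, and offset below are hypothetical stand-ins, not the claimed system.

```python
from dataclasses import dataclass

@dataclass
class MovableMenu:
    """Menu hologram that follows the user controller until it is locked."""
    offset: tuple = (0.0, 0.25, 0.5)          # menu pose relative to the controller
    locked: bool = False
    position: tuple = (0.0, 0.0, 0.0)

    def on_gesture(self, gesture: str, controller_pos: tuple) -> None:
        if gesture == "open_menu":            # first gesture: show the tethered menu
            self.locked = False
        elif gesture == "lock_menu":          # second gesture: pin it where it is
            self.locked = True
        self.update(controller_pos)

    def update(self, controller_pos: tuple) -> None:
        """Called every frame; a tethered menu tracks the controller."""
        if not self.locked:
            self.position = tuple(c + o for c, o in zip(controller_pos, self.offset))

menu = MovableMenu()
menu.on_gesture("open_menu", controller_pos=(0.25, 1.5, 0.5))
menu.update(controller_pos=(0.25, 1.5, 0.5))   # menu follows the hand
menu.on_gesture("lock_menu", controller_pos=(0.25, 1.5, 0.5))
menu.update(controller_pos=(0.8, 1.0, 0.9))    # menu stays put after locking
print(menu.position)                            # (0.25, 1.75, 1.0)
```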
  • Patent number: 10890967
    Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
    Type: Grant
    Filed: July 9, 2018
    Date of Patent: January 12, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sophie Stellmach, Sheng Kai Tang, Casey Leon Meekhof, Julia Schwarz, Nahil Tawfik Sharkasi, Thomas Matthew Gable
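The snapping rule above reduces to a simple distance check: present the ray at the gaze location whenever the ray's measured endpoint falls within a threshold of it. The threshold value in this sketch is an assumption.

```python
import math

def snap_ray_endpoint(input_ray_end, gaze_point, snap_threshold=0.15):
    """Return where the ray should be presented: the gaze location if the
    measured ray endpoint is within the snap threshold, else the raw endpoint."""
    distance = math.dist(input_ray_end, gaze_point)   # Euclidean distance, Python 3.8+
    return gaze_point if distance <= snap_threshold else input_ray_end

# The raw endpoint lands 10 cm from where the user is looking, so it snaps.
print(snap_ray_endpoint((1.0, 1.2, 2.0), (1.1, 1.2, 2.0)))   # (1.1, 1.2, 2.0)
```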
  • Publication number: 20200372715
    Abstract: A method for object recognition includes, at a computing device, receiving an image of a real-world object. An identity of the real-world object is recognized using an object recognition model trained on a plurality of computer-generated training images. A digital augmentation model corresponding to the real-world object is retrieved, the digital augmentation model including a set of augmentation-specific instructions. A pose of the digital augmentation model is aligned with a pose of the real-world object. An augmentation is provided, the augmentation associated with the real-world object and specified by the augmentation-specific instructions.
    Type: Application
    Filed: May 22, 2019
    Publication date: November 26, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Harpreet Singh SAWHNEY, Andrey KONIN, Bilha-Catherine W. GITHINJI, Amol Ashok AMBARDEKAR, William Douglas GUYMAN, Muhammad Zeeshan ZIA, Ning XU, Sheng Kai TANG, Pedro URBINA ESCOS
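A minimal sketch of the recognition-then-augmentation flow described above. The recognizer, registry contents, pose handling, and instruction strings are all placeholders standing in for the trained model and the retrieved digital augmentation model.

```python
# Hypothetical registry mapping a recognized object identity to its digital
# augmentation model and augmentation-specific instructions.
AUGMENTATION_MODELS = {
    "coffee_machine": {
        "mesh": "coffee_machine.glb",
        "instructions": ["highlight water tank", "animate brew button"],
    },
}

def recognize(image) -> str:
    """Stand-in for the object recognition model, which the abstract says is
    trained on computer-generated images; here it just returns a fixed label."""
    return "coffee_machine"

def augment(image, object_pose):
    identity = recognize(image)                   # recognize the real-world object
    model = AUGMENTATION_MODELS[identity]         # retrieve its augmentation model
    model_pose = object_pose                      # align model pose with the object
    for step in model["instructions"]:            # run augmentation-specific steps
        print(f"{identity} @ {model_pose}: {step}")

augment(image=None, object_pose=(0.0, 0.0, 1.2))
```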
  • Publication number: 20200225736
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
    Type: Application
    Filed: March 8, 2019
    Publication date: July 16, 2020
    Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach
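The enable/disable behavior above can be sketched as a small state toggle driven by recognized gestures; the gesture names here are illustrative, not those used by the described system.

```python
from typing import Optional

def update_control_ray(visible: bool, gesture: Optional[str]) -> bool:
    """Toggle whether a control ray hologram is rendered, based on gestures."""
    if gesture == "raise_hand":      # enabling gesture: render a ray from the hand
        return True
    if gesture == "drop_hand":       # disabling gesture: stop rendering the ray
        return False
    return visible                   # no relevant gesture: keep the current state

visible = False
for gesture in ["raise_hand", None, "drop_hand"]:
    visible = update_control_ray(visible, gesture)
    print(visible)                   # True, True, False
```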
  • Publication number: 20200225758
    Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
    Type: Application
    Filed: March 26, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Chuan QIN, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Joshua Kyle NEFF, Jamie Bryant KIRSCHENBAUM, Neil Richard KRONLAGE
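A minimal two-stage recognizer matching the flow above: a first-stage gesture triggers an affordance cue, and a second-stage gesture displays a UI element. The stage names and the criteria (palm up, then pinch) are assumptions for the sketch.

```python
from enum import Enum, auto

class GestureStage(Enum):
    IDLE = auto()
    FIRST_STAGE = auto()     # first-stage gesture recognized; affordance cue shown
    COMPLETE = auto()        # second-stage gesture recognized; UI element displayed

def step(stage, params, first_ok, second_ok):
    """Advance the two-stage recognizer using predicates over hand-tracking data."""
    if stage is GestureStage.IDLE and first_ok(params):
        print("show affordance cueing the second-stage gesture")
        return GestureStage.FIRST_STAGE
    if stage is GestureStage.FIRST_STAGE and second_ok(params):
        print("display graphical user interface element")
        return GestureStage.COMPLETE
    return stage

# Illustrative criteria: palm facing up for stage one, then a pinch for stage two.
first_ok = lambda p: p["palm_up"]
second_ok = lambda p: p["pinch"]

stage = GestureStage.IDLE
for frame in [{"palm_up": True, "pinch": False}, {"palm_up": True, "pinch": True}]:
    stage = step(stage, frame, first_ok, second_ok)
```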
  • Publication number: 20200226814
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
  • Publication number: 20200225830
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: March 25, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
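The far/near decision above can be sketched as a threshold test over the distances from the user to the object's control points; the 1.5 m threshold and the proxy placement below are illustrative assumptions.

```python
import math

def interaction_mode(user_pos, control_points, threshold=1.5):
    """Return 'far' if any control point of the virtual object lies beyond the
    threshold distance from the user, otherwise 'near'."""
    beyond = any(math.dist(user_pos, p) > threshold for p in control_points)
    return "far" if beyond else "near"

control_points = [(0.0, 1.2, 3.0), (0.4, 1.2, 3.0)]   # e.g. corners of a hologram
mode = interaction_mode((0.0, 1.6, 0.0), control_points)
print(mode)                                            # prints "far"

if mode == "far":
    # On a trigger input (e.g. a pinch), switch to near interaction and display
    # a virtual interaction object within the threshold distance of the user.
    mode = "near"
    proxy_position = (0.0, 1.2, 0.5)
```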
  • Publication number: 20200225813
    Abstract: Systems and methods are provided for controlling the position of an interactive movable menu in a mixed-reality environment. In some instances, a mixed-reality display device presents a mixed-reality environment to a user. The mixed-reality device then detects a first gesture associated with a user controller while presenting the mixed-reality environment and, in response to the first gesture, triggers a display of an interactive movable menu within the mixed-reality environment as a tethered hologram that is dynamically moved within the mixed-reality environment relative to and corresponding with movement of the user controller within the mixed-reality environment. Then, in response to a second detected gesture, the mixed-reality device selectively locks a display of the interactive movable menu at a fixed position that is not tethered to the user controller.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Inventors: Julia Schwarz, Casey Leon Meekhof, Alon Farchy, Sheng Kai Tang, Nicholas F. Kamuda
  • Patent number: 10542052
    Abstract: A first media device, method, and non-transitory computer readable medium for multi-area grouping of devices. The first media device includes a transceiver and a processor coupled to the transceiver. The processor detects a second media device that is not subscribed to the media group. The processor assigns the detected second media device to the media group. The processor determines a total number of media devices in the media group. The processor recommends a multi-channel configuration for the media group based on the determined total number of media devices.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: January 21, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: David H. Law, Scott R. Ysebert, Sheng Kai Tang
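As a rough sketch of the recommendation step above, the mapping from group size to channel configuration below is hypothetical; the abstract does not specify the actual mapping used.

```python
# Hypothetical mapping from group size to a recommended multi-channel
# configuration; the described system's actual mapping is not given.
RECOMMENDED_CONFIGS = {
    2: "2.0 stereo",
    3: "2.1",
    4: "3.1",
    6: "5.1 surround",
    8: "7.1 surround",
}

def recommend_configuration(device_count: int) -> str:
    """Pick the largest configuration the media group can support."""
    supported = [n for n in RECOMMENDED_CONFIGS if n <= device_count]
    return RECOMMENDED_CONFIGS[max(supported)] if supported else "mono"

media_group = ["soundbar", "rear_left", "rear_right"]
media_group.append("subwoofer")                   # newly detected device joins the group
print(recommend_configuration(len(media_group)))  # "3.1" for a four-device group
```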
  • Publication number: 20200012341
    Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
    Type: Application
    Filed: July 9, 2018
    Publication date: January 9, 2020
    Inventors: Sophie STELLMACH, Sheng Kai TANG, Casey Leon MEEKHOF, Julia SCHWARZ, Nahil Tawfik SHARKASI, Thomas Matthew GABLE
  • Publication number: 20180316731
    Abstract: A first media device, method, and non-transitory computer readable medium for multi-area grouping of devices. The first media device includes a transceiver and a processor coupled to the transceiver. The processor detects a second media device that is not subscribed to the media group. The processor assigns the detected second media device to the media group. The processor determines a total number of media devices in the media group. The processor recommends a multi-channel configuration for the media group based on the determined total number of media devices.
    Type: Application
    Filed: September 8, 2017
    Publication date: November 1, 2018
    Inventors: David H. Law, Scott R. Ysebert, Sheng Kai Tang
  • Patent number: 8698748
    Abstract: An adaptive mouse is disclosed. In the adaptive mouse, a cover layer made of a moldable material covers a mouse body, and a plurality of sensors is disposed between the mouse body and the cover layer. The sensors are used to sense the hand shape of a user when the user holds the cover layer. The sensors under the predicted left and right finger areas are defined as a left button and a right button so that the user can operate the mouse normally. The displacement signal of the mouse is then adjusted. The adaptive mouse may increase comfort and relieve fatigue, and it may adapt to any holding state without orientation limitations.
    Type: Grant
    Filed: May 4, 2010
    Date of Patent: April 15, 2014
    Assignee: ASUSTeK Computer Inc.
    Inventor: Sheng-Kai Tang
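The button-assignment idea above can be sketched as an intersection between the sensors covered by the hand and the predicted finger areas; the sensor IDs and areas below are made up for illustration.

```python
def assign_buttons(covered_sensors, index_area, middle_area):
    """Map the sensors under the predicted index and middle finger areas to the
    left and right mouse buttons."""
    left_button = sorted(covered_sensors & index_area)
    right_button = sorted(covered_sensors & middle_area)
    return left_button, right_button

# Sensors currently covered by the hand, plus the two predicted finger areas.
hand_shape = {3, 4, 5, 9, 10, 14}
index_finger_area = {4, 5}
middle_finger_area = {9, 10}

left, right = assign_buttons(hand_shape, index_finger_area, middle_finger_area)
print(left, right)   # [4, 5] [9, 10]
```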
  • Publication number: 20120068946
    Abstract: A touch display device includes a main body, a detecting module and a touch display module. The detecting module includes a plurality of sensing elements for sensing a touch position of a user on the main body and providing a corresponding touch signal. The touch display module is disposed at the main body and electrically connected to the detecting module. The touch display device generates a function control zone corresponding to the touch position in accordance with the touch signal. A control method of a touch display device is disclosed as well. By sensing the user's touch, the touch display device and the control method can generate a function control zone that corresponds to the touch position and replaces physical hot keys.
    Type: Application
    Filed: September 14, 2011
    Publication date: March 22, 2012
    Inventors: Sheng-Kai Tang, Kuo-Chung Chiu, Sheng-Ta Lin, Wen-Chieh Tseng
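A minimal sketch of generating a function control zone centered on the sensed touch position, as described above; the zone size and pixel coordinate convention are assumptions.

```python
def control_zone_for_touch(touch_x, touch_y, zone_size=(120, 60)):
    """Return a rectangle (x, y, width, height) for a function control zone
    centered on the sensed touch position, in display pixels."""
    width, height = zone_size
    return (touch_x - width // 2, touch_y - height // 2, width, height)

print(control_zone_for_touch(400, 30))   # (340, 0, 120, 60)
```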
  • Publication number: 20110134032
    Abstract: A method for controlling a touch control module and an electronic device are provided. The electronic device includes a display and a host. The host includes a sensing module, a touch control module and a control unit. The sensing module includes a sensing unit for detecting a gesture and generating a corresponding sensing signal. The control unit determines whether the gesture complies with a preset condition according to the sensing signal. If it does, the control unit controls the touch control module to enter a first control mode; otherwise, the control unit controls the touch control module to enter a second control mode.
    Type: Application
    Filed: December 8, 2010
    Publication date: June 9, 2011
    Inventors: Kuo-Chung CHIU, Wei-Wen Luo, Wen-Chieh Tseng, Sheng-Kai Tang
  • Publication number: 20100295787
    Abstract: An adaptive mouse is disclosed. In the adaptive mouse, a cover layer made of a moldable material covers a mouse body, and a plurality of sensors is disposed between the mouse body and the cover layer. The sensors are used to sense the hand shape of a user when the user holds the cover layer. The sensors under the predicted left and right finger areas are defined as a left button and a right button so that the user can operate the mouse normally. The displacement signal of the mouse is then adjusted. The adaptive mouse may increase comfort and relieve fatigue, and it may adapt to any holding state without orientation limitations.
    Type: Application
    Filed: May 4, 2010
    Publication date: November 25, 2010
    Inventor: Sheng-Kai Tang