Patents by Inventor Robert Y. Wang

Robert Y. Wang is a named inventor on the patent filings listed below. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11899928
    Abstract: Disclosed herein are related to systems and methods for providing inputs through a virtual keyboard with an adaptive language model. In one approach, one or more processors determine whether a user intended to provide semantically meaningful characters or not, when providing a hand motion or a hand pose with respect to a virtual keyboard. The virtual keyboard may be located on a surface without physical keys. In one approach, the one or more processors determine an input to the virtual keyboard based on the hand motion or the hand pose. In one approach, the one or more processors determine weight of a language model according to the determined user intention. In one approach, the one or more processors modify the detected input according to the determined weight of the language model.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: February 13, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Mark A. Richardson, Robert Y. Wang
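The adaptive language-model weighting described in the abstract above can be sketched roughly as follows. This is an illustrative toy, not the patented implementation: the function names and the simple score-blending scheme are assumptions.

```python
# Toy sketch of intent-weighted language-model decoding for a virtual
# keyboard. All names and the blending formula are hypothetical.

def blend_scores(touch_likelihood, lm_likelihood, intent):
    """Combine keyboard-geometry and language-model scores per character.

    intent is in [0, 1]: near 1.0 the user is judged to be typing
    semantically meaningful text, so the language model gets more weight;
    near 0.0 (e.g. random taps or a password) the raw touch signal wins.
    """
    w = intent  # weight assigned to the language model
    chars = set(touch_likelihood) | set(lm_likelihood)
    return {c: (1 - w) * touch_likelihood.get(c, 0.0)
               + w * lm_likelihood.get(c, 0.0)
            for c in chars}

def decode_key(touch_likelihood, lm_likelihood, intent):
    """Pick the character with the highest blended score."""
    scores = blend_scores(touch_likelihood, lm_likelihood, intent)
    return max(scores, key=scores.get)
```

For example, if the hand pose weakly favors 'q' but the language model strongly expects 'a', a high intent estimate lets the language model override the noisy touch signal, while a low intent estimate leaves the raw input untouched.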
  • Publication number: 20220261150
    Abstract: Disclosed herein are related to systems and methods for providing inputs through a virtual keyboard with an adaptive language model. In one approach, one or more processors determine whether a user intended to provide semantically meaningful characters or not, when providing a hand motion or a hand pose with respect to a virtual keyboard. The virtual keyboard may be located on a surface without physical keys. In one approach, the one or more processors determine an input to the virtual keyboard based on the hand motion or the hand pose. In one approach, the one or more processors determine weight of a language model according to the determined user intention. In one approach, the one or more processors modify the detected input according to the determined weight of the language model.
    Type: Application
    Filed: May 9, 2022
    Publication date: August 18, 2022
    Inventors: Mark A. Richardson, Robert Y. Wang
  • Patent number: 11327651
    Abstract: Disclosed herein are related to systems and methods for providing inputs through a virtual keyboard with an adaptive language model. In one approach, one or more processors determine whether a user intended to provide semantically meaningful characters or not, when providing a hand motion or a hand pose with respect to a virtual keyboard. The virtual keyboard may be located on a surface without physical keys. In one approach, the one or more processors determine an input to the virtual keyboard based on the hand motion or the hand pose. In one approach, the one or more processors determine weight of a language model according to the determined user intention. In one approach, the one or more processors modify the detected input according to the determined weight of the language model.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: May 10, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Mark A. Richardson, Robert Y. Wang
  • Patent number: 11232591
    Abstract: A system generates a user hand shape model from a single depth camera. The system includes the single depth camera and a hand tracking unit. The single depth camera generates single depth image data of a user's hand. The hand tracking unit applies the single depth image data to a neural network model to generate heat maps indicating locations of hand features. The locations of hand features are used to generate a user hand shape model customized to the size and shape of the user's hand. The user hand shape model is defined by a set of principal component hand shapes defining a hand shape variation space. The limited number of principal component hand shape models reduces determination of user hand shape to a smaller number of variables, and thus provides for a fast calibration of the user hand shape model.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: January 25, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Christopher David Twigg, Robert Y. Wang, Yuting Ye
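The principal-component hand shape space in the abstract above amounts to representing a user's hand as a mean mesh plus a small linear combination of learned shape offsets, so calibration only has to estimate a handful of coefficients. A minimal sketch, with made-up dimensions and random placeholder data standing in for learned components:

```python
import numpy as np

# Hypothetical principal-component hand shape model: mean shape plus a
# small number of learned component offsets. The vertex count, component
# count, and random data are illustrative only.
rng = np.random.default_rng(0)
N_VERTS = 778
MEAN_SHAPE = rng.standard_normal((N_VERTS, 3))
COMPONENTS = rng.standard_normal((10, N_VERTS, 3))  # 10 principal components

def hand_shape(coeffs):
    """Reconstruct a hand mesh from per-user shape coefficients.

    Calibration fits len(coeffs) numbers instead of N_VERTS * 3 free
    vertex positions, which is what makes it fast.
    """
    coeffs = np.asarray(coeffs)
    return MEAN_SHAPE + np.tensordot(coeffs, COMPONENTS, axes=1)

mesh = hand_shape(np.zeros(10))  # zero coefficients give the mean shape
```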
  • Publication number: 20210247900
    Abstract: Disclosed herein are related to systems and methods for providing inputs through a virtual keyboard with an adaptive language model. In one approach, one or more processors determine whether a user intended to provide semantically meaningful characters or not, when providing a hand motion or a hand pose with respect to a virtual keyboard. The virtual keyboard may be located on a surface without physical keys. In one approach, the one or more processors determine an input to the virtual keyboard based on the hand motion or the hand pose. In one approach, the one or more processors determine weight of a language model according to the determined user intention. In one approach, the one or more processors modify the detected input according to the determined weight of the language model.
    Type: Application
    Filed: February 12, 2020
    Publication date: August 12, 2021
    Inventors: Mark A. Richardson, Robert Y. Wang
  • Patent number: 10955932
    Abstract: A head-mounted display (HMD) tracks a user's hand positions, orientations, and gestures using an ultrasound sensor coupled to the HMD. The ultrasound sensor emits ultrasound signals that reflect off the hands of the user, even if a hand of the user is obstructed by the other hand. The ultrasound sensor identifies features used to train a machine learning model based on detecting reflected ultrasound signals. For example, one of the features is the time delay between consecutive reflected ultrasound signals detected by the ultrasound sensor. The machine learning model learns to determine poses and gestures of the user's hands. The HMD optionally includes a camera that generates image data of the user's hands. The image data can also be used to train the machine learning model. The HMD may perform a calibration process to avoid detecting other objects and surfaces such as a wall next to the user.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: March 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Elliot Saba, Robert Y. Wang, Christopher David Twigg, Ravish Mehra
  • Patent number: 10803616
    Abstract: A system generates a user hand shape model from a single depth camera. The system includes the single depth camera and a hand tracking unit. The single depth camera generates single depth image data of a user's hand. The hand tracking unit applies the single depth image data to a neural network model to generate heat maps indicating locations of hand features. The locations of hand features are used to generate a user hand shape model customized to the size and shape of the user's hand. The user hand shape model is defined by a set of principal component hand shapes defining a hand shape variation space. The limited number of principal component hand shape models reduces determination of user hand shape to a smaller number of variables, and thus provides for a fast calibration of the user hand shape model.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: October 13, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Christopher David Twigg, Robert Y. Wang, Yuting Ye
  • Patent number: 10719953
    Abstract: A system tracks poses of a passive object using fiducial markers on fiducial surfaces of a polygonal structure of the object using image data captured by a camera. The system includes an object tracking controller that generates an estimated pose for a frame of the image data using an approximate pose estimation (APE), and then updates the estimated pose using a dense pose refinement (DPR) of pixels. The APE may include minimizing reprojection error between projected image points of the fiducial markers and observed image points of the fiducial markers in the frame. The DPR may include minimizing appearance error between image pixels of the fiducial markers in the frame and projected model pixels of the fiducial markers determined from the estimated pose and the object model. In some embodiments, an inter-frame corner tracking (ICT) of the fiducial markers may be used to facilitate the APE.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: July 21, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yuting Ye, Robert Y. Wang, Christopher David Twigg, Shangchen Han, Po-Chen Wu
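The reprojection-error term minimized in the approximate pose estimation (APE) step above can be sketched with a pinhole camera model. The camera intrinsics and function names here are assumptions for illustration, not the patented method:

```python
import numpy as np

# Sketch of the APE reprojection error: distance between where the object
# model says fiducial corners should project and where they were observed.

def project(points_3d, R, t, f=500.0, cx=320.0, cy=240.0):
    """Project 3D fiducial corners into the image with a pinhole camera."""
    p = points_3d @ R.T + t  # object frame -> camera frame
    return np.stack([f * p[:, 0] / p[:, 2] + cx,
                     f * p[:, 1] / p[:, 2] + cy], axis=1)

def reprojection_error(points_3d, observed_2d, R, t):
    """Sum of squared distances between projected and observed corners.

    APE would minimize this over the pose (R, t); the dense pose
    refinement (DPR) stage then minimizes a per-pixel appearance error
    around the resulting estimate.
    """
    diff = project(points_3d, R, t) - observed_2d
    return float((diff ** 2).sum())
```

At the true pose the error is zero for noise-free observations, and any pose perturbation increases it, which is what makes it a usable optimization objective.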
  • Patent number: 10719173
    Abstract: A transcription engine transcribes input received from an augmented reality keyboard based on a sequence of hand poses performed by a user when typing. A hand pose generator analyzes video of the user typing to generate the sequence of hand poses. The transcription engine implements a set of transcription models to generate a series of keystrokes based on the sequence of hand poses. Each keystroke in the series may correspond to one or more hand poses in the sequence of hand poses. The transcription engine monitors the behavior of the user and selects between transcription models depending on the attention level of the user. The transcription engine may select a first transcription model when the user types in a focused manner and then select a second transcription model when the user types in a less focused, conversational manner.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: July 21, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Mark A. Richardson, Robert Y. Wang
  • Patent number: 10706584
    Abstract: A system tracks a user's hands by processing image data captured using one or more passive cameras. The system includes one or more passive cameras, such as color or monochrome cameras, and a hand tracking unit. The hand tracking unit receives the image data of the user's hand from the one or more passive cameras. The hand tracking unit determines, based on applying the image data to a neural network model, heat maps indicating locations of hand features of a hand shape model. The hand tracking unit may include circuitry that implements the neural network model. The neural network model is trained using image data from passive cameras, depth cameras, or both. The hand tracking unit determines a hand pose of the user's hand based on the locations of the hand features of the hand shape model. The hand pose may be used as a user input, or to render the hand for a display, such as in a head-mounted display.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: July 7, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yuting Ye, Robert Y. Wang, Christopher David Twigg, Shangchen Han
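The heat maps in the abstract above encode, for each hand feature, a per-pixel likelihood of that feature's image location; the feature position is then read off as the peak of its map. A minimal sketch of that readout step (names and array shapes are illustrative):

```python
import numpy as np

# Recover hand-feature image locations from per-keypoint heat maps by
# taking the argmax of each map. Shapes and names are assumptions.

def keypoints_from_heatmaps(heatmaps):
    """heatmaps: (K, H, W) array, one map per hand feature.

    Returns a (K, 2) array of (row, col) peak locations.
    """
    k, h, w = heatmaps.shape
    flat = heatmaps.reshape(k, -1).argmax(axis=1)
    return np.stack([flat // w, flat % w], axis=1)
```

Real systems often refine the integer argmax to sub-pixel precision (e.g. with a local weighted mean), but the hard peak is the core of the idea.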
  • Patent number: 10657704
    Abstract: A tracking system converts images to a set of points in 3D space. The images are of a wearable item that includes markers, and the set of points include representations of the markers. A view is selected from a plurality of views using the set of points, and the selected view includes one or more representations of the representations. A depth map is generated based on the selected view and the set of points, and the depth map includes the one or more representations. A neural network maps labels to the one or more representations in the depth map using a model of a portion of a body that wears the wearable item. A joint parameter is determined using the mapped labels. The model is updated with the joint parameter, and content provided to a user of the wearable item is based in part on the updated model.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: May 19, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Shangchen Han, Christopher David Twigg, Robert Y. Wang
  • Patent number: 10593101
    Abstract: A tracking system converts images to a set of points in 3D space. The images are of a wearable item that includes markers, and the set of points include representations of the markers. A view is selected from a plurality of views using the set of points, and the selected view includes one or more representations of the representations. A depth map is generated based on the selected view and the set of points, and the depth map includes the one or more representations. A neural network maps labels to the one or more representations in the depth map using a model of a portion of a body that wears the wearable item. A joint parameter is determined using the mapped labels. The model is updated with the joint parameter, and content provided to a user of the wearable item is based in part on the updated model.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: March 17, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Shangchen Han, Christopher David Twigg, Robert Y. Wang
  • Patent number: 10572024
    Abstract: A head-mounted display (HMD) tracks a user's hand positions, orientations, and gestures using an ultrasound sensor coupled to the HMD. The ultrasound sensor emits ultrasound signals that reflect off the hands of the user, even if a hand of the user is obstructed by the other hand. The ultrasound sensor identifies features used to train a machine learning model based on detecting reflected ultrasound signals. For example, one of the features is the time delay between consecutive reflected ultrasound signals detected by the ultrasound sensor. The machine learning model learns to determine poses and gestures of the user's hands. The HMD optionally includes a camera that generates image data of the user's hands. The image data can also be used to train the machine learning model. The HMD may perform a calibration process to avoid detecting other objects and surfaces such as a wall next to the user.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: February 25, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Elliot Saba, Robert Y. Wang, Christopher David Twigg, Ravish Mehra
  • Patent number: 10481699
    Abstract: A system includes a wearable device including sensors arranged at different locations on the wearable device. Each sensor measures electrical signals transmitted from a wrist or arm of a user. A position computation circuit is coupled to the sensors. The position computation circuit computes, using information derived from the electrical signals with a machine learning model, an output that describes a hand position of a hand of the wrist or arm of the user.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: November 19, 2019
    Assignee: Facebook Technologies, LLC
    Inventors: Beipeng Mu, Renzo De Nardi, Richard Andrew Newcombe, Raymond King, Evan Paul Gander, Robert Y. Wang
  • Publication number: 20190310688
    Abstract: A transcription engine transcribes input received from an augmented reality keyboard based on a sequence of hand poses performed by a user when typing. A hand pose generator analyzes video of the user typing to generate the sequence of hand poses. The transcription engine implements a set of transcription models to generate a series of keystrokes based on the sequence of hand poses. Each keystroke in the series may correspond to one or more hand poses in the sequence of hand poses. The transcription engine monitors the behavior of the user and selects between transcription models depending on the attention level of the user. The transcription engine may select a first transcription model when the user types in a focused manner and then select a second transcription model when the user types in a less focused, conversational manner.
    Type: Application
    Filed: April 4, 2018
    Publication date: October 10, 2019
    Inventors: Mark A. Richardson, Robert Y. Wang
  • Publication number: 20190033974
    Abstract: A system includes a wearable device including sensors arranged at different locations on the wearable device. Each sensor measures electrical signals transmitted from a wrist or arm of a user. A position computation circuit is coupled to the sensors. The position computation circuit computes, using information derived from the electrical signals with a machine learning model, an output that describes a hand position of a hand of the wrist or arm of the user.
    Type: Application
    Filed: July 27, 2017
    Publication date: January 31, 2019
    Inventors: Beipeng Mu, Renzo De Nardi, Richard Andrew Newcombe, Raymond King, Evan Paul Gander, Robert Y. Wang
  • Patent number: 7516123
    Abstract: Semantically linked pages are queried based on a user supplied interest vector. The interest vector provides a weight for presenting results from the query of pages. The interest vectors are used to calculate the importance of pages of a query based on the weight of semantic links associated with the page known as page rank indicators. Optionally, the calculation is augmented by other page ranking algorithms. An indication of the resulting pages is displayed according to the calculated importance. Preferably the calculation utilizes a dot product of page rank and user interest vectors.
    Type: Grant
    Filed: April 14, 2005
    Date of Patent: April 7, 2009
    Assignee: International Business Machines Corporation
    Inventors: Joseph P. Betz, Sean J. Martin, Yan Pritzker, Benjamin H. Szekely, Robert Y. Wang
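The interest-vector ranking in the last abstract reduces to a dot product: each page carries a vector of per-topic page-rank indicators, and its importance for a given user is that vector dotted with the user's interest vector. A minimal sketch, with made-up topics and numbers:

```python
# Toy sketch of interest-weighted page ranking via a dot product of a
# page's per-topic rank vector and the user's interest vector. Data and
# names are hypothetical.

def score(page_rank_vector, interest_vector):
    """Dot product of per-topic page rank and user interest weights."""
    return sum(r * w for r, w in zip(page_rank_vector, interest_vector))

def rank_pages(pages, interest_vector):
    """pages: dict mapping page id -> per-topic rank vector.

    Returns page ids sorted by descending interest-weighted importance.
    """
    return sorted(pages, key=lambda p: score(pages[p], interest_vector),
                  reverse=True)
```

A user whose interest vector emphasizes the first topic will see pages with strong rank indicators on that topic promoted, even if another page has a higher rank on topics the user cares little about.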