Patents by Inventor Christopher David Twigg

Christopher David Twigg has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11232591
    Abstract: A system generates a user hand shape model from a single depth camera. The system includes the single depth camera and a hand tracking unit. The single depth camera generates single depth image data of a user's hand. The hand tracking unit applies the single depth image data to a neural network model to generate heat maps indicating locations of hand features. The locations of hand features are used to generate a user hand shape model customized to the size and shape of the user's hand. The user hand shape model is defined by a set of principal component hand shapes defining a hand shape variation space. The limited number of principal component hand shape models reduces determination of user hand shape to a smaller number of variables, and thus provides for a fast calibration of the user hand shape model. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: January 25, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Christopher David Twigg, Robert Y. Wang, Yuting Ye
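    Sketch: a minimal illustration of the calibration idea in the abstract above: fit a small number of principal-component coefficients to keypoint locations (for example, decoded from the network's heat maps) by linear least squares. The function names, array shapes, and plain least-squares solver are assumptions for illustration, not the patented procedure.
```python
import numpy as np

def fit_hand_shape(mean_shape, pca_basis, observed_keypoints):
    """Solve for PCA coefficients c so that mean_shape + sum_k c[k] * pca_basis[k]
    best matches the observed keypoints in a least-squares sense.

    mean_shape:         (J, 3) mean positions of J hand keypoints
    pca_basis:          (K, J, 3) K principal-component hand shapes
    observed_keypoints: (J, 3) keypoint locations, e.g. decoded from heat maps
    """
    K, J, _ = pca_basis.shape
    A = pca_basis.reshape(K, J * 3).T               # (3J, K) design matrix
    b = (observed_keypoints - mean_shape).reshape(J * 3)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    calibrated = mean_shape + (A @ coeffs).reshape(J, 3)
    return coeffs, calibrated

# Toy example: 21 keypoints, 10 principal components.
rng = np.random.default_rng(0)
mean = rng.normal(size=(21, 3))
basis = rng.normal(size=(10, 21, 3))
target = mean + np.tensordot(rng.normal(size=10), basis, axes=1)
coeffs, shape = fit_hand_shape(mean, basis, target)
```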
  • Patent number: 10964083
    Abstract: A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor configured to execute the instructions to perform a method that includes receiving multiple representations of one or more expressions of an object. Each of the representations includes position information attained from one or more images of the object. The method also includes producing an animation model from one or more groups of controls that respectively define each of the one or more expressions of the object as provided by the multiple representations. Each control of each group of controls has an adjustable value that defines the geometry of at least one shape of a portion of the respective expression of the object. Producing the animation model includes producing one or more corrective shapes if the animation model is incapable of accurately presenting the one or more expressions of the object as provided by the multiple representations. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: March 30, 2021
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Kiran S. Bhat, Michael Koperwas, Rachel M. Rose, Jung-Seung Hong, Frederic P. Pighin, Christopher David Twigg, Cary Phillips, Steve Sullivan
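    Sketch: a minimal illustration of the corrective-shape idea in the abstract above: fit control weights to a captured expression and, if the fit is not accurate enough, keep the leftover residual as a corrective shape. The linear blendshape model, solver, and threshold are illustrative assumptions, not the patented method.
```python
import numpy as np

def solve_controls_and_corrective(neutral, control_shapes, target, tol=1e-3):
    """Fit per-control weights to a captured expression; if the rig still cannot
    reproduce it accurately, return the leftover residual as a corrective shape.

    neutral:        (V, 3) neutral geometry of V vertices
    control_shapes: (C, V, 3) offset shape for each of C controls
    target:         (V, 3) captured expression geometry
    """
    C, V, _ = control_shapes.shape
    A = control_shapes.reshape(C, V * 3).T          # (3V, C) design matrix
    b = (target - neutral).reshape(V * 3)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = b - A @ weights
    corrective = None
    if np.linalg.norm(residual) / np.sqrt(V) > tol:
        # What the existing controls cannot express becomes a corrective shape.
        corrective = residual.reshape(V, 3)
    return weights, corrective
```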
  • Patent number: 10955932
    Abstract: A head-mounted display (HMD) tracks a user's hand positions, orientations, and gestures using an ultrasound sensor coupled to the HMD. The ultrasound sensor emits ultrasound signals that reflect off the hands of the user, even if a hand of the user is obstructed by the other hand. The ultrasound sensor identifies features used to train a machine learning model based on detecting reflected ultrasound signals. For example, one of the features is the time delay between consecutive reflected ultrasound signals detected by the ultrasound sensor. The machine learning model learns to determine poses and gestures of the user's hands. The HMD optionally includes a camera that generates image data of the user's hands. The image data can also be used to train the machine learning model. The HMD may perform a calibration process to avoid detecting other objects and surfaces such as a wall next to the user. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: March 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Elliot Saba, Robert Y. Wang, Christopher David Twigg, Ravish Mehra
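    Sketch: a minimal illustration of one feature named in the abstract above, the time delay between consecutive reflected ultrasound signals. The thresholded peak picking and sample rate are illustrative assumptions, not the patented pipeline.
```python
import numpy as np

def inter_echo_delays(signal, sample_rate_hz, threshold=0.5):
    """Return delays (in seconds) between consecutive echo peaks in `signal`."""
    envelope = np.abs(signal)
    above = envelope > threshold * envelope.max()
    # Rising edges of the thresholded envelope approximate echo arrival times.
    edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return np.diff(edges) / sample_rate_hz

# Two synthetic echoes 1 ms apart at a 192 kHz sample rate.
fs = 192_000
sig = np.zeros(2_000)
sig[400] = 1.0
sig[400 + fs // 1000] = 0.7
print(inter_echo_delays(sig, fs))  # ~[0.001]
```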
  • Patent number: 10803616
    Abstract: A system generates a user hand shape model from a single depth camera. The system includes the single depth camera and a hand tracking unit. The single depth camera generates single depth image data of a user's hand. The hand tracking unit applies the single depth image data to a neural network model to generate heat maps indicating locations of hand features. The locations of hand features are used to generate a user hand shape model customized to the size and shape of the user's hand. The user hand shape model is defined by a set of principal component hand shapes defining a hand shape variation space. The limited number of principal component hand shape models reduces determination of user hand shape to a smaller number of variables, and thus provides for a fast calibration of the user hand shape model.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: October 13, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Christopher David Twigg, Robert Y. Wang, Yuting Ye
  • Patent number: 10719953
    Abstract: A system tracks poses of a passive object using fiducial markers on fiducial surfaces of a polygonal structure of the object using image data captured by a camera. The system includes an object tracking controller that generates an estimated pose for a frame of the image data using an approximate pose estimation (APE), and then updates the estimated pose using a dense pose refinement (DPR) of pixels. The APE may include minimizing reprojection error between projected image points of the fiducial markers and observed image points of the fiducial markers in the frame. The DPR may include minimizing appearance error between image pixels of the fiducial markers in the frame and projected model pixels of the fiducial markers determined from the estimated pose and the object model. In some embodiments, an inter-frame corner tracking (ICT) of the fiducial markers may be used to facilitate the APE. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: July 21, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yuting Ye, Robert Y. Wang, Christopher David Twigg, Shangchen Han, Po-Chen Wu
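    Sketch: a minimal illustration of the approximate pose estimation (APE) step described above: minimize reprojection error between projected fiducial corners and their observed image points. The axis-angle parameterization, pinhole camera model, and SciPy solver are illustrative choices; the dense pose refinement and inter-frame corner tracking steps are not shown.
```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def reprojection_residuals(pose, model_points, observed_px, fx, fy, cx, cy):
    """Residuals between projected fiducial corners and their observed pixels."""
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = model_points @ R.T + t                 # marker corners in camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return (np.stack([u, v], axis=1) - observed_px).ravel()

def approximate_pose(model_points, observed_px, intrinsics, init_pose):
    """APE-style fit: minimize reprojection error over rotation and translation."""
    fx, fy, cx, cy = intrinsics
    fit = least_squares(reprojection_residuals, init_pose,
                        args=(model_points, observed_px, fx, fy, cx, cy))
    return fit.x                                 # 3 axis-angle + 3 translation
```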
  • Patent number: 10706584
    Abstract: A system tracks a user's hands by processing image data captured using one or more passive cameras. The system includes one or more passive cameras, such as color or monochrome cameras, and a hand tracking unit. The hand tracking unit receives the image data of the user's hand from the one or more passive cameras. The hand tracking unit determines, based on applying the image data to a neural network model, heat maps indicating locations of hand features of a hand shape model. The hand tracking unit may include circuitry that implements the neural network model. The neural network model is trained using image data from passive cameras, depth cameras, or both. The hand tracking unit determines a hand pose of the user's hand based on the locations of the hand features of the hand shape model. The hand pose may be used as a user input, or to render the hand for a display, such as in a head-mounted display. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: July 7, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yuting Ye, Robert Y. Wang, Christopher David Twigg, Shangchen Han
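    Sketch: a minimal illustration of reading keypoint locations out of per-feature heat maps, one piece of the pipeline described above. The soft-argmax decoding is an assumption for illustration; the abstract does not specify how peak locations are extracted.
```python
import numpy as np

def soft_argmax_2d(heatmaps):
    """heatmaps: (J, H, W) -> (J, 2) expected (x, y) pixel location per feature."""
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1)
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    x = (probs * xs.ravel()).sum(axis=1)
    y = (probs * ys.ravel()).sum(axis=1)
    return np.stack([x, y], axis=1)

# Toy example: one feature with a sharp peak at (x=12, y=7) in a 64x64 map.
hm = np.zeros((1, 64, 64))
hm[0, 7, 12] = 50.0
print(soft_argmax_2d(hm))  # approximately [[12, 7]]
```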
  • Patent number: 10657704
    Abstract: A tracking system converts images to a set of points in 3D space. The images are of a wearable item that includes markers, and the set of points includes representations of the markers. A view is selected from a plurality of views using the set of points, and the selected view includes one or more of the representations. A depth map is generated based on the selected view and the set of points, and the depth map includes the one or more representations. A neural network maps labels to the one or more representations in the depth map using a model of a portion of a body that wears the wearable item. A joint parameter is determined using the mapped labels. The model is updated with the joint parameter, and content provided to a user of the wearable item is based in part on the updated model. (See the illustrative sketch at the end of this entry.)
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: May 19, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Shangchen Han, Christopher David Twigg, Robert Y. Wang
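    Sketch: a minimal illustration of one step from the abstract above: projecting the 3D marker points into a selected view to form a sparse depth map. The pinhole camera model, resolution handling, and nearest-point rule are illustrative assumptions, not the patented method.
```python
import numpy as np

def sparse_depth_map(points_cam, fx, fy, cx, cy, height, width):
    """Project 3D marker points (given in the selected view's camera frame).

    Returns an (height, width) map holding the nearest point's depth at each
    projected pixel and 0 where no marker projects.
    """
    depth = np.zeros((height, width))
    order = np.argsort(-points_cam[:, 2])        # far to near, so near overwrites
    for x, y, z in points_cam[order]:
        if z <= 0:
            continue                             # point is behind the camera
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = z
    return depth
```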
  • Patent number: 10593101
    Abstract: A tracking system converts images to a set of points in 3D space. The images are of a wearable item that includes markers, and the set of points includes representations of the markers. A view is selected from a plurality of views using the set of points, and the selected view includes one or more of the representations. A depth map is generated based on the selected view and the set of points, and the depth map includes the one or more representations. A neural network maps labels to the one or more representations in the depth map using a model of a portion of a body that wears the wearable item. A joint parameter is determined using the mapped labels. The model is updated with the joint parameter, and content provided to a user of the wearable item is based in part on the updated model.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: March 17, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Shangchen Han, Christopher David Twigg, Robert Y. Wang
  • Patent number: 10572024
    Abstract: A head-mounted display (HMD) tracks a user's hand positions, orientations, and gestures using an ultrasound sensor coupled to the HMD. The ultrasound sensor emits ultrasound signals that reflect off the hands of the user, even if a hand of the user is obstructed by the other hand. The ultrasound sensor identifies features used to train a machine learning model based on detecting reflected ultrasound signals. For example, one of the features is the time delay between consecutive reflected ultrasound signals detected by the ultrasound sensor. The machine learning model learns to determine poses and gestures of the user's hands. The HMD optionally includes a camera that generates image data of the user's hands. The image data can also be used to train the machine learning model. The HMD may perform a calibration process to avoid detecting other objects and surfaces such as a wall next to the user.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: February 25, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Elliot Saba, Robert Y. Wang, Christopher David Twigg, Ravish Mehra
  • Patent number: 10269165
    Abstract: A system includes a computing device that includes a memory and a processor configured to execute instructions to perform a method that includes receiving multiple representations of one or more expressions of an object. Each representation includes position information attained from one or more images of the object. The method also includes producing an animation model from one or more groups of controls that respectively define each of the one or more expressions of the object as provided by the multiple representations. Each control of each group of controls has an adjustable value that defines the geometry of at least one shape of a portion of the respective expression of the object. Producing the animation model includes producing one or more corrective shapes if the animation model is incapable of accurately presenting the one or more expressions of the object as provided by the multiple representations.
    Type: Grant
    Filed: January 30, 2012
    Date of Patent: April 23, 2019
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Kiran S. Bhat, Michael Koperwas, Rachel M. Rose, Jung-Seung Hong, Frederic P. Pighin, Christopher David Twigg, Cary Phillips, Steve Sullivan