Patents by Inventor Erroll William WOOD

Erroll William WOOD has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12062140
    Abstract: Computing an image depicting a face having an expression with wrinkles is described. A 3D polygon mesh model of a first face has a non-neutral expression. A tension map is computed from the 3D polygon mesh model. A neutral texture, a compressed wrinkle texture and an expanded wrinkle texture are computed or obtained from a library. The neutral texture comprises a map of the first face with a neutral expression. The compressed wrinkle texture and the expanded wrinkle texture each comprise a map of the first face formed by aggregating maps of the first face with different expressions using the tension map. A graphics engine may be used to apply the wrinkle textures to the 3D model according to the tension map and to render the image from the 3D model.
    Type: Grant
    Filed: September 1, 2022
    Date of Patent: August 13, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tadas Baltrusaitis, Charles Thomas Hewitt, Erroll William Wood, Chirag Anantha Raman
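
The texture blending step described in the entry above lends itself to a short illustration. Below is a minimal Python sketch of per-texel blending of a neutral texture with compressed and expanded wrinkle textures driven by a tension map; the sign convention (negative tension means compression) and all function and array names are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def blend_wrinkle_textures(neutral, compressed, expanded, tension):
    """Blend a neutral texture with wrinkle textures using a tension map.

    neutral, compressed, expanded: (H, W, 3) float texture maps.
    tension: (H, W) float array; assumed convention: negative = compressed
             skin, positive = stretched skin, zero = neutral.
    """
    # Split the tension map into compression and expansion weights.
    w_comp = np.clip(-tension, 0.0, 1.0)[..., None]   # active where skin compresses
    w_exp = np.clip(tension, 0.0, 1.0)[..., None]     # active where skin stretches
    w_neutral = 1.0 - w_comp - w_exp

    # Per-texel linear blend of the three textures.
    return w_neutral * neutral + w_comp * compressed + w_exp * expanded

# Example: blend 256x256 textures with a random tension map.
H = W = 256
rng = np.random.default_rng(0)
out = blend_wrinkle_textures(
    rng.random((H, W, 3)), rng.random((H, W, 3)), rng.random((H, W, 3)),
    rng.uniform(-1.0, 1.0, size=(H, W)))
print(out.shape)  # (256, 256, 3)
```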
  • Patent number: 11954801
    Abstract: A method for virtually representing human body poses includes receiving positioning data detailing parameters of one or more body parts of a human user based at least in part on input from one or more sensors. One or more mapping constraints are maintained that relate a model articulated representation to a target articulated representation. A model pose of the model articulated representation and a target pose of the target articulated representation are concurrently estimated by a previously-trained pose optimization machine based at least in part on the positioning data and the one or more mapping constraints. The previously-trained pose optimization machine is trained with training positioning data having ground truth labels for the model articulated representation. The target articulated representation is output for display with the target pose as a virtual representation of the human user.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: April 9, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas Joseph Cashman, Erroll William Wood, Federica Bogo, Sasa Galic, Pashmina Jonathan Cameron
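
The concurrent estimation of a model pose and a target pose under mapping constraints can be sketched as a joint least-squares problem. The patent describes a previously-trained pose optimization machine; the sketch below substitutes a classical solver (scipy's least_squares) purely to illustrate how positioning data and a mapping constraint can be satisfied simultaneously. The linear-blend mapping and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy setup: 4 model joints and 3 target joints in 3D. The mapping constraint
# (here a fixed linear blend, a hypothetical choice) relates model joints to
# target joints.
N_MODEL, N_TARGET = 4, 3
rng = np.random.default_rng(1)
mapping = rng.random((N_TARGET, N_MODEL))
mapping /= mapping.sum(axis=1, keepdims=True)   # each target joint = blend of model joints

observed = rng.random((N_MODEL, 3))             # stand-in for sensor positioning data

def residuals(x):
    model = x[:N_MODEL * 3].reshape(N_MODEL, 3)
    target = x[N_MODEL * 3:].reshape(N_TARGET, 3)
    data_term = (model - observed).ravel()                 # fit the model pose to the data
    constraint_term = (target - mapping @ model).ravel()   # keep the two poses consistent
    return np.concatenate([data_term, constraint_term])

x0 = np.zeros((N_MODEL + N_TARGET) * 3)
result = least_squares(residuals, x0)           # concurrent estimate of both poses
target_pose = result.x[N_MODEL * 3:].reshape(N_TARGET, 3)
print(target_pose)                              # the pose that would be output for display
```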
  • Publication number: 20240078755
    Abstract: Computing an image depicting a face having an expression with wrinkles is described. A 3D polygon mesh model of a first face has a non-neutral expression. A tension map is computed from the 3D polygon mesh model. A neutral texture, a compressed wrinkle texture and an expanded wrinkle texture are computed or obtained from a library. The neutral texture comprises a map of the first face with a neutral expression. The compressed wrinkle texture and the expanded wrinkle texture each comprise a map of the first face formed by aggregating maps of the first face with different expressions using the tension map. A graphics engine may be used to apply the wrinkle textures to the 3D model according to the tension map and to render the image from the 3D model.
    Type: Application
    Filed: September 1, 2022
    Publication date: March 7, 2024
    Inventors: Tadas BALTRUSAITIS, Charles Thomas HEWITT, Erroll William WOOD, Chirag Anantha RAMAN
  • Publication number: 20230419581
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Application
    Filed: September 11, 2023
    Publication date: December 28, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
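
The idea of modulating a sub-surface skin color with physiological data can be illustrated with a short sketch. The sinusoidal pulse waveform, the choice of the red channel, and the modulation strength below are assumptions made for illustration only; the patent does not specify them.

```python
import numpy as np

def pulse_signal(t, heart_rate_bpm=70.0):
    """Synthetic blood-volume-pulse waveform (a simple sinusoid stand-in)."""
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * (heart_rate_bpm / 60.0) * t))

def modulate_subsurface_color(base_color, t, strength=0.02):
    """Shift a sub-surface skin color toward red in proportion to the pulse.

    base_color: (3,) RGB values in [0, 1]; strength: peak fractional change.
    """
    p = pulse_signal(t)
    modulated = base_color.copy()
    modulated[0] = np.clip(base_color[0] + strength * p, 0.0, 1.0)  # more red at peak blood volume
    return modulated

# Per-frame colors for a 2-second clip at 30 fps, one color per rendered frame.
base = np.array([0.80, 0.55, 0.45])
frames = [modulate_subsurface_color(base, f / 30.0) for f in range(60)]
print(frames[0], frames[15])
```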
  • Patent number: 11790586
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: October 17, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McDuff, Javier Hernandez Rivera, Tadas Baltrusaitis, Erroll William Wood
  • Publication number: 20230326135
    Abstract: A method for virtually representing human body poses includes receiving positioning data detailing parameters of one or more body parts of a human user based at least in part on input from one or more sensors. One or more mapping constraints are maintained that relate a model articulated representation to a target articulated representation. A model pose of the model articulated representation and a target pose of the target articulated representation are concurrently estimated by a previously-trained pose optimization machine based at least in part on the positioning data and the one or more mapping constraints. The previously-trained pose optimization machine is trained with training positioning data having ground truth labels for the model articulated representation. The target articulated representation is output for display with the target pose as a virtual representation of the human user.
    Type: Application
    Filed: April 11, 2022
    Publication date: October 12, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Thomas Joseph CASHMAN, Erroll William WOOD, Federica BOGO, Sasa GALIC, Pashmina Jonathan CAMERON
  • Publication number: 20230316552
    Abstract: The techniques described herein disclose a system that is configured to detect and track the three-dimensional pose of an object (e.g., a head-mounted display device) in a color image using an accessible three-dimensional model of the object. The system uses the three-dimensional pose of the object to repair pixel depth values associated with a region (e.g., a surface) of the object that is composed of material that absorbs light emitted by a time-of-flight depth sensor to determine depth. Consequently, a color-depth image (e.g., a Red-Green-Blue-Depth image or RGB-D image) can be produced that does not include dark holes on and around the region of the object that is composed of material that absorbs light emitted by the time-of-flight depth sensor.
    Type: Application
    Filed: April 4, 2022
    Publication date: October 5, 2023
    Inventors: JingJing SHEN, Erroll William WOOD, Toby SHARP, Ivan RAZUMENIC, Tadas BALTRUSAITIS, Julien Pascal Christophe VALENTIN, Predrag JOVANOVIC
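
The depth-repair step in the entry above (filling holes left by IR-absorbing material using the tracked object's 3D pose) can be sketched roughly as projecting the object's 3D model into the depth image at the estimated pose. The sketch below ignores occlusion and mesh rasterization and uses hypothetical names; it illustrates the general idea, not the patented method.

```python
import numpy as np

def repair_depth_with_model(depth, model_points, R, t, K):
    """Fill invalid depth pixels using a 3D model posed in camera space.

    depth: (H, W) array, 0 where the ToF sensor returned no depth.
    model_points: (N, 3) vertices of the object's 3D model (object frame).
    R, t: rotation (3, 3) and translation (3,) of the estimated object pose.
    K: (3, 3) camera intrinsics.
    """
    repaired = depth.copy()
    cam = model_points @ R.T + t             # transform the model into camera space
    uvw = cam @ K.T                          # pinhole projection with intrinsics
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    h, w = depth.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[ok], v[ok], cam[ok, 2]):
        if repaired[vi, ui] == 0:            # only fill holes left by absorbed IR light
            repaired[vi, ui] = zi
    return repaired
```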
  • Publication number: 20230281945
    Abstract: Keypoints are predicted in an image. A neural network is executed that is configured to predict each of the keypoints as a 2D random variable, normally distributed with a 2D position and 2×2 covariance matrix. The neural network is trained to maximize a log-likelihood that samples from each of the predicted keypoints equal a ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
    Type: Application
    Filed: June 28, 2022
    Publication date: September 7, 2023
    Inventors: Thomas Joseph CASHMAN, Erroll William WOOD, Martin DE LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Julien Pascal Christophe VALENTIN
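
The training objective described above, maximizing the log-likelihood that samples from a predicted 2D Gaussian keypoint equal the ground truth, corresponds to a standard Gaussian negative log-likelihood. A minimal sketch follows; it also covers the isotropic-sigma variant in the next entry by setting cov = sigma**2 * I. The function name is illustrative.

```python
import numpy as np

def keypoint_nll(mu, cov, gt):
    """Negative log-likelihood of a ground-truth 2D keypoint under a
    predicted 2D Gaussian with mean `mu` (2,) and covariance `cov` (2, 2).

    Minimizing this over a dataset maximizes the log-likelihood that samples
    from the predicted distribution equal the ground truth. For the isotropic
    variant, use cov = sigma**2 * np.eye(2).
    """
    d = gt - mu
    inv = np.linalg.inv(cov)
    logdet = np.log(np.linalg.det(cov))
    return 0.5 * (d @ inv @ d + logdet + 2.0 * np.log(2.0 * np.pi))

# Example: a keypoint predicted at (64.0, 80.0) with anisotropic uncertainty.
mu = np.array([64.0, 80.0])
cov = np.array([[4.0, 1.0], [1.0, 9.0]])
print(keypoint_nll(mu, cov, gt=np.array([65.0, 78.0])))
```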
  • Publication number: 20230281863
    Abstract: Keypoints are predicted in an image. Predictions are generated for each of the keypoints of an image as a 2D random variable, normally distributed with location (x, y) and standard deviation sigma. A neural network is trained to maximize a log-likelihood that samples from each of the predicted keypoints equal a ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
    Type: Application
    Filed: June 28, 2022
    Publication date: September 7, 2023
    Inventors: Julien Pascal Christophe VALENTIN, Erroll William WOOD, Thomas Joseph CASHMAN, Martin de LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Charles Thomas HEWITT, Nikola MILOSAVLJEVIC, Stephan Joachim GARBIN, Toby SHARP, Ivan STOJILJKOVIC
  • Patent number: 11675195
    Abstract: In various examples there is an apparatus for aligning three-dimensional (3D) representations of people. The apparatus comprises at least one processor and a memory storing instructions that, when executed by the at least one processor, perform a method comprising accessing a first 3D representation which is an instance of a parametric model of a person; accessing a second 3D representation which is a photoreal representation of the person; computing an alignment of the first and second 3D representations; and computing and storing a hologram from the aligned first and second 3D representations such that the hologram depicts parts of the person which are observed in only one of the first and second 3D representations; or controlling an avatar representing the person where the avatar depicts parts of the person which are observed in only one of the first and second 3D representations.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: June 13, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kenneth Mitchell Jakubzak, Matthew Julian Lamb, Brent Michael Wilson, Toby Leonard Sharp, Thomas Joseph Cashman, Jamie Shotton, Erroll William Wood, Jingjing Shen
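
Computing an alignment between two 3D representations, as in the entry above, is commonly done by least-squares rigid registration once corresponding points are available. The sketch below uses the standard Kabsch algorithm as a stand-in; the patent does not state which alignment method is used, so this is an assumption for illustration.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch) of corresponding 3D points.

    source: (N, 3) points from one representation (e.g. a parametric model);
    target: (N, 3) corresponding points from the other (e.g. a photoreal scan).
    Returns R (3, 3) and t (3,) such that source @ R.T + t ~= target.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Example: recover a known rotation about the z-axis plus a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.random.default_rng(3).random((50, 3))
R_est, t_est = rigid_align(pts, pts @ R_true.T + np.array([1.0, -2.0, 0.5]))
print(np.allclose(R_est, R_true))   # True
```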
  • Publication number: 20220373800
    Abstract: In various examples there is an apparatus for aligning three-dimensional (3D) representations of people. The apparatus comprises at least one processor and a memory storing instructions that, when executed by the at least one processor, perform a method comprising accessing a first 3D representation which is an instance of a parametric model of a person; accessing a second 3D representation which is a photoreal representation of the person; computing an alignment of the first and second 3D representations; and computing and storing a hologram from the aligned first and second 3D representations such that the hologram depicts parts of the person which are observed in only one of the first and second 3D representations; or controlling an avatar representing the person where the avatar depicts parts of the person which are observed in only one of the first and second 3D representations.
    Type: Application
    Filed: May 21, 2021
    Publication date: November 24, 2022
    Inventors: Kenneth Mitchell JAKUBZAK, Matthew Julian LAMB, Brent Michael WILSON, Toby Leonard SHARP, Thomas Joseph CASHMAN, Jamie SHOTTON, Erroll William WOOD, Jingjing SHEN
  • Publication number: 20210398337
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 23, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
  • Patent number: 11164334
    Abstract: There is an apparatus for detecting pose of an object. The apparatus comprises a processor configured to receive captured sensor data depicting the object. It also has a memory storing a parameterized model of a class of 3D shape of which the object is a member, where an instance of the model is given as a mapping from a point in a 2D rectangular grid to a 3D position. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, using the parameterized mapping. The processor is configured to output the computed values of the parameters comprising at least global position and global orientation of the object.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: November 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Erroll William Wood, Thomas Joseph Cashman, Andrew William Fitzgibbon, Nikola Milosavljevic
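
The fitting procedure above, in which an instance of the model maps a point in a 2D rectangular grid to a 3D position and parameters including global position and orientation are found by optimization against sensor data, can be sketched with a toy parametric surface. The planar-patch model, the use of scipy's least_squares, and all names below are assumptions chosen to illustrate the structure of the optimization, not the patented model.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical parametric model: a planar patch whose instance maps a (u, v)
# point in a 2D rectangular grid to a 3D position via a global pose.
def model_point(uv, rotvec, t):
    local = np.column_stack([uv[:, 0], uv[:, 1], np.zeros(len(uv))])
    return Rotation.from_rotvec(rotvec).apply(local) + t

# Synthetic "captured sensor data": the patch observed at an unknown pose.
uv_grid = np.array([[u, v] for u in np.linspace(0, 1, 5) for v in np.linspace(0, 1, 5)])
true_rotvec, true_t = np.array([0.1, 0.4, -0.2]), np.array([0.5, -0.3, 2.0])
observed = model_point(uv_grid, true_rotvec, true_t)

def residuals(x):
    return (model_point(uv_grid, x[:3], x[3:]) - observed).ravel()

# Optimization to fit the model to the sensor data via the parameterized mapping.
fit = least_squares(residuals, x0=np.zeros(6))
print(fit.x[:3], fit.x[3:])   # recovered global orientation (rotation vector) and position
```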
  • Patent number: 11107242
    Abstract: In various examples there is an apparatus for detecting position and orientation of an object. The apparatus comprises a memory storing at least one frame of captured sensor data depicting the object. The apparatus also comprises a trained machine learning system configured to receive the frame of the sensor data and to compute a plurality of two dimensional positions in the frame. Each predicted two dimensional position is a position of sensor data in the frame depicting a keypoint, where a keypoint is a pre-specified 3D position relative to the object. At least one of the keypoints is a floating keypoint depicting a pre-specified position relative to the object, lying inside or outside the object's surface. The apparatus comprises a pose detector which computes the three dimensional position and orientation of the object using the predicted two dimensional positions and outputs the computed three dimensional position and orientation.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: August 31, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew William Fitzgibbon, Erroll William Wood, Jingjing Shen, Thomas Joseph Cashman, Jamie Daniel Joseph Shotton
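
Recovering a 3D position and orientation from predicted 2D keypoint locations, as in the entry above, is a perspective-n-point problem. The sketch below uses OpenCV's solvePnP as a stand-in pose detector and simulates the network's 2D predictions by projecting the keypoints at a known pose; the keypoint layout (including one floating keypoint off the surface) and the camera intrinsics are hypothetical.

```python
import numpy as np
import cv2  # OpenCV, used here as a stand-in pose solver; not named by the patent

# Hypothetical 3D keypoints defined relative to an object, including one
# "floating" keypoint that lies off the object's surface.
object_keypoints = np.array([
    [0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
    [0.1, 0.1, 0.0], [0.0, 0.0, 0.1], [0.05, 0.05, 0.3],  # last one: floating keypoint
])

K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Simulate the 2D keypoint positions a trained machine learning system would
# predict by projecting the keypoints at a known ground-truth pose.
rvec_true, tvec_true = np.array([0.2, -0.1, 0.05]), np.array([0.0, 0.0, 1.0])
predicted_2d, _ = cv2.projectPoints(object_keypoints, rvec_true, tvec_true, K, dist)

# Pose detector: recover the object's 3D position and orientation from the
# 2D predictions and the known 3D keypoint layout.
ok, rvec, tvec = cv2.solvePnP(object_keypoints, predicted_2d, K, dist)
print(rvec.ravel(), tvec.ravel())   # approximately the ground-truth pose
```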
  • Patent number: 10867441
    Abstract: An apparatus for detecting pose of an object is described. The apparatus has a processor configured to receive captured sensor data depicting the object. The apparatus has a memory storing a model of a class of object of which the depicted object is a member, the model comprising a plurality of parameters specifying the pose, comprising global position and global orientation, of the model. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, wherein the optimization comprises iterated computation of updates to the values of the parameters and updates to values of variables representing correspondences between the captured sensor data and the model, the updates being interdependent in computation. The processor is configured to discard updates to values of the variables representing correspondences without applying the updates.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: December 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas Joseph Cashman, Andrew William Fitzgibbon, Erroll William Wood, Federica Bogo, Paul Malcolm McIlroy, Christopher Douglas Edmonds
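
The optimization above is structured around two sets of variables: model parameters and correspondences between the captured sensor data and the model. As a point of reference only, the sketch below shows the plain alternating (ICP-style) baseline in which correspondences are recomputed and a closed-form parameter update follows; it does not implement the patent's scheme of computing interdependent updates and discarding the correspondence updates, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import KDTree

def fit_translation_icp(model_points, sensor_points, iterations=20):
    """Alternate between correspondence variables and model parameters
    (here just a global translation) to fit a model to sensor data."""
    t = np.zeros(3)                       # model parameter: global position
    tree = KDTree(sensor_points)
    for _ in range(iterations):
        # Correspondence step: pair each model point with its nearest sensor point.
        _, idx = tree.query(model_points + t)
        matched = sensor_points[idx]
        # Parameter step: closed-form translation update given the correspondences.
        t = (matched - model_points).mean(axis=0)
    return t

rng = np.random.default_rng(2)
model = rng.random((200, 3))
sensor = model + np.array([0.3, -0.2, 0.5]) + 0.01 * rng.standard_normal((200, 3))
print(fit_translation_icp(model, sensor))   # approximately [0.3, -0.2, 0.5]
```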
  • Patent number: 10861165
    Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera. The method comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: December 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Erroll William Wood, Michael Bleyer, Christopher Douglas Edmonds, Michael Scott Fenton, Mark James Finocchio, John Albert Judnich
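
The aliasing uncertainty described above, where a coordinate (X, Y) maps to a periodic series of depth values {Zk}, follows from the ambiguity distance of a continuous-wave time-of-flight measurement. The sketch below uses the standard relation Zk = (c / 2f) * (phase / 2*pi + k); this formula is general ToF background rather than something stated in the patent, and the function name is illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def candidate_depths(phase, mod_freq_hz, k_max=4):
    """Periodic series of depth values {Z_k} consistent with one phase sample.

    A continuous-wave ToF measurement at modulation frequency f only fixes
    depth modulo the ambiguity distance c / (2 f); each phase value therefore
    maps to Z_k = (c / (2 f)) * (phase / (2*pi) + k) for k = 0, 1, 2, ...
    """
    ambiguity = C / (2.0 * mod_freq_hz)
    k = np.arange(k_max + 1)
    return ambiguity * (phase / (2.0 * np.pi) + k)

# Example: 100 MHz modulation gives a ~1.5 m ambiguity distance, so a phase
# of pi could correspond to roughly 0.75 m, 2.25 m, 3.75 m, ...
print(candidate_depths(np.pi, 100e6))
```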
  • Publication number: 20200311977
    Abstract: There is an apparatus for detecting pose of an object. The apparatus comprises a processor configured to receive captured sensor data depicting the object. It also has a memory storing a parameterized model of a class of 3D shape of which the object is a member, where an instance of the model is given as a mapping from a point in a 2D rectangular grid to a 3D position. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, using the parameterized mapping. The processor is configured to output the computed values of the parameters comprising at least global position and global orientation of the object.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Inventors: Erroll William WOOD, Thomas Joseph CASHMAN, Andrew William FITZGIBBON, Nikola MILOSAVLJEVIC
  • Publication number: 20200265641
    Abstract: An apparatus for detecting pose of an object is described. The apparatus has a processor configured to receive captured sensor data depicting the object. The apparatus has a memory storing a model of a class of object of which the depicted object is a member, the model comprising a plurality of parameters specifying the pose, comprising global position and global orientation, of the model. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, wherein the optimization comprises iterated computation of updates to the values of the parameters and updates to values of variables representing correspondences between the captured sensor data and the model, the updates being interdependent in computation. The processor is configured to discard updates to values of the variables representing correspondences without applying the updates.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 20, 2020
    Inventors: Thomas Joseph CASHMAN, Andrew William FITZGIBBON, Erroll William WOOD, Federica BOGO, Paul Malcolm MCILROY, Christopher Douglas EDMONDS
  • Publication number: 20200226786
    Abstract: In various examples there is an apparatus for detecting position and orientation of an object. The apparatus comprises a memory storing at least one frame of captured sensor data depicting the object. The apparatus also comprises a trained machine learning system configured to receive the frame of the sensor data and to compute a plurality of two dimensional positions in the frame. Each predicted two dimensional position is a position of sensor data in the frame depicting a keypoint, where a keypoint is a pre-specified 3D position relative to the object. At least one of the keypoints is a floating keypoint depicting a pre-specified position relative to the object, lying inside or outside the object's surface. The apparatus comprises a pose detector which computes the three dimensional position and orientation of the object using the predicted two dimensional positions and outputs the computed three dimensional position and orientation.
    Type: Application
    Filed: March 22, 2019
    Publication date: July 16, 2020
    Inventors: Andrew William FITZGIBBON, Erroll William WOOD, Jingjing SHEN, Thomas Joseph CASHMAN, Jamie Daniel Joseph SHOTTON
  • Publication number: 20200226765
    Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera. The method comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Erroll William WOOD, Michael BLEYER, Christopher Douglas EDMONDS, Michael Scott FENTON, Mark James FINOCCHIO, John Albert JUDNICH