Patents by Inventor Erroll William WOOD
Erroll William WOOD has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12062140
Abstract: Computing an image depicting a face having an expression with wrinkles is described. A 3D polygon mesh model of a first face has a non-neutral expression. A tension map is computed from the 3D polygon mesh model. A neutral texture, a compressed wrinkle texture and an expanded wrinkle texture are computed or obtained from a library. The neutral texture comprises a map of the first face with a neutral expression. The compressed wrinkle texture is a map of the first face formed by aggregating maps of the first face with different expressions using the tension map, and the expanded wrinkle texture comprises a map of the first face formed by aggregating maps of the first face with different expressions using the tension map. A graphics engine may be used to apply the wrinkle textures to the 3D model according to the tension map, and to render the image from the 3D model.
Type: Grant
Filed: September 1, 2022
Date of Patent: August 13, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tadas Baltrusaitis, Charles Thomas Hewitt, Erroll William Wood, Chirag Anantha Raman
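As a rough illustration of how a tension map might drive wrinkle blending, the sketch below mixes a neutral texture with compressed and expanded wrinkle textures per texel. The blending rule, the sign convention for tension, and the blend_wrinkle_textures name are assumptions made for illustration, not the patented method.

```python
import numpy as np

def blend_wrinkle_textures(neutral, compressed, expanded, tension):
    """Blend per-texel between neutral, compressed and expanded wrinkle maps.

    neutral, compressed, expanded: (H, W, 3) float arrays in [0, 1].
    tension: (H, W) float array, negative where the mesh surface is compressed,
    positive where it is stretched (a common convention; the patent does not
    fix the sign or range).
    """
    t = np.clip(tension, -1.0, 1.0)[..., None]   # (H, W, 1)
    w_comp = np.maximum(-t, 0.0)                 # weight of the compressed wrinkle map
    w_exp = np.maximum(t, 0.0)                   # weight of the expanded wrinkle map
    w_neutral = 1.0 - w_comp - w_exp             # remainder goes to the neutral map
    return w_neutral * neutral + w_comp * compressed + w_exp * expanded
```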
-
Patent number: 11954801
Abstract: A method for virtually representing human body poses includes receiving positioning data detailing parameters of one or more body parts of a human user based at least in part on input from one or more sensors. One or more mapping constraints are maintained that relate a model articulated representation to a target articulated representation. A model pose of the model articulated representation and a target pose of the target articulated representation are concurrently estimated, by a previously-trained pose optimization machine, based at least in part on the positioning data and the one or more mapping constraints. The pose optimization machine is trained with training positioning data having ground truth labels for the model articulated representation. The target articulated representation is output for display with the target pose as a virtual representation of the human user.
Type: Grant
Filed: April 11, 2022
Date of Patent: April 9, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Thomas Joseph Cashman, Erroll William Wood, Federica Bogo, Sasa Galic, Pashmina Jonathan Cameron
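The mapping constraints mentioned here relate joints of the model representation to joints of the target representation. The following minimal sketch shows one way such constraints could be represented and applied; the tuple layout, the axis-angle parameterization, and the apply_mapping_constraints helper are hypothetical, and the concurrent estimation by a trained pose optimization machine is not shown.

```python
import numpy as np

# One illustrative "mapping constraint": a target joint tracks a model joint,
# up to a fixed rotational offset. Joint rotations are axis-angle vectors here;
# the patent does not prescribe a parameterization, so this is an assumption.
MappingConstraint = tuple  # (model_joint_index, target_joint_index, offset_axis_angle)

def apply_mapping_constraints(model_pose, constraints, n_target_joints):
    """Derive a target pose from a model pose via per-joint constraints.

    model_pose: (J_model, 3) axis-angle rotations of the model skeleton.
    Returns an (n_target_joints, 3) target pose; unconstrained joints stay at rest.
    """
    target_pose = np.zeros((n_target_joints, 3))
    for model_idx, target_idx, offset in constraints:
        target_pose[target_idx] = model_pose[model_idx] + offset
    return target_pose
```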
-
Publication number: 20240078755
Abstract: Computing an image depicting a face having an expression with wrinkles is described. A 3D polygon mesh model of a first face has a non-neutral expression. A tension map is computed from the 3D polygon mesh model. A neutral texture, a compressed wrinkle texture and an expanded wrinkle texture are computed or obtained from a library. The neutral texture comprises a map of the first face with a neutral expression. The compressed wrinkle texture is a map of the first face formed by aggregating maps of the first face with different expressions using the tension map, and the expanded wrinkle texture comprises a map of the first face formed by aggregating maps of the first face with different expressions using the tension map. A graphics engine may be used to apply the wrinkle textures to the 3D model according to the tension map, and to render the image from the 3D model.
Type: Application
Filed: September 1, 2022
Publication date: March 7, 2024
Inventors: Tadas BALTRUSAITIS, Charles Thomas HEWITT, Erroll William WOOD, Chirag Anantha RAMAN
-
Publication number: 20230419581
Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
Type: Application
Filed: September 11, 2023
Publication date: December 28, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
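A minimal sketch of modulating a sub-surface skin color with a physiological signal is given below, assuming a normalized blood-volume-pulse waveform and illustrative per-channel gains; the modulate_subsurface_color helper and its parameters are hypothetical and do not come from the publication.

```python
import numpy as np

def modulate_subsurface_color(base_subsurface_rgb, ppg_signal, gain=(0.004, 0.001, 0.001)):
    """Vary a sub-surface skin color over time with a physiological waveform.

    base_subsurface_rgb: (3,) base sub-surface color in [0, 1].
    ppg_signal: (T,) normalized blood-volume-pulse waveform in [-1, 1].
    gain: per-channel sensitivity; the exact values here are illustrative only.
    Returns a (T, 3) sequence of sub-surface colors, one per video frame.
    """
    base = np.asarray(base_subsurface_rgb)[None, :]            # (1, 3)
    ppg = np.asarray(ppg_signal)[:, None]                      # (T, 1)
    delta = np.asarray(gain)[None, :] * ppg                    # (T, 3)
    return np.clip(base + delta, 0.0, 1.0)
```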
-
Patent number: 11790586
Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
Type: Grant
Filed: June 19, 2020
Date of Patent: October 17, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel J. McDuff, Javier Hernandez Rivera, Tadas Baltrusaitis, Erroll William Wood
-
Publication number: 20230326135
Abstract: A method for virtually representing human body poses includes receiving positioning data detailing parameters of one or more body parts of a human user based at least in part on input from one or more sensors. One or more mapping constraints are maintained that relate a model articulated representation to a target articulated representation. A model pose of the model articulated representation and a target pose of the target articulated representation are concurrently estimated, by a previously-trained pose optimization machine, based at least in part on the positioning data and the one or more mapping constraints. The pose optimization machine is trained with training positioning data having ground truth labels for the model articulated representation. The target articulated representation is output for display with the target pose as a virtual representation of the human user.
Type: Application
Filed: April 11, 2022
Publication date: October 12, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Thomas Joseph CASHMAN, Erroll William WOOD, Federica BOGO, Sasa GALIC, Pashmina Jonathan CAMERON
-
Publication number: 20230316552
Abstract: The techniques described herein disclose a system that is configured to detect and track the three-dimensional pose of an object (e.g., a head-mounted display device) in a color image using an accessible three-dimensional model of the object. The system uses the three-dimensional pose of the object to repair pixel depth values associated with a region (e.g., a surface) of the object that is composed of material that absorbs light emitted by a time-of-flight depth sensor to determine depth. Consequently, a color-depth image (e.g., a Red-Green-Blue-Depth image or RGB-D image) can be produced that does not include dark holes on and around the region of the object that is composed of material that absorbs light emitted by the time-of-flight depth sensor.
Type: Application
Filed: April 4, 2022
Publication date: October 5, 2023
Inventors: JingJing SHEN, Erroll William WOOD, Toby SHARP, Ivan RAZUMENIC, Tadas BALTRUSAITIS, Julien Pascal Christophe VALENTIN, Predrag JOVANOVIC
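To make the depth-repair idea concrete, the sketch below fills missing time-of-flight depth pixels with depths predicted from the object's 3D model placed at its tracked pose. The repair_depth_with_model helper, the pinhole projection, and the use of zero as the invalid-depth marker are all assumptions for illustration, not the pipeline in the publication.

```python
import numpy as np

def repair_depth_with_model(depth, model_points, pose_R, pose_t, K):
    """Fill invalid depth pixels using a 3D model placed at its tracked pose.

    depth: (H, W) depth image in meters, 0 where the time-of-flight sensor
    returned nothing (e.g. over light-absorbing material).
    model_points: (N, 3) points sampled on the object's 3D model (object frame).
    pose_R, pose_t: rotation (3, 3) and translation (3,) of the tracked pose.
    K: (3, 3) pinhole camera intrinsics.
    """
    H, W = depth.shape
    pts_cam = model_points @ pose_R.T + pose_t       # object frame -> camera frame
    z = pts_cam[:, 2]
    front = z > 0                                    # keep points in front of the camera
    u = np.round(pts_cam[front, 0] * K[0, 0] / z[front] + K[0, 2]).astype(int)
    v = np.round(pts_cam[front, 1] * K[1, 1] / z[front] + K[1, 2]).astype(int)
    zf = z[front]
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, zf = u[inside], v[inside], zf[inside]
    repaired = depth.copy()
    hole = repaired[v, u] == 0                       # only fill pixels the sensor missed
    repaired[v[hole], u[hole]] = zf[hole]
    return repaired
```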
-
Publication number: 20230281945
Abstract: Keypoints are predicted in an image. A neural network is executed that is configured to predict each of the keypoints as a 2D random variable, normally distributed with a 2D position and 2×2 covariance matrix. The neural network is trained to maximize a log-likelihood that samples from each of the predicted keypoints equal a ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
Type: Application
Filed: June 28, 2022
Publication date: September 7, 2023
Inventors: Thomas Joseph CASHMAN, Erroll William WOOD, Martin DE LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Julien Pascal Christophe VALENTIN
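The training objective described here, maximizing the log-likelihood of the ground truth under predicted 2D Gaussians, can be written directly as a per-keypoint negative log-likelihood. The sketch below computes that loss in NumPy under the assumption that the network outputs a mean and a full 2×2 covariance per keypoint; keypoint_nll is a hypothetical name and the network itself is not shown.

```python
import numpy as np

def keypoint_nll(mu, cov, gt):
    """Negative log-likelihood of ground-truth keypoints under predicted
    2D Gaussians (one mean and one 2x2 covariance per keypoint).

    mu: (K, 2) predicted keypoint means.
    cov: (K, 2, 2) predicted covariance matrices.
    gt: (K, 2) ground-truth keypoint positions.
    Minimizing this value maximizes the log-likelihood in the abstract.
    """
    d = (gt - mu)[..., None]                                  # (K, 2, 1) residuals
    inv = np.linalg.inv(cov)                                  # (K, 2, 2)
    maha = (np.transpose(d, (0, 2, 1)) @ inv @ d)[:, 0, 0]    # squared Mahalanobis distance
    logdet = np.log(np.linalg.det(cov))                       # log |Sigma| per keypoint
    return 0.5 * (maha + logdet + 2.0 * np.log(2.0 * np.pi)).mean()
```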
-
Publication number: 20230281863
Abstract: Keypoints are predicted in an image. Predictions are generated for each of the keypoints of an image as a 2D random variable, normally distributed with location (x, y) and standard deviation sigma. A neural network is trained to maximize a log-likelihood that samples from each of the predicted keypoints equal a ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
Type: Application
Filed: June 28, 2022
Publication date: September 7, 2023
Inventors: Julien Pascal Christophe VALENTIN, Erroll William WOOD, Thomas Joseph CASHMAN, Martin de LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Charles Thomas HEWITT, Nikola MILOSAVLJEVIC, Stephan Joachim GARBIN, Toby SHARP, Ivan STOJILJKOVIC
-
Patent number: 11675195
Abstract: In various examples there is an apparatus for aligning three-dimensional, 3D, representations of people. The apparatus comprises at least one processor and a memory storing instructions that, when executed by the at least one processor, perform a method comprising accessing a first 3D representation which is an instance of a parametric model of a person; accessing a second 3D representation which is a photoreal representation of the person; computing an alignment of the first and second 3D representations; and computing and storing a hologram from the aligned first and second 3D representations such that the hologram depicts parts of the person which are observed in only one of the first and second 3D representations; or controlling an avatar representing the person where the avatar depicts parts of the person which are observed in only one of the first and second 3D representations.
Type: Grant
Filed: May 21, 2021
Date of Patent: June 13, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kenneth Mitchell Jakubzak, Matthew Julian Lamb, Brent Michael Wilson, Toby Leonard Sharp, Thomas Joseph Cashman, Jamie Shotton, Erroll William Wood, Jingjing Shen
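One common way to compute an alignment between two 3D representations that share corresponding points (for example, landmarks present in both the parametric-model instance and the photoreal representation) is a least-squares rigid fit. The sketch below uses the Kabsch algorithm for that step; the patent does not state that this specific method is used, so treat it purely as an illustration, and the rigid_align name is hypothetical.

```python
import numpy as np

def rigid_align(source_pts, target_pts):
    """Least-squares rigid alignment (Kabsch) of corresponding 3D points.

    source_pts, target_pts: (N, 3) corresponding points from the two
    representations. Returns R (3, 3) and t (3,) such that
    source_pts @ R.T + t approximates target_pts.
    """
    mu_s, mu_t = source_pts.mean(0), target_pts.mean(0)
    S = (source_pts - mu_s).T @ (target_pts - mu_t)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t
```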
-
Publication number: 20220373800
Abstract: In various examples there is an apparatus for aligning three-dimensional, 3D, representations of people. The apparatus comprises at least one processor and a memory storing instructions that, when executed by the at least one processor, perform a method comprising accessing a first 3D representation which is an instance of a parametric model of a person; accessing a second 3D representation which is a photoreal representation of the person; computing an alignment of the first and second 3D representations; and computing and storing a hologram from the aligned first and second 3D representations such that the hologram depicts parts of the person which are observed in only one of the first and second 3D representations; or controlling an avatar representing the person where the avatar depicts parts of the person which are observed in only one of the first and second 3D representations.
Type: Application
Filed: May 21, 2021
Publication date: November 24, 2022
Inventors: Kenneth Mitchell JAKUBZAK, Matthew Julian LAMB, Brent Michael WILSON, Toby Leonard SHARP, Thomas Joseph CASHMAN, Jamie SHOTTON, Erroll William WOOD, Jingjing SHEN
-
Publication number: 20210398337
Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
Type: Application
Filed: June 19, 2020
Publication date: December 23, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
-
Patent number: 11164334
Abstract: There is an apparatus for detecting pose of an object. The apparatus comprises a processor configured to receive captured sensor data depicting the object. It also has a memory storing a parameterized model of a class of 3D shape of which the object is a member, where an instance of the model is given as a mapping from a point in a 2D rectangular grid to a 3D position. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, using the parametrized mapping. The processor is configured to output the computed values of the parameters comprising at least global position and global orientation of the object.
Type: Grant
Filed: March 29, 2019
Date of Patent: November 2, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Erroll William Wood, Thomas Joseph Cashman, Andrew William Fitzgibbon, Nikola Milosavljevic
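The key modeling idea is that a model instance maps a point on a 2D rectangular grid to a 3D position. The sketch below shows one simple such mapping, bilinear interpolation over a grid of 3D control points; the surface_point helper and the choice of interpolation scheme are assumptions, and the optimization that fits the parameters to sensor data is not shown.

```python
import numpy as np

def surface_point(control_grid, u, v):
    """Evaluate one instance of a grid-parameterized surface model.

    control_grid: (H, W, 3) array of 3D control points on a 2D rectangular grid.
    u, v: continuous grid coordinates in [0, H-1] x [0, W-1].
    Bilinear interpolation stands in for whatever smooth mapping the model
    actually uses (e.g. a subdivision or spline surface).
    """
    i0, j0 = int(np.floor(u)), int(np.floor(v))
    i1 = min(i0 + 1, control_grid.shape[0] - 1)
    j1 = min(j0 + 1, control_grid.shape[1] - 1)
    a, b = u - i0, v - j0
    return ((1 - a) * (1 - b) * control_grid[i0, j0] +
            a * (1 - b) * control_grid[i1, j0] +
            (1 - a) * b * control_grid[i0, j1] +
            a * b * control_grid[i1, j1])
```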
-
Patent number: 11107242
Abstract: In various examples there is an apparatus for detecting position and orientation of an object. The apparatus comprises a memory storing at least one frame of captured sensor data depicting the object. The apparatus also comprises a trained machine learning system configured to receive the frame of the sensor data and to compute a plurality of two dimensional positions in the frame. Each predicted two dimensional position is a position of sensor data in the frame depicting a keypoint, where a keypoint is a pre-specified 3D position relative to the object. At least one of the keypoints is a floating keypoint depicting a pre-specified position relative to the object, lying inside or outside the object's surface. The apparatus comprises a pose detector which computes the three dimensional position and orientation of the object using the predicted two dimensional positions and outputs the computed three dimensional position and orientation.
Type: Grant
Filed: March 22, 2019
Date of Patent: August 31, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew William Fitzgibbon, Erroll William Wood, Jingjing Shen, Thomas Joseph Cashman, Jamie Daniel Joseph Shotton
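Given 2D keypoint predictions and their pre-specified 3D positions relative to the object (including floating keypoints off the surface), a pose can be recovered from the resulting 2D-3D correspondences. The sketch below uses OpenCV's standard PnP solver for that step as an illustrative choice; the pose_from_keypoints wrapper is hypothetical and the abstract does not commit to this particular solver.

```python
import numpy as np
import cv2

def pose_from_keypoints(keypoints_3d, keypoints_2d, K):
    """Recover an object's 3D position and orientation from predicted 2D keypoints.

    keypoints_3d: (N, 3) pre-specified keypoint positions in the object frame,
    possibly including "floating" keypoints lying off the object's surface.
    keypoints_2d: (N, 2) positions predicted in the image by the trained system.
    K: (3, 3) camera intrinsics.
    """
    ok, rvec, tvec = cv2.solvePnP(
        keypoints_3d.astype(np.float64),
        keypoints_2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
    )
    R, _ = cv2.Rodrigues(rvec)        # axis-angle -> rotation matrix
    return R, tvec.reshape(3)
```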
-
Patent number: 10867441
Abstract: An apparatus for detecting pose of an object is described. The apparatus has a processor configured to receive captured sensor data depicting the object. The apparatus has a memory storing a model of a class of object of which the depicted object is a member, the model comprising a plurality of parameters specifying the pose, comprising global position and global orientation, of the model. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, wherein the optimization comprises iterated computation of updates to the values of the parameters and updates to values of variables representing correspondences between the captured sensor data and the model, the updates being interdependent in computation. The processor is configured to discard updates to values of the variables representing correspondences without applying the updates.
Type: Grant
Filed: February 15, 2019
Date of Patent: December 15, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Thomas Joseph Cashman, Andrew William Fitzgibbon, Erroll William Wood, Federica Bogo, Paul Malcolm McIlroy, Christopher Douglas Edmonds
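The sketch below loosely mirrors the alternating structure described in this abstract with a toy translation-only fit: each iteration considers correspondences between model and sensor points, applies only the parameter update, and discards the correspondence update, re-deriving correspondences from the new parameters next time. The fit_translation helper and every simplification in it are assumptions for illustration, not the patented optimization.

```python
import numpy as np

def fit_translation(model_pts, observed_pts, n_iters=20):
    """Toy alternating fit of a model to sensor points (translation only).

    model_pts, observed_pts: (N, 3) and (M, 3) point sets.
    Returns the translation that moves model_pts toward observed_pts.
    """
    t = np.zeros(3)                                           # pose parameters (translation only)
    for _ in range(n_iters):
        moved = model_pts + t
        # Correspondence variables: index of the nearest observed point per model point.
        d2 = ((moved[:, None, :] - observed_pts[None, :, :]) ** 2).sum(-1)
        corr = d2.argmin(axis=1)
        # Apply only the parameter update derived from the current correspondences.
        t = t + (observed_pts[corr] - moved).mean(axis=0)
        # Any update to `corr` itself is not carried over; it is recomputed above.
    return t
```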
-
Patent number: 10861165
Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera; it comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
Type: Grant
Filed: March 11, 2019
Date of Patent: December 8, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Erroll William Wood, Michael Bleyer, Christopher Douglas Edmonds, Michael Scott Fenton, Mark James Finocchio, John Albert Judnich
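The aliasing uncertainty referred to here arises because a phase-based ToF measurement determines depth only modulo an ambiguity interval, so each coordinate maps to a periodic series of candidate depths {Zk}. The sketch below expands that series; the candidate_depths helper and its parameters are hypothetical, and the labeling step itself is not sketched.

```python
import numpy as np

def candidate_depths(wrapped_depth, ambiguity_interval, max_wraps=4):
    """Expand a phase-wrapped time-of-flight depth into its periodic candidates.

    wrapped_depth: (H, W) depth in meters, known only modulo the ambiguity
    interval of the modulation frequency.
    Returns a (max_wraps, H, W) stack {Z_k} with Z_k = wrapped_depth + k * interval,
    the periodic series of depth values referred to in the abstract.
    """
    k = np.arange(max_wraps).reshape(-1, 1, 1)     # wrap counts k = 0, 1, 2, ...
    return wrapped_depth[None, :, :] + k * ambiguity_interval
```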
-
Publication number: 20200311977
Abstract: There is an apparatus for detecting pose of an object. The apparatus comprises a processor configured to receive captured sensor data depicting the object. It also has a memory storing a parameterized model of a class of 3D shape of which the object is a member, where an instance of the model is given as a mapping from a point in a 2D rectangular grid to a 3D position. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, using the parametrized mapping. The processor is configured to output the computed values of the parameters comprising at least global position and global orientation of the object.
Type: Application
Filed: March 29, 2019
Publication date: October 1, 2020
Inventors: Erroll William WOOD, Thomas Joseph CASHMAN, Andrew William FITZGIBBON, Nikola MILOSAVLJEVIC
-
Publication number: 20200265641
Abstract: An apparatus for detecting pose of an object is described. The apparatus has a processor configured to receive captured sensor data depicting the object. The apparatus has a memory storing a model of a class of object of which the depicted object is a member, the model comprising a plurality of parameters specifying the pose, comprising global position and global orientation, of the model. The processor is configured to compute values of the parameters of the model by calculating an optimization to fit the model to the captured sensor data, wherein the optimization comprises iterated computation of updates to the values of the parameters and updates to values of variables representing correspondences between the captured sensor data and the model, the updates being interdependent in computation. The processor is configured to discard updates to values of the variables representing correspondences without applying the updates.
Type: Application
Filed: February 15, 2019
Publication date: August 20, 2020
Inventors: Thomas Joseph CASHMAN, Andrew William FITZGIBBON, Erroll William WOOD, Federica BOGO, Paul Malcolm MCILROY, Christopher Douglas EDMONDS
-
Publication number: 20200226786
Abstract: In various examples there is an apparatus for detecting position and orientation of an object. The apparatus comprises a memory storing at least one frame of captured sensor data depicting the object. The apparatus also comprises a trained machine learning system configured to receive the frame of the sensor data and to compute a plurality of two dimensional positions in the frame. Each predicted two dimensional position is a position of sensor data in the frame depicting a keypoint, where a keypoint is a pre-specified 3D position relative to the object. At least one of the keypoints is a floating keypoint depicting a pre-specified position relative to the object, lying inside or outside the object's surface. The apparatus comprises a pose detector which computes the three dimensional position and orientation of the object using the predicted two dimensional positions and outputs the computed three dimensional position and orientation.
Type: Application
Filed: March 22, 2019
Publication date: July 16, 2020
Inventors: Andrew William FITZGIBBON, Erroll William WOOD, Jingjing SHEN, Thomas Joseph CASHMAN, Jamie Daniel Joseph SHOTTON
-
Publication number: 20200226765
Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera; it comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
Type: Application
Filed: March 11, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Erroll William WOOD, Michael BLEYER, Christopher Douglas EDMONDS, Michael Scott FENTON, Mark James FINOCCHIO, John Albert JUDNICH