Patents by Inventor Geoffrey Wedig

Geoffrey Wedig has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200005138
    Abstract: Systems and methods are provided for interpolation of disparate inputs. A radial basis function neural network (RBFNN) may be used to interpolate the pose of a digital character. Input parameters to the RBFNN may be separated by data type (e.g., angular vs. linear) and manipulated within the RBFNN by distance functions specific to the data type (e.g., use an angular distance function for the angular input data). A weight may be applied to each distance to compensate for input data representing different variables (e.g., clavicle vs. shoulder). The output parameters of the RBFNN may be a set of independent values, which may be combined into combination values (e.g., representing an x, y, z, w angular value in SO(3) space).
    Type: Application
    Filed: June 19, 2019
    Publication date: January 2, 2020
    Inventor: Geoffrey Wedig
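The core idea of this application — RBF interpolation where each input dimension uses a distance function matched to its data type, plus a per-dimension weight — can be sketched as follows. This is only an illustrative sketch, not the patented implementation: the function names (`interpolate`, `angular_distance`), the Gaussian kernel, and the normalized-activation blending (rather than solving the exact RBF linear system) are all assumptions made for brevity.

```python
import math

def angular_distance(a, b):
    """Shortest distance between two angles in radians (wraps around 2*pi)."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def linear_distance(a, b):
    """Plain absolute distance for linear (non-angular) inputs."""
    return abs(a - b)

def rbf_kernel(r, eps=1.0):
    """Gaussian radial basis function."""
    return math.exp(-(eps * r) ** 2)

def interpolate(query, samples, values, dist_fns, weights):
    """Normalized-RBF blend over mixed-type inputs.

    query    -- tuple of input parameters (mix of angular and linear)
    samples  -- list of sample input tuples (e.g. example poses)
    values   -- output value associated with each sample
    dist_fns -- per-dimension distance function (angular or linear)
    weights  -- per-dimension weight compensating for different variables
    """
    activations = []
    for s in samples:
        # Combine weighted per-dimension distances into one radius,
        # using the distance function appropriate to each data type.
        r = math.sqrt(sum(
            (w * f(q, x)) ** 2
            for q, x, f, w in zip(query, s, dist_fns, weights)))
        activations.append(rbf_kernel(r))
    total = sum(activations)
    return sum(a * v for a, v in zip(activations, values)) / total
```

With one angular joint and one linear input, a query near a stored sample blends toward that sample's output, and the angular dimension wraps correctly across 0/2π.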
  • Publication number: 20190362529
    Abstract: Skinning parameters used to animate a virtual avatar can include mesh weights and joint transforms of a skeleton. Systems and methods are provided for determining skinning parameters using an optimization process subject to constraints based on human-understandable or anatomically-motivated relationships among skeletal joints. Input to the optimization process can include a high-order skeleton and the applied constraints can dynamically change during the optimization. The skinning parameters can be used in linear blend skinning (LBS) applications in augmented reality.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 28, 2019
    Inventors: Geoffrey Wedig, Sean Michael Comer, James Jonathan Bancroft
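The linear blend skinning (LBS) formulation this application builds on, and the kind of constraint the optimization must respect, can be sketched briefly. This is a minimal illustration, not the patented optimization: the helper names (`lbs`, `project_weights`) and the simple clip-and-renormalize projection standing in for the constrained solve are assumptions.

```python
def apply_transform(T, v):
    """Apply a 3x4 affine joint transform (row-major) to a 3-vector."""
    return tuple(
        sum(T[i][j] * v[j] for j in range(3)) + T[i][3]
        for i in range(3))

def lbs(v, transforms, weights):
    """Linear blend skinning: a vertex is the weighted sum of its
    positions under each joint's transform."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to one"
    out = [0.0, 0.0, 0.0]
    for T, w in zip(transforms, weights):
        p = apply_transform(T, v)
        for i in range(3):
            out[i] += w * p[i]
    return tuple(out)

def project_weights(raw):
    """Project raw mesh weights onto the usual LBS constraint set:
    non-negative and summing to one (partition of unity)."""
    clipped = [max(0.0, w) for w in raw]
    s = sum(clipped)
    return [w / s for w in clipped] if s > 0 else [1.0 / len(raw)] * len(raw)
```

For example, a vertex weighted half to an identity joint and half to a joint translated along x ends up halfway between the two transformed positions; the projection step shows one simple way weights violating the constraints can be repaired during optimization.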
  • Publication number: 20190265783
    Abstract: Methods and systems for aligning head scans of a subject for a virtual avatar can be based on locating eyes of the subject in the scans. After one or more eyeball models are fitted to reference candidate points of a sclera of each eyeball of the subject in a reference head scan, an additional reference point can be inferred from the eyeball models. The eyeball models can be fitted to candidate points of the sclera of each eyeball of the subject in another head scan, and an additional point can be inferred from the fitted eyeball models. An affine transformation between the head scans can then be determined from the eyeball models fitted to the candidate points in each head scan and from the inferred additional points. The methods and systems can be used for rigging or animating the virtual avatar.
    Type: Application
    Filed: February 20, 2019
    Publication date: August 29, 2019
    Inventor: Geoffrey Wedig
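The fitting step described here — fitting an eyeball model to sclera candidate points and inferring an additional reference point (such as the eye center) from the fit — can be sketched with an algebraic least-squares sphere fit. This is a simplified stand-in, not the patented method: modeling the eyeball as a perfect sphere, the function names, and the small Gaussian-elimination solver are all assumptions made to keep the example self-contained.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_sphere(points):
    """Least-squares sphere fit to candidate sclera points.

    Uses the algebraic form |p|^2 = 2*c.p + k with k = r^2 - |c|^2,
    which is linear in the unknowns (cx, cy, cz, k). The recovered
    center c is the kind of 'additional reference point' that can be
    inferred from the fitted eyeball model.
    """
    A = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in points]
    b = [x * x + y * y + z * z for x, y, z in points]
    # Normal equations: (A^T A) u = A^T b
    At = list(zip(*A))
    AtA = [[sum(At[i][k] * At[j][k] for k in range(len(points)))
            for j in range(4)] for i in range(4)]
    Atb = [sum(At[i][k] * b[k] for k in range(len(points))) for i in range(4)]
    cx, cy, cz, k = solve(AtA, Atb)
    r = math.sqrt(k + cx * cx + cy * cy + cz * cz)
    return (cx, cy, cz), r
```

Fitting one sphere per eye in each scan yields corresponding centers across scans, from which an aligning transformation between the reference scan and the other scan could then be estimated.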