Abstract: The model generation system may generate texture maps for a texture based on a material image. A material image is an image of a physical material that describes the color of the physical material (e.g., in the red-green-blue (RGB) color model). The model generation system may identify a material class for the physical material depicted in the material image by applying a machine-learning model to the material image. The model generation system may then identify a texture map model that generates texture maps for the physical material based on the material image. The texture map model is a machine-learning model that is trained to generate texture maps for material images of a particular material class. The texture maps generated by the texture map model may include maps of standard texture values, such as metalness and roughness.
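The class-then-model dispatch described above could be sketched as follows. This is a hypothetical illustration only: the classifier, the per-class texture map models, and all names (`classify_material`, `TEXTURE_MAP_MODELS`, etc.) are stand-ins for the trained machine-learning models the abstract describes, not part of the original system.

```python
# Hypothetical sketch of class-based texture map model dispatch.

def classify_material(image):
    # Stand-in for the machine-learning classifier: here we just
    # threshold the mean pixel intensity of a grayscale image.
    mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return "metal" if mean > 0.5 else "fabric"

def metal_texture_maps(image):
    # A real texture map model would be a trained network; these
    # constants merely illustrate the output format.
    return {"metalness": 0.9, "roughness": 0.2}

def fabric_texture_maps(image):
    return {"metalness": 0.0, "roughness": 0.8}

# One texture map model per material class.
TEXTURE_MAP_MODELS = {
    "metal": metal_texture_maps,
    "fabric": fabric_texture_maps,
}

def generate_texture_maps(image):
    # Identify the material class, then apply the texture map model
    # trained for that class to the material image.
    material_class = classify_material(image)
    model = TEXTURE_MAP_MODELS[material_class]
    return material_class, model(image)
```

The key design point is that each texture map model is specialized to one material class, so the classifier's output selects which model is applied.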
Abstract: The model generation system may receive an object image and identify the parts of the object depicted in the object image. The model generation system may determine whether it has stored a part model that corresponds to an identified part. To do so, the model generation system may compare the portion of the object image that corresponds to the part to model images of a set of part models. The model generation system may identify the part model that best corresponds to the identified part based on similarity scores between the model images associated with each part model and the portion of the object image associated with the identified part. The model generation system may perform this process for each part of the object and then assemble an object model based on the part models for each part of the object.
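The part-matching step above could look like the following sketch. All of it is hypothetical: the similarity score here is a toy negative sum of squared differences over flattened image vectors, and the "object model" is reduced to a list of chosen part model names.

```python
# Hypothetical sketch of selecting part models by similarity score.

def similarity(patch, model_image):
    # Toy similarity score: negative sum of squared differences
    # between a flattened image patch and a flattened model image.
    return -sum((p - m) ** 2 for p, m in zip(patch, model_image))

def best_part_model(patch, part_models):
    # part_models maps a part model name to its model images. Score
    # each part model by its best-matching model image and return
    # the name of the highest-scoring part model.
    return max(
        part_models,
        key=lambda name: max(similarity(patch, img) for img in part_models[name]),
    )

def assemble_object_model(part_patches, part_models):
    # Repeat the matching process for each detected part, then
    # assemble the object model from the chosen part models.
    return [best_part_model(patch, part_models) for patch in part_patches]
```

A real system would compare 2-D image regions with a learned similarity metric; the structure (score every stored part model, keep the best, repeat per part) is the point of the sketch.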
Abstract: A model generation system generates three-dimensional object models based on two-dimensional images of an object. The model generation system can apply an iterative gradient descent process to model parameters for part models within an object model to compute a final set of model parameter values to generate the object model. To compute the final set of model parameter values, the model generation system generates a reference image of the object model and compares the reference image to a received image. The model generation system uses a differentiable error function to score the reference image against the received image. The model generation system updates the set of model parameter values based on the score for the reference image, and iteratively repeats the process until a reference image is sufficiently similar to the received image.
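The render-score-update loop can be sketched in miniature. Everything here is an assumption for illustration: the "renderer" collapses to a flat image driven by a single model parameter, and the differentiable error function is a mean squared difference whose gradient is computed analytically.

```python
# Hypothetical sketch of the iterative gradient descent fitting loop.

def render(params):
    # Stand-in renderer: the reference image is a flat 4-pixel image
    # whose brightness equals the single model parameter.
    return [params[0]] * 4

def error(reference, received):
    # Differentiable error function: mean squared difference
    # between the reference image and the received image.
    return sum((r, t) == () or (r - t) ** 2 for r, t in zip(reference, received)) / len(received)

def fit(received, params, lr=0.1, steps=200, tol=1e-8):
    # Render a reference image, score it against the received image,
    # update the parameters, and repeat until sufficiently similar.
    for _ in range(steps):
        n = len(received)
        # Analytic gradient of the mean squared error w.r.t. params[0].
        grad = 2 * sum(render(params)[i] - received[i] for i in range(n)) / n
        params = [params[0] - lr * grad]
        if error(render(params), received) < tol:
            break
    return params
```

A real implementation would backpropagate through a differentiable renderer over many part model parameters; the loop structure (render, score with a differentiable error, update, repeat until convergence) is what the abstract describes.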
Abstract: The model generation system may generate a 3D object model based on computer-aided design (CAD) data describing an object. The received CAD data may contain a set of surfaces for the object. Each surface may be described by a surface equation that describes the shape of the surface in a 3D space. The model generation system may extract those surface equations from the CAD data and generate field lines and equipotential lines. The field lines may be lines that are tangent to the gradient vector field of the surface, and the equipotential lines may be lines along the surface that designate points that have the same potential within the gradient vector field. The model generation system may use the field lines and the equipotential lines to generate quadrangular tessellations for a 3D object model for the object described by the CAD data.
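The relationship between field lines and equipotential lines can be illustrated with a minimal sketch. This assumes a hypothetical 2-D potential (a real system would extract surface equations from the CAD data), traces a field line with simple Euler steps along the numeric gradient, and checks equipotential membership by comparing potential values.

```python
# Hypothetical sketch of field lines and equipotential lines
# for a simple 2-D potential.

def potential(x, y):
    # Example potential; stand-in for a surface equation from CAD data.
    return x ** 2 + y ** 2

def gradient(x, y, h=1e-5):
    # Central-difference gradient of the potential field.
    gx = (potential(x + h, y) - potential(x - h, y)) / (2 * h)
    gy = (potential(x, y + h) - potential(x, y - h)) / (2 * h)
    return gx, gy

def trace_field_line(x, y, step=0.01, n=100):
    # Field lines are tangent to the gradient vector field, so we
    # march along the gradient with Euler steps.
    points = [(x, y)]
    for _ in range(n):
        gx, gy = gradient(x, y)
        x, y = x + step * gx, y + step * gy
        points.append((x, y))
    return points

def on_same_equipotential(p, q, tol=1e-6):
    # Points on one equipotential line share the same potential value.
    return abs(potential(*p) - potential(*q)) < tol
```

For this potential the equipotential lines are circles and the field lines are rays from the origin; the two families cross at right angles, which is what makes them a natural scaffold for quadrangular tessellation.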