Patents by Inventor Zexiang Xu
Zexiang Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240177399
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a second neural network to generate a 3D appearance representation of the object.
Type: Application
Filed: January 29, 2024
Publication date: May 30, 2024
Applicant: Adobe Inc.
Inventors: Zexiang XU, Yannick HOLD-GEOFFROY, Milos HASAN, Kalyan SUNKAVALLI, Fanbo XIANG
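The pipeline this abstract describes (one network maps 3D scene points into a 2D texture space, another predicts radiance per texture-space point and viewpoint) can be illustrated with a minimal numpy sketch. The layer sizes, the sigmoid squashing into texture space, and all variable names here are illustrative assumptions, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(dims):
    # Small random MLP: a list of (weight, bias) pairs.
    return [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    for k, (W, b) in enumerate(layers):
        x = x @ W + b
        if k < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

# One network maps 3D scene points to 2D texture coordinates (uv in [0, 1]^2) ...
uv_net = make_mlp([3, 32, 32, 2])
# ... and another maps (uv, view direction) to an RGB radiance value.
radiance_net = make_mlp([5, 32, 32, 3])

points = rng.standard_normal((8, 3))          # sampled 3D surface points
view_dir = np.tile([0.0, 0.0, 1.0], (8, 1))   # a single query viewpoint

uv = 1.0 / (1.0 + np.exp(-forward(uv_net, points)))   # sigmoid keeps uv in [0, 1]
radiance = forward(radiance_net, np.concatenate([uv, view_dir], axis=1))
```

In the actual method both networks would be trained jointly against the captured images; the random weights here only demonstrate the data flow.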
-
Publication number: 20240169653
Abstract: A scene modeling system accesses a three-dimensional (3D) scene including a 3D object. The scene modeling system applies a silhouette bidirectional texture function (SBTF) model to the 3D object to generate an output image of a textured material rendered as a surface of the 3D object. Applying the SBTF model includes determining a bounding geometry for the surface of the 3D object. Applying the SBTF model includes determining, for each pixel of the output image, a pixel value based on the bounding geometry. The scene modeling system displays, via a user interface, the output image based on the determined pixel values.
Type: Application
Filed: November 23, 2022
Publication date: May 23, 2024
Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Fujun Luan, Alexandr Kuznetsov, Xuezheng Wang, Ravi Ramamoorthi
-
Publication number: 20240062495
Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object including a modification to a location of the deformation point and renders an updated video based on the received modification.
Type: Application
Filed: August 21, 2022
Publication date: February 22, 2024
Inventors: Zhixin Shu, Zexiang Xu, Shahrukh Athar, Kalyan Sunkavalli, Elya Shechtman
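A minimal sketch of the deformation-field idea: an MLP maps a canonical 3D point plus pose and expression codes to an offset, so editing an expression coefficient moves the deformation points. The code dimensions, single hidden layer, and random weights are invented for illustration; the patent's 3DMM-guided field is far richer than this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 6-D pose code, 10-D expression code.
W1 = rng.standard_normal((3 + 6 + 10, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 3)) * 0.1
b2 = np.zeros(3)

def deform(points, pose, expression):
    """Map canonical points to deformation points, conditioned on pose/expression."""
    codes = np.tile(np.concatenate([pose, expression]), (len(points), 1))
    x = np.concatenate([points, codes], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)          # one hidden ReLU layer
    return points + h @ W2 + b2               # canonical point + predicted offset

points = rng.standard_normal((16, 3))
pose = np.zeros(6)
expr_neutral = np.zeros(10)
expr_edited = expr_neutral.copy()
expr_edited[0] = 1.0                          # modify one expression coefficient

deformed_neutral = deform(points, pose, expr_neutral)
deformed_edited = deform(points, pose, expr_edited)
```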
-
Patent number: 11887241
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a second neural network to generate a 3D appearance representation of the object.
Type: Grant
Filed: December 22, 2021
Date of Patent: January 30, 2024
Assignee: Adobe Inc.
Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
-
Publication number: 20240013477
Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
Type: Application
Filed: July 9, 2022
Publication date: January 11, 2024
Inventors: Zexiang Xu, Zhixin Shu, Sai Bi, Qiangeng Xu, Kalyan Sunkavalli, Julien Philip
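The two stages the abstract names, aggregating neural-point features near each ray sample and then volume-rendering a color per pixel, can be sketched in numpy. The inverse-distance aggregation and the feature layout (density, r, g, b) are simplifying assumptions, not the patented model:

```python
import numpy as np

def aggregate(sample, pts, feats, radius=0.5):
    """Inverse-distance-weighted blend of neural-point features near a ray sample."""
    d = np.linalg.norm(pts - sample, axis=1)
    near = d < radius
    if not near.any():
        return np.zeros(feats.shape[1])       # empty space: zero density/color
    w = 1.0 / (d[near] + 1e-8)
    return (w / w.sum()) @ feats[near]

def composite(sigmas, colors, deltas):
    """Standard volume-rendering quadrature along one ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return weights @ colors, weights

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (200, 3))            # neural point cloud positions
feats = rng.uniform(0, 1, (200, 4))           # per-point feature: (density, r, g, b)

ray_o, ray_d = np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0])
ts = np.linspace(0.5, 2.5, 32)                # sample depths along the ray
samples = ray_o + ts[:, None] * ray_d

agg = np.stack([aggregate(s, pts, feats) for s in samples])
sigmas, colors = agg[:, 0] * 5.0, agg[:, 1:]
rgb, weights = composite(sigmas, colors, np.full(32, ts[1] - ts[0]))
```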
-
Patent number: 11816779
Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks to represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system includes receiving image metadata for a texel that includes position, incoming and outgoing radiance direction, and a kernel size. The system applies an offset-prediction neural network to the query to identify an offset coordinate for the texel. The system inputs the offset coordinate to a data structure to determine a feature vector for the texel of the textured surface. The reflectance feature vector is then processed using a decoder neural network to estimate a light-reflectance value of the texel, after which the light-reflectance value is used to render the texel of the textured surface.
Type: Grant
Filed: November 30, 2021
Date of Patent: November 14, 2023
Assignees: Adobe Inc., The Regents of the University of California
Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
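A toy sketch of that query flow: an offset-prediction step maps the texel query to an offset coordinate, the offset coordinate indexes a learned feature texture, and a decoder turns the fetched feature into a reflectance value. Single linear layers with random weights and nearest-texel lookup stand in for the real networks and data structure:

```python
import numpy as np

rng = np.random.default_rng(3)

RES, FDIM = 16, 8
feature_texture = rng.standard_normal((RES, RES, FDIM)) * 0.1  # learned data structure

Woff = rng.standard_normal((7, 2)) * 0.05        # offset "network" (one layer here)
Wdec = rng.standard_normal((FDIM + 4, 3)) * 0.1  # decoder to RGB reflectance

def lookup(uv):
    # Nearest-texel fetch (the real system would interpolate).
    ij = np.clip((uv * RES).astype(int), 0, RES - 1)
    return feature_texture[ij[1], ij[0]]

def shade(uv, wi, wo, kernel_size):
    # Query: position, incoming/outgoing direction (projected), kernel size.
    query = np.concatenate([uv, wi[:2], wo[:2], [kernel_size]])  # 7-D
    offset_uv = np.clip(uv + query @ Woff, 0.0, 1.0)             # offset coordinate
    feat = lookup(offset_uv)
    dec_in = np.concatenate([feat, wi[:2], wo[:2]])
    return 1.0 / (1.0 + np.exp(-(dec_in @ Wdec)))                # reflectance in (0, 1)

rgb = shade(np.array([0.3, 0.7]),
            wi=np.array([0.0, 0.0, 1.0]),
            wo=np.array([0.5, 0.0, 0.866]),
            kernel_size=0.1)
```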
-
Publication number: 20230360327
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate three-dimensional hybrid mesh-volumetric representations for digital objects. For instance, in one or more embodiments, the disclosed systems generate a mesh for a digital object from a plurality of digital images that portray the digital object using a multi-view stereo model. Additionally, the disclosed systems determine a set of sample points for a thin volume around the mesh. Using a neural network, the disclosed systems further generate a three-dimensional hybrid mesh-volumetric representation for the digital object utilizing the set of sample points for the thin volume and the mesh.
Type: Application
Filed: May 3, 2022
Publication date: November 9, 2023
Inventors: Sai Bi, Yang Liu, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli
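Determining sample points for a thin volume around a mesh can be sketched by offsetting each vertex along its unit normal; the layer count and shell thickness below are arbitrary illustrative values, not the patented sampling scheme:

```python
import numpy as np

def thin_volume_samples(vertices, normals, n_layers=5, thickness=0.02):
    """Sample a thin shell around the mesh: offset each vertex along its
    unit normal at n_layers evenly spaced signed distances."""
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    offsets = np.linspace(-thickness, thickness, n_layers)
    return vertices[None, :, :] + offsets[:, None, None] * normals[None, :, :]

# A toy "mesh": points on a unit sphere, whose normals equal the positions.
rng = np.random.default_rng(4)
v = rng.standard_normal((100, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
samples = thin_volume_samples(v, v, n_layers=5, thickness=0.02)
```

Every sample stays within the shell thickness of its originating vertex, and the middle layer coincides with the mesh surface itself.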
-
Patent number: 11669986
Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF (spatially varying bidirectional reflectance distribution function) parameterized via multiple channels (e.g., diffuse albedo, surface-roughness, specular albedo, and surface-normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
Type: Grant
Filed: April 16, 2021
Date of Patent: June 6, 2023
Assignees: ADOBE INC., THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
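A simplified per-vertex shading sketch using the channels the abstract lists (diffuse albedo, specular albedo, roughness, plus the surface normal from the geometry model). A Blinn-Phong-style lobe stands in for the actual microfacet SVBRDF evaluation, and all numeric values are illustrative:

```python
import numpy as np

def shade_vertex(svbrdf, n, l, v_dir, light_intensity=1.0):
    """Toy reflectance evaluation: Lambertian diffuse plus a Blinn-Phong-style
    specular lobe whose sharpness is controlled by the roughness channel."""
    n, l, v_dir = (x / np.linalg.norm(x) for x in (n, l, v_dir))
    ndl = max(np.dot(n, l), 0.0)
    h = (l + v_dir) / np.linalg.norm(l + v_dir)          # half vector
    shininess = 2.0 / max(svbrdf["roughness"] ** 2, 1e-4)
    spec = svbrdf["specular_albedo"] * max(np.dot(n, h), 0.0) ** shininess
    return light_intensity * ndl * (svbrdf["diffuse_albedo"] + spec)

# One vertex of the reconstruction model: a value per reflectance channel.
vertex = {"diffuse_albedo": np.array([0.6, 0.3, 0.2]),
          "specular_albedo": np.array([0.04, 0.04, 0.04]),
          "roughness": 0.3}
normal = np.array([0.0, 0.0, 1.0])

# Arbitrary lighting directions, as the abstract describes.
rgb_front = shade_vertex(vertex, normal, l=np.array([0.0, 0.0, 1.0]),
                         v_dir=np.array([0.0, 0.0, 1.0]))
rgb_grazing = shade_vertex(vertex, normal, l=np.array([1.0, 0.0, 0.02]),
                           v_dir=np.array([0.0, 0.0, 1.0]))
```

As expected, frontal lighting yields a brighter result at this vertex than a near-grazing light.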
-
Publication number: 20230169715
Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks to represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system includes receiving image metadata for a texel that includes position, incoming and outgoing radiance direction, and a kernel size. The system applies an offset-prediction neural network to the query to identify an offset coordinate for the texel. The system inputs the offset coordinate to a data structure to determine a feature vector for the texel of the textured surface. The reflectance feature vector is then processed using a decoder neural network to estimate a light-reflectance value of the texel, after which the light-reflectance value is used to render the texel of the textured surface.
Type: Application
Filed: November 30, 2021
Publication date: June 1, 2023
Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
-
Patent number: 11488342
Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes the image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of the material-map type and similarity analysis. The technology described herein may generate synthetic maps to be used in place of the missing material maps. The synthetic maps may be generated using a Generative Adversarial Network (GAN).
Type: Grant
Filed: May 27, 2021
Date of Patent: November 1, 2022
Assignee: ADOBE INC.
Inventors: Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan, Zexiang Xu, Yu-Ying Yeh, Stefano Corazza
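For illustration only, here is a crude heuristic stand-in for the map-type classifier: it guesses a PBR map type from simple image statistics, whereas the patent assigns the type with a trained convolutional neural network. The thresholds and type names are invented:

```python
import numpy as np

def classify_material_map(img):
    """Guess a PBR material-map type from basic image statistics.
    Crude heuristic sketch; the patented approach uses a trained CNN."""
    img = np.asarray(img, dtype=float)          # pixel values assumed in [0, 1]
    if img.ndim == 2:
        gray = True
    else:
        gray = (np.allclose(img[..., 0], img[..., 1])
                and np.allclose(img[..., 1], img[..., 2]))
    if not gray:
        # Tangent-space normal maps are dominated by +Z, encoded in blue.
        if img[..., 2].mean() > 0.7:
            return "normal"
        return "base_color"
    mean = img.mean()
    if mean < 0.1 or mean > 0.9:
        return "metallic"                       # metallic maps are often near-binary
    return "roughness"

label = classify_material_map(np.full((8, 8), 0.42))  # mid-gray single channel
```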
-
Publication number: 20220343522
Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF (spatially varying bidirectional reflectance distribution function) parameterized via multiple channels (e.g., diffuse albedo, surface-roughness, specular albedo, and surface-normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
Type: Application
Filed: April 16, 2021
Publication date: October 27, 2022
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
-
Publication number: 20220335636
Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
Type: Application
Filed: April 15, 2021
Publication date: October 20, 2022
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Milos Hasan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
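Separating geometry and reflectance parameters is what makes relighting possible: the same learned densities and albedos can be re-rendered under a different light without retraining. A toy numpy sketch along one ray, where the linear albedo-times-light shading is a simplifying assumption, not the patented reflectance model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Geometry parameters (densities) and reflectance parameters (albedos) for
# 32 samples along one camera ray, stored separately.
sigmas = rng.uniform(0.0, 2.0, 32)
albedo = rng.uniform(0.0, 1.0, (32, 3))

def render(sigmas, albedo, light_rgb, delta=0.05):
    """Volume-render the ray; relighting just swaps light_rgb (toy shading)."""
    alphas = 1.0 - np.exp(-sigmas * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return weights @ (albedo * light_rgb)

warm = render(sigmas, albedo, np.array([1.0, 0.8, 0.6]))
cool = render(sigmas, albedo, np.array([0.6, 0.8, 1.0]))
```

Scene editing fits the same pattern: modifying `albedo` instead of `light_rgb` changes the rendered reflectance while the geometry stays fixed.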
-
Publication number: 20220198738
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a second neural network to generate a 3D appearance representation of the object.
Type: Application
Filed: December 22, 2021
Publication date: June 23, 2022
Applicant: Adobe Inc.
Inventors: Zexiang XU, Yannick HOLD-GEOFFROY, Milos HASAN, Kalyan SUNKAVALLI, Fanbo XIANG
-
Patent number: 11257284
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Grant
Filed: May 13, 2020
Date of Patent: February 22, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
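The final step the abstract describes, combining per-direction relit images into a target lighting configuration, can be sketched as a weighted sum of the individual renders; the function and variable names here are illustrative:

```python
import numpy as np

def combine_relit(relit_images, weights):
    """Blend per-direction relit images into one target lighting configuration.
    relit_images: (K, H, W, 3) renders; weights: (K,) intensities per direction."""
    out = np.tensordot(np.asarray(weights, dtype=float),
                       np.asarray(relit_images, dtype=float), axes=1)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(5)
img_left = rng.uniform(0, 1, (4, 4, 3))    # object relit from the left
img_right = rng.uniform(0, 1, (4, 4, 3))   # object relit from the right

both = combine_relit([img_left, img_right], [0.5, 0.5])
left_only = combine_relit([img_left, img_right], [1.0, 0.0])
```

Blending relit images linearly is physically justified because light transport is linear in the illumination.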
-
Patent number: 10950037
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images, can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
Type: Grant
Filed: July 12, 2019
Date of Patent: March 16, 2021
Assignee: ADOBE INC.
Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
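The depth-probability step can be sketched as a softmax over negated matching costs in the plane sweep volume, followed by a probability-weighted depth estimate per pixel. The cost volume below is synthetic, and the softmax choice is an illustrative assumption rather than the patent's exact formulation:

```python
import numpy as np

def depth_probabilities(cost_volume):
    """Turn a (D, H, W) matching-cost volume over D depth planes into
    per-pixel depth probabilities via a softmax over negated costs."""
    logits = -cost_volume
    logits = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=0, keepdims=True)

def expected_depth(probs, plane_depths):
    """Per-pixel depth estimate: probability-weighted mean of plane depths."""
    return np.tensordot(plane_depths, probs, axes=1)

rng = np.random.default_rng(6)
D, H, W = 8, 4, 4
costs = rng.uniform(0.5, 1.0, (D, H, W))   # synthetic matching costs
costs[3] = 0.0                             # plane 3 matches best at every pixel

probs = depth_probabilities(costs)
depth = expected_depth(probs, np.linspace(1.0, 2.0, D))
```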
-
Publication number: 20210012561
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images, can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
Type: Application
Filed: July 12, 2019
Publication date: January 14, 2021
Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
-
Publication number: 20200273237
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Application
Filed: May 13, 2020
Publication date: August 27, 2020
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
-
Patent number: 10692276
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Grant
Filed: May 3, 2018
Date of Patent: June 23, 2020
Assignee: ADOBE INC.
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
-
Publication number: 20190340810
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Application
Filed: May 3, 2018
Publication date: November 7, 2019
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap