Patents by Inventor Zexiang Xu

Zexiang Xu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062495
    Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame and based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model also includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object, including a modification to a location of the deformation point, and renders an updated video based on the received modification.
    Type: Application
    Filed: August 21, 2022
    Publication date: February 22, 2024
    Inventors: Zhixin Shu, Zexiang Xu, Shahrukh Athar, Kalyan Sunkavalli, Elya Shechtman
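The two-model structure described in this abstract (a 3DMM-guided deformation field feeding a color/density model) can be pictured with a minimal PyTorch sketch. The class names, MLP sizes, and pose/expression code widths below are illustrative assumptions, not the patent's.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128, depth=3):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class DeformationModel(nn.Module):
    """Predicts a 3D offset for each sample point, conditioned on 3DMM
    pose and expression codes (the "3DMM-guided deformation field")."""
    def __init__(self, pose_dim=6, expr_dim=50):
        super().__init__()
        self.net = mlp(3 + pose_dim + expr_dim, 3)

    def forward(self, points, pose, expr):
        codes = torch.cat([pose, expr], dim=-1).expand(points.shape[0], -1)
        return points + self.net(torch.cat([points, codes], dim=-1))

class ColorModel(nn.Module):
    """Returns RGB color and volume density for each deformation point."""
    def __init__(self):
        super().__init__()
        self.net = mlp(3, 4)

    def forward(self, deformed_points):
        out = self.net(deformed_points)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

# Editing the scene amounts to changing pose/expression and re-rendering:
points = torch.rand(1024, 3)                    # samples along camera rays
pose, expr = torch.zeros(1, 6), torch.zeros(1, 50)
deformed = DeformationModel()(points, pose, expr)
rgb, density = ColorModel()(deformed)
```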
  • Patent number: 11887241
    Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
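A rough sketch of the three-network pipeline this abstract names: one network for the volumetric representation, one mapping 3D points to 2D texture coordinates, and one predicting view-dependent radiance in texture space. All module names and layer sizes here are assumptions for illustration.

```python
import torch
import torch.nn as nn

def mlp(i, o, h=128):
    return nn.Sequential(nn.Linear(i, h), nn.ReLU(),
                         nn.Linear(h, h), nn.ReLU(), nn.Linear(h, o))

class NeuralTextureMapping(nn.Module):
    """Three cooperating networks: volumetric density, 3D-to-2D texture
    mapping, and radiance defined over the 2D texture space."""
    def __init__(self):
        super().__init__()
        self.volume_net = mlp(3, 1)        # 3D point -> density
        self.mapping_net = mlp(3, 2)       # 3D point -> (u, v) texture coords
        self.radiance_net = mlp(2 + 3, 3)  # (u, v) + view dir -> RGB

    def forward(self, points, view_dirs):
        density = torch.relu(self.volume_net(points))
        uv = torch.sigmoid(self.mapping_net(points))
        rgb = torch.sigmoid(self.radiance_net(torch.cat([uv, view_dirs], -1)))
        return density, uv, rgb

model = NeuralTextureMapping()
pts = torch.rand(256, 3)
dirs = torch.nn.functional.normalize(torch.randn(256, 3), dim=-1)
density, uv, rgb = model(pts, dirs)
```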
  • Publication number: 20240013477
    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
    Type: Application
    Filed: July 9, 2022
    Publication date: January 11, 2024
    Inventors: Zexiang Xu, Zhixin Shu, Sai Bi, Qiangeng Xu, Kalyan Sunkavalli, Julien Philip
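The neural point volume rendering step in this abstract can be illustrated as follows: each ray sample aggregates features of nearby neural points, a decoder turns the aggregated feature into color and density, and the samples are alpha-composited. A minimal sketch assuming PyTorch, inverse-distance feature aggregation, and a toy decoder; none of these specifics come from the patent.

```python
import torch

def decode(feat):
    """Toy stand-in for the learned decoder: feature -> (rgb, density)."""
    return torch.sigmoid(feat[:, :3]), torch.relu(feat[:, 3])

def render_pixel(sample_pts, deltas, point_xyz, point_feat, k=8):
    # Aggregate the k nearest neural points for every ray sample,
    # weighting their features by inverse distance.
    dist, idx = torch.cdist(sample_pts, point_xyz).topk(k, largest=False)
    w = 1.0 / (dist + 1e-6)
    w = w / w.sum(-1, keepdim=True)
    feat = (w.unsqueeze(-1) * point_feat[idx]).sum(dim=1)    # (S, F)
    rgb, sigma = decode(feat)
    # Standard emission-absorption volume rendering along the ray.
    alpha = 1.0 - torch.exp(-sigma * deltas)                 # (S,)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)          # pixel color

# 64 samples along one camera ray, 5,000 neural points with 4-D features:
samples = torch.linspace(0, 1, 64).unsqueeze(-1) * torch.tensor([[0., 0., 1.]])
color = render_pixel(samples, torch.full((64,), 1 / 64),
                     torch.rand(5000, 3), torch.rand(5000, 4))
```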
  • Patent number: 11816779
    Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks that represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system receives image metadata for a texel that includes position, incoming and outgoing radiance directions, and a kernel size. The system applies an offset-prediction neural network to this query to identify an offset coordinate for the texel. The system inputs the offset coordinate to a data structure to determine a reflectance feature vector for the texel of the textured surface. The reflectance feature vector is then processed using a decoder neural network to estimate a light-reflectance value for the texel, which is used to render the texel of the textured surface.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: November 14, 2023
    Assignees: Adobe Inc., The Regents of the University of California
    Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
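Read as a pipeline, the abstract describes: query (position, directions, kernel size) -> offset network -> feature lookup in a learned data structure -> decoder -> reflectance. Below is a hedged PyTorch sketch of that flow, assuming the data structure is a learned mip-style feature texture pyramid sampled with grid_sample; the architecture details are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralMaterial(nn.Module):
    """Offset network shifts the query position, the shifted coordinate
    fetches a feature from a learned texture pyramid (kernel size picks
    the level), and a decoder maps the feature plus light/view
    directions to a reflectance value."""
    def __init__(self, feat_dim=8, res=256, levels=5):
        super().__init__()
        self.offset_net = nn.Sequential(nn.Linear(2 + 3, 32), nn.ReLU(),
                                        nn.Linear(32, 2))
        # One learned feature texture per scale (a crude mip pyramid).
        self.textures = nn.ParameterList(
            nn.Parameter(torch.randn(1, feat_dim, res >> i, res >> i))
            for i in range(levels))
        self.decoder = nn.Sequential(nn.Linear(feat_dim + 6, 64), nn.ReLU(),
                                     nn.Linear(64, 3))

    def forward(self, uv, wi, wo, kernel_size):
        uv = uv + self.offset_net(torch.cat([uv, wo], -1))   # predicted offset
        level = min(int(kernel_size), len(self.textures) - 1)
        grid = (uv * 2 - 1).view(1, -1, 1, 2)                # lookup coords
        feat = F.grid_sample(self.textures[level], grid,
                             align_corners=False).squeeze(-1)[0].T
        return self.decoder(torch.cat([feat, wi, wo], -1))   # reflectance

mat = NeuralMaterial()
uv = torch.rand(16, 2)
wi = F.normalize(torch.randn(16, 3), dim=-1)   # incoming direction
wo = F.normalize(torch.randn(16, 3), dim=-1)   # outgoing direction
rgb = mat(uv, wi, wo, kernel_size=2)
```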
  • Publication number: 20230360327
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate three-dimensional hybrid mesh-volumetric representations for digital objects. For instance, in one or more embodiments, the disclosed systems generate a mesh for a digital object from a plurality of digital images that portray the digital object using a multi-view stereo model. Additionally, the disclosed systems determine a set of sample points for a thin volume around the mesh. Using a neural network, the disclosed systems further generate a three-dimensional hybrid mesh-volumetric representation for the digital object utilizing the set of sample points for the thin volume and the mesh.
    Type: Application
    Filed: May 3, 2022
    Publication date: November 9, 2023
    Inventors: Sai Bi, Yang Liu, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli
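The "thin volume around the mesh" sampling step can be pictured in a few lines of NumPy: offset each vertex along its normal within a small shell, producing the sample points that the neural network would then turn into the hybrid mesh-volumetric representation. The shell width and layer count below are made-up values.

```python
import numpy as np

def thin_volume_samples(vertices, normals, shell=0.01, n_layers=4):
    """Sample points in a thin volume around a mesh by offsetting each
    vertex along its normal, from -shell to +shell."""
    offsets = np.linspace(-shell, shell, n_layers)           # (L,)
    # (L, V, 3): one shifted copy of the surface per layer.
    return vertices[None] + offsets[:, None, None] * normals[None]

# Toy "mesh": unit-sphere vertices, whose normals equal their positions.
verts = np.random.randn(1000, 3)
verts /= np.linalg.norm(verts, axis=1, keepdims=True)
samples = thin_volume_samples(verts, verts)    # shape (4, 1000, 3)
```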
  • Patent number: 11669986
    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially-varying bidirectional reflectance distribution function (SVBRDF) parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: June 6, 2023
    Assignees: Adobe Inc., The Regents of the University of California
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
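To make the per-vertex SVBRDF channels concrete, here is a small NumPy sketch that shades one vertex from its diffuse albedo, specular albedo, roughness, and normal under a directional light. The Blinn-Phong-style lobe is a stand-in shading model, not the patent's formulation.

```python
import numpy as np

def shade_vertex(svbrdf, light_dir, view_dir, light_rgb):
    """Evaluate a simple BRDF from the per-vertex channels named in the
    abstract: diffuse albedo, roughness, specular albedo, and normal."""
    n = svbrdf["normal"] / np.linalg.norm(svbrdf["normal"])
    h = light_dir + view_dir                   # half vector
    h /= np.linalg.norm(h)
    n_dot_l = max(np.dot(n, light_dir), 0.0)
    # Roughness controls the width of the specular lobe.
    shininess = 2.0 / (svbrdf["roughness"] ** 2 + 1e-4) - 2.0
    spec = svbrdf["specular"] * max(np.dot(n, h), 0.0) ** shininess
    return light_rgb * n_dot_l * (svbrdf["diffuse"] / np.pi + spec)

vertex = {"diffuse": np.array([0.8, 0.2, 0.2]),    # per-vertex channels
          "specular": np.array([0.04, 0.04, 0.04]),
          "roughness": 0.3,
          "normal": np.array([0.0, 0.0, 1.0])}
l = np.array([0.0, 0.6, 0.8])                      # arbitrary light direction
v = np.array([0.0, 0.0, 1.0])                      # arbitrary viewpoint
rgb = shade_vertex(vertex, l, v, light_rgb=np.ones(3))
```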
  • Publication number: 20230169715
    Abstract: Methods and systems disclosed herein relate generally to surface-rendering neural networks that represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system receives image metadata for a texel that includes position, incoming and outgoing radiance directions, and a kernel size. The system applies an offset-prediction neural network to this query to identify an offset coordinate for the texel. The system inputs the offset coordinate to a data structure to determine a reflectance feature vector for the texel of the textured surface. The reflectance feature vector is then processed using a decoder neural network to estimate a light-reflectance value for the texel, which is used to render the texel of the textured surface.
    Type: Application
    Filed: November 30, 2021
    Publication date: June 1, 2023
    Inventors: Krishna Bhargava Mullia Lakshminarayana, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi, Alexandr Kuznetsov
  • Patent number: 11488342
    Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes the image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of the material-map type and similarity analysis. The technology described herein may generate synthetic maps to be used in place of the missing material maps. The synthetic maps may be generated using a Generative Adversarial Network (GAN).
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: November 1, 2022
    Assignee: Adobe Inc.
    Inventors: Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan, Zexiang Xu, Yu-Ying Yeh, Stefano Corazza
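The classification step in the abstract (a CNN inspects an unknown material map's image content and assigns a map type) might look like the following PyTorch sketch. The map-type vocabulary and network shape are assumptions; synthesizing missing maps with a GAN would be a separate model.

```python
import torch
import torch.nn as nn

MAP_TYPES = ["base_color", "normal", "roughness", "metallic", "height"]

class MapTypeClassifier(nn.Module):
    """Small CNN that predicts a PBR map type from image content."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, len(MAP_TYPES))

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

clf = MapTypeClassifier()
unknown_map = torch.rand(1, 3, 128, 128)   # an unlabeled texture image
label = MAP_TYPES[clf(unknown_map).argmax(dim=1).item()]
```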
  • Publication number: 20220343522
    Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (together, an object reconstruction model) for a physical object, based on a sparse set of images of the object from a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is a spatially-varying bidirectional reflectance distribution function (SVBRDF) parameterized via multiple channels (e.g., diffuse albedo, surface roughness, specular albedo, and surface normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 27, 2022
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
  • Publication number: 20220335636
    Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
    Type: Application
    Filed: April 15, 2021
    Publication date: October 20, 2022
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Milos Hasan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
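A sketch of the "separate geometry and reflectance parameters" idea in this abstract: a network maps a 3D point to density plus normal, albedo, and roughness, and relighting re-evaluates a shading function with a new light instead of querying baked-in radiance. The layer sizes, channel layout, and diffuse-only shading below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReflectanceVolume(nn.Module):
    """For each 3D point, return geometry (density, normal) and
    reflectance (albedo, roughness) instead of view-dependent color."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 1 + 3 + 3 + 1))

    def forward(self, pts):
        out = self.net(pts)
        density = torch.relu(out[:, :1])
        normal = nn.functional.normalize(out[:, 1:4], dim=-1)
        albedo = torch.sigmoid(out[:, 4:7])
        roughness = torch.sigmoid(out[:, 7:])
        return density, normal, albedo, roughness

def relight(albedo, normal, light_dir, light_rgb):
    # Diffuse shading under a directional light: relighting is just
    # re-evaluating this with a different light_dir / light_rgb.
    n_dot_l = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0)
    return albedo * light_rgb * n_dot_l

vol = ReflectanceVolume()
density, normal, albedo, rough = vol(torch.rand(512, 3))
rgb = relight(albedo, normal, torch.tensor([0., 0., 1.]), torch.ones(3))
```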
  • Publication number: 20220198738
    Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 23, 2022
    Applicant: Adobe Inc.
    Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
  • Patent number: 11257284
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: February 22, 2022
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
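The last sentence of this abstract (combining several single-direction relit images into one target lighting configuration) is easy to show in code: run the network once per target direction, then take a weighted sum. The toy network below stands in for the object relighting neural network; its architecture is invented for illustration.

```python
import torch
import torch.nn as nn

class RelightingNet(nn.Module):
    """Consumes a sparse set of input images (stacked on the channel axis)
    plus a target light direction; emits an image lit from that direction."""
    def __init__(self, n_inputs=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * n_inputs + 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, images, light_dir):
        b, _, _, h, w = images.shape
        d = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
        return self.net(torch.cat([images.flatten(1, 2), d], dim=1))

net = RelightingNet()
inputs = torch.rand(1, 5, 3, 64, 64)        # 5 input lighting directions
# Relight under two target directions, then blend: the "target lighting
# configuration" is a weighted combination of single directions.
img_a = net(inputs, torch.tensor([[0.0, 0.0, 1.0]]))
img_b = net(inputs, torch.tensor([[0.7, 0.0, 0.7]]))
composite = 0.6 * img_a + 0.4 * img_b
```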
  • Patent number: 10950037
    Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: March 16, 2021
    Assignee: Adobe Inc.
    Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
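A sketch of the plane sweep volume construction this abstract relies on: warp each sparse input image onto a stack of depth planes in the target view, then stack the warps so a network can score per-pixel depth probabilities. The identity warp grids below are placeholders for the real depth-dependent homographies.

```python
import torch

def plane_sweep_volume(images, warp_grids, depths):
    """Warp every input image onto each depth plane of the target view.
    warp_grids holds one (1, H, W, 2) grid_sample grid per depth plane."""
    planes = []
    for depth, grid in zip(depths, warp_grids):
        # In a real pipeline, `depth` determines this plane's homography.
        warped = torch.nn.functional.grid_sample(
            images, grid.expand(images.shape[0], -1, -1, -1),
            align_corners=False)
        planes.append(warped)
    return torch.stack(planes, dim=1)          # (N, D, 3, H, W)

imgs = torch.rand(4, 3, 32, 32)                # sparse input images
depths = torch.linspace(1.0, 5.0, 8)
# Identity warps as placeholders for depth-plane homographies.
ident = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, 32), torch.linspace(-1, 1, 32),
    indexing="xy"), dim=-1).unsqueeze(0)
psv = plane_sweep_volume(imgs, [ident] * 8, depths)
# A network head would map PSV features to per-pixel depth probabilities;
# random values here only illustrate the output shape (H, W, D):
depth_prob = torch.softmax(torch.rand(32, 32, 8), dim=-1)
```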
  • Publication number: 20210012561
    Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 14, 2021
    Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
  • Publication number: 20200273237
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Application
    Filed: May 13, 2020
    Publication date: August 27, 2020
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
  • Patent number: 10692276
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: June 23, 2020
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
  • Publication number: 20190340810
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Application
    Filed: May 3, 2018
    Publication date: November 7, 2019
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap