Patents by Inventor Kalyan Sunkavalli

Kalyan Sunkavalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972534
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps. (A code sketch follows this entry.)
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Maxine Perroni-Scharf, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann
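    Code sketch: a minimal, hypothetical illustration of the retrieval step described in the abstract above, not code from the patent. Here `embed` is a crude stand-in for the visual neural network (a real system would use learned deep features), and cosine similarity stands in for the visual similarity metric.
    ```python
    import numpy as np

    def embed(texture_map):
        """Stand-in for the visual neural network: map a texture map
        (H, W, 3) to a feature vector. Channel mean/std statistics are
        used here purely for illustration."""
        flat = texture_map.reshape(-1, 3)
        return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def replace_materials(scene_maps, source_maps):
        """For each scene texture map, return the most visually similar
        source texture map under the similarity metric."""
        source_feats = [embed(m) for m in source_maps]   # source deep visual features
        out = []
        for m in scene_maps:
            f = embed(m)                                 # scene deep visual feature
            scores = [cosine(f, sf) for sf in source_feats]
            out.append(source_maps[int(np.argmax(scores))])
        return out

    # Toy usage: four random source textures, two scene textures.
    rng = np.random.default_rng(0)
    sources = [rng.random((8, 8, 3)) for _ in range(4)]
    scene = [rng.random((8, 8, 3)) for _ in range(2)]
    replacements = replace_materials(scene, sources)
    ```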
  • Patent number: 11935217
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables (“LUTs”). (A code sketch follows this entry.)
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: March 19, 2024
    Assignee: Adobe Inc.
    Inventors: He Zhang, Yifan Jiang, Yilin Wang, Jianming Zhang, Kalyan Sunkavalli, Sarah Kong, Su Chen, Sohrab Amirghodsi, Zhe Lin
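    Code sketch: a hypothetical rendering of the dual data augmentation idea, crop plus appearance perturbation, for building (input, reference, pseudo ground truth) triplets. The per-channel affine transform below is only a stand-in for sampling a 3D color LUT.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def random_crop(img, size):
        h, w = img.shape[:2]
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        return img[y:y + size, x:x + size]

    def perturb_appearance(img):
        """Stand-in for applying a random 3D color LUT: a per-channel
        affine perturbation of the crop's appearance."""
        gain = rng.uniform(0.6, 1.4, size=3)
        bias = rng.uniform(-0.1, 0.1, size=3)
        return np.clip(img * gain + bias, 0.0, 1.0)

    def make_triplet(image, crop=32):
        gt = random_crop(image, crop)      # pseudo ground truth: original appearance
        inp = perturb_appearance(gt)       # input: same content, perturbed appearance
        ref = random_crop(image, crop)     # reference: carries the target appearance
        return inp, ref, gt

    inp, ref, gt = make_triplet(rng.random((64, 64, 3)))
    ```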
  • Publication number: 20240062495
    Abstract: A scene modeling system receives a video including a plurality of frames corresponding to views of an object and a request to display an editable three-dimensional (3D) scene that corresponds to a particular frame of the plurality of frames. The scene modeling system applies a scene representation model to the particular frame. The scene representation model includes a deformation model configured to generate, for each pixel of the particular frame based on a pose and an expression of the object, a deformation point using a 3D morphable model (3DMM) guided deformation field. The scene representation model also includes a color model configured to determine, for the deformation point, color and volume density values. The scene modeling system receives a modification to one or more of the pose or the expression of the object including a modification to a location of the deformation point and renders an updated video based on the received modification. (A code sketch follows this entry.)
    Type: Application
    Filed: August 21, 2022
    Publication date: February 22, 2024
    Inventors: Zhixin Shu, Zexiang Xu, Shahrukh Athar, Kalyan Sunkavalli, Elya Shechtman
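    Code sketch: a hypothetical, much-simplified version of the two components named in the abstract, a deformation model conditioned on pose and expression codes and a color model returning color and volume density. The network sizes and the 6/10-dimensional pose/expression codes are assumptions, not from the application.
    ```python
    import torch
    from torch import nn

    class DeformationModel(nn.Module):
        """Map a 3D sample point plus pose/expression codes to a deformed
        point (a stand-in for the 3DMM-guided deformation field)."""
        def __init__(self, pose_dim=6, expr_dim=10):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3 + pose_dim + expr_dim, 128), nn.ReLU(),
                nn.Linear(128, 3))

        def forward(self, x, pose, expr):
            return x + self.mlp(torch.cat([x, pose, expr], dim=-1))  # predicted offset

    class ColorModel(nn.Module):
        """Map a deformation point to RGB color and volume density."""
        def __init__(self):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 4))

        def forward(self, p):
            out = self.mlp(p)
            return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])  # rgb, sigma

    # One sample point per pixel of a tiny 4x4 frame.
    pts = torch.randn(16, 3)
    pose, expr = torch.zeros(16, 6), torch.zeros(16, 10)
    rgb, sigma = ColorModel()(DeformationModel()(pts, pose, expr))
    ```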
  • Patent number: 11887241
    Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object. (A code sketch follows this entry.)
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
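    Code sketch: a hypothetical reading of the texture-mapping and radiance steps, one network mapping 3D scene points into a 2D texture space and another predicting view-dependent radiance there. The tiny MLPs are illustrative stand-ins; the patent does not specify architectures here.
    ```python
    import torch
    from torch import nn

    texture_mapper = nn.Sequential(   # 3D scene point -> 2D texture coordinate
        nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2), nn.Sigmoid())

    radiance_net = nn.Sequential(     # (2D texture point, view direction) -> RGB radiance
        nn.Linear(2 + 3, 64), nn.ReLU(), nn.Linear(64, 3), nn.Sigmoid())

    points = torch.randn(1024, 3)     # 3D points on or near the object
    views = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
    uv = texture_mapper(points)                          # map into 2D texture space
    rgb = radiance_net(torch.cat([uv, views], dim=-1))   # radiance per viewpoint
    ```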
  • Publication number: 20240013477
    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value. (A code sketch follows this entry.)
    Type: Application
    Filed: July 9, 2022
    Publication date: January 11, 2024
    Inventors: Zexiang Xu, Zhixin Shu, Sai Bi, Qiangeng Xu, Kalyan Sunkavalli, Julien Philip
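    Code sketch: a hypothetical single-pixel version of neural point volume rendering, aggregating features of nearby neural points at each ray sample, decoding them to color and density, and alpha-compositing front to back. The radius query, mean aggregation, and fixed step size are simplifying assumptions.
    ```python
    import torch

    def render_pixel(ray_samples, cloud_xyz, cloud_feat, decode, radius=0.2, step=0.05):
        """Volume-render one pixel from a neural point cloud.
        ray_samples: (S, 3) ordered sample positions along the ray."""
        rgb_acc, trans = torch.zeros(3), 1.0
        for x in ray_samples:
            near = torch.linalg.norm(cloud_xyz - x, dim=-1) < radius
            if not near.any():
                continue                                  # empty space, skip sample
            rgb, sigma = decode(cloud_feat[near].mean(dim=0))
            alpha = 1.0 - torch.exp(-sigma * step)
            rgb_acc = rgb_acc + trans * alpha * rgb       # accumulate color
            trans = trans * (1.0 - alpha)                 # remaining transmittance
        return rgb_acc

    # Toy decoder: first three feature channels -> color, fourth -> density.
    decode = lambda f: (torch.sigmoid(f[:3]), torch.relu(f[3]))
    cloud_xyz, cloud_feat = torch.randn(500, 3), torch.randn(500, 4)
    ray = torch.tensor([0.0, 0.0, -1.0]) * torch.linspace(0.1, 2.0, 32)[:, None]
    color = render_pixel(ray, cloud_xyz, cloud_feat, decode)
    ```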
  • Publication number: 20230360285
    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters. (A code sketch follows this entry.)
    Type: Application
    Filed: June 26, 2023
    Publication date: November 9, 2023
    Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
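    Code sketch: a toy version of the end-to-end differentiable loop the abstract describes, rendering a base procedural material from its parameters, comparing it to a target image with a style-like loss, and updating the parameters by gradient descent. The two-parameter "material" and the mean/variance loss are stand-ins; a real style loss would compare Gram matrices of deep features.
    ```python
    import torch

    def render(params):
        """Stand-in differentiable procedural material: two parameters
        (base color, noise scale) produce a tiny grayscale image."""
        base, scale = params
        row = torch.sigmoid(base + scale * torch.sin(8 * torch.linspace(0, 1, 16)))
        return row[None, :].repeat(16, 1)

    def style_loss(a, b):
        """Crude appearance comparison via mean and variance."""
        return (a.mean() - b.mean()) ** 2 + (a.var() - b.var()) ** 2

    target = render(torch.tensor([0.8, 2.0]))               # image of target material
    params = torch.tensor([0.0, 0.5], requires_grad=True)   # base material parameters
    opt = torch.optim.Adam([params], lr=0.05)
    for _ in range(200):                                    # optimize the parameters
        opt.zero_grad()
        loss = style_loss(render(params), target)
        loss.backward()
        opt.step()
    ```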
  • Publication number: 20230360327
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate three-dimensional hybrid mesh-volumetric representations for digital objects. For instance, in one or more embodiments, the disclosed systems generate a mesh for a digital object from a plurality of digital images that portray the digital object using a multi-view stereo model. Additionally, the disclosed systems determine a set of sample points for a thin volume around the mesh. Using a neural network, the disclosed systems further generate a three-dimensional hybrid mesh-volumetric representation for the digital object utilizing the set of sample points for the thin volume and the mesh. (A code sketch follows this entry.)
    Type: Application
    Filed: May 3, 2022
    Publication date: November 9, 2023
    Inventors: Sai Bi, Yang Liu, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli
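    Code sketch: a hypothetical version of the sampling step, building a set of sample points in a thin volume around a mesh by offsetting vertices along their normals. Offsetting only at vertices rather than over faces is a simplification.
    ```python
    import numpy as np

    def thin_volume_samples(vertices, normals, shell=0.02, n_offsets=5, seed=0):
        """Sample points within +/- `shell` of the surface along vertex normals.
        vertices, normals: (V, 3). Returns (V * n_offsets, 3)."""
        rng = np.random.default_rng(seed)
        offsets = rng.uniform(-shell, shell, size=(len(vertices), n_offsets, 1))
        return (vertices[:, None, :] + offsets * normals[:, None, :]).reshape(-1, 3)

    verts = np.random.default_rng(1).random((100, 3))
    norms = verts / np.linalg.norm(verts, axis=1, keepdims=True)  # toy normals
    samples = thin_volume_samples(verts, norms)  # input to the neural network
    ```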
  • Patent number: 11783184
    Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image. (A code sketch follows this entry.)
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: October 10, 2023
    Assignee: Adobe Inc.
    Inventors: Federico Perazzi, Zhihao Xia, Michael Gharbi, Kalyan Sunkavalli
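    Code sketch: the kernel-dictionary arithmetic from the abstract with the neural network left out. Each pixel's denoising kernel is its coefficient vector applied to the base kernels, then convolved with the pixel's neighborhood; random coefficients stand in for the network's predictions.
    ```python
    import numpy as np

    def denoise(image, coeffs, dictionary):
        """image: (H, W); coeffs: (H, W, K); dictionary: (K, k, k).
        Per pixel: kernel = sum_k coeffs[k] * dictionary[k]."""
        K, k, _ = dictionary.shape
        r = k // 2
        padded = np.pad(image, r, mode="reflect")
        out = np.empty_like(image)
        for y in range(image.shape[0]):
            for x in range(image.shape[1]):
                kernel = np.tensordot(coeffs[y, x], dictionary, axes=1)  # (k, k)
                kernel /= kernel.sum() + 1e-8                            # normalize
                out[y, x] = (kernel * padded[y:y + k, x:x + k]).sum()
        return out

    rng = np.random.default_rng(0)
    noisy = rng.random((16, 16))
    dictionary = rng.random((4, 5, 5))   # K = 4 base kernels
    coeffs = rng.random((16, 16, 4))     # would come from the neural network
    denoised = denoise(noisy, coeffs, dictionary)
    ```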
  • Patent number: 11776184
    Abstract: The present disclosure provides systems and methods for image editing. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. The original image and the replacement image (e.g., the image including a desirable object or region) include layers of masks. A sky from the replacement image may replace the sky of the image to produce an aesthetically pleasing composite image. (A code sketch follows this entry.)
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: October 3, 2023
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Alan Erickson, I-Ming Pao, Guotong Feng, Kalyan Sunkavalli, Frederick Mandia, Hyunghwan Byun, Betty Leong, Meredith Payne Stotzner, Yukie Takahashi, Quynn Megan Le, Sarah Kong
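    Code sketch: the core mask-layer composite implied by the abstract, assuming the sky mask has already been computed. The soft blend factor is an assumption for illustration.
    ```python
    import numpy as np

    def replace_sky(image, sky_mask, replacement, blend=0.9):
        """Composite a replacement sky into an image via a soft mask layer.
        image, replacement: (H, W, 3) in [0, 1]; sky_mask: (H, W) in [0, 1],
        1 where the original sky is."""
        m = (sky_mask * blend)[..., None]
        return m * replacement + (1.0 - m) * image

    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3))
    mask = np.zeros((32, 32)); mask[:16] = 1.0   # top half is sky
    composite = replace_sky(img, mask, rng.random((32, 32, 3)))
    ```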
  • Publication number: 20230298148
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a dual-branched neural network architecture to harmonize composite images. For example, in one or more implementations, the transformer-based harmonization system uses a convolutional branch and a transformer branch to generate a harmonized composite image based on an input composite image and a corresponding segmentation mask. More particularly, the convolutional branch comprises a series of convolutional neural network layers followed by a style normalization layer to extract localized information from the input composite image. Further, the transformer branch comprises a series of transformer neural network layers to extract global information based on different resolutions of the input composite image. (A code sketch follows this entry.)
    Type: Application
    Filed: March 21, 2022
    Publication date: September 21, 2023
    Inventors: He Zhang, Jianming Zhang, Jose Ignacio Echevarria Vallespi, Kalyan Sunkavalli, Meredith Payne Stotzner, Yinglan Ma, Zhe Lin, Elya Shechtman, Frederick Mandia
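    Code sketch: a hypothetical miniature of the dual-branch architecture, a convolutional branch for localized information (with an instance-norm layer standing in for the style normalization layer) and a transformer branch over coarse patch tokens for global information, fused into a harmonized output. All layer sizes are assumptions.
    ```python
    import torch
    from torch import nn

    class DualBranchHarmonizer(nn.Module):
        def __init__(self, dim=32):
            super().__init__()
            self.conv = nn.Sequential(                       # local branch
                nn.Conv2d(4, dim, 3, padding=1), nn.ReLU(),
                nn.Conv2d(dim, dim, 3, padding=1), nn.InstanceNorm2d(dim))
            self.to_tokens = nn.Conv2d(4, dim, 8, stride=8)  # 8x8 patches -> tokens
            self.transformer = nn.TransformerEncoder(        # global branch
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
                num_layers=2)
            self.out = nn.Conv2d(2 * dim, 3, 1)

        def forward(self, composite, mask):
            x = torch.cat([composite, mask], dim=1)          # image + segmentation mask
            local = self.conv(x)
            t = self.to_tokens(x)
            b, c, h, w = t.shape
            g = self.transformer(t.flatten(2).transpose(1, 2))
            g = g.transpose(1, 2).reshape(b, c, h, w)
            g = nn.functional.interpolate(g, size=local.shape[-2:])
            return torch.sigmoid(self.out(torch.cat([local, g], dim=1)))

    harmonized = DualBranchHarmonizer()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
    ```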
  • Patent number: 11688109
    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: June 27, 2023
    Assignee: Adobe Inc.
    Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
  • Publication number: 20230141395
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Maxine Perroni-Scharf, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann
  • Publication number: 20230122623
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating harmonized digital images utilizing an object-to-object harmonization neural network. For example, the disclosed systems implement, and learn parameters for, an object-to-object harmonization neural network to combine a style code from a reference object with features extracted from a target object. Indeed, the disclosed systems extract a style code from a reference object utilizing a style encoder neural network. In addition, the disclosed systems generate a harmonized target object by applying the style code of the reference object to a target object utilizing an object-to-object harmonization neural network. (A code sketch follows this entry.)
    Type: Application
    Filed: October 18, 2021
    Publication date: April 20, 2023
    Inventors: He Zhang, Jeya Maria Jose Valanarasu, Jianming Zhang, Jose Ignacio Echevarria Vallespi, Kalyan Sunkavalli, Yilin Wang, Yinglan Ma, Zhe Lin, Zijun Wei
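    Code sketch: a hypothetical reduction of the pipeline, a style encoder producing a style code from the reference object, applied to target-object features with AdaIN-style scale-and-shift modulation (one plausible way to "apply" a style code; the publication does not pin this down).
    ```python
    import torch
    from torch import nn

    class StyleEncoder(nn.Module):
        """Extract a style code from a reference object crop."""
        def __init__(self, dim=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 2 * dim))                 # per-channel scale + shift

        def forward(self, ref):
            return self.net(ref)

    def apply_style(target_feats, style_code):
        """Re-normalize target features, then modulate with the style code."""
        dim = target_feats.shape[1]
        scale = style_code[:, :dim, None, None]
        shift = style_code[:, dim:, None, None]
        mu = target_feats.mean(dim=(2, 3), keepdim=True)
        sd = target_feats.std(dim=(2, 3), keepdim=True) + 1e-5
        return scale * (target_feats - mu) / sd + shift

    code = StyleEncoder()(torch.rand(1, 3, 32, 32))              # reference object
    harmonized = apply_style(torch.rand(1, 16, 32, 32), code)    # target features
    ```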
  • Publication number: 20230098115
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps. (A code sketch follows this entry.)
    Type: Application
    Filed: December 6, 2022
    Publication date: March 30, 2023
    Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
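    Code sketch: a hypothetical skeleton of the compact network shape the abstract describes, common layers shared by all outputs plus small parameter-specific heads, each predicting one 3D lighting parameter per light source. The particular parameters (position, intensity, size) and the layer sizes are assumptions.
    ```python
    import torch
    from torch import nn

    class SourceSpecificLightEstimator(nn.Module):
        def __init__(self, n_sources=3):
            super().__init__()
            self.trunk = nn.Sequential(                       # common network layers
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())

            def head(out_dim):                                # parameter-specific layers
                return nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                                     nn.Linear(32, n_sources * out_dim))

            self.position, self.intensity, self.size = head(3), head(3), head(1)

        def forward(self, image):
            f = self.trunk(image)
            return {"position": self.position(f),    # 3D direction per source
                    "intensity": self.intensity(f),  # RGB intensity per source
                    "size": self.size(f)}            # angular size per source

    params = SourceSpecificLightEstimator()(torch.rand(1, 3, 128, 128))
    ```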
  • Publication number: 20230037282
    Abstract: Systems and methods for image editing are described. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. According to some embodiments described herein, real-time color harmonization based on the visible sky region may be used to produce more natural colorization. In some examples, horizon-aware sky alignment and placement with advanced padding may also be used. For example, the horizons of the original image and the replacement image may be automatically detected and aligned, and color harmonization may be performed based on the aligned images. (A code sketch follows this entry.)
    Type: Application
    Filed: April 16, 2021
    Publication date: February 2, 2023
    Inventors: Alan Erickson, Kalyan Sunkavalli, I-Ming Pao, Guotong Feng, Jianming Zhang, Frederick Mandia
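    Code sketch: a hypothetical take on the two steps this abstract adds over plain sky replacement, shifting the replacement sky so the detected horizons line up, then nudging the foreground's colors toward the visible sky's mean. `np.roll` wraps at the border, whereas the described system uses padding; horizon rows are taken as given here rather than detected.
    ```python
    import numpy as np

    def align_and_harmonize(image, sky_mask, repl, horizon_img, horizon_repl, k=0.3):
        """Align the replacement sky's horizon row to the image's, then
        color-harmonize the non-sky region before compositing."""
        repl = np.roll(repl, horizon_img - horizon_repl, axis=0)  # horizon alignment
        m = sky_mask.astype(bool)
        shift = repl[m].mean(axis=0) - image[m].mean(axis=0)      # visible-sky color gap
        fg = np.clip(image + k * shift, 0.0, 1.0)                 # harmonized foreground
        return np.where(m[..., None], repl, fg)

    rng = np.random.default_rng(0)
    img, repl = rng.random((64, 64, 3)), rng.random((64, 64, 3))
    mask = np.zeros((64, 64)); mask[:30] = 1                       # sky above row 30
    result = align_and_harmonize(img, mask, repl, horizon_img=30, horizon_repl=24)
    ```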
  • Publication number: 20230005197
    Abstract: The present disclosure provides systems and methods for image editing. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. The original image and the replacement image (e.g., the image including a desirable object or region) include layers of masks. A sky from the replacement image may replace the sky of the image to produce an aesthetically pleasing composite image.
    Type: Application
    Filed: March 17, 2021
    Publication date: January 5, 2023
    Inventors: Jianming Zhang, Alan Erickson, I-Ming Pao, Guotong Feng, Kalyan Sunkavalli, Frederick Mandia, Hyunghwan Byun, Betty Leong, Meredith Payne Stotzner, Yukie Takahashi, Quynn Megan Le, Sarah Kong
  • Patent number: 11538216
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: December 27, 2022
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
  • Publication number: 20220335671
    Abstract: Systems and methods for image editing are described. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. According to some embodiments described herein, real-time color harmonization based on the visible sky region may be used to produce more natural colorization. In some examples, horizon-aware sky alignment and placement with advanced padding may also be used. For example, the horizons of the original image and the replacement image may be automatically detected and aligned, and color harmonization may be performed based on the aligned images.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 20, 2022
    Inventors: Alan Erickson, Kalyan Sunkavalli, I-Ming Pao, Guotong Feng, Jianming Zhang, Frederick Mandia
  • Publication number: 20220301243
    Abstract: The present disclosure provides systems and methods for image editing. Embodiments of the present disclosure provide an image editing system for performing image object replacement or image region replacement (e.g., an image editing system for replacing an object or region of an image with an object or region from another image). For example, the image editing system may replace a sky portion of an image with a more desirable sky portion from a different replacement image. The original image and the replacement image (e.g., the image including a desirable object or region) include layers of masks. A sky from the replacement image may replace the sky of the image to produce an aesthetically pleasing composite image.
    Type: Application
    Filed: March 17, 2021
    Publication date: September 22, 2022
    Inventors: Jianming Zhang, Alan Erickson, I-Ming Pao, Guotong Feng, Kalyan Sunkavalli, Frederick Mandia, Hyunghwan Byun, Betty Leong, Meredith Payne Stotzner, Yukie Takahashi, Quynn Megan Le, Sarah Kong
  • Publication number: 20220292654
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables (“LUTs”).
    Type: Application
    Filed: March 12, 2021
    Publication date: September 15, 2022
    Inventors: He Zhang, Yifan Jiang, Yilin Wang, Jianming Zhang, Kalyan Sunkavalli, Sarah Kong, Su Chen, Sohrab Amirghodsi, Zhe Lin