Patents by Inventor Kalyan Sunkavalli

Kalyan Sunkavalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11443412
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: September 13, 2022
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
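The two-phase training in 11443412 is easier to follow in code. Below is a minimal PyTorch sketch, not the patented architecture: the module sizes, losses, and data are illustrative assumptions. Phase one trains the encoder and a light-mask decoder on LDR images; phase two fine-tunes the same decoder on HDR intensities to yield the intensity decoder.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encodes an input image into an intermediate representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class IntensityDecoder(nn.Module):
    """Decodes the intermediate representation into a light intensity map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), IntensityDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

# Phase 1: train as a light-mask predictor on LDR images (binary masks).
ldr_batch = torch.rand(4, 3, 64, 64)
mask_gt = torch.rand(4, 1, 64, 64).round()
loss = nn.functional.binary_cross_entropy_with_logits(
    decoder(encoder(ldr_batch)), mask_gt)
opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tune on HDR intensities (regression) to obtain the
# intensity decoder from the light-mask decoder.
hdr_batch = torch.rand(4, 3, 64, 64)
intensity_gt = torch.rand(4, 1, 64, 64) * 10.0
loss = nn.functional.mse_loss(decoder(encoder(hdr_batch)), intensity_gt)
opt.zero_grad(); loss.backward(); opt.step()
```
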
  • Publication number: 20220198738
    Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 23, 2022
    Applicant: Adobe Inc.
    Inventors: Zexiang XU, Yannick HOLD-GEOFFROY, Milos HASAN, Kalyan SUNKAVALLI, Fanbo XIANG
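The abstract of 20220198738 chains three networks together. The sketch below is a schematic stand-in, with layer widths, activations, and shapes all assumed for illustration: one MLP for volume density, one mapping 3D points to 2D texture coordinates, and one producing view-dependent radiance in texture space.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                         nn.Linear(width, d_out))

# First network: volumetric density of the scene at a 3D point.
density_net = mlp(3, 1)
# Second network: maps a 3D scene point to a 2D texture coordinate.
texture_map = mlp(3, 2)
# Third network: radiance at a 2D texture point for a given view direction.
radiance_net = mlp(2 + 3, 3)

points = torch.rand(1024, 3)      # 3D samples, e.g. along camera rays
view_dirs = torch.rand(1024, 3)   # per-sample viewing directions

sigma = density_net(points)                    # volume density
uv = torch.sigmoid(texture_map(points))        # texture coordinates in [0, 1]
rgb = radiance_net(torch.cat([uv, view_dirs], dim=-1))  # view-dependent color
```
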
  • Publication number: 20220156588
    Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates the contribution of a given base kernel to the denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
    Type: Application
    Filed: February 2, 2022
    Publication date: May 19, 2022
    Inventors: Federico Perazzi, Zhihao Xia, Michael Gharbi, Kalyan Sunkavalli
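The kernel-dictionary construction in 20220156588 reduces to a per-pixel weighted sum of base kernels. A minimal PyTorch sketch follows; the per-pixel coefficients here are random placeholders where the patent predicts them with a neural network, and the normalization step is an assumption.

```python
import torch
import torch.nn.functional as F

B, H, W = 1, 32, 32          # batch size and image size
K, k = 8, 5                  # dictionary size and kernel width

noisy = torch.rand(B, 1, H, W)
dictionary = torch.rand(K, k, k)                       # base kernels (learned in practice)
coeffs = torch.softmax(torch.rand(B, K, H, W), dim=1)  # per-pixel coefficient vectors

# Per-pixel denoising kernel: coefficient-weighted sum of base kernels.
kernels = torch.einsum('bkhw,kij->bhwij', coeffs, dictionary)  # (B, H, W, k, k)
kernels = kernels / kernels.sum(dim=(-1, -2), keepdim=True)    # normalize to sum to 1

# Apply each pixel's kernel to that pixel's neighborhood.
patches = F.unfold(noisy, k, padding=k // 2)           # (B, k*k, H*W)
patches = patches.transpose(1, 2).reshape(B, H, W, k, k)
denoised = (patches * kernels).sum(dim=(-1, -2)).unsqueeze(1)  # (B, 1, H, W)
```
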
  • Patent number: 11281970
    Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates the contribution of a given base kernel to the denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: March 22, 2022
    Assignee: Adobe Inc.
    Inventors: Federico Perazzi, Zhihao Xia, Michael Gharbi, Kalyan Sunkavalli
  • Patent number: 11257284
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: February 22, 2022
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
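The final combination step in 11257284, rendering a target lighting configuration that mixes several directions, amounts to weighting the per-direction outputs. In the sketch below the relit images are placeholder tensors standing in for the relighting network's outputs, and the linear blend is an illustrative reading of the abstract's "combination of the different target lighting directions".

```python
import torch

# relit[i] = image under basis lighting direction i. Placeholders here; the
# patent's relighting network would produce these from a sparse input set.
num_dirs, H, W = 4, 64, 64
relit = torch.rand(num_dirs, 3, H, W)

# A target lighting configuration as non-negative weights over the directions.
weights = torch.tensor([0.5, 0.2, 0.2, 0.1])

# Modified image under the combined lighting: weighted sum of relit images.
combined = torch.einsum('n,nchw->chw', weights, relit)
```
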
  • Publication number: 20220051453
    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
    Type: Application
    Filed: October 28, 2021
    Publication date: February 17, 2022
    Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
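Publication 20220051453 describes optimizing procedural parameters through a differentiable pipeline with a style loss. The sketch below substitutes a toy differentiable pattern generator for the procedural node graph and, for brevity, applies a Gram-matrix comparison directly to image channels; a real style loss would compare Gram matrices of pretrained-network features.

```python
import torch

def gram(feat):
    """Gram matrix of a (C, H, W) feature map, used as a style statistic."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def render(params):
    """Toy differentiable 'procedural material': params -> image.
    Stands in for the patent's differentiable procedural material."""
    xs = torch.linspace(0, 1, 64)
    yy, xx = torch.meshgrid(xs, xs, indexing='ij')
    pattern = torch.sin(params[0] * 20 * xx) * torch.cos(params[1] * 20 * yy)
    return torch.stack([pattern * p for p in params[2:5]])  # (3, 64, 64)

# Stand-in for the digital image of the target physical material.
target_image = render(torch.tensor([0.7, 0.3, 0.9, 0.5, 0.2]))

# Optimize the base material's parameters against the style loss.
params = torch.rand(5, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)
for step in range(200):
    loss = torch.nn.functional.mse_loss(gram(render(params)), gram(target_image))
    opt.zero_grad(); loss.backward(); opt.step()
```
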
  • Patent number: 11189060
    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: November 30, 2021
    Assignee: ADOBE INC.
    Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
  • Publication number: 20210343051
    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
  • Patent number: 11158090
    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
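The generator/discriminator pairing in 11158090 can be sketched as follows. The layer choices are illustrative assumptions; the point is the data flow the abstract describes: the generator consumes a (source, reference) pair, and the discriminator scores whether the modified source is color-matched with the reference.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a (source, reference) pair to a color-modified source image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, src, ref):
        return self.net(torch.cat([src, ref], dim=1))

class Discriminator(nn.Module):
    """Scores how likely a pair of images is color-matched."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 32, 3, stride=2), nn.ReLU(),
                                 nn.Flatten(), nn.LazyLinear(1))
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))

G, D = Generator(), Discriminator()
src, ref = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)

matched = G(src, ref)       # candidate shot-match
score = D(matched, ref)     # likelihood the pair is color-matched
# At inference, a high score (e.g. torch.sigmoid(score) > 0.5) accepts the
# output as a shot-match with the reference, per the abstract.
```
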
  • Patent number: 11158117
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
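For 11158117, the key interface is a network queried with an image and a designated position that returns location-specific lighting parameters. The sketch below is a hypothetical architecture; spherical-harmonics coefficients are assumed as the output parameterization, a common choice in lighting estimation but not confirmed by the abstract.

```python
import torch
import torch.nn as nn

class LocalLightingNet(nn.Module):
    """Predicts lighting parameters for a queried position in an image.
    Output: 9 RGB spherical-harmonics coefficients -- one common lighting
    parameterization, assumed here for illustration only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(32 + 2, 64), nn.ReLU(),
                                  nn.Linear(64, 9 * 3))
    def forward(self, image, position):
        z = self.features(image)             # global scene features
        z = torch.cat([z, position], dim=1)  # condition on the (x, y) query
        return self.head(z).view(-1, 9, 3)   # lighting parameters per query

net = LocalLightingNet()
image = torch.rand(1, 3, 128, 128)
position = torch.tensor([[0.25, 0.6]])  # normalized position of the virtual object
lighting = net(image, position)         # location-specific lighting parameters
```
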
  • Patent number: 11042990
    Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: June 22, 2021
    Assignee: ADOBE INC.
    Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
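The pipeline in 11042990 is a mask-based composite followed by an attribute adjustment. A minimal NumPy sketch, with a hand-placed mask standing in for the automatic segmentation and a global brightness match as the example attribute adjustment:

```python
import numpy as np

H, W = 240, 320
original = np.random.rand(H, W, 3)     # stand-ins for the two photographs
preferred = np.random.rand(H, W, 3)

# Stand-in object mask; the patent segments the regions automatically.
object_mask = np.zeros((H, W, 1))
object_mask[60:180, 80:240] = 1.0

# Replace the original object region with the preferred object region.
composite = original * (1 - object_mask) + preferred * object_mask

# Adjust an attribute of the composite, e.g. a simple brightness match.
composite *= original.mean() / max(composite.mean(), 1e-8)
```
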
  • Publication number: 20210158570
    Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
    Type: Application
    Filed: November 22, 2019
    Publication date: May 27, 2021
    Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
  • Publication number: 20210150333
    Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates the contribution of a given base kernel to the denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
    Type: Application
    Filed: November 18, 2019
    Publication date: May 20, 2021
    Inventors: Federico Perazzi, Zhihao Xia, Michael Gharbi, Kalyan Sunkavalli
  • Publication number: 20210065440
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
    Type: Application
    Filed: September 3, 2019
    Publication date: March 4, 2021
    Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
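Publication 20210065440 highlights a network with common layers plus network layers specific to different lighting parameters. The sketch below illustrates that shared-trunk/multi-head split; the particular parameters (direction, intensity, size) and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SourceSpecificLightingNet(nn.Module):
    """Common layers shared across outputs, plus heads specific to different
    lighting parameters. The exact parameters below are illustrative."""
    def __init__(self):
        super().__init__()
        self.common = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.direction_head = nn.Linear(32, 3)  # 3D light-source direction
        self.intensity_head = nn.Linear(32, 3)  # RGB intensity
        self.size_head = nn.Linear(32, 1)       # angular size of the source

    def forward(self, image):
        z = self.common(image)
        return {'direction': self.direction_head(z),
                'intensity': self.intensity_head(z),
                'size': self.size_head(z)}

params = SourceSpecificLightingNet()(torch.rand(1, 3, 128, 128))
```
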
  • Patent number: 10810469
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: October 20, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Zhengqin Li, Manmohan Chandraker
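The architecture of 10810469, a shared encoder feeding a material classifier and per-property decoders, can be outlined as below. The chosen properties, layer sizes, and global (rather than spatial) outputs are simplifying assumptions; the patented rendering layer that turns properties back into model images is omitted.

```python
import torch
import torch.nn as nn

# Shared encoder over the single input image of a material.
encoder = nn.Sequential(nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(4), nn.Flatten())

material_classifier = nn.Linear(32 * 16, 10)  # e.g. 10 material classes
albedo_decoder = nn.Linear(32 * 16, 3)        # per-property decoders; global
roughness_decoder = nn.Linear(32 * 16, 1)     # values here, spatial maps in practice

image = torch.rand(1, 3, 64, 64)
z = encoder(image)
material_class = material_classifier(z).argmax(dim=1)
albedo, roughness = albedo_decoder(z), roughness_decoder(z)
```
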
  • Publication number: 20200302684
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
    Type: Application
    Filed: May 18, 2020
    Publication date: September 24, 2020
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
  • Publication number: 20200273237
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Application
    Filed: May 13, 2020
    Publication date: August 27, 2020
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
  • Patent number: 10692276
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: June 23, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
  • Patent number: 10692265
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: June 23, 2020
    Assignee: Adobe Inc.
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
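The manifold-traversal edits in 10692265 amount to moving along a direction within one disentangled factor of a latent code while holding the others fixed. The sketch below fakes the learned factorization and edit direction; it shows only the traversal mechanics, not the patented rendering and generation network.

```python
import torch

# Stand-in: a latent code disentangled into intrinsic facial properties.
# The slices below are illustrative; the patent learns such a factorization.
latent = torch.randn(1, 128)
LIGHTING = slice(0, 16)      # lighting sub-manifold
EXPRESSION = slice(16, 48)   # expression sub-manifold

def edit(latent, part, direction, amount):
    """Traverse one sub-manifold, leaving the other factors unchanged."""
    out = latent.clone()
    out[:, part] += amount * direction
    return out

smile_direction = torch.randn(32)  # a learned traversal path, faked here
edited = edit(latent, EXPRESSION, smile_direction, amount=0.5)
# 'edited' would then pass through the rendering network to produce the
# manipulated face image.
```
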
  • Patent number: 10692277
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: June 23, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon