Patents by Inventor Kalyan Sunkavalli
Kalyan Sunkavalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11443412
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 8, 2019
Date of Patent: September 13, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
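The two-phase training described above can be illustrated with a toy sketch: a model is first fit on targets clipped to [0, 1] (standing in for low dynamic range data) and its parameters are then fine-tuned on the unclipped targets (standing in for high dynamic range data). The linear model, data, and learning rate here are all hypothetical stand-ins, not the patented network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))            # stand-in image features
w_true = np.array([2.0, -1.0, 0.5, 3.0])
hdr = np.exp(0.2 * X @ w_true)           # HDR-like intensities (may exceed 1)
ldr = np.clip(hdr, 0.0, 1.0)             # LDR-like clipped targets

def fit(X, y, w, lr=0.05, steps=500):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_phase1 = fit(X, ldr, np.zeros(4))      # phase 1: train on LDR-like data
w_phase2 = fit(X, hdr, w_phase1)         # phase 2: fine-tune on HDR-like data
```

The second phase starts from the first phase's parameters, so the phase-1 fit acts as an initialization that the HDR data refines.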
-
Publication number: 20220198738
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
Type: Application
Filed: December 22, 2021
Publication date: June 23, 2022
Applicant: Adobe Inc.
Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
-
Publication number: 20220156588
Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
Type: Application
Filed: February 2, 2022
Publication date: May 19, 2022
Inventors: Federico Perazzi, Zhihao Xia, Michael Gharbi, Kalyan Sunkavalli
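The kernel-dictionary idea above is concrete enough to sketch: each pixel's denoising kernel is a linear combination of a small dictionary of base kernels, and the combined kernel is applied to the pixel's neighborhood. In this toy version the per-pixel coefficients are random; in the patent a neural network predicts them. The dictionary size, kernel size, and image are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 5                                     # kernel width/height
dictionary = rng.random((4, K, K))        # 4 base kernels
dictionary /= dictionary.sum(axis=(1, 2), keepdims=True)  # each sums to 1

noisy = rng.random((32, 32))
pad = K // 2
padded = np.pad(noisy, pad, mode="edge")

denoised = np.empty_like(noisy)
for y in range(noisy.shape[0]):
    for x in range(noisy.shape[1]):
        coeff = rng.random(4)             # stand-in for the network's output
        coeff /= coeff.sum()              # convex combination of base kernels
        kernel = np.tensordot(coeff, dictionary, axes=1)  # (K, K) kernel
        patch = padded[y:y + K, x:x + K]
        denoised[y, x] = (kernel * patch).sum()
```

Because each per-pixel kernel is a convex combination of normalized base kernels, every output pixel is a weighted average of its neighborhood, which is what makes the dictionary formulation cheap: only a handful of coefficients per pixel need to be predicted rather than a full kernel.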
-
Patent number: 11281970
Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
Type: Grant
Filed: November 18, 2019
Date of Patent: March 22, 2022
Assignee: Adobe Inc.
Inventors: Federico Perazzi, Zhihao Xia, Michael Gharbi, Kalyan Sunkavalli
-
Patent number: 11257284
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Grant
Filed: May 13, 2020
Date of Patent: February 22, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
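The final step of the abstract above — combining relit images, one per lighting direction, into an image under a mixed lighting configuration — follows from the linearity of light transport and can be sketched directly. The relit images and direction names here are hypothetical placeholders; the patent produces the per-direction images with its relighting network.

```python
import numpy as np

# Hypothetical relit images (H, W, 3), one per target lighting direction.
relit = {
    "left":  np.full((4, 4, 3), 0.2),
    "top":   np.full((4, 4, 3), 0.5),
    "right": np.full((4, 4, 3), 0.8),
}

# A target lighting configuration: weights over the directions.
config = {"left": 0.25, "top": 0.5, "right": 0.25}

# The combined image is the same weighted combination of the relit images.
combined = sum(w * relit[d] for d, w in config.items())
```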
-
Publication number: 20220051453
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Application
Filed: October 28, 2021
Publication date: February 17, 2022
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
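The optimization loop described above — render the procedural material from its parameters, compare against the target image with a loss, and update the parameters by gradient descent — can be sketched with a toy stand-in. The patent uses an end-to-end differentiable pipeline and a style loss; here a tiny hand-written "render" function, an L2 loss, and numerical (central-difference) gradients stand in for all three.

```python
import numpy as np

x = np.linspace(0, 1, 64)
pattern = np.sin(7 * x) ** 2              # fixed "procedural" pattern

def render(params):
    """Toy procedural material: amplitude-scaled, biased pattern."""
    amp, bias = params
    return amp * pattern + bias

target = render((0.9, 0.1))               # stand-in for the target photo

params = np.array([0.3, 0.5])             # base material parameters
lr, eps = 0.005, 1e-4
for _ in range(2000):
    grad = np.zeros(2)
    for i in range(2):                    # central-difference gradient
        hi, lo = params.copy(), params.copy()
        hi[i] += eps
        lo[i] -= eps
        grad[i] = (((render(hi) - target) ** 2).sum()
                   - ((render(lo) - target) ** 2).sum()) / (2 * eps)
    params -= lr * grad                   # gradient-descent parameter update
```

With a truly differentiable renderer, the finite-difference step is replaced by backpropagation through the pipeline, which is what makes optimizing many material parameters tractable.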
-
Patent number: 11189060
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Grant
Filed: April 30, 2020
Date of Patent: November 30, 2021
Assignee: Adobe Inc.
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
-
Publication number: 20210343051
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Application
Filed: April 30, 2020
Publication date: November 4, 2021
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
-
Patent number: 11158090
Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
Type: Grant
Filed: November 22, 2019
Date of Patent: October 26, 2021
Assignee: Adobe Inc.
Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
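For intuition about what "color-matched" means here, a much simpler classical baseline than the patented GAN is per-channel statistics transfer (in the style of Reinhard et al.): shift the source image's per-channel mean and standard deviation to the reference's. This is explicitly not the patented method, only a rough illustration of the kind of match the discriminator would be trained to judge; the images are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
source = rng.random((8, 8, 3)) * 0.5           # dark source shot
reference = rng.random((8, 8, 3)) * 0.5 + 0.5  # bright reference shot

# Per-channel mean/std transfer from source to reference statistics.
mu_s, sd_s = source.mean((0, 1)), source.std((0, 1))
mu_r, sd_r = reference.mean((0, 1)), reference.std((0, 1))
matched = (source - mu_s) / sd_s * sd_r + mu_r
# (values may fall outside [0, 1] and need clipping for display)
```

A global linear transfer like this ignores content, which is precisely the limitation the context-sensitive GAN approach is meant to address.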
-
Patent number: 11158117
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
Type: Grant
Filed: May 18, 2020
Date of Patent: October 26, 2021
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
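To illustrate only the "spatially varying" aspect above: lighting parameters differ by position, and a query at a designated render position returns position-specific values. In this sketch, parameters known at the four corners of a unit square are bilinearly interpolated; the corner values are hypothetical, and the patent predicts per-position parameters with a neural network rather than interpolating.

```python
import numpy as np

# Hypothetical lighting parameters (e.g. intensity, ambient term)
# at the four corners of a unit-square scene footprint.
corners = {
    (0, 0): np.array([1.0, 0.2]),
    (0, 1): np.array([0.4, 0.8]),
    (1, 0): np.array([0.6, 0.5]),
    (1, 1): np.array([0.2, 0.9]),
}

def lighting_at(u, v):
    """Bilinearly interpolate lighting parameters at position (u, v)."""
    return ((1 - u) * (1 - v) * corners[(0, 0)]
            + (1 - u) * v * corners[(0, 1)]
            + u * (1 - v) * corners[(1, 0)]
            + u * v * corners[(1, 1)])

params = lighting_at(0.5, 0.5)  # parameters at the designated position
```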
-
Patent number: 11042990
Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
Type: Grant
Filed: October 31, 2018
Date of Patent: June 22, 2021
Assignee: Adobe Inc.
Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
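Once both images are segmented, the compositing step above reduces to a masked blend: inside the object region, take the preferred image; elsewhere, keep the original. The images and mask here are hypothetical placeholders, and the mask could be soft (values in [0, 1]) for smoother seams.

```python
import numpy as np

original = np.full((4, 4, 3), 0.3)        # stand-in original image
preferred = np.full((4, 4, 3), 0.9)       # stand-in preferred image
object_mask = np.zeros((4, 4, 1))
object_mask[1:3, 1:3] = 1.0               # hypothetical object region

# Replace the object region while keeping the rest of the original.
composite = object_mask * preferred + (1 - object_mask) * original
```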
-
Publication number: 20210158570
Abstract: This disclosure involves training generative adversarial networks to shot-match two unmatched images in a context-sensitive manner. For example, aspects of the present disclosure include accessing a trained generative adversarial network including a trained generator model and a trained discriminator model. A source image and a reference image may be inputted into the generator model to generate a modified source image. The modified source image and the reference image may be inputted into the discriminator model to determine a likelihood that the modified source image is color-matched with the reference image. The modified source image may be outputted as a shot-match with the reference image in response to determining, using the discriminator model, that the modified source image and the reference image are color-matched.
Type: Application
Filed: November 22, 2019
Publication date: May 27, 2021
Inventors: Tharun Mohandoss, Pulkit Gera, Oliver Wang, Kartik Sethi, Kalyan Sunkavalli, Elya Shechtman, Chetan Nanda
-
Publication number: 20210150333
Abstract: Certain embodiments involve techniques for efficiently estimating denoising kernels for generating denoised images. For instance, a neural network receives a noisy reference image to denoise. The neural network uses a kernel dictionary of base kernels and generates a coefficient vector for each pixel in the reference image such that the coefficient vector includes a coefficient value for each base kernel in the kernel dictionary, where the base kernels are combined to generate a denoising kernel and each coefficient value indicates a contribution of a given base kernel to a denoising kernel. The neural network calculates the denoising kernel for a given pixel by applying the coefficient vector for that pixel to the kernel dictionary. The neural network applies each denoising kernel to the respective pixel to generate a denoised output image.
Type: Application
Filed: November 18, 2019
Publication date: May 20, 2021
Inventors: Federico Perazzi, Zhihao Xia, Michael Gharbi, Kalyan Sunkavalli
-
Publication number: 20210065440
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Application
Filed: September 3, 2019
Publication date: March 4, 2021
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
-
Patent number: 10810469
Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
Type: Grant
Filed: May 9, 2018
Date of Patent: October 20, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Zhengqin Li, Manmohan Chandraker
-
Publication number: 20200302684
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
Type: Application
Filed: May 18, 2020
Publication date: September 24, 2020
Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
-
Publication number: 20200273237
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Application
Filed: May 13, 2020
Publication date: August 27, 2020
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
-
Patent number: 10692276
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Grant
Filed: May 3, 2018
Date of Patent: June 23, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
-
Patent number: 10692265
Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age, aspects that cannot be represented using previous models.
Type: Grant
Filed: November 7, 2019
Date of Patent: June 23, 2020
Assignee: Adobe Inc.
Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
-
Patent number: 10692277
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
Type: Grant
Filed: March 21, 2019
Date of Patent: June 23, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon