Patents by Inventor Kalyan K. Sunkavalli
Kalyan K. Sunkavalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11875446
Abstract: Aspects of a system and method for procedural media generation include generating a sequence of operator types using a node generation network; generating a sequence of operator parameters for each operator type of the sequence of operator types using a parameter generation network; generating a sequence of directed edges based on the sequence of operator types using an edge generation network; combining the sequence of operator types, the sequence of operator parameters, and the sequence of directed edges to obtain a procedural media generator, wherein each node of the procedural media generator comprises an operator that includes an operator type from the sequence of operator types, a corresponding sequence of operator parameters, and an input connection or an output connection from the sequence of directed edges that connects the node to another node of the procedural media generator; and generating a media asset using the procedural media generator.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 16, 2024
Assignee: Adobe Inc.
Inventors: Paul Augusto Guerrero, Milos Hasan, Kalyan K. Sunkavalli, Radomir Mech, Tamy Boubekeur, Niloy Jyoti Mitra
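The abstract's three generated sequences (operator types, per-operator parameters, directed edges) combine into a node graph. A minimal sketch of that combining step, with all names and structures hypothetical rather than taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op_type: str                                # from the operator-type sequence
    params: dict                                # from the parameter sequence
    inputs: list = field(default_factory=list)  # incoming edges (node indices)

def build_generator(op_types, param_seqs, edges):
    """Combine the three sequences into a graph of connected operators."""
    nodes = [Node(t, p) for t, p in zip(op_types, param_seqs)]
    for src, dst in edges:  # directed edge: output of src feeds dst
        nodes[dst].inputs.append(src)
    return nodes

# Toy example: a noise generator feeding a blur operator.
graph = build_generator(
    op_types=["noise", "blur"],
    param_seqs=[{"scale": 4}, {"radius": 2}],
    edges=[(0, 1)],
)
assert graph[1].inputs == [0]
```

Evaluating such a graph in topological order would yield the media asset; the operator names and parameter keys here are illustrative only.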
-
Publication number: 20230360310
Abstract: Aspects of a system and method for procedural media generation include generating a sequence of operator types using a node generation network; generating a sequence of operator parameters for each operator type of the sequence of operator types using a parameter generation network; generating a sequence of directed edges based on the sequence of operator types using an edge generation network; combining the sequence of operator types, the sequence of operator parameters, and the sequence of directed edges to obtain a procedural media generator, wherein each node of the procedural media generator comprises an operator that includes an operator type from the sequence of operator types, a corresponding sequence of operator parameters, and an input connection or an output connection from the sequence of directed edges that connects the node to another node of the procedural media generator; and generating a media asset using the procedural media generator.
Type: Application
Filed: May 6, 2022
Publication date: November 9, 2023
Inventors: Paul Augusto Guerrero, Milos Hasan, Kalyan K. Sunkavalli, Radomir Mech, Tamy Boubekeur, Niloy Jyoti Mitra
-
Patent number: 10964060
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
Type: Grant
Filed: November 6, 2019
Date of Patent: March 30, 2021
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
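The training-data idea here is that each crop of a panorama comes with known ground-truth calibration. A sketch of just the parameter-sampling side (the ranges and field names below are assumptions for illustration, not values from the patent):

```python
import random

# Assumed plausible ranges; each sampled set would label one panorama crop.
PLAUSIBLE = {
    "fov_deg":   (40.0, 100.0),
    "pitch_deg": (-20.0, 20.0),
    "roll_deg":  (-10.0, 10.0),
}

def sample_calibrations(n, seed=0):
    """Sample n ground-truth calibration labels within plausible ranges."""
    rng = random.Random(seed)
    return [{k: rng.uniform(lo, hi) for k, (lo, hi) in PLAUSIBLE.items()}
            for _ in range(n)]

labels = sample_calibrations(100)
assert all(40.0 <= c["fov_deg"] <= 100.0 for c in labels)
```

In the full pipeline each sampled calibration would drive a projective crop of the panorama, yielding an (image, parameters) training pair.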
-
Patent number: 10957026
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
Type: Grant
Filed: September 9, 2019
Date of Patent: March 23, 2021
Assignee: Adobe Inc.
Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
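The five lighting quantities named in the abstract form a compact regression target. A hypothetical container for them (the exact parameterization, e.g. angles for sun position, is an assumption):

```python
from dataclasses import dataclass

@dataclass
class SkyLighting:
    sky_color: tuple      # RGB
    turbidity: float      # atmospheric haze
    sun_color: tuple      # RGB
    sun_shape: float      # angular size / falloff
    sun_position: tuple   # (azimuth, elevation) in radians

    def to_vector(self):
        """Flatten to the vector a network would regress."""
        return [*self.sky_color, self.turbidity, *self.sun_color,
                self.sun_shape, *self.sun_position]

p = SkyLighting((0.3, 0.5, 0.9), 2.5, (1.0, 0.9, 0.7), 0.05, (1.2, 0.6))
assert len(p.to_vector()) == 10
```

Estimated parameters in this form can then drive a parametric sky model when rendering objects composited into the low-dynamic range image.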
-
Patent number: 10950037
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images, can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
Type: Grant
Filed: July 12, 2019
Date of Patent: March 16, 2021
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
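The per-pixel depth-probability step over a plane sweep volume can be sketched as a softmax over depth planes, given a matching cost for every (pixel, plane) pair. This is a generic illustration of the idea, not the patent's actual network:

```python
import math

def depth_probabilities(costs):
    """costs[p][d] -> probs[p][d]; lower matching cost = higher probability."""
    probs = []
    for per_plane in costs:
        weights = [math.exp(-c) for c in per_plane]
        total = sum(weights)
        probs.append([w / total for w in weights])
    return probs

# Two pixels, three depth planes; pixel 0 matches best at plane 1.
probs = depth_probabilities([[2.0, 0.1, 3.0], [0.5, 0.5, 0.5]])
assert max(range(3), key=lambda d: probs[0][d]) == 1
assert abs(sum(probs[1]) - 1.0) < 1e-9
```

In the described system these probabilities weight per-plane appearance predictions when synthesizing the novel view.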
-
Publication number: 20210073955
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
Type: Application
Filed: September 9, 2019
Publication date: March 11, 2021
Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
-
Patent number: 10936909
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped to roughly the same distribution. Such a trained neural network system can take a low-dynamic range image as input and determine high-dynamic range lighting parameters.
Type: Grant
Filed: November 12, 2018
Date of Patent: March 2, 2021
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Jonathan Eisenmann, Jinsong Zhang, Emiliano Gambaretto
-
Publication number: 20210012561
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images, can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
Type: Application
Filed: July 12, 2019
Publication date: January 14, 2021
Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
-
Patent number: 10867416
Abstract: Methods and systems are provided for generating harmonized images for input composite images. A neural network system can be trained, where the training includes training a neural network that generates harmonized images for input composite images. This training is performed based on a comparison of a training harmonized image and a reference image, where the reference image is modified to generate a training input composite image used to generate the training harmonized image. In addition, a mask of a region can be input to limit the area of the input image that is to be modified. Such a trained neural network system can be used to input a composite image and mask pair for which the trained system will output a harmonized image.
Type: Grant
Filed: March 10, 2017
Date of Patent: December 15, 2020
Assignee: Adobe Inc.
Inventors: Xiaohui Shen, Zhe Lin, Yi-Hsuan Tsai, Xin Lu, Kalyan K. Sunkavalli
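The training-pair construction described above (modify a reference image inside a masked region to fake a composite, then learn to undo it) can be sketched minimally; images are flat lists of 8-bit grayscale pixels here, and the brightness shift is an assumed stand-in for a real appearance perturbation:

```python
def make_training_pair(reference, mask, shift=80):
    """Color-shift the masked region of a real image to create a fake
    composite; the untouched reference is the harmonization target."""
    composite = [min(255, px + shift) if m else px
                 for px, m in zip(reference, mask)]
    return composite, mask, reference  # network input pair, ground truth

ref = [50, 100, 150, 200]
mask = [0, 1, 1, 0]
comp, m, target = make_training_pair(ref, mask)
assert comp == [50, 180, 230, 200]
assert target == ref
```

The network then maps (composite, mask) back toward the reference, and the mask confines edits to the pasted region at inference time.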
-
Patent number: 10762608
Abstract: Embodiments of the present disclosure relate to a sky editing system and related processes for sky editing. The sky editing system includes a composition detector to determine the composition of a target image. A sky search engine in the sky editing system is configured to find a reference image with a composition similar to that of the target image. Subsequently, a sky editor replaces content of the sky in the target image with content of the sky in the reference image. As such, the sky editing system transforms the target image into a new image with a preferred sky background.
Type: Grant
Filed: August 31, 2018
Date of Patent: September 1, 2020
Assignee: Adobe Inc.
Inventors: Xiaohui Shen, Yi-Hsuan Tsai, Kalyan K. Sunkavalli, Zhe Lin
-
Publication number: 20200151509
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped to roughly the same distribution. Such a trained neural network system can take a low-dynamic range image as input and determine high-dynamic range lighting parameters.
Type: Application
Filed: November 12, 2018
Publication date: May 14, 2020
Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Jonathan Eisenmann, Jinsong Zhang, Emiliano Gambaretto
-
Patent number: 10607329
Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction.
Type: Grant
Filed: March 13, 2017
Date of Patent: March 31, 2020
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
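The recovery light mask is a per-pixel probability of being a light source. A generic sketch of that final squashing step, assuming the network emits raw per-pixel scores (a standard sigmoid, not necessarily the patent's exact formulation):

```python
import math

def light_mask(logits):
    """Map per-pixel scores to probabilities that each pixel is a light."""
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]

# Three pixels of the unobserved panorama: dark wall, ambiguous, bright lamp.
mask = light_mask([-4.0, 0.0, 5.0])
assert mask[1] == 0.5
assert mask[2] > 0.99 and mask[0] < 0.02
```

Thresholding or sampling such a mask, together with the low-frequency RGB estimate, gives a plausible illumination map for compositing.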
-
Publication number: 20200074682
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
Type: Application
Filed: November 6, 2019
Publication date: March 5, 2020
Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
-
Patent number: 10521892
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at relighting a target image based on a lighting effect from a reference image. In one embodiment, a target image and a reference image are received, the reference image includes a lighting effect desired to be applied to the target image. A lighting transfer is performed using color data and geometrical data associated with the reference image and color data and geometrical data associated with the target image. The lighting transfer causes generation of a relit image that corresponds with the target image having a lighting effect of the reference image. The relit image is provided for display to a user via one or more output devices. Other embodiments may be described and/or claimed.
Type: Grant
Filed: August 31, 2016
Date of Patent: December 31, 2019
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Elya Shechtman, Zhixin Shu
-
Patent number: 10515460
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
Type: Grant
Filed: November 29, 2017
Date of Patent: December 24, 2019
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
-
Patent number: 10311574
Abstract: A digital medium environment includes an image processing application that performs object segmentation on an input image. An improved object segmentation method implemented by the image processing application comprises receiving an input image that includes an object region to be segmented by a segmentation process, processing the input image to provide a first segmentation that defines the object region, and processing the first segmentation to provide a second segmentation that provides pixel-wise label assignments for the object region. In some implementations, the image processing application performs improved sky segmentation on an input image containing a depiction of a sky.
Type: Grant
Filed: December 22, 2017
Date of Patent: June 4, 2019
Assignee: Adobe Inc.
Inventors: Xiaohui Shen, Zhe Lin, Yi-Hsuan Tsai, Kalyan K. Sunkavalli
-
Publication number: 20190164312
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
Type: Application
Filed: November 29, 2017
Publication date: May 30, 2019
Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
-
Patent number: 10297045
Abstract: Fast intrinsic images techniques are described. In one or more implementations, a combination of local constraints on shading and reflectance and non-local constraints on reflectance are applied to an image to generate a linear system of equations. The linear system of equations can be solved to generate a reflectance intrinsic image and a shading intrinsic image for the image. In one or more implementations, a multi-scale parallelized iterative solver is used to solve the linear system of equations to generate the reflectance intrinsic image and the shading intrinsic image.
Type: Grant
Filed: November 18, 2014
Date of Patent: May 21, 2019
Assignee: Adobe Inc.
Inventor: Kalyan K. Sunkavalli
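The constraints described above assemble into a sparse linear system that an iterative method can solve. As a sketch of the solver side only, here is plain Jacobi iteration on a tiny dense system; the patent describes a multi-scale parallelized solver, so this is the single-scale idea, not its method:

```python
def jacobi(A, b, iters=200):
    """Solve A x = b by Jacobi iteration (converges for diagonally
    dominant systems like smoothness-constrained intrinsic images)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant toy system with exact solution (1, 2).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = jacobi(A, b)
assert abs(x[0] - 1.0) < 1e-6 and abs(x[1] - 2.0) < 1e-6
```

Each Jacobi sweep touches every unknown independently, which is what makes this family of solvers easy to parallelize across pixels.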
-
Patent number: 10264229
Abstract: Embodiments of the present invention facilitate lighting and material editing. More particularly, some embodiments are directed to leveraging flash photography to capture two images in quick succession, one with the flash activated and one without. In embodiments, a scene may be decomposed into components corresponding to differently colored lights and into diffuse and specular components. This enables the color and intensity of each light in the scene, as well as the amount of specularity, to be edited by a user to change the appearance of the scene.
Type: Grant
Filed: August 14, 2017
Date of Patent: April 16, 2019
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Zhuo Hui, Sunil Hadap
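The two-shot idea rests on a simple identity: in linear intensity, the flash photo is the ambient image plus the flash's own contribution, so subtraction isolates the flash component, which can then be re-weighted independently. A minimal sketch on flat grayscale pixel lists (the function names and the single-gain edit are illustrative, not the patent's full decomposition into colored lights and diffuse/specular terms):

```python
def split_flash(flash_img, noflash_img):
    """Recover the flash-only component by subtraction (linear intensity)."""
    return [max(0, f - a) for f, a in zip(flash_img, noflash_img)]

def relight(noflash_img, flash_only, flash_gain=0.5):
    """Recombine with an edited flash intensity."""
    return [a + flash_gain * fl for a, fl in zip(noflash_img, flash_only)]

ambient = [10, 20, 30]
with_flash = [40, 25, 90]
flash_only = split_flash(with_flash, ambient)
assert flash_only == [30, 5, 60]
assert relight(ambient, flash_only) == [25.0, 22.5, 60.0]
```

Setting `flash_gain` back to 1.0 reproduces the original flash photo, which is a handy sanity check on the decomposition.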
-
Publication number: 20180374199
Abstract: Embodiments of the present disclosure relate to a sky editing system and related processes for sky editing. The sky editing system includes a composition detector to determine the composition of a target image. A sky search engine in the sky editing system is configured to find a reference image with a composition similar to that of the target image. Subsequently, a sky editor replaces content of the sky in the target image with content of the sky in the reference image. As such, the sky editing system transforms the target image into a new image with a preferred sky background.
Type: Application
Filed: August 31, 2018
Publication date: December 27, 2018
Inventors: Xiaohui Shen, Yi-Hsuan Tsai, Kalyan K. Sunkavalli, Zhe Lin