Patents by Inventor Kalyan Krishna Sunkavalli
Kalyan Krishna Sunkavalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11930303
Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
Type: Grant
Filed: November 15, 2021
Date of Patent: March 12, 2024
Assignee: Adobe Inc.
Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
-
Patent number: 11682126
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Grant
Filed: October 26, 2020
Date of Patent: June 20, 2023
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
-
Patent number: 11669986
Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF that is parameterized via multiple channels (e.g., diffuse albedo, surface-roughness, specular albedo, and surface-normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
Type: Grant
Filed: April 16, 2021
Date of Patent: June 6, 2023
Assignees: Adobe Inc.; The Regents of the University of California
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
-
Patent number: 11663775
Abstract: Methods, systems, and computer storage media are provided for generating physical-based materials for rendering digital objects with an appearance of a real-world material. Images depicting the real-world material, including diffuse component images and specular component images, are captured using different lighting patterns, which may include area lights. From the captured images, approximations of one or more material maps are determined using a photometric stereo technique. Based on the approximations and the captured images, a neural network system generates a set of material maps, such as a diffuse albedo material map, a normal material map, a specular albedo material map, and a roughness material map. The material maps from the neural network may be optimized based on a comparison of the input images of the real-world material and images rendered from the material maps.
Type: Grant
Filed: April 19, 2021
Date of Patent: May 30, 2023
Assignee: Adobe Inc.
Inventors: Akshat Dave, Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan
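The photometric-stereo approximation step mentioned in this abstract can be illustrated with the classic Lambertian formulation: given several images lit by known directional lights, per-pixel surface normals and diffuse albedo fall out of a least-squares solve. This is a minimal NumPy sketch of that textbook technique, not Adobe's implementation; the function name and array layout are illustrative.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Lambertian photometric stereo: recover per-pixel normals and
    diffuse albedo from images under known directional lights.

    intensities: (k, h, w) array, one grayscale image per light.
    light_dirs:  (k, 3) array of unit light directions.
    Returns (normals of shape (h, w, 3), albedo of shape (h, w)).
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)  # (k, h*w): all pixels at once
    # Solve L @ g = I for g = albedo * normal at every pixel.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    # Normalize to unit normals; zero out unlit (zero-albedo) pixels.
    normals = np.where(albedo > 1e-8, g / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

In the patented pipeline these least-squares material-map approximations are only a starting point that a neural network then refines.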
-
Patent number: 11551388
Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
Type: Grant
Filed: February 19, 2020
Date of Patent: January 10, 2023
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukác, Elya Shechtman
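The core idea of testing how well a candidate transformation maps the image onto itself can be shown with a deliberately simplified toy: score a candidate translational symmetry by the fraction of pixels reproduced at a given offset. This uses single pixels instead of the patent's three-patch correspondences and a pure translation instead of a general homography; the function name is illustrative.

```python
import numpy as np

def translation_symmetry_score(img, dx, dy, tol=1e-3):
    """Fraction of pixels whose value is (nearly) reproduced at
    offset (dx, dy) -- a toy stand-in for verifying a candidate
    symmetry against repeated correspondences."""
    h, w = img.shape
    # Overlapping regions of img and img shifted by (dx, dy).
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    return float(np.mean(np.abs(a - b) < tol))
```

A perfectly periodic texture scores 1.0 at its period and poorly elsewhere, which is the kind of signal a symmetry detector aggregates over many candidate transformations.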
-
Patent number: 11488342
Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of the material-map type and similarity analysis. The technology described herein may generate synthetic maps to be used in place of the missing material maps. The synthetic maps may be generated using a Generative Adversarial Network (GAN).
Type: Grant
Filed: May 27, 2021
Date of Patent: November 1, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan, Zexiang Xu, Yu-Ying Yeh, Stefano Corazza
-
Publication number: 20220343522
Abstract: Enhanced methods and systems for generating both a geometry model and an optical-reflectance model (an object reconstruction model) for a physical object, based on a sparse set of images of the object under a sparse set of viewpoints. The geometry model is a mesh model that includes a set of vertices representing the object's surface. The reflectance model is an SVBRDF that is parameterized via multiple channels (e.g., diffuse albedo, surface-roughness, specular albedo, and surface-normals). For each vertex of the geometry model, the reflectance model includes a value for each of the multiple channels. The object reconstruction model is employed to render graphical representations of a virtualized object (a VO based on the physical object) within a computation-based (e.g., a virtual or immersive) environment. Via the reconstruction model, the VO may be rendered from arbitrary viewpoints and under arbitrary lighting conditions.
Type: Application
Filed: April 16, 2021
Publication date: October 27, 2022
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, David Jay Kriegman, Ravi Ramamoorthi
-
Patent number: 11481619
Abstract: Techniques for incorporating a black-box function into a neural network are described. For example, an image editing function may be the black-box function and may be wrapped into a layer of the neural network. A set of parameters and a source image are provided to the black-box function, and the output image that represents the source image with the set of parameters applied to the source image is output from the black-box function. To address the issue that the black-box function may not be differentiable, a loss optimization may calculate the gradients of the function using, for example, a finite differences calculation, and the gradients are used to train the neural network to ensure the output image is representative of an expected ground truth image.
Type: Grant
Filed: July 10, 2019
Date of Patent: October 25, 2022
Assignee: Adobe Inc.
Inventors: Oliver Wang, Kevin Wampler, Kalyan Krishna Sunkavalli, Elya Shechtman, Siddhant Jain
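The finite-differences gradient calculation this abstract mentions is a standard numerical technique: perturb each parameter slightly, re-evaluate the black box, and take the difference quotient. A minimal sketch of central finite differences (not the patented system itself; `black_box` here stands in for any non-differentiable scalar loss):

```python
import numpy as np

def finite_difference_grad(black_box, params, eps=1e-4):
    """Approximate d(loss)/d(params) for a non-differentiable
    black-box function via central finite differences.

    black_box: callable mapping a parameter vector to a scalar loss.
    params:    1-D array-like of parameters.
    """
    params = np.asarray(params, dtype=float)
    grad = np.zeros_like(params)
    for i in range(params.size):
        step = np.zeros_like(params)
        step[i] = eps
        # Central difference: (f(p + e) - f(p - e)) / (2e)
        grad[i] = (black_box(params + step) - black_box(params - step)) / (2 * eps)
    return grad
```

Each gradient estimate costs two black-box evaluations per parameter, which is why this approach suits image-editing operations with a small number of tunable parameters.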
-
Publication number: 20220335636
Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
Type: Application
Filed: April 15, 2021
Publication date: October 20, 2022
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Milos Hasan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
-
Publication number: 20220335682
Abstract: Methods, systems, and computer storage media are provided for generating physical-based materials for rendering digital objects with an appearance of a real-world material. Images depicting the real-world material, including diffuse component images and specular component images, are captured using different lighting patterns, which may include area lights. From the captured images, approximations of one or more material maps are determined using a photometric stereo technique. Based on the approximations and the captured images, a neural network system generates a set of material maps, such as a diffuse albedo material map, a normal material map, a specular albedo material map, and a roughness material map. The material maps from the neural network may be optimized based on a comparison of the input images of the real-world material and images rendered from the material maps.
Type: Application
Filed: April 19, 2021
Publication date: October 20, 2022
Inventors: Akshat Dave, Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan
-
Publication number: 20220182588
Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
Type: Application
Filed: November 15, 2021
Publication date: June 9, 2022
Applicant: Adobe Inc.
Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
-
Patent number: 11263259
Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
Type: Grant
Filed: July 15, 2020
Date of Patent: March 1, 2022
Assignee: Adobe Inc.
Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
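Training on triplets as this abstract describes is typically driven by a triplet loss: embeddings of the matched foreground/background pair are pulled closer together than the embedding of the dissimilar negative, by at least a margin. A generic NumPy formulation of that loss (not Adobe's exact training objective; the margin value is illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge-style triplet loss over embedding vectors: penalize
    the anchor being closer (in squared distance) to the negative
    than to the positive, up to a margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

When the negative is already farther away than the positive by more than the margin, the loss is zero and that triplet contributes no gradient, which focuses training on hard examples.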
-
Patent number: 11178368
Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
Type: Grant
Filed: November 26, 2019
Date of Patent: November 16, 2021
Assignee: Adobe Inc.
Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
-
Patent number: 11176381
Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
Type: Grant
Filed: April 23, 2020
Date of Patent: November 16, 2021
Assignee: Adobe Inc.
Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
-
Publication number: 20210160466
Abstract: Systems and techniques for automatic digital parameter adjustment are described that leverage insights learned from an image set to automatically predict parameter values for an input item of digital visual content. To do so, the automatic digital parameter adjustment techniques described herein capture visual and contextual features of digital visual content to determine balanced visual output in a range of visual scenes and settings. The visual and contextual features of digital visual content are used to train a parameter adjustment model through machine learning techniques that captures feature patterns and interactions. The parameter adjustment model exploits these feature interactions to determine visually pleasing parameter values for an input item of digital visual content. The predicted parameter values are output, allowing further adjustment to the parameter values.
Type: Application
Filed: November 26, 2019
Publication date: May 27, 2021
Applicant: Adobe Inc.
Inventors: Pulkit Gera, Oliver Wang, Kalyan Krishna Sunkavalli, Elya Shechtman, Chetan Nanda
-
Patent number: 10979640
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with that image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
Type: Grant
Filed: February 12, 2020
Date of Patent: April 13, 2021
Assignee: Adobe Inc.
Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
-
Patent number: 10950038
Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
Type: Grant
Filed: February 25, 2020
Date of Patent: March 16, 2021
Assignee: Adobe Inc.
Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
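Because light transport is linear, the superposition idea in this abstract reduces to linear algebra: if each basis image records the scene under one light at unit intensity, a runtime image is (approximately) a weighted sum of basis images, and the weighting vector is a least-squares solve. A simplified sketch of that solve (illustrative names; the patent additionally accounts for indirect illumination, which this omits):

```python
import numpy as np

def illumination_weights(basis_images, runtime_image):
    """Express runtime_image as a linear superposition of per-light
    basis images and recover the weights by least squares.

    basis_images:  list of (h, w) arrays, one per direct light.
    runtime_image: (h, w) array under an unknown light combination.
    Returns a weight per basis image.
    """
    # One column per light: each column is a flattened basis image.
    A = np.stack([b.ravel() for b in basis_images], axis=1)
    w, *_ = np.linalg.lstsq(A, runtime_image.ravel(), rcond=None)
    return w
```

The recovered weights can then relight a virtual object consistently with the environment by applying the same superposition to the object's per-light renders.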
-
Publication number: 20210042944
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Application
Filed: October 26, 2020
Publication date: February 11, 2021
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
-
Publication number: 20210012189
Abstract: Techniques for incorporating a black-box function into a neural network are described. For example, an image editing function may be the black-box function and may be wrapped into a layer of the neural network. A set of parameters and a source image are provided to the black-box function, and the output image that represents the source image with the set of parameters applied to the source image is output from the black-box function. To address the issue that the black-box function may not be differentiable, a loss optimization may calculate the gradients of the function using, for example, a finite differences calculation, and the gradients are used to train the neural network to ensure the output image is representative of an expected ground truth image.
Type: Application
Filed: July 10, 2019
Publication date: January 14, 2021
Inventors: Oliver Wang, Kevin Wampler, Kalyan Krishna Sunkavalli, Elya Shechtman, Siddhant Jain
-
Publication number: 20200349189
Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
Type: Application
Filed: July 15, 2020
Publication date: November 5, 2020
Applicant: Adobe Inc.
Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price