Patents by Inventor Kalyan Krishna Sunkavalli
Kalyan Krishna Sunkavalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10818022
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Grant
Filed: December 21, 2018
Date of Patent: October 27, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
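The selection-and-fit step this abstract describes can be pictured with a small sketch. The Python below is a hypothetical rendering, not the patented method: the function name, the flat reflectance vectors, and the choice of k=3 are all assumptions. It ranks library materials against calibrated per-pixel reflectance samples and fits a non-negative blend of the closest matches.

```python
# Hypothetical sketch: pick the reference materials whose reflectance best
# explains the calibrated observations, then fit a non-negative mixture.
import numpy as np

def fit_material_model(observed, library, k=3):
    """observed: (n_samples,) reflectance values from aligned pixels.
    library: (n_materials, n_samples) reflectance of each reference material."""
    # Rank reference materials by squared error against the observations.
    errors = np.sum((library - observed) ** 2, axis=1)
    chosen = np.argsort(errors)[:k]
    basis = library[chosen]                      # (k, n_samples)
    # Least-squares blend weights, clamped non-negative and renormalized.
    w, *_ = np.linalg.lstsq(basis.T, observed, rcond=None)
    w = np.clip(w, 0.0, None)
    w /= max(w.sum(), 1e-8)
    return chosen, w                             # library indices + blend weights

# Toy usage: a target mixed from two known library entries is recovered.
rng = np.random.default_rng(0)
lib = rng.random((20, 64))
target = 0.7 * lib[3] + 0.3 * lib[11]
idx, weights = fit_material_model(target, lib)
```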
-
Patent number: 10747811
Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
Type: Grant
Filed: May 22, 2018
Date of Patent: August 18, 2020
Assignee: Adobe Inc.
Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
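A two-stream embedding trained on such triplets can be sketched in a few lines of PyTorch. The tiny convolutional backbone, the 128-dimensional embedding, and the 0.2 margin below are illustrative assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn

class TwoStreamEmbedder(nn.Module):
    """One stream embeds foreground crops, the other background scenes."""
    def __init__(self, dim=128):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        self.fg = stream()
        self.bg = stream()

    def forward(self, fg_img, bg_img):
        return self.fg(fg_img), self.bg(bg_img)

model = TwoStreamEmbedder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
fg = torch.randn(4, 3, 64, 64)        # positive foregrounds
bg_pos = torch.randn(4, 3, 64, 64)    # backgrounds from the same source image
bg_neg = torch.randn(4, 3, 64, 64)    # dissimilar (negative) backgrounds
e_fg, e_pos = model(fg, bg_pos)
_, e_neg = model(fg, bg_neg)
# Pull matched foreground/background pairs together, push negatives apart.
loss = loss_fn(e_fg, e_pos, e_neg)
loss.backward()
```

At search time, a query from one stream would be compared by embedding distance against precomputed embeddings from the other stream.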
-
Publication number: 20200250436
Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
Type: Application
Filed: April 23, 2020
Publication date: August 6, 2020
Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
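The input/output contract this abstract implies (reference frame plus reference mask for detection, previous frame's mask for propagation) can be sketched as one network over concatenated inputs. The toy encoder below is a placeholder assumption; only the interface follows the abstract.

```python
import torch
import torch.nn as nn

class SegPropagator(nn.Module):
    def __init__(self):
        super().__init__()
        # Channels: 3 (reference frame) + 1 (reference mask)
        #         + 3 (current frame)   + 1 (previous frame's mask) = 8
        self.net = nn.Sequential(
            nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1))

    def forward(self, ref_frame, ref_mask, cur_frame, prev_mask):
        x = torch.cat([ref_frame, ref_mask, cur_frame, prev_mask], dim=1)
        return torch.sigmoid(self.net(x))     # per-pixel object probability

model = SegPropagator()
ref_f, cur_f = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
ref_m, prev_m = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
cur_mask = model(ref_f, ref_m, cur_f, prev_m)
# At the next frame, cur_mask would be fed back in as prev_mask.
```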
-
Publication number: 20200193696
Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
Type: Application
Filed: February 25, 2020
Publication date: June 18, 2020
Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
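The weighting-vector recovery can be illustrated with plain least squares over basis captures. In this hedged numpy sketch, each basis image is a capture with a single direct light on, so indirect bounce from that light is baked into its basis image; all names and shapes are assumptions.

```python
import numpy as np

def solve_light_weights(basis_images, runtime_image):
    """basis_images: (n_lights, H, W), one capture per direct source.
    runtime_image: (H, W) capture under an unknown light combination."""
    A = basis_images.reshape(len(basis_images), -1).T   # (H*W, n_lights)
    b = runtime_image.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, None)     # light intensities are non-negative

def relight_virtual_object(vo_basis_renders, w):
    # Shade the virtual object with the same weights so it matches the scene.
    return np.tensordot(w, vo_basis_renders, axes=1)

rng = np.random.default_rng(1)
scene_basis = rng.random((3, 32, 32))
runtime = 0.5 * scene_basis[0] + 1.2 * scene_basis[2]
w = solve_light_weights(scene_basis, runtime)           # ~[0.5, 0.0, 1.2]
vo_shaded = relight_virtual_object(rng.random((3, 32, 32)), w)
```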
-
Publication number: 20200186714
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
Type: Application
Filed: February 12, 2020
Publication date: June 11, 2020
Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
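One way to picture this pipeline is a CNN that regresses a small set of sky-model parameters from a single LDR image, with the parametric sky then supplying the HDR range the pixels cannot encode. Everything below (the backbone, the sun-direction head, the two-parameter sky head) is an illustrative assumption, not the patented network.

```python
import torch
import torch.nn as nn

class LightingEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.sun_dir = nn.Linear(64, 3)   # unit vector toward the sun
        self.sky = nn.Linear(64, 2)       # e.g. turbidity + exposure (assumed)

    def forward(self, ldr):
        f = self.features(ldr)
        d = torch.nn.functional.normalize(self.sun_dir(f), dim=1)
        return d, self.sky(f)

model = LightingEstimator()
sun, sky_params = model(torch.randn(1, 3, 128, 128))
# sun + sky_params would parameterize an HDR sky model for rendering.
```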
-
Publication number: 20200184697
Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
Type: Application
Filed: February 19, 2020
Publication date: June 11, 2020
Applicant: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukác, Elya Shechtman
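A toy version of the "repeated correspondence" idea, restricted to translations: find patch pairs that are near-duplicates and vote for the displacement most of them share. This is a crude stand-in for the candidate-homography analysis in the abstract; the patch size, stride, and similarity threshold are assumptions.

```python
import numpy as np

def dominant_translation(img, patch=8, stride=8, thresh=1.0):
    """Vote for the translation relating the most similar patch pairs."""
    H, W = img.shape
    keys, locs = [], []
    for y in range(0, H - patch, stride):
        for x in range(0, W - patch, stride):
            keys.append(img[y:y + patch, x:x + patch].ravel())
            locs.append((y, x))
    keys = np.stack(keys)
    votes = {}
    for i in range(len(keys)):
        d = np.sum((keys - keys[i]) ** 2, axis=1)   # similarity to patch i
        for j in np.where(d < thresh)[0]:
            if j <= i:
                continue
            t = (locs[j][0] - locs[i][0], locs[j][1] - locs[i][1])
            votes[t] = votes.get(t, 0) + 1
    return max(votes, key=votes.get) if votes else None

img = np.tile(np.arange(16.0).reshape(4, 4), (8, 8))  # periodic test pattern
print(dominant_translation(img))
```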
-
Patent number: 10671855
Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
Type: Grant
Filed: April 10, 2018
Date of Patent: June 2, 2020
Assignee: Adobe Inc.
Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
-
Patent number: 10609286
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
Type: Grant
Filed: June 13, 2017
Date of Patent: March 31, 2020
Assignee: Adobe Inc.
Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
-
Patent number: 10600239
Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
Type: Grant
Filed: January 22, 2018
Date of Patent: March 24, 2020
Assignee: Adobe Inc.
Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
-
Patent number: 10573040
Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
Type: Grant
Filed: November 8, 2016
Date of Patent: February 25, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukac, Elya Shechtman
-
Publication number: 20190361994
Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
Type: Application
Filed: May 22, 2018
Publication date: November 28, 2019
Applicant: Adobe Inc.
Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
-
Publication number: 20190311202
Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
Type: Application
Filed: April 10, 2018
Publication date: October 10, 2019
Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
-
Publication number: 20190228567
Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
Type: Application
Filed: January 22, 2018
Publication date: July 25, 2019
Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
-
Publication number: 20190130588
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Application
Filed: December 21, 2018
Publication date: May 2, 2019
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
-
Patent number: 10181199
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Grant
Filed: May 8, 2017
Date of Patent: January 15, 2019
Assignee: Adobe Systems Incorporated
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
-
Publication number: 20180359416
Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
Type: Application
Filed: June 13, 2017
Publication date: December 13, 2018
Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
-
Publication number: 20180322644
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Application
Filed: May 8, 2017
Publication date: November 8, 2018
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
-
Patent number: 10116897
Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.
Type: Grant
Filed: March 1, 2017
Date of Patent: October 30, 2018
Assignee: Adobe Systems Incorporated
Inventors: Joon-Young Lee, Zhaowen Wang, Xuaner Zhang, Kalyan Krishna Sunkavalli
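Reduced to a pure luminance gain (an assumption; the abstract also covers chrominance and per-pixel correspondences), the smoothing step can be sketched as filtering a log-exposure track and re-exposing each frame toward it.

```python
import numpy as np

def stabilize_luminance(frames, window=5):
    """frames: list of (H, W) luminance images sampled from a video."""
    # Per-frame photometric state: log mean luminance. Under a pure-gain
    # model, adjacent-frame transforms are ratios of these means.
    log_lum = np.array([np.log(max(float(np.mean(f)), 1e-8)) for f in frames])
    kernel = np.ones(window) / window
    smooth = np.convolve(log_lum, kernel, mode='same')  # temporal filter
    # Re-expose each frame so its mean luminance follows the smoothed curve.
    return [f * np.exp(s - l) for f, l, s in zip(frames, log_lum, smooth)]

rng = np.random.default_rng(2)
flicker = [rng.random((24, 24)) * (1.0 + 0.5 * np.sin(i)) for i in range(30)]
stable = stabilize_luminance(flicker)   # flicker is smoothed out
```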
-
Publication number: 20180255273
Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.
Type: Application
Filed: March 1, 2017
Publication date: September 6, 2018
Applicant: Adobe Systems Incorporated
Inventors: Joon-Young Lee, Zhaowen Wang, Xuaner Zhang, Kalyan Krishna Sunkavalli
-
Publication number: 20180130241
Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing an applicability to the multiple local symmetries of multiple candidate homographies contributed by the multiple local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
Type: Application
Filed: November 8, 2016
Publication date: May 10, 2018
Applicant: Adobe Systems Incorporated
Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukác, Elya Shechtman