Patents by Inventor Kalyan Krishna Sunkavalli

Kalyan Krishna Sunkavalli has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10818022
    Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: October 27, 2020
    Assignee: Adobe Inc.
    Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
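
The final two steps in the abstract above (selecting reference materials and fitting a material model) can be illustrated with a minimal Python sketch. It assumes the library is stored as per-material reflectance feature vectors and uses a clipped least-squares fit in place of a proper non-negative solver; all names and shapes are illustrative assumptions, not the patented method.

```python
import numpy as np

def select_reference_materials(surface_feature, library, k=3):
    """Return indices of the k library materials whose reflectance features
    are closest (in Euclidean distance) to the observed surface feature."""
    dists = np.linalg.norm(library - surface_feature, axis=1)
    return np.argsort(dists)[:k]

def fit_material_model(surface_feature, references):
    """Fit non-negative, normalized weights so that a weighted combination of
    reference features approximates the observed surface feature."""
    w, *_ = np.linalg.lstsq(references.T, surface_feature, rcond=None)
    w = np.clip(w, 0.0, None)              # a proper solver would use NNLS
    return w / w.sum() if w.sum() > 0 else w

# Toy usage: 100 library materials with 8-dimensional reflectance features.
library = np.random.rand(100, 8)
observed = np.random.rand(8)               # stands in for calibrated pixel data
idx = select_reference_materials(observed, library)
weights = fit_material_model(observed, library[idx])
```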
  • Patent number: 10747811
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: August 18, 2020
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
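
A minimal PyTorch sketch of the two-stream arrangement described above: two independent CNN encoders embed backgrounds and foregrounds into a shared space, trained with a triplet loss over (background anchor, matching foreground, mismatched foreground). The tiny architecture and margin value are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """One stream: a small CNN mapping an image to an L2-normalized embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)

fg_encoder = StreamEncoder()            # foreground-object stream
bg_encoder = StreamEncoder()            # background-scene stream
triplet = nn.TripletMarginLoss(margin=0.3)

# One training step: the background is the anchor; the foreground cut from the
# same image is the positive, a dissimilar foreground is the negative.
bg = torch.randn(4, 3, 64, 64)
fg_pos, fg_neg = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
loss = triplet(bg_encoder(bg), fg_encoder(fg_pos), fg_encoder(fg_neg))
loss.backward()
```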
  • Publication number: 20200250436
    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
    Type: Application
    Filed: April 23, 2020
    Publication date: August 6, 2020
    Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
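
The detect-and-propagate structure described in this abstract can be sketched as a network with two input branches, as below. A real implementation would match reference features non-locally rather than assume spatial alignment, so this is only a schematic under assumed shapes.

```python
import torch
import torch.nn as nn

class VOSNet(nn.Module):
    """Predict the current frame's target mask from (a) a reference frame and
    mask that define the object and (b) the previous frame's predicted mask."""
    def __init__(self):
        super().__init__()
        self.ref_enc = nn.Conv2d(4, 16, 3, padding=1)  # reference RGB + mask
        self.cur_enc = nn.Conv2d(4, 16, 3, padding=1)  # current RGB + prev mask
        self.head = nn.Conv2d(32, 1, 3, padding=1)     # fused features -> logits

    def forward(self, ref_rgb, ref_mask, cur_rgb, prev_mask):
        r = torch.relu(self.ref_enc(torch.cat([ref_rgb, ref_mask], dim=1)))
        c = torch.relu(self.cur_enc(torch.cat([cur_rgb, prev_mask], dim=1)))
        # Naive fusion by concatenation assumes the reference is roughly
        # aligned with the current frame; real systems match features globally.
        return self.head(torch.cat([r, c], dim=1))

net = VOSNet()
ref_rgb, cur_rgb = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
ref_mask, prev_mask = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
logits = net(ref_rgb, ref_mask, cur_rgb, prev_mask)   # sigmoid -> soft mask
```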
  • Publication number: 20200193696
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Application
    Filed: February 25, 2020
    Publication date: June 18, 2020
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
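
The weighting-vector step above lends itself to a short numerical sketch: given basis images captured with one direct light on at a time (which bake indirect bounce light into each basis), the runtime weights follow from a least-squares solve. Array layouts and function names here are assumptions, not the patented formulation.

```python
import numpy as np

def estimate_light_weights(basis_images, runtime_image):
    """Solve for weights w with sum_i w_i * basis_i ~= runtime image. Because
    each basis image was captured in the real scene, indirect reflections are
    already folded into it, so the solve accounts for them implicitly."""
    B = basis_images.reshape(len(basis_images), -1).T   # (pixels, lights)
    w, *_ = np.linalg.lstsq(B, runtime_image.ravel(), rcond=None)
    return np.clip(w, 0.0, None)        # light intensities cannot be negative

def relight_virtual_object(vo_basis_renders, weights):
    """Apply the same superposition to renders of the VO under each basis light."""
    return np.tensordot(weights, vo_basis_renders, axes=1)

# Toy usage: 3 direct lights, 32x32 grayscale images.
basis = np.random.rand(3, 32, 32)
scene = 0.8 * basis[0] + 0.1 * basis[2]      # unknown mix to recover
w = estimate_light_weights(basis, scene)     # approximately [0.8, 0.0, 0.1]
vo = relight_virtual_object(np.random.rand(3, 32, 32), w)
```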
  • Publication number: 20200186714
    Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
    Type: Application
    Filed: February 12, 2020
    Publication date: June 11, 2020
    Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
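
A schematic PyTorch version of the idea above: a small CNN regresses sun direction and a couple of sky-model parameters from a single LDR image, from which an HDR environment map could then be synthesized by a parametric sky model. The parameter set shown is a simplification assumed for illustration, not necessarily what the actual system predicts.

```python
import torch
import torch.nn as nn

class LightingNet(nn.Module):
    """Regress sky-lighting parameters from one LDR image; an HDR environment
    map would then be rendered from them with a parametric sky model."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.sun_dir = nn.Linear(64, 2)     # sun azimuth and elevation
        self.sky_params = nn.Linear(64, 2)  # e.g. turbidity and exposure

    def forward(self, ldr):
        f = self.backbone(ldr)
        return self.sun_dir(f), self.sky_params(f)

sun, sky = LightingNet()(torch.randn(1, 3, 128, 128))
```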
  • Publication number: 20200184697
    Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing how well multiple candidate homographies, each contributed by one of the local symmetries, apply across all of the detected local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
    Type: Application
    Filed: February 19, 2020
    Publication date: June 11, 2020
    Applicant: Adobe Inc.
    Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukáč, Elya Shechtman
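
For intuition, here is a deliberately restricted Python sketch of the voting idea in the abstract above: it considers only translational symmetry (the patent also covers rotations, reflections, scalings, and full homographies), matching patch triplets and voting over displacement vectors. It is brute-force and only practical on small images.

```python
import numpy as np

def best_matches(img, patch, stride=4):
    """Rank stride-spaced locations by SSD similarity to the given patch and
    return the three most similar (a 'repeated correspondence' triplet)."""
    ph, pw = patch.shape
    scores = {}
    for y in range(0, img.shape[0] - ph + 1, stride):
        for x in range(0, img.shape[1] - pw + 1, stride):
            scores[(y, x)] = np.sum((img[y:y+ph, x:x+pw] - patch) ** 2)
    return sorted(scores, key=scores.get)[:3]

def dominant_translation(img, patch_size=8, stride=8):
    """Vote over displacements between matched patches; the most frequent
    displacement approximates one global (translational) symmetry."""
    votes = {}
    for y in range(0, img.shape[0] - patch_size + 1, stride):
        for x in range(0, img.shape[1] - patch_size + 1, stride):
            for my, mx in best_matches(img, img[y:y+patch_size, x:x+patch_size]):
                if (my, mx) != (y, x):          # skip the self-match
                    votes[(my - y, mx - x)] = votes.get((my - y, mx - x), 0) + 1
    return max(votes, key=votes.get)

img = np.tile(np.random.rand(8, 8), (4, 4))     # repeating 8x8 texture
print(dominant_translation(img))                # a repeat offset, e.g. (0, 8)
```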
  • Patent number: 10671855
    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: June 2, 2020
    Assignee: Adobe Inc.
    Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
  • Patent number: 10609286
    Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
  • Patent number: 10600239
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: March 24, 2020
    Assignee: Adobe Inc.
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
  • Patent number: 10573040
    Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing how well multiple candidate homographies, each contributed by one of the local symmetries, apply across all of the detected local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: February 25, 2020
    Assignee: Adobe Inc.
    Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukáč, Elya Shechtman
  • Publication number: 20190361994
    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
    Type: Application
    Filed: May 22, 2018
    Publication date: November 28, 2019
    Applicant: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Kalyan Krishna Sunkavalli, Hengshuang Zhao, Brian Lynn Price
  • Publication number: 20190311202
    Abstract: Various embodiments describe video object segmentation using a neural network and the training of the neural network. The neural network both detects a target object in the current frame based on a reference frame and a reference mask that define the target object and propagates the segmentation mask of the target object for a previous frame to the current frame to generate a segmentation mask for the current frame. In some embodiments, the neural network is pre-trained using synthetically generated static training images and is then fine-tuned using training videos.
    Type: Application
    Filed: April 10, 2018
    Publication date: October 10, 2019
    Inventors: Joon-Young Lee, Seoungwug Oh, Kalyan Krishna Sunkavalli
  • Publication number: 20190228567
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Application
    Filed: January 22, 2018
    Publication date: July 25, 2019
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
  • Publication number: 20190130588
    Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
    Type: Application
    Filed: December 21, 2018
    Publication date: May 2, 2019
    Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
  • Patent number: 10181199
    Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: January 15, 2019
    Assignee: Adobe Systems Incorporated
    Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
  • Publication number: 20180359416
    Abstract: The present disclosure is directed toward systems and methods for predicting lighting conditions. In particular, the systems and methods described herein analyze a single low-dynamic range digital image to estimate a set of high-dynamic range lighting conditions associated with the single low-dynamic range lighting digital image. Additionally, the systems and methods described herein train a convolutional neural network to extrapolate lighting conditions from a digital image. The systems and methods also augment low-dynamic range information from the single low-dynamic range digital image by using a sky model algorithm to predict high-dynamic range lighting conditions.
    Type: Application
    Filed: June 13, 2017
    Publication date: December 13, 2018
    Inventors: Yannick Hold-Geoffroy, Sunil S. Hadap, Kalyan Krishna Sunkavalli, Emiliano Gambaretto
  • Publication number: 20180322644
    Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
    Type: Application
    Filed: May 8, 2017
    Publication date: November 8, 2018
    Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
  • Patent number: 10116897
    Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: October 30, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Joon-Young Lee, Zhaowen Wang, Xuaner Zhang, Kalyan Krishna Sunkavalli
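
The pipeline in this abstract reduces, in the simplest luminance-only case, to estimating a gain/offset transform between adjacent frames, smoothing the accumulated transform curve over time, and re-exposing each frame, as in the sketch below. It assumes aligned grayscale frames instead of real pixel correspondences, so it is illustrative only.

```python
import numpy as np

def gain_offset(prev_lum, cur_lum):
    """Least-squares gain and offset mapping the previous frame's luminance to
    the current frame's (all pixels used, i.e. assuming aligned frames)."""
    A = np.stack([prev_lum.ravel(), np.ones(prev_lum.size)], axis=1)
    (g, b), *_ = np.linalg.lstsq(A, cur_lum.ravel(), rcond=None)
    return g, b

def stabilize(frames, sigma=2.0):
    """Accumulate per-frame gain/offset curves, Gaussian-smooth them over time,
    and re-expose each frame to follow the smoothed curve."""
    gains, offs = [1.0], [0.0]
    for prev, cur in zip(frames, frames[1:]):
        g, b = gain_offset(prev, cur)
        gains.append(gains[-1] * g)          # cumulative transform from frame 0
        offs.append(offs[-1] * g + b)
    gains, offs = np.array(gains), np.array(offs)
    t = np.arange(len(frames))
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)        # Gaussian smoothing weights
    sg, so = w @ gains, w @ offs
    # Undo each frame's actual cumulative transform, then apply the smoothed one.
    return [sg_i * (f - o_i) / max(g_i, 1e-6) + so_i
            for f, g_i, o_i, sg_i, so_i in zip(frames, gains, offs, sg, so)]

frames = [np.random.rand(16, 16) * (1 + 0.2 * np.sin(i)) for i in range(10)]
stable = stabilize(frames)
```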
  • Publication number: 20180255273
    Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.
    Type: Application
    Filed: March 1, 2017
    Publication date: September 6, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Joon-Young Lee, Zhaowen Wang, Xuaner Zhang, Kalyan Krishna Sunkavalli
  • Publication number: 20180130241
    Abstract: Image modification using detected symmetry is described. In example implementations, an image modification module detects multiple local symmetries in an original image by discovering repeated correspondences that are each related by a transformation. The transformation can include a translation, a rotation, a reflection, a scaling, or a combination thereof. Each repeated correspondence includes three patches that are similar to one another and are respectively defined by three pixels of the original image. The image modification module generates a global symmetry of the original image by analyzing how well multiple candidate homographies, each contributed by one of the local symmetries, apply across all of the detected local symmetries. The image modification module associates individual pixels of the original image with a global symmetry indicator to produce a global symmetry association map.
    Type: Application
    Filed: November 8, 2016
    Publication date: May 10, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Kalyan Krishna Sunkavalli, Nathan Aaron Carr, Michal Lukáč, Elya Shechtman