Patents by Inventor Kalyan Sunkavalli

Kalyan Sunkavalli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10665011
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to render a virtual object in a digital scene by analyzing both global and local features of the scene and generating location-specific-lighting parameters for a designated position within the scene. For example, the disclosed systems extract and combine such global and local features from a digital scene using global network layers and local network layers of the local-lighting-estimation-neural network. In certain implementations, the disclosed systems can generate location-specific-lighting parameters using a neural-network architecture that combines global and local feature vectors to spatially vary lighting for different positions within a digital scene.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: May 26, 2020
    Assignees: ADOBE INC., UNIVERSITÉ LAVAL
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Jean-Francois Lalonde, Mathieu Garon
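The core idea in the abstract above is concatenating a global scene descriptor with a local descriptor of the queried position, so the predicted lighting varies spatially. A minimal numpy sketch of that combination step follows; the feature sizes, random weights, and the 27-parameter output (e.g. 9 spherical-harmonic coefficients per RGB channel) are all invented stand-ins, not the patented network:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """Single fully connected layer with ReLU."""
    return np.maximum(0.0, x @ w + b)

# Hypothetical feature vectors: a global descriptor of the whole scene
# and a local descriptor of a patch around the designated position.
global_feat = rng.normal(size=64)   # from the global network layers
local_feat = rng.normal(size=32)    # from the local network layers

# Randomly initialized weights stand in for a trained network.
w1, b1 = rng.normal(size=(96, 48)) * 0.1, np.zeros(48)
w2, b2 = rng.normal(size=(48, 27)) * 0.1, np.zeros(27)

# Concatenate global and local features so the predicted lighting
# can change with the queried position in the scene.
combined = np.concatenate([global_feat, local_feat])
hidden = dense(combined, w1, b1)
lighting_params = hidden @ w2 + b2  # e.g. SH coefficients, 9 per channel
print(lighting_params.shape)
```

Querying a different position would change only `local_feat`, which is what lets one forward pass per position produce spatially varying lighting.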
  • Publication number: 20200134834
    Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
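The replacement pipeline in the abstract above reduces to masked compositing plus an attribute adjustment. A toy numpy sketch under stated assumptions (4x4 grayscale arrays in place of photos, a hand-made mask in place of automatic segmentation, and a simple brightness match as the adjusted attribute):

```python
import numpy as np

# Toy 4x4 grayscale "images"; real use would operate on RGB photos.
original = np.full((4, 4), 0.2)
preferred = np.full((4, 4), 0.9)

# Hypothetical segmentation output: True marks the object region
# (e.g. sky) to be replaced; the rest is the foreground region.
object_mask = np.zeros((4, 4), dtype=bool)
object_mask[:2, :] = True  # top half is the object region

# Compose: keep the original foreground, take the object region
# from the preferred image.
composite = np.where(object_mask, preferred, original)

# Automatically adjust an attribute of the composite: here, a capped
# brightness match of the kept foreground toward the pasted region.
fg = composite[~object_mask]
obj = composite[object_mask]
gain = obj.mean() / max(fg.mean(), 1e-8)
composite[~object_mask] = np.clip(fg * min(gain, 1.5), 0.0, 1.0)
print(composite[0, 0], composite[3, 3])
```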
  • Publication number: 20200090389
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Application
    Filed: November 7, 2019
    Publication date: March 19, 2020
    Applicant: Adobe Inc.
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
  • Publication number: 20200074600
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Application
    Filed: November 8, 2019
    Publication date: March 5, 2020
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
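The two-phase schedule in the abstract above (pretrain on plentiful LDR data where bright sources saturate, then fine-tune the same parameters on HDR data with true intensities) can be illustrated on a deliberately tiny model. This is a sketch only: the "decoder" is a single learned scale, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the decoder: a single scale mapping an
# intermediate code to light intensity. Real decoders are deep nets.
def fit_scale(codes, targets, w, lr=0.05, steps=200):
    for _ in range(steps):
        pred = codes * w
        grad = 2.0 * np.mean((pred - targets) * codes)
        w -= lr * grad
    return w

codes = rng.uniform(0.5, 1.5, size=100)

# Phase 1: train a light *mask* decoder on LDR data, where bright
# sources saturate (targets clipped at 1.0).
ldr_targets = np.clip(3.0 * codes, 0.0, 1.0)
w = fit_scale(codes, ldr_targets, w=0.0)

# Phase 2: adjust the same parameters on HDR data, where true
# intensities are available, yielding the intensity decoder.
hdr_targets = 3.0 * codes
w = fit_scale(codes, hdr_targets, w=w)
print(round(w, 2))
```

The point of the schedule survives the simplification: phase 1 gives a useful initialization from clipped data, and phase 2 recovers the true intensity scale.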
  • Patent number: 10565758
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Grant
    Filed: June 14, 2017
    Date of Patent: February 18, 2020
    Assignee: Adobe Inc.
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
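"Facial edits by traversing paths of such manifold(s)," as the abstract above puts it, amounts to moving between latent codes and decoding each intermediate point. A minimal sketch, with a made-up decoder and random codes standing in for the learned manifold:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical latent codes on a learned appearance manifold for
# the same face under two lighting conditions.
z_dark = rng.normal(size=16)
z_lit = rng.normal(size=16)

def decode(z):
    """Stand-in decoder: any smooth map from latent code to image."""
    return np.tanh(z[:4])  # tiny 4-"pixel" render

# An edit traverses a path on the manifold: interpolate the latent
# codes and decode each point to get a smooth lighting change.
path = [decode((1 - t) * z_dark + t * z_lit)
        for t in np.linspace(0.0, 1.0, 5)]
print(len(path), path[0].shape)
```

Edits to viewpoint, expression, or age would follow the same pattern along different manifold directions.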
  • Patent number: 10546212
    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: January 28, 2020
    Assignee: Adobe Inc.
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
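The abstract above describes a two-tier search: a cheap, approximate oracle filters proposals, and an accurate similarity function ranks only the survivors. A self-contained sketch on 1-D patch descriptors; the mean-based oracle and the specific thresholds are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D "image" of patch descriptors; index 70 hides a near-match.
source = rng.normal(size=(100, 8))
target = source[70] + rng.normal(scale=0.01, size=8)

def oracle(patch):
    """Cheap, approximate test: compare only the patch mean."""
    return abs(patch.mean() - target.mean()) < 0.1

def similarity(patch):
    """Accurate but costlier test: full squared distance."""
    return np.sum((patch - target) ** 2)

# Sample proposals; a proposal reaches the full similarity check
# only if the oracle says its region is promising.
candidates = [i for i in rng.permutation(100) if oracle(source[i])]
best = min(candidates, key=lambda i: similarity(source[i]))
print(best)
```

Most proposals are rejected after one subtraction instead of a full 8-element distance, which is the source of the speedup.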
  • Patent number: 10489676
    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
  • Publication number: 20190347526
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for extracting material properties from a single digital image portraying one or more materials by utilizing a neural network encoder, a neural network material classifier, and one or more neural network material property decoders. In particular, in one or more embodiments, the disclosed systems and methods train the neural network encoder, the neural network material classifier, and one or more neural network material property decoders to accurately extract material properties from a single digital image portraying one or more materials. Furthermore, in one or more embodiments, the disclosed systems and methods train and utilize a rendering layer to generate model images from the extracted material properties.
    Type: Application
    Filed: May 9, 2018
    Publication date: November 14, 2019
    Inventors: Kalyan Sunkavalli, Zhengqin Li, Manmohan Chandraker
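The architecture in the abstract above shares one encoder between a material classifier and per-property decoders. A minimal numpy sketch of that wiring; layer sizes, random weights, and the choice of albedo/roughness as the decoded properties are assumptions, not the patented design:

```python
import numpy as np

rng = np.random.default_rng(5)

def layer(x, w):
    """Fully connected layer with ReLU."""
    return np.maximum(0.0, x @ w)

# Randomly initialized weights stand in for trained parameters.
enc_w = rng.normal(size=(12, 8)) * 0.3
cls_w = rng.normal(size=(8, 3)) * 0.3        # 3 material classes
dec_albedo_w = rng.normal(size=(8, 3)) * 0.3  # RGB albedo decoder
dec_rough_w = rng.normal(size=(8, 1)) * 0.3   # roughness decoder

image_feat = rng.normal(size=12)  # hypothetical image descriptor

# One shared encoder feeds both the material classifier and the
# per-property decoders.
code = layer(image_feat, enc_w)
material = int(np.argmax(code @ cls_w))
albedo = code @ dec_albedo_w
roughness = float((code @ dec_rough_w)[0])
print(material, albedo.shape)
```

A rendering layer, as in the abstract, would close the loop by re-rendering an image from `albedo` and `roughness` and comparing it to the input during training.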
  • Patent number: 10475169
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: November 12, 2019
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Publication number: 20190340810
    Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
    Type: Application
    Filed: May 3, 2018
    Publication date: November 7, 2019
    Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
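The last sentence of the abstract above relies on light transport being linear: an image under a weighted combination of lighting directions equals the same weighted combination of the per-direction images. A Lambertian toy sketch (the shading model here stands in for the relighting network's outputs and is not the patented method):

```python
import numpy as np

# Lambertian toy: the image under light direction l is max(0, N . l)
# per pixel, standing in for the relighting network's output.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.6, 0.0, 0.8]])  # two "pixels"

def relight(direction):
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return np.maximum(0.0, normals @ d)

# Target images, one per target lighting direction.
img_a = relight([0.0, 0.0, 1.0])  # overhead light
img_b = relight([1.0, 0.0, 0.0])  # side light

# A lighting configuration combining the two directions corresponds
# to the same combination of the relit images.
combined = 0.5 * img_a + 0.5 * img_b
print(combined)
```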
  • Patent number: 10467777
    Abstract: Texture modeling techniques for image data are described. In one or more implementations, texels in image data are discovered by one or more computing devices, each texel representing an element that repeats to form a texture pattern in the image data. Regularity of the texels in the image data is modeled by the one or more computing devices to define translations and at least one other transformation of texels in relation to each other.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Siying Liu, Kalyan Sunkavalli, Nathan A. Carr, Elya Shechtman
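Discovering a repeating texel, as in the abstract above, starts from finding the translation at which the texture best matches a shifted copy of itself. A 1-D sketch of that idea (real texel discovery works in 2-D and also models non-translational transformations):

```python
import numpy as np

# Synthetic 1-D texture row repeating every 5 samples.
texel = np.array([0.0, 1.0, 0.5, 0.2, 0.8])
row = np.tile(texel, 6)

def shift_score(signal, t):
    """Mean squared difference between the signal and its t-shift."""
    a, b = signal[:-t], signal[t:]
    return np.mean((a - b) ** 2)

# Score each candidate translation; the best shift is the texel's
# repeat period.
scores = {t: shift_score(row, t) for t in range(1, 10)}
period = min(scores, key=scores.get)
print(period)
```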
  • Publication number: 20190164261
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Application
    Filed: November 28, 2017
    Publication date: May 30, 2019
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Publication number: 20190042875
    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
    Type: Application
    Filed: October 1, 2018
    Publication date: February 7, 2019
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
  • Publication number: 20180365874
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Application
    Filed: June 14, 2017
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
  • Patent number: 10074033
    Abstract: Certain embodiments involve using labels to track high-frequency offsets for patch-matching. For example, a processor identifies an offset between a first source image patch and a first target image patch. If the first source image patch and the first target image patch are sufficiently similar, the processor updates a data structure to include a label specifying the offset. The processor associates, via the data structure, the first source image patch with the label. The processor subsequently selects certain high-frequency offsets, including the identified offset, from frequently occurring offsets in the data structure. The processor uses these offsets to identify a second target image patch, which is located at the identified offset from a second source image patch. The processor associates, via the data structure, the second source image patch with the identified offset based on a sufficient similarity between the second source image patch and the second target image patch.
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: September 11, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
  • Publication number: 20180211415
    Abstract: Texture modeling techniques for image data are described. In one or more implementations, texels in image data are discovered by one or more computing devices, each texel representing an element that repeats to form a texture pattern in the image data. Regularity of the texels in the image data is modeled by the one or more computing devices to define translations and at least one other transformation of texels in relation to each other.
    Type: Application
    Filed: March 23, 2018
    Publication date: July 26, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Siying Liu, Kalyan Sunkavalli, Nathan A. Carr, Elya Shechtman
  • Publication number: 20180121754
    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
    Type: Application
    Filed: November 3, 2016
    Publication date: May 3, 2018
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
  • Publication number: 20180101942
    Abstract: Certain embodiments involve using labels to track high-frequency offsets for patch-matching. For example, a processor identifies an offset between a first source image patch and a first target image patch. If the first source image patch and the first target image patch are sufficiently similar, the processor updates a data structure to include a label specifying the offset. The processor associates, via the data structure, the first source image patch with the label. The processor subsequently selects certain high-frequency offsets, including the identified offset, from frequently occurring offsets in the data structure. The processor uses these offsets to identify a second target image patch, which is located at the identified offset from a second source image patch. The processor associates, via the data structure, the second source image patch with the identified offset based on a sufficient similarity between the second source image patch and the second target image patch.
    Type: Application
    Filed: October 6, 2016
    Publication date: April 12, 2018
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
  • Patent number: 9892542
    Abstract: This disclosure relates to generating a bump map and/or a normal map from an image. For example, a method for generating a bump map includes receiving a texture image and a plurality of user-specified weights. The method further includes deriving a plurality of images from the texture image, the plurality of images varying from one another with respect to resolution or sharpness. The method further includes weighting individual images of the plurality of images according to the user-specified weights. The method further includes generating a bump map using the weighted individual images. The method further includes providing an image for display with texture added to a surface of an object in the image based on the bump map.
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: February 13, 2018
    Assignee: Adobe Systems Incorporated
    Inventor: Kalyan Sunkavalli
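The weighted-combination step in the abstract above can be sketched directly: derive blurred copies of the texture, weight them, and sum into a height (bump) map. The crude box blur and the specific weights are illustrative assumptions:

```python
import numpy as np

def box_blur(img, k):
    """Crude low-pass filter via a horizontal moving average."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)

rng = np.random.default_rng(3)
texture = rng.uniform(size=(8, 8))  # stand-in grayscale texture

# Derive images differing in sharpness, weight them by the
# user-specified weights, and sum into a bump (height) map.
derived = [texture, box_blur(texture, 3), box_blur(texture, 5)]
weights = [0.6, 0.3, 0.1]  # hypothetical user-specified weights
bump = sum(w * d for w, d in zip(weights, derived))

# A normal map can then follow from the bump map's gradients.
gy, gx = np.gradient(bump)
print(bump.shape, gx.shape)
```

Raising the weight on the sharp copy emphasizes fine detail in the relief; weighting the blurred copies emphasizes broad surface undulation.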
  • Patent number: 9679192
    Abstract: Systems and methods are disclosed herein for 3-Dimensional portrait reconstruction from a single photo. A face portion of a person depicted in a portrait photo is detected, and a 3-Dimensional model of the person depicted in the portrait photo is constructed. In one embodiment, constructing the 3-Dimensional model involves fitting hair portions of the portrait photo to one or more helices. In another embodiment, constructing the 3-Dimensional model involves applying positional and normal boundary conditions determined based on one or more relationships between face portion shape and hair portion shape. In yet another embodiment, constructing the 3-Dimensional model involves using shape from shading to capture fine-scale details in a form of surface normals, the shape from shading based on an adaptive albedo model and/or a lighting condition estimated based on shape fitting the face portion.
    Type: Grant
    Filed: April 24, 2015
    Date of Patent: June 13, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Linjie Luo, Sunil Hadap, Nathan Carr, Kalyan Sunkavalli, Menglei Chai