Patents by Inventor Sunil Hadap
Sunil Hadap has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11682126
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Grant
Filed: October 26, 2020
Date of Patent: June 20, 2023
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
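The abstract above does not disclose how the material model is fit from the selected reference materials; one plausible reading is a least-squares mixture over the library. The sketch below is illustrative only (the function name and the linear-mixture assumption are mine, not the patent's):

```python
import numpy as np

def fit_material_model(measurements, reference_materials):
    """Express measured reflectance samples as a weighted mix of
    reference materials from a library (least-squares fit).

    measurements: (n_samples,) reflectance values for one pixel/patch
    reference_materials: (n_refs, n_samples) reflectance of each
        library material under the same lighting/view samples
    Returns one weight per reference material.
    """
    # Solve min_w || A w - b ||^2 for the mixing weights w.
    A = reference_materials.T            # (n_samples, n_refs)
    w, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return w
```

A real capture pipeline would also constrain the weights (e.g. non-negativity) and fit full BRDF parameters rather than raw reflectance samples.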
-
Patent number: 11257284
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Grant
Filed: May 13, 2020
Date of Patent: February 22, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
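Because light transport is additive, the combination step the abstract describes (a target lighting configuration built from several target lighting directions) can be sketched as a weighted sum of the single-direction relit images. This is an illustrative reading, not the claimed method:

```python
import numpy as np

def combine_relit_images(relit_images, weights):
    """Composite single-direction relightings into one image lit by a
    combination of those directions. Light transport is linear, so a
    multi-light configuration is a weighted sum of per-light renders.

    relit_images: (n_lights, H, W, 3) float images, one per direction
    weights: (n_lights,) intensity of each light in the configuration
    """
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, relit_images, axes=1)  # (H, W, 3)
```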
-
Patent number: 11158117
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
Type: Grant
Filed: May 18, 2020
Date of Patent: October 26, 2021
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
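Location-specific lighting is commonly parameterized as spherical-harmonic coefficients. Assuming that representation (the abstract above does not fix one), diffusely shading a virtual object at the designated position could use the standard band-0 to band-2 real SH evaluation:

```python
import numpy as np

def sh_basis(n):
    """Real spherical-harmonic basis (bands 0-2) for a unit normal n."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def shade(normal, sh_coeffs):
    """Diffuse shading of a surface normal under SH lighting.

    sh_coeffs: (9, 3) RGB coefficients estimated for one scene
    position; spatially varying lighting means these differ per
    designated position in the scene.
    """
    return sh_basis(normal) @ sh_coeffs   # (3,) RGB radiance
```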
-
Patent number: 10964060
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
Type: Grant
Filed: November 6, 2019
Date of Patent: March 30, 2021
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
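The extraction step described above (many crops with known camera parameters from one panorama) can be sketched as sampling a perspective view from an equirectangular image. Field of view and pitch are the sampled parameters here; yaw is omitted for brevity, and the coordinate conventions and nearest-neighbour sampling are illustrative assumptions:

```python
import numpy as np

def crop_from_panorama(pano, fov_deg, pitch_deg, out_size):
    """Extract a perspective crop with known camera parameters from an
    equirectangular panorama (nearest-neighbour sampling).

    pano: (H, W, 3) equirectangular image covering 360 x 180 degrees
    fov_deg: horizontal field of view of the virtual camera
    pitch_deg: camera pitch (elevation); yaw fixed at 0 for brevity
    out_size: (h, w) of the crop
    """
    H, W = pano.shape[:2]
    h, w = out_size
    f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal, pixels

    # Rays through each output pixel; camera looks down +z, y is up.
    u = np.arange(w) - (w - 1) / 2.0
    v = np.arange(h) - (h - 1) / 2.0
    uu, vv = np.meshgrid(u, v)
    d = np.stack([uu, -vv, np.full_like(uu, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    # Rotate the rays by the camera pitch about the x-axis.
    p = np.radians(pitch_deg)
    R = np.array([[1, 0, 0],
                  [0, np.cos(p), -np.sin(p)],
                  [0, np.sin(p),  np.cos(p)]])
    d = d @ R.T

    # Ray direction -> spherical angles -> panorama pixel.
    lon = np.arctan2(d[..., 0], d[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))      # [-pi/2, pi/2]
    px = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    py = ((0.5 - lat / np.pi) * (H - 1)).astype(int)
    return pano[py, px]
```

Sampling many (fov, pitch) pairs from the plausible range and rendering one crop per pair yields labeled training images essentially for free.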
-
Patent number: 10957026
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
Type: Grant
Filed: September 9, 2019
Date of Patent: March 23, 2021
Assignee: Adobe Inc.
Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
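Estimated sun-and-sky parameters like these are typically consumed by a parametric sky model when relighting inserted objects. As a minimal illustration (the coordinate conventions are assumed, not taken from the patent), the estimated sun position can be converted into a light direction for rendering:

```python
import numpy as np

def sun_direction(azimuth_deg, elevation_deg):
    """Unit vector toward the sun from azimuth/elevation angles
    (y-up convention; azimuth measured from +z toward +x)."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    return np.array([np.cos(el) * np.sin(az),
                     np.sin(el),
                     np.cos(el) * np.cos(az)])
```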
-
Patent number: 10950037
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images, can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
Type: Grant
Filed: July 12, 2019
Date of Patent: March 16, 2021
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
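The per-pixel depth probabilities mentioned above suggest a standard plane-sweep readout: softmax the per-plane scores into probabilities, then take the expected depth. The sketch below assumes that formulation; it is not taken from the patent text:

```python
import numpy as np

def expected_depth(plane_scores, plane_depths):
    """Per-pixel depth from a plane-sweep volume: softmax the per-plane
    scores into probabilities, then take the expectation over depths.

    plane_scores: (n_planes, H, W) matching scores per depth plane
    plane_depths: (n_planes,) depth of each sweep plane
    """
    # Numerically stable softmax over the plane axis.
    e = np.exp(plane_scores - plane_scores.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)          # (n_planes, H, W)
    return np.tensordot(plane_depths, probs, axes=1)  # (H, W)
```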
-
Publication number: 20210073955
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
Type: Application
Filed: September 9, 2019
Publication date: March 11, 2021
Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
-
Patent number: 10936909
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped in roughly the same distribution. Such a trained neural network system can be used, given an input low-dynamic range image, to determine high-dynamic range lighting parameters.
Type: Grant
Filed: November 12, 2018
Date of Patent: March 2, 2021
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Jonathan Eisenmann, Jinsong Zhang, Emiliano Gambaretto
-
Publication number: 20210042944
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Application
Filed: October 26, 2020
Publication date: February 11, 2021
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
-
Publication number: 20210012561
Abstract: Embodiments are generally directed to generating novel images of an object having a novel viewpoint and a novel lighting direction based on sparse images of the object. A neural network is trained with training images rendered from a 3D model. Utilizing the 3D model, training images, ground truth predictive images from particular viewpoint(s), and ground truth predictive depth maps of the ground truth predictive images, can be easily generated and fed back through the neural network for training. Once trained, the neural network can receive a sparse plurality of images of an object, a novel viewpoint, and a novel lighting direction. The neural network can generate a plane sweep volume based on the sparse plurality of images, and calculate depth probabilities for each pixel in the plane sweep volume. A predictive output image of the object, having the novel viewpoint and novel lighting direction, can be generated and output.
Type: Application
Filed: July 12, 2019
Publication date: January 14, 2021
Inventors: Kalyan K. Sunkavalli, Zexiang Xu, Sunil Hadap
-
Patent number: 10818022
Abstract: Methods and systems are provided for performing material capture to determine properties of an imaged surface. A plurality of images can be received depicting a material surface. The plurality of images can be calibrated to align corresponding pixels of the images and determine reflectance information for at least a portion of the aligned pixels. After calibration, a set of reference materials from a material library can be selected using the calibrated images. The set of reference materials can be used to determine a material model that accurately represents properties of the material surface.
Type: Grant
Filed: December 21, 2018
Date of Patent: October 27, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Sunil Hadap, Joon-Young Lee, Zhuo Hui
-
Publication number: 20200302684
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
Type: Application
Filed: May 18, 2020
Publication date: September 24, 2020
Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
-
Publication number: 20200273237
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Application
Filed: May 13, 2020
Publication date: August 27, 2020
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
-
Patent number: 10692276
Abstract: The present disclosure relates to using an object relighting neural network to generate digital images portraying objects under target lighting directions based on sets of digital images portraying the objects under other lighting directions. For example, in one or more embodiments, the disclosed systems provide a sparse set of input digital images and a target lighting direction to an object relighting neural network. The disclosed systems then utilize the object relighting neural network to generate a target digital image that portrays the object illuminated by the target lighting direction. Using a plurality of target digital images, each portraying a different target lighting direction, the disclosed systems can also generate a modified digital image portraying the object illuminated by a target lighting configuration that comprises a combination of the different target lighting directions.
Type: Grant
Filed: May 3, 2018
Date of Patent: June 23, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Zexiang Xu, Sunil Hadap
-
Patent number: 10692265
Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age, aspects that cannot be represented using previous models.
Type: Grant
Filed: November 7, 2019
Date of Patent: June 23, 2020
Assignee: Adobe Inc.
Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
-
Patent number: 10692277
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
Type: Grant
Filed: March 21, 2019
Date of Patent: June 23, 2020
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
-
Patent number: 10665011
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to render a virtual object in a digital scene by using a local-lighting-estimation-neural network to analyze both global and local features of the digital scene and generate location-specific-lighting parameters for a designated position within the digital scene. For example, the disclosed systems extract and combine such global and local features from a digital scene using global network layers and local network layers of the local-lighting-estimation-neural network. In certain implementations, the disclosed systems can generate location-specific-lighting parameters using a neural-network architecture that combines global and local feature vectors to spatially vary lighting for different positions within a digital scene.
Type: Grant
Filed: May 31, 2019
Date of Patent: May 26, 2020
Assignees: Adobe Inc., Université Laval
Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Jean-Francois Lalonde, Mathieu Garon
-
Publication number: 20200151509
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped in roughly the same distribution. Such a trained neural network system can be used, given an input low-dynamic range image, to determine high-dynamic range lighting parameters.
Type: Application
Filed: November 12, 2018
Publication date: May 14, 2020
Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Jonathan Eisenmann, Jinsong Zhang, Emiliano Gambaretto
-
Publication number: 20200090389
Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age, aspects that cannot be represented using previous models.
Type: Application
Filed: November 7, 2019
Publication date: March 19, 2020
Applicant: Adobe Inc.
Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
-
Publication number: 20200074682
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
Type: Application
Filed: November 6, 2019
Publication date: March 5, 2020
Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto