Patents by Inventor Jonathan Eisenmann

Jonathan Eisenmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972534
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Maxine Perroni-Scharf, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann
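
The retrieval at the heart of the entry above reduces to nearest-neighbor search over deep feature embeddings. A minimal sketch follows, with loudly labeled stand-ins: `deep_visual_features` replaces the patent's visual neural network with simple channel statistics (a hypothetical placeholder), and cosine similarity plays the role of the visual similarity metric.

```python
import numpy as np

def deep_visual_features(texture_map: np.ndarray) -> np.ndarray:
    """Stand-in for the visual neural network: a unit-norm descriptor
    built from per-channel mean and standard deviation (hypothetical
    placeholder, chosen only to keep the sketch self-contained)."""
    v = np.concatenate([texture_map.mean(axis=(0, 1)),
                        texture_map.std(axis=(0, 1))])
    return v / (np.linalg.norm(v) + 1e-8)

def most_similar_source(scene_map, source_maps):
    """Index of the source texture map with the highest cosine
    similarity (the 'visual similarity metric') to the scene map."""
    q = deep_visual_features(scene_map)
    return int(np.argmax([q @ deep_visual_features(s) for s in source_maps]))

# Replace each scene texture map with its closest source material.
rng = np.random.default_rng(0)
sources = [rng.random((64, 64, 3)) for _ in range(10)]
scene = [rng.random((64, 64, 3)) for _ in range(3)]
replaced = [sources[most_similar_source(m, sources)] for m in scene]
```
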
  • Publication number: 20240127402
    Abstract: In some examples, a computing system accesses a field of view (FOV) image that has a field of view less than 360 degrees and has low dynamic range (LDR) values. The computing system estimates lighting parameters from a scene depicted in the FOV image and generates a lighting image based on the lighting parameters. The computing system further generates lighting features from the lighting image and image features from the FOV image. These features are aggregated into aggregated features, and a machine learning model is applied to the image features and the aggregated features to generate a panorama image having high dynamic range (HDR) values.
    Type: Application
    Filed: August 25, 2023
    Publication date: April 18, 2024
    Inventors: Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Sai Bi, Jonathan Eisenmann, Jean-François Lalonde
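
A rough skeleton of the pipeline in the entry above, assuming nothing about the real architecture: the module below, its layer sizes, and its names are illustrative inventions, not the patented network. What it shows is the aggregation of lighting features with image features before decoding positive, unbounded (HDR) values.

```python
import torch
import torch.nn as nn

class PanoramaSketch(nn.Module):
    """Illustrative skeleton only: encode the FOV image and the lighting
    image separately, aggregate the features, and decode HDR values.
    The real model also outpaints to a full 360-degree panorama; here
    output and input sizes match for brevity."""
    def __init__(self):
        super().__init__()
        self.image_enc = nn.Conv2d(3, 16, 3, padding=1)   # FOV image features
        self.light_enc = nn.Conv2d(3, 16, 3, padding=1)   # lighting-image features
        self.decoder = nn.Conv2d(32, 3, 3, padding=1)     # panorama head

    def forward(self, fov_ldr, lighting_image):
        aggregated = torch.cat([self.image_enc(fov_ldr),
                                self.light_enc(lighting_image)], dim=1)
        # Softplus keeps outputs positive and unbounded, as HDR values are.
        return nn.functional.softplus(self.decoder(aggregated))

model = PanoramaSketch()
hdr = model(torch.rand(1, 3, 128, 256), torch.rand(1, 3, 128, 256))
```
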
  • Publication number: 20230360170
    Abstract: Embodiments are disclosed for generating 360-degree panoramas from input narrow field of view images. A method of generating 360-degree panoramas may include obtaining an input image and a guide, generating a panoramic projection of the input image, and generating, by a panorama generator, a 360-degree panorama based on the panoramic projection and the guide, wherein the panorama generator is a guided co-modulation generator network trained to generate a 360-degree panorama from the input image based on the guide.
    Type: Application
    Filed: November 15, 2022
    Publication date: November 9, 2023
    Applicant: Adobe Inc.
    Inventors: Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Vladimir Kim, Jonathan Eisenmann, Jean-François Lalonde
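
The "panoramic projection" step from the entry above can be made concrete: place a narrow-FOV pinhole image onto an equirectangular canvas, leaving the unseen region empty for the generator to complete. A sketch under a pinhole-camera assumption (the guided co-modulation generator itself is out of scope):

```python
import numpy as np

def panoramic_projection(img, h_fov_deg=90.0, pano_hw=(256, 512)):
    """Project a narrow-FOV image onto an equirectangular canvas; pixels
    outside the input FOV stay zero for the generator to fill."""
    H, W = pano_hw
    ih, iw = img.shape[:2]
    f = (iw / 2) / np.tan(np.radians(h_fov_deg) / 2)   # focal length in pixels
    lon = (np.arange(W) / W - 0.5) * 2 * np.pi          # longitude per column
    lat = (0.5 - (np.arange(H) + 0.5) / H) * np.pi      # latitude per row
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)                       # world ray directions
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    u = f * x / np.maximum(z, 1e-6) + iw / 2            # perspective projection
    v = -f * y / np.maximum(z, 1e-6) + ih / 2
    valid = (z > 0) & (u >= 0) & (u < iw) & (v >= 0) & (v < ih)
    canvas = np.zeros((H, W, 3), dtype=img.dtype)
    canvas[valid] = img[v[valid].astype(int), u[valid].astype(int)]
    return canvas, valid

pano, mask = panoramic_projection(np.random.rand(240, 320, 3))
```
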
  • Patent number: 11810326
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: November 7, 2023
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
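
The geometric-model half of the entry above admits a classical sketch: intersect vanishing lines in homogeneous coordinates to get a vanishing point, then exploit the relation f² = d_h · d_v between the horizon offset and the vertical-vanishing-point offset (valid for a central principal point and zero roll). The line inputs below are hypothetical; the patent's neural network would supply them via the vanishing edge map.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of vanishing lines in homogeneous
    coordinates; each line is given by two (x, y) points."""
    eqs = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])
           for (x1, y1), (x2, y2) in lines]
    _, _, vt = np.linalg.svd(np.asarray(eqs))
    vp = vt[-1]
    return vp[:2] / vp[2]

def pitch_and_focal(vertical_vp, horizon_y, cy):
    """Toy geometric model: the horizon sits f*tan(pitch) from the
    principal point and the vertical vanishing point f/tan(pitch),
    so f**2 = d_h * d_v (central principal point, zero roll)."""
    d_h = abs(horizon_y - cy)
    d_v = abs(vertical_vp[1] - cy)
    f = np.sqrt(d_h * d_v)
    return np.degrees(np.arctan2(d_h, f)), f

# Hypothetical near-vertical lines standing in for the edge map.
vvp = vanishing_point([((100, 10), (104, 470)), ((420, 12), (415, 468))])
pitch_deg, focal_px = pitch_and_focal(vvp, horizon_y=180.0, cy=240.0)
```
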
  • Patent number: 11694416
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: July 4, 2023
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
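
A toy data model for the feature-set handles in the entry above. Grouping by a single scalar attribute with a tolerance is a deliberate simplification of the patent's richer property, relationship, and distance tests; all names below are illustrative.

```python
import numpy as np

class EditingHandle:
    """One handle that edits every salient feature in its feature set
    together (illustrative data model, not the patented system)."""
    def __init__(self, feature_positions):
        self.features = np.asarray(feature_positions, dtype=float)
        self.anchor = self.features.mean(axis=0)  # where the handle is shown

    def drag(self, delta):
        """Manipulating the handle moves all grouped features in lockstep."""
        self.features += np.asarray(delta, dtype=float)
        self.anchor = self.features.mean(axis=0)
        return self.features

def group_by_attribute(positions, attrs, tol=1e-3):
    """Group features whose scalar attribute (e.g. a hole radius) matches
    within a tolerance -- a stand-in for the patent's feature-attribute
    tests."""
    groups = {}
    for pos, a in zip(positions, attrs):
        groups.setdefault(round(a / tol), []).append(pos)
    return [EditingHandle(g) for g in groups.values()]

positions = [[0, 0, 0], [1, 0, 0], [0, 5, 0]]
radii = [0.500, 0.5002, 2.0]                # two features share a radius
handles = group_by_attribute(positions, radii)
handles[0].drag([0.0, 0.2, 0.0])            # edits both matched features at once
```
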
  • Publication number: 20230141395
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
    Type: Application
    Filed: November 5, 2021
    Publication date: May 11, 2023
    Inventors: Maxine Perroni-Scharf, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann
  • Patent number: 11443412
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: September 13, 2022
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
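
The two-phase recipe in the entry above can be sketched with a tiny encoder/decoder. Shapes, layer counts, and the loss are assumptions, not the patented network; the point is that phase 1 fits a light-mask decoder on LDR targets and phase 2 continues from those weights on HDR intensity targets.

```python
import torch
import torch.nn as nn

# Hypothetical shapes throughout: encode an image to a latent map, then
# decode a single-channel light map, mirroring the encoder/decoder split.
encoder = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(),
                        nn.Conv2d(8, 16, 3, 2, 1), nn.ReLU())
decoder = nn.Sequential(nn.ConvTranspose2d(16, 8, 4, 2, 1), nn.ReLU(),
                        nn.ConvTranspose2d(8, 1, 4, 2, 1), nn.Softplus())

def train_phase(images, targets, steps, lr):
    """One phase of the multi-phase process: fit decoder(encoder(x))
    to the given targets."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(decoder(encoder(images)), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

ldr = torch.rand(4, 3, 64, 64)
masks = torch.rand(4, 1, 64, 64)                   # phase-1 light-mask targets
hdr_intensity = torch.rand(4, 1, 64, 64) * 10.0    # phase-2 targets exceed 1.0
train_phase(ldr, masks, steps=5, lr=1e-3)          # phase 1: LDR light masks
train_phase(ldr, hdr_intensity, steps=5, lr=1e-4)  # phase 2: adjust for HDR
```
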
  • Publication number: 20220148135
    Abstract: A plurality of pixel-based sampling points are identified within an image, wherein sampling points of a pixel are distributed within the pixel. For individual sampling points of individual pixels, a corresponding radiance vector is estimated. A radiance vector includes one or more radiance values characterizing light received at a sampling point. A first machine learning module generates, for each pixel, a corresponding intermediate radiance feature vector, based on the radiance vectors associated with the sampling points within that pixel. A second machine learning module generates, for each pixel, a corresponding final radiance feature vector, based on an intermediate radiance feature vector for that pixel, and one or more other intermediate radiance feature vectors for one or more other pixels neighboring that pixel. One or more kernels are generated, based on the final radiance feature vectors, and applied to corresponding pixels of the image, to generate a lower noise image.
    Type: Application
    Filed: November 10, 2020
    Publication date: May 12, 2022
    Applicant: Adobe Inc.
    Inventors: Mustafa Isik, Michael Yanis Gharbi, Matthew David Fisher, Krishna Bhargava Mullia Lakshminarayana, Jonathan Eisenmann, Federico Perazzi
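
The final step of the entry above, applying predicted per-pixel kernels to obtain a lower-noise image, is easy to sketch in isolation. The two machine-learning modules are out of scope here, so the kernels below are random stand-ins for what the second module would output.

```python
import numpy as np

def apply_kernels(noisy, kernels):
    """Filter each pixel with its own kernel (e.g. one generated from the
    final radiance feature vectors). 'kernels' has shape (H, W, k, k);
    each kernel is normalized to sum to 1 before application."""
    H, W, _ = noisy.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros_like(noisy)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]          # local neighborhood
            w = kernels[y, x] / kernels[y, x].sum()
            out[y, x] = (patch * w[..., None]).sum(axis=(0, 1))
    return out

rng = np.random.default_rng(1)
noisy = rng.random((32, 32, 3))
kernels = rng.random((32, 32, 5, 5))  # stand-in for the network's output
clean = apply_kernels(noisy, kernels)
```
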
  • Patent number: 11276150
    Abstract: In some embodiments, an image manipulation application receives a two-dimensional background image and projects the background image onto a sphere to generate a sphere image. Based on the sphere image, an unfilled environment map containing a hole area lacking image content can be generated. A portion of the unfilled environment map can be projected to an unfilled projection image using a map projection. The unfilled projection image contains the hole area. A hole filling model is applied to the unfilled projection image to generate a filled projection image containing image content for the hole area. A filled environment map can be generated by applying an inverse projection of the map projection on the filled projection image and by combining the unfilled environment map with the generated image content for the hole area of the environment map.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: March 15, 2022
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Zhe Lin, Matthew Fisher
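
A sketch of the composite step from the entry above: run a hole-filling model on the projected view, then keep original content outside the hole and the model's output inside it. The projections to and from the map projection are elided (identity here), and `mean_color_filler` is a hypothetical stand-in for the learned model.

```python
import numpy as np

def fill_environment_map(env, hole_mask, hole_filler):
    """Combine the unfilled environment map with generated content:
    original pixels stay where they existed; the filler's output is
    used inside the hole area."""
    filled_view = hole_filler(env)                    # model output
    return np.where(hole_mask[..., None], filled_view, env)

def mean_color_filler(image):
    """Hypothetical stand-in for the learned hole-filling model."""
    return np.full_like(image, image.mean(axis=(0, 1)))

env = np.random.rand(64, 128, 3)
hole = np.zeros((64, 128), dtype=bool)
hole[:16] = True                  # zenith region unseen by the background photo
env[hole] = 0.0
completed = fill_environment_map(env, hole, mean_color_filler)
```
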
  • Publication number: 20210358170
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Application
    Filed: July 28, 2021
    Publication date: November 18, 2021
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
  • Publication number: 20210256775
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Application
    Filed: March 22, 2021
    Publication date: August 19, 2021
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
  • Patent number: 11094083
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: August 17, 2021
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
  • Patent number: 10991085
    Abstract: Embodiments herein describe a framework for classifying images. In some embodiments, it is determined whether an image includes synthetic image content. If it does, characteristics of the image are analyzed to determine if the image includes characteristics particular to panoramic images (e.g., the image possesses a threshold equivalency of pixel values along the top and/or bottom boundaries, or the difference between the summed pixel values of the right vertical boundary and the summed pixel values of the left vertical boundary is less than or equal to a threshold value). If the image includes characteristics particular to panoramic images, the image is classified as a synthetic panoramic image. If the image is determined to not include synthetic image content, a neural network is applied to the image and the image is classified as one of non-synthetic panoramic or non-synthetic non-panoramic.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: April 27, 2021
    Assignee: Adobe Inc.
    Inventors: Qi Sun, Li-Yi Wei, Joon-Young Lee, Jonathan Eisenmann, Jinwoong Jung, Byungmoon Kim
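
The boundary heuristics in the entry above translate almost directly into code: a true 360-degree panorama wraps horizontally (left and right columns nearly match) and is nearly uniform along its top and bottom rows (the poles). The thresholds below are illustrative, not the patent's.

```python
import numpy as np

def looks_panoramic(img, edge_tol=0.02, pole_tol=0.05):
    """Boundary checks for a float RGB image in [0, 1]: horizontal wrap
    plus near-uniform zenith/nadir rows. Tolerances are illustrative."""
    wrap_gap = np.abs(img[:, 0] - img[:, -1]).mean()  # L-R boundary match
    top_var = img[0].std(axis=0).mean()               # zenith uniformity
    bottom_var = img[-1].std(axis=0).mean()           # nadir uniformity
    return wrap_gap <= edge_tol and max(top_var, bottom_var) <= pole_tol

def classify(img, is_synthetic, cnn=None):
    """Decision flow: heuristics for synthetic content; a neural network
    (stubbed here) for non-synthetic images."""
    if is_synthetic:
        return ("synthetic panoramic" if looks_panoramic(img)
                else "synthetic non-panoramic")
    return cnn(img) if cnn else "non-synthetic non-panoramic"
```
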
  • Patent number: 10964060
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Matthew David Fisher, Jonathan Eisenmann, Emiliano Gambaretto
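
The training-data recipe in the entry above samples perspective crops with known calibration parameters from a single equirectangular panorama. A minimal sketch assuming a pinhole camera, with roll omitted for brevity:

```python
import numpy as np

def extract_view(pano, fov_deg, pitch_deg, h=64, w=64):
    """Sample one perspective crop (with known FOV and pitch) from an
    equirectangular panorama, yielding an (image, labels) training pair."""
    P, Q = pano.shape[:2]
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)
    u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    d = np.stack([u, -v, np.full_like(u, f, dtype=float)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)   # camera-space rays
    p = np.radians(pitch_deg)                        # rotate rays by pitch
    y = d[..., 1] * np.cos(p) - d[..., 2] * np.sin(p)
    z = d[..., 1] * np.sin(p) + d[..., 2] * np.cos(p)
    lon = np.arctan2(d[..., 0], z)
    lat = np.arcsin(np.clip(y, -1.0, 1.0))
    col = ((lon / (2 * np.pi) + 0.5) * Q).astype(int) % Q
    row = np.clip(((0.5 - lat / np.pi) * P).astype(int), 0, P - 1)
    return pano[row, col]

rng = np.random.default_rng(2)
pano = rng.random((256, 512, 3))
crops = [(extract_view(pano, fov, pitch), (fov, pitch))  # image + known labels
         for fov in (45, 60, 90) for pitch in (-10, 0, 10)]
```
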
  • Patent number: 10957117
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
  • Patent number: 10957026
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-François Lalonde
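
A toy parametric sun/sky model illustrating the kind of HDR lighting the estimated parameters in the entry above describe. The exponential sun lobe and folding turbidity into a single sharpness value are simplifications, not the patented model.

```python
import numpy as np

def render_sky(sky_color, sun_color, sun_dir, sun_sharpness, h=64, w=128):
    """Toy parametric sky: HDR radiance = base sky color plus a sun lobe
    concentrated around the sun direction (turbidity is crudely folded
    into 'sun_sharpness' here)."""
    lon = (np.arange(w) / w - 0.5) * 2 * np.pi
    lat = (0.5 - (np.arange(h) + 0.5) / h) * np.pi
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon), np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    cos_angle = np.clip(dirs @ np.asarray(sun_dir), -1.0, 1.0)
    sun_lobe = np.exp(sun_sharpness * (cos_angle - 1.0))  # peaks at the sun
    return np.asarray(sky_color) + sun_lobe[..., None] * np.asarray(sun_color)

sun = np.array([0.3, 0.6, 0.74])
sun /= np.linalg.norm(sun)
# Sun radiance far above 1.0 is what makes the output high dynamic range.
hdr_sky = render_sky([0.2, 0.3, 0.5], [500.0, 480.0, 450.0], sun, 200.0)
```
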
  • Publication number: 20210073955
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 11, 2021
    Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-François Lalonde
  • Patent number: 10936909
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped to roughly the same distribution. Such a trained neural network system can take a low-dynamic range image as input and determine high-dynamic range lighting parameters.
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: March 2, 2021
    Assignee: Adobe Inc.
    Inventors: Kalyan K. Sunkavalli, Sunil Hadap, Jonathan Eisenmann, Jinsong Zhang, Emiliano Gambaretto
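
The render-based loss in the entry above requires the render to be differentiable so gradients reach the estimated parameters. A minimal sketch where the "simple scene" is three fixed surface normals shaded by a directional light (direction plus intensity); everything here is illustrative.

```python
import torch

def render_simple_scene(light_params):
    """Toy differentiable 'render': shade fixed surface normals with a
    directional light. light_params = (dx, dy, dz, intensity)."""
    normals = torch.tensor([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
    direction = light_params[:3] / light_params[:3].norm()
    return light_params[3] * torch.clamp(normals @ direction, min=0.0)

est = torch.tensor([0.2, 0.9, 0.1, 2.0], requires_grad=True)  # network output
gt = torch.tensor([0.0, 1.0, 0.0, 3.0])                       # ground truth
render_loss = torch.nn.functional.mse_loss(render_simple_scene(est),
                                           render_simple_scene(gt))
render_loss.backward()   # gradient flows through the render back to est
```
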
  • Patent number: 10831333
    Abstract: The present disclosure is directed toward systems and methods for manipulating a camera perspective within a digital environment for rendering three-dimensional objects against a background digital image. In particular, the systems and methods described herein display a view of a three-dimensional space including a horizon, a ground plane, and a three-dimensional object in accordance with a camera perspective of the three-dimensional space. The systems and methods further manipulate the camera perspective in response to, and in accordance with, user interaction with one or more options. The systems and methods manipulate the camera perspective relative to the three-dimensional space and thereby change the view of the three-dimensional space within a user interface.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: November 10, 2020
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Bushra Mahmood
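
The camera-perspective manipulation described above typically boils down to orbiting the camera about a target point in response to UI input. A minimal sketch of that mapping (the UI itself and the look-at matrix are omitted):

```python
import numpy as np

def orbit_camera(target, distance, yaw_deg, pitch_deg):
    """Compute an eye position orbiting 'target' at a fixed distance;
    UI options such as drag or sliders would map to yaw/pitch deltas.
    Pairing the result with a look-at matrix gives the new view."""
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    offset = distance * np.array([np.cos(pitch) * np.sin(yaw),
                                  np.sin(pitch),
                                  np.cos(pitch) * np.cos(yaw)])
    return np.asarray(target, dtype=float) + offset

eye = orbit_camera(target=[0.0, 0.0, 0.0], distance=5.0,
                   yaw_deg=30, pitch_deg=15)
```
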
  • Publication number: 20200311901
    Abstract: Embodiments herein describe a framework for classifying images. In some embodiments, it is determined whether an image includes synthetic image content. If it does, characteristics of the image are analyzed to determine if the image includes characteristics particular to panoramic images (e.g., the image possesses a threshold equivalency of pixel values along the top and/or bottom boundaries, or the difference between the summed pixel values of the right vertical boundary and the summed pixel values of the left vertical boundary is less than or equal to a threshold value). If the image includes characteristics particular to panoramic images, the image is classified as a synthetic panoramic image. If the image is determined to not include synthetic image content, a neural network is applied to the image and the image is classified as one of non-synthetic panoramic or non-synthetic non-panoramic.
    Type: Application
    Filed: April 1, 2019
    Publication date: October 1, 2020
    Inventors: Qi Sun, Li-Yi Wei, Joon-Young Lee, Jonathan Eisenmann, Jinwoong Jung, Byungmoon Kim