Patents by Inventor Marc-Andre Gardner
Marc-Andre Gardner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240119751
Abstract: In example embodiments, techniques are provided that use two different ML models (a symbol association ML model and a link association ML model), one to extract associations between text labels and symbols and one to extract associations between text labels and links, in a schematic diagram (e.g., P&ID) in an image-only format. The two models may use different ML architectures. For example, the symbol association ML model may use a deep learning neural network architecture that receives, for each possible text label and symbol pair, both a context and a request, and produces a score indicating confidence that the pair is associated. The link association ML model may use a gradient boosting tree architecture that receives, for each possible text label and link pair, a set of multiple features describing at least the geometric relationship between the possible text label and link pair, and produces a score indicating confidence that the pair is associated.
Type: Application
Filed: October 6, 2022
Publication date: April 11, 2024
Inventors: Marc-Andre Gardner, Simon Savary, Louis-Philippe Asselin
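The abstract above describes scoring every possible (text label, link) pair from geometric features and keeping the highest-confidence associations. The sketch below illustrates only that pairwise-scoring idea with hand-written geometry; the feature names, the distance-based toy score, and the greedy assignment are assumptions for illustration, not the patented models (which use trained neural network and gradient-boosted-tree scorers).

```python
import math

def pair_features(label_box, link):
    """Geometric features for a (text label, link) pair: distance from the
    label's centre to the link segment, and the link's orientation.
    label_box = (x0, y0, x1, y1); link = ((ax, ay), (bx, by))."""
    cx, cy = (label_box[0] + label_box[2]) / 2, (label_box[1] + label_box[3]) / 2
    (ax, ay), (bx, by) = link
    # Project the label centre onto the segment for point-to-segment distance.
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((cx - ax) * dx + (cy - ay) * dy) / (dx * dx + dy * dy)))
    px, py = ax + t * dx, ay + t * dy
    return math.hypot(cx - px, cy - py), math.atan2(dy, dx)

def association_score(label_box, link):
    """Toy confidence: closer pairs score higher. A trained gradient-boosted
    tree ensemble would consume pair_features (and more) instead."""
    dist, _ = pair_features(label_box, link)
    return 1.0 / (1.0 + dist)

def associate(labels, links):
    """Assign each text label to its best-scoring link."""
    return {i: max(range(len(links)), key=lambda j: association_score(b, links[j]))
            for i, b in enumerate(labels)}
```

For example, a label box centred near a horizontal pipe run at y=1 would be associated with that link rather than a more distant one at y=5.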
-
Patent number: 11842035
Abstract: In example embodiments, techniques are provided for efficiently labeling, reviewing, and correcting predictions for P&IDs in image-only formats. To label text boxes in the P&ID, the labeling application executes an OCR algorithm to predict a bounding box around, and machine-readable text within, each text box, and displays these predictions in its user interface. The labeling application provides functionality to receive a user confirmation or correction for each predicted bounding box and predicted machine-readable text. To label symbols in the P&ID, the labeling application receives user input to draw bounding boxes around symbols and assign symbols to classes of equipment. Where there are multiple occurrences of specific symbols, the labeling application provides functionality to duplicate and automatically detect and assign bounding boxes and classes.
Type: Grant
Filed: December 21, 2020
Date of Patent: December 12, 2023
Assignee: Bentley Systems, Incorporated
Inventors: Karl-Alexandre Jahjah, Marc-André Gardner
-
Publication number: 20230245382
Abstract: An automated and dynamic method and system are provided for estimating lighting conditions of a scene captured from a plurality of digital images. The method comprises generating 3D-source-specific-lighting parameters of the scene using a lighting-estimation neural network configured for: extracting from the plurality of images a corresponding number of latent feature vectors; transforming the latent feature vectors into common-coordinates latent feature vectors; merging the plurality of common-coordinates latent feature vectors into a single latent feature vector; and extracting, from the single latent feature vector, 3D-source-specific-lighting parameters of the scene.
Type: Application
Filed: June 14, 2021
Publication date: August 3, 2023
Inventors: Marc-Andre Gardner, Jean-François Lalonde, Christian Gagne
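The pipeline in the abstract (per-image latent vectors, transform to a common coordinate frame, merge into one vector) can be illustrated with a deliberately tiny stand-in: here the "latent" is a 2-D direction, the learned transform is replaced by a known camera-yaw rotation, and merging is average pooling. All of these simplifications are assumptions; the patented network learns these steps end to end.

```python
import math

def rotate_latent(vec, angle):
    """Stand-in for the 'transform to common coordinates' step: rotate a
    2-D latent observed in a camera's frame by that camera's yaw."""
    x, y = vec
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def merge_latents(vectors):
    """Merge per-image latents into a single latent (average pooling)."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(2))

def estimate_lighting(latents_and_yaws):
    """latents_and_yaws: list of (latent_in_camera_frame, camera_yaw).
    Returns one merged latent in the common frame, from which lighting
    parameters would then be decoded."""
    common = [rotate_latent(v, yaw) for v, yaw in latents_and_yaws]
    return merge_latents(common)
```

Two cameras viewing the same light from different yaws agree once their latents are expressed in the common frame, which is what makes the merge meaningful.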
-
Publication number: 20230098115
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Application
Filed: December 6, 2022
Publication date: March 30, 2023
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
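The differentiable-projection idea in the abstract (render predicted parametric lights into an environment map, then compare against a ground-truth map) can be sketched in one dimension. The Gaussian splatting, the (angle, intensity, size) parameterization, and the MSE loss below are illustrative assumptions, not the patented layer.

```python
import math

def project_lights(lights, n_bins=16):
    """Differentiable-projection sketch: splat parametric lights
    (angle, intensity, angular size) onto a 1-D 'environment map' of
    n_bins directions, as smooth Gaussians so gradients could flow."""
    env = [0.0] * n_bins
    for angle, intensity, size in lights:
        for b in range(n_bins):
            theta = 2 * math.pi * b / n_bins
            # Smallest angular difference between bin direction and light.
            d = math.atan2(math.sin(theta - angle), math.cos(theta - angle))
            env[b] += intensity * math.exp(-(d / size) ** 2)
    return env

def env_loss(pred_env, gt_env):
    """Mean squared error between predicted and ground-truth environment
    maps -- the comparison the network is trained against."""
    return sum((p - g) ** 2 for p, g in zip(pred_env, gt_env)) / len(pred_env)
```

A predicted light at the correct angle yields zero loss against its own projection, while a light displaced to the opposite direction yields a positive loss, giving the training signal the abstract describes.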
-
Patent number: 11538216
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Grant
Filed: September 3, 2019
Date of Patent: December 27, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
-
Patent number: 11443412
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 8, 2019
Date of Patent: September 13, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
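The two-phase schedule above (first fit a light mask decoder on LDR data, then continue adjusting the same parameters on HDR data) can be mimicked with a one-parameter "decoder" trained by gradient descent. The single gain parameter, the squared-error objective, and the toy datasets are all assumptions for illustration; the patented decoder is a neural network.

```python
def train_intensity_decoder(ldr_pairs, hdr_pairs, lr=0.1, steps=200):
    """Two-phase training sketch: the 'decoder' is a single gain w mapping
    a feature x to an intensity w*x. Phase 1 fits it to binary light-mask
    targets from LDR data; phase 2 fine-tunes the SAME parameter on HDR
    intensities, mirroring the LDR-then-HDR schedule in the abstract."""
    w = 0.0
    for phase_data in (ldr_pairs, hdr_pairs):
        for _ in range(steps):
            # One gradient step of mean squared error over the phase's data.
            grad = sum(2 * (w * x - y) * x for x, y in phase_data) / len(phase_data)
            w -= lr * grad
    return w
```

After phase 1 the gain settles near the mask targets; phase 2 then pulls it to the HDR intensity scale, which is the point of reusing (rather than reinitializing) the phase-1 parameters.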
-
Publication number: 20220043547
Abstract: In example embodiments, techniques are provided for efficiently labeling, reviewing, and correcting predictions for P&IDs in image-only formats. To label text boxes in the P&ID, the labeling application executes an OCR algorithm to predict a bounding box around, and machine-readable text within, each text box, and displays these predictions in its user interface. The labeling application provides functionality to receive a user confirmation or correction for each predicted bounding box and predicted machine-readable text. To label symbols in the P&ID, the labeling application receives user input to draw bounding boxes around symbols and assign symbols to classes of equipment. Where there are multiple occurrences of specific symbols, the labeling application provides functionality to duplicate and automatically detect and assign bounding boxes and classes.
Type: Application
Filed: December 21, 2020
Publication date: February 10, 2022
Inventors: Karl-Alexandre Jahjah, Marc-André Gardner
-
Publication number: 20220044146
Abstract: In example embodiments, techniques are provided for using machine learning to extract machine-readable labels for text boxes and symbols in P&IDs in image-only formats. A P&ID data extraction application uses an optical character recognition (OCR) algorithm to predict labels for text boxes in a P&ID. The P&ID data extraction application uses a first machine learning algorithm to detect symbols in the P&ID and return a predicted bounding box and predicted class of equipment for each symbol. One or more of the predicted bounding boxes may be decimated by non-maximum suppression to avoid overlapping detections. The P&ID data extraction application uses a second machine learning algorithm to infer properties for each detected symbol having a remaining predicted bounding box. The P&ID data extraction application stores the predicted bounding box and a label including the predicted class of equipment and inferred properties in a machine-readable format.
Type: Application
Filed: December 21, 2020
Publication date: February 10, 2022
Inventors: Marc-André Gardner, Karl-Alexandre Jahjah
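The non-maximum suppression step the abstract mentions, which decimates overlapping predicted bounding boxes so each symbol is detected once, is a standard technique and can be sketched directly. The detection tuples and class names below are illustrative assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, iou_threshold=0.5):
    """Non-maximum suppression: keep the highest-scoring box and drop any
    box that overlaps a kept box above the IoU threshold -- the
    'decimation' step the abstract describes.
    detections: list of (box, score, equipment_class)."""
    keep = []
    for det in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(det[0], k[0]) <= iou_threshold for k in keep):
            keep.append(det)
    return keep
```

Two near-coincident detections of the same pump collapse to the single higher-scoring one, while a distinct valve elsewhere on the sheet survives untouched.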
-
Publication number: 20210065440
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Application
Filed: September 3, 2019
Publication date: March 4, 2021
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
-
Patent number: 10607329
Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction.
Type: Grant
Filed: March 13, 2017
Date of Patent: March 31, 2020
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
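The light mask the abstract describes is a per-pixel probability, over the full panoramic environment, of each pixel being a light source. The toy below only illustrates that output format by thresholding a brightness grid; the patented network instead predicts the mask for the unseen parts of the scene from a single limited-field-of-view image. The quantile heuristic is an assumption for illustration.

```python
def light_mask(pixels, quantile=0.9):
    """Toy stand-in for a recovery light mask: mark the brightest pixels
    of a grayscale panorama grid as probable light sources (1.0) and the
    rest as non-sources (0.0)."""
    flat = sorted(v for row in pixels for v in row)
    # Brightness cutoff at the requested quantile of all pixel values.
    cut = flat[min(len(flat) - 1, int(quantile * len(flat)))]
    return [[1.0 if v >= cut else 0.0 for v in row] for row in pixels]
```

On a mostly dark grid with one bright pixel, the mask singles out that pixel, mirroring the sparse, peaked masks that indoor light sources produce.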
-
Publication number: 20200074600
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Application
Filed: November 8, 2019
Publication date: March 5, 2020
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Patent number: 10475169
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 28, 2017
Date of Patent: November 12, 2019
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Publication number: 20190164261
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Application
Filed: November 28, 2017
Publication date: May 30, 2019
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Publication number: 20180260975
Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction.
Type: Application
Filed: March 13, 2017
Publication date: September 13, 2018
Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto