Patents by Inventor Marc-Andre Gardner
Marc-Andre Gardner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12288411
Abstract: In example embodiments, techniques are provided that use two different ML models (a symbol association ML model and a link association ML model), one to extract associations between text labels and symbols and one to extract associations between text labels and links, in a schematic diagram (e.g., P&ID) in an image-only format. The two models may use different ML architectures. For example, the symbol association ML model may use a deep learning neural network architecture that receives for each possible text label and symbol pair both a context and a request, and produces a score indicating confidence the pair is associated. The link association ML model may use a gradient boosting tree architecture that receives for each possible text label and link pair a set of multiple features describing at least the geometric relationship between the possible text label and link pair, and produces a score indicating confidence the pair is associated.
Type: Grant
Filed: October 6, 2022
Date of Patent: April 29, 2025
Assignee: Bentley Systems, Incorporated
Inventors: Marc-André Gardner, Simon Savary, Louis-Philippe Asselin
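The link association model scores each text-label/link pair from geometric features. A minimal sketch of such a feature extractor is below; the specific features chosen (centroid-to-midpoint distance, link orientation, label size) are illustrative assumptions, not the patent's actual feature list, and in the described system the score would come from a trained gradient boosting tree consuming features like these.

```python
import math

def link_features(label_box, link_seg):
    """Hypothetical geometric features for a (text label, link) pair.

    label_box: (x0, y0, x1, y1) bounding box of the text label.
    link_seg:  ((ax, ay), (bx, by)) endpoints of the link segment.
    """
    lx0, ly0, lx1, ly1 = label_box
    (ax, ay), (bx, by) = link_seg
    cx, cy = (lx0 + lx1) / 2, (ly0 + ly1) / 2   # label centroid
    mx, my = (ax + bx) / 2, (ay + by) / 2       # link midpoint
    dist = math.hypot(cx - mx, cy - my)          # how far the label sits from the link
    angle = math.atan2(by - ay, bx - ax)         # orientation of the link
    return [dist, angle, lx1 - lx0, ly1 - ly0]   # distance, angle, label width/height
```

A feature vector like this would be computed for every candidate pair and fed to the scoring model.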
-
Publication number: 20250110759
Abstract: In example embodiments, techniques are provided for determining next command recommendations using a trained recurrent neural network model. A command prediction module of an application gathers command data and user characteristic data for a user, and cleans the command data to produce an input dataset. The command prediction module applies the input dataset to a trained recurrent neural network model, where the trained recurrent neural network model is configured to produce a separate next command prediction for each of a plurality of different values of one or more user characteristics. The command prediction module selects one or more recommended next commands from within the next command prediction produced for a value of one or more user characteristics that correspond to the user characteristic data for the user, and provides the one or more recommended next commands for display in a user interface of the application.
Type: Application
Filed: May 9, 2023
Publication date: April 3, 2025
Inventors: Lucas Flett, Stéphane Côté, Marc-André Gardner
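The selection step described above (clean the command stream, then pick recommendations from the prediction for the user's characteristic value) can be sketched as follows. The "role" characteristic, the cleaning rules, and the ranked-list shape of the model output are all hypothetical stand-ins; the abstract does not specify them.

```python
def recommend_next(commands, user_role, model_predictions, k=2):
    """Pick top-k recommended next commands for a user.

    commands:          raw command history for the user (may be noisy)
    user_role:         hypothetical user characteristic value
    model_predictions: {role_value: ranked list of predicted next commands},
                       standing in for the per-characteristic RNN outputs
    """
    # clean command data: drop empty entries and normalize case
    cleaned = [c.strip().lower() for c in commands if c.strip()]
    # use the prediction produced for this user's characteristic value
    ranked = model_predictions.get(user_role, [])
    # don't recommend commands the user just issued; return the top-k rest
    recent = set(cleaned[-3:])
    return [c for c in ranked if c not in recent][:k]
```

For example, a drafter whose last command was "place line" would be shown the next-ranked commands for the drafter role instead.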
-
Patent number: 12175337
Abstract: In example embodiments, techniques are provided for using machine learning to extract machine-readable labels for text boxes and symbols in P&IDs in image-only formats. A P&ID data extraction application uses an optical character recognition (OCR) algorithm to predict labels for text boxes in a P&ID. The P&ID data extraction application uses a first machine learning algorithm to detect symbols in the P&ID and return a predicted bounding box and predicted class of equipment for each symbol. One or more of the predicted bounding boxes may be decimated by non-maximum suppression to avoid overlapping detections. The P&ID data extraction application uses a second machine learning algorithm to infer properties for each detected symbol having a remaining predicted bounding box. The P&ID data extraction application stores the predicted bounding box and a label including the predicted class of equipment and inferred properties in a machine-readable format.
Type: Grant
Filed: December 21, 2020
Date of Patent: December 24, 2024
Assignee: Bentley Systems, Incorporated
Inventors: Marc-André Gardner, Karl-Alexandre Jahjah
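The abstract's decimation step is standard non-maximum suppression: keep the highest-scoring detection and drop any lower-scoring box that overlaps it too much. A minimal formulation of that standard algorithm (not the patent's specific implementation):

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Return indices of boxes kept after non-maximum suppression."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        # keep a box only if it doesn't overlap an already-kept box too much
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

Detections surviving NMS would then be passed to the second model for property inference.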
-
Publication number: 20240378426
Abstract: In example embodiments, improved techniques are provided for classifying elements of an infrastructure model that represents linear infrastructure (e.g., roads). The techniques may extract a set of cross sections perpendicular to a centerline of the linear infrastructure from the infrastructure model, generate a graph representation of each cross section to produce a set of graphs having nodes that represent elements and edges that represent contextual relationships, provide the set of graphs to a trained graph neural network (GNN) model, and produce therefrom class predictions for the elements. The class predictions may include one or more predicted classes for each element with a respective confidence. A best predicted class for each element may be selected and assigned to the element, thereby creating a new version of the infrastructure model. For elements that extend through multiple cross sections, the selection may involve aggregating predicted classes originating from the different graphs.
Type: Application
Filed: May 8, 2023
Publication date: November 14, 2024
Inventors: Louis-Philippe Asselin, Karl-Alexandre Jahjah, Marc-André Gardner, Samuel Lamhamedi
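For an element spanning several cross sections, the per-graph predictions must be aggregated into one class. A simple sketch of such an aggregation is below; summing confidences per class is one plausible rule, assumed here for illustration, since the abstract does not specify the aggregation function.

```python
from collections import defaultdict

def best_class(per_graph_predictions):
    """Aggregate class predictions for one element across cross sections.

    per_graph_predictions: one {class_name: confidence} dict per cross
    section the element appears in. Confidences for the same class are
    summed and the class with the highest total wins (an assumed rule).
    """
    totals = defaultdict(float)
    for preds in per_graph_predictions:
        for cls, conf in preds.items():
            totals[cls] += conf
    return max(totals, key=totals.get)
```

The winning class would then be assigned to the element in the new version of the infrastructure model.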
-
Patent number: 12106430
Abstract: An automated and dynamic method and system are provided for estimating lighting conditions of a scene captured from a plurality of digital images. The method comprises generating 3D-source-specific-lighting parameters of the scene using a lighting-estimation neural network configured for: extracting from the plurality of images a corresponding number of latent feature vectors; transforming the latent feature vectors into common-coordinates latent feature vectors; merging the plurality of common-coordinates latent feature vectors into a single latent feature vector; and extracting, from the single latent feature vector, 3D-source-specific-lighting parameters of the scene.
Type: Grant
Filed: June 14, 2021
Date of Patent: October 1, 2024
Assignee: Depix Technologies Inc.
Inventors: Marc-André Gardner, Jean-François Lalonde, Christian Gagné
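The transform-then-merge step can be sketched as follows. Treating the common-coordinates transform as a per-image linear map (e.g., a rotation) and the merge as an average are both illustrative assumptions; the abstract only says the vectors are transformed into common coordinates and merged into one.

```python
import numpy as np

def merge_latents(latents, transforms):
    """Merge per-image latent feature vectors into a single vector.

    latents:    one latent feature vector per input image
    transforms: one matrix per image mapping that image's latent vector
                into shared coordinates (assumed linear for the sketch)
    """
    # transform each latent vector into the common coordinate frame
    common = [T @ z for z, T in zip(latents, transforms)]
    # merge by averaging (an assumed merge rule)
    return np.mean(common, axis=0)
```

The merged vector would then be decoded into the 3D source-specific lighting parameters.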
-
Patent number: 12017691
Abstract: In example embodiments, techniques are provided for using machine learning to predict railroad track geometry exceedances to enable proactive maintenance. A machine learning model of a rail operational analytics application may be trained to directly output a probability of future railroad track geometry exceedances for each portion of track of a railroad. Training may be performed using all available railroad track data, and the task of selecting which data is relevant to predicting probability of railroad track geometry exceedances may be devolved to the machine learning model. Further, assumptions about the specific railroad and data characteristics may be avoided, providing the machine learning model flexibility, and allowing for dynamic changes in the problem formulation.
Type: Grant
Filed: September 8, 2021
Date of Patent: June 25, 2024
Assignee: Bentley Systems, Incorporated
Inventors: Marc-André Gardner, Marc-André Lapointe, Lucas Flett, Simon Savary, Andrew Smith
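The key output shape here is a direct per-segment probability of a future exceedance. As a toy illustration of that kind of output head (not the patent's model, whose architecture and features the abstract leaves unspecified), a logistic map from segment features to a probability looks like this:

```python
import math

def exceedance_probability(features, weights, bias=0.0):
    """Toy probability head: map per-segment track features to a
    probability of a future geometry exceedance via a logistic output.
    The feature set and weights are hypothetical; in the described
    system they would be learned from all available track data.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # squash score into (0, 1)
```

Segments with probabilities above an operator-chosen threshold could then be flagged for proactive maintenance.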
-
Patent number: 12008710
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Grant
Filed: December 6, 2022
Date of Patent: June 11, 2024
Assignees: Adobe Inc., Université Laval
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagné, Marc-André Gardner, Jean-François Lalonde
-
Publication number: 20240119751
Abstract: In example embodiments, techniques are provided that use two different ML models (a symbol association ML model and a link association ML model), one to extract associations between text labels and symbols and one to extract associations between text labels and links, in a schematic diagram (e.g., P&ID) in an image-only format. The two models may use different ML architectures. For example, the symbol association ML model may use a deep learning neural network architecture that receives for each possible text label and symbol pair both a context and a request, and produces a score indicating confidence the pair is associated. The link association ML model may use a gradient boosting tree architecture that receives for each possible text label and link pair a set of multiple features describing at least the geometric relationship between the possible text label and link pair, and produces a score indicating confidence the pair is associated.
Type: Application
Filed: October 6, 2022
Publication date: April 11, 2024
Inventors: Marc-André Gardner, Simon Savary, Louis-Philippe Asselin
-
Patent number: 11842035
Abstract: In example embodiments, techniques are provided for efficiently labeling, reviewing and correcting predictions for P&IDs in image-only formats. To label text boxes in the P&ID, the labeling application executes an OCR algorithm to predict a bounding box around, and machine-readable text within, each text box, and displays these predictions in its user interface. The labeling application provides functionality to receive a user confirmation or correction for each predicted bounding box and predicted machine-readable text. To label symbols in the P&ID, the labeling application receives user input to draw bounding boxes around symbols and assign symbols to classes of equipment. Where there are multiple occurrences of specific symbols, the labeling application provides functionality to duplicate and automatically detect and assign bounding boxes and classes.
Type: Grant
Filed: December 21, 2020
Date of Patent: December 12, 2023
Assignee: Bentley Systems, Incorporated
Inventors: Karl-Alexandre Jahjah, Marc-André Gardner
-
Publication number: 20230245382
Abstract: An automated and dynamic method and system are provided for estimating lighting conditions of a scene captured from a plurality of digital images. The method comprises generating 3D-source-specific-lighting parameters of the scene using a lighting-estimation neural network configured for: extracting from the plurality of images a corresponding number of latent feature vectors; transforming the latent feature vectors into common-coordinates latent feature vectors; merging the plurality of common-coordinates latent feature vectors into a single latent feature vector; and extracting, from the single latent feature vector, 3D-source-specific-lighting parameters of the scene.
Type: Application
Filed: June 14, 2021
Publication date: August 3, 2023
Inventors: Marc-André Gardner, Jean-François Lalonde, Christian Gagné
-
Publication number: 20230098115
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Application
Filed: December 6, 2022
Publication date: March 30, 2023
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagné, Marc-André Gardner, Jean-François Lalonde
-
Patent number: 11538216
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Grant
Filed: September 3, 2019
Date of Patent: December 27, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagné, Marc-André Gardner, Jean-François Lalonde
-
Patent number: 11443412
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 8, 2019
Date of Patent: September 13, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-André Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
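The multi-phase training idea (train on one dataset, then continue adjusting the same parameters on a second dataset) can be shown with a toy model. The linear "decoder" and squared-error objective below are deliberate simplifications standing in for the light mask decoder and its training losses, which the abstract does not detail:

```python
import numpy as np

def two_phase_fit(x_ldr, y_ldr, x_hdr, y_hdr, lr=0.1, steps=200):
    """Toy two-phase training: fit a scalar-weight linear model on
    LDR-derived targets first (phase 1), then continue from the learned
    parameter on HDR-derived targets (phase 2), mirroring how the mask
    decoder's parameters are adjusted to produce the intensity decoder.
    """
    w = 0.0
    for x, y in [(x_ldr, y_ldr), (x_hdr, y_hdr)]:  # phase 1, then phase 2
        for _ in range(steps):
            grad = np.mean(2 * (w * x - y) * x)    # d/dw of mean squared error
            w -= lr * grad                          # gradient descent step
    return w
```

The point of the second phase is that it starts from the phase-1 parameters rather than from scratch, so the final model inherits structure learned from the larger low dynamic range set.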
-
Publication number: 20220044146
Abstract: In example embodiments, techniques are provided for using machine learning to extract machine-readable labels for text boxes and symbols in P&IDs in image-only formats. A P&ID data extraction application uses an optical character recognition (OCR) algorithm to predict labels for text boxes in a P&ID. The P&ID data extraction application uses a first machine learning algorithm to detect symbols in the P&ID and return a predicted bounding box and predicted class of equipment for each symbol. One or more of the predicted bounding boxes may be decimated by non-maximum suppression to avoid overlapping detections. The P&ID data extraction application uses a second machine learning algorithm to infer properties for each detected symbol having a remaining predicted bounding box. The P&ID data extraction application stores the predicted bounding box and a label including the predicted class of equipment and inferred properties in a machine-readable format.
Type: Application
Filed: December 21, 2020
Publication date: February 10, 2022
Inventors: Marc-André Gardner, Karl-Alexandre Jahjah
-
Publication number: 20220043547
Abstract: In example embodiments, techniques are provided for efficiently labeling, reviewing and correcting predictions for P&IDs in image-only formats. To label text boxes in the P&ID, the labeling application executes an OCR algorithm to predict a bounding box around, and machine-readable text within, each text box, and displays these predictions in its user interface. The labeling application provides functionality to receive a user confirmation or correction for each predicted bounding box and predicted machine-readable text. To label symbols in the P&ID, the labeling application receives user input to draw bounding boxes around symbols and assign symbols to classes of equipment. Where there are multiple occurrences of specific symbols, the labeling application provides functionality to duplicate and automatically detect and assign bounding boxes and classes.
Type: Application
Filed: December 21, 2020
Publication date: February 10, 2022
Inventors: Karl-Alexandre Jahjah, Marc-André Gardner
-
Publication number: 20210065440
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Application
Filed: September 3, 2019
Publication date: March 4, 2021
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagné, Marc-André Gardner, Jean-François Lalonde
-
Patent number: 10607329
Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction.
Type: Grant
Filed: March 13, 2017
Date of Patent: March 31, 2020
Assignee: Adobe Inc.
Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
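The output described here is a light mask: a per-pixel probability that the pixel is a light source. As a toy stand-in for what the trained network produces (the real system learns this mapping; a brightness sigmoid is purely illustrative), such a mask has this shape:

```python
import numpy as np

def toy_light_mask(image_gray):
    """Illustrative per-pixel light-source probabilities for a grayscale
    image with values in [0, 1]. A sharp sigmoid around a brightness of
    0.8 stands in for the learned recovery light mask; the real decoder
    is a trained neural network, not a brightness threshold.
    """
    z = 10.0 * (image_gray - 0.8)      # sharp transition near bright values
    return 1.0 / (1.0 + np.exp(-z))    # probability per pixel, in (0, 1)
```

A graphics application would sample the high-probability regions of such a mask as light sources when compositing or relighting objects.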
-
Publication number: 20200074600
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Application
Filed: November 8, 2019
Publication date: March 5, 2020
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-André Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Patent number: 10475169
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 28, 2017
Date of Patent: November 12, 2019
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-André Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Publication number: 20190164261
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Application
Filed: November 28, 2017
Publication date: May 30, 2019
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-André Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto