Patents by Inventor Francois Lalonde

Francois Lalonde has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127402
    Abstract: In some examples, a computing system accesses a field of view (FOV) image that has a field of view less than 360 degrees and has low dynamic range (LDR) values. The computing system estimates lighting parameters from a scene depicted in the FOV image and generates a lighting image based on the lighting parameters. The computing system further generates lighting features from the lighting image and image features from the FOV image. These features are aggregated, and a machine learning model is applied to the image features and the aggregated features to generate a panorama image having high dynamic range (HDR) values. (A hedged illustrative sketch follows this entry.)
    Type: Application
    Filed: August 25, 2023
    Publication date: April 18, 2024
    Inventors: Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Sai Bi, Jonathan Eisenmann, Jean-François Lalonde
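
The staged flow in the abstract of publication 20240127402 above can be pictured with a few placeholder functions. This is a minimal sketch only: estimate_lighting, render_lighting_image, extract_features, and panorama_model are invented stand-ins, not the patented estimator or model.

```python
# Hypothetical sketch of the FOV-to-HDR-panorama data flow described above.
# All functions are illustrative placeholders, not the patented method.
import numpy as np

def estimate_lighting(fov_ldr: np.ndarray) -> np.ndarray:
    """Stand-in lighting-parameter estimator (e.g. a dominant light direction and intensity)."""
    return np.array([0.3, 0.8, 0.5, 1.0])          # direction + intensity (made up)

def render_lighting_image(params: np.ndarray, hw=(64, 128)) -> np.ndarray:
    """Stand-in renderer that turns lighting parameters into a coarse lighting image."""
    return np.full((*hw, 3), params[-1], dtype=np.float32)

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor (global average colour here)."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def panorama_model(image_feats, aggregated_feats, out_hw=(256, 512)) -> np.ndarray:
    """Stand-in for the learned model that outputs an HDR 360-degree panorama."""
    base = float(np.concatenate([image_feats, aggregated_feats]).mean())
    return np.full((*out_hw, 3), base, dtype=np.float32)

fov_ldr = np.random.rand(128, 128, 3).astype(np.float32)   # narrow-FOV LDR input
lighting_params = estimate_lighting(fov_ldr)
lighting_image = render_lighting_image(lighting_params)
lighting_feats = extract_features(lighting_image)
image_feats = extract_features(fov_ldr)
aggregated = np.concatenate([lighting_feats, image_feats])  # simple aggregation
hdr_panorama = panorama_model(image_feats, aggregated)
print(hdr_panorama.shape)                                   # (256, 512, 3)
```
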
  • Publication number: 20230368459
    Abstract: Methods are described for rendering a virtual object at a designated position in an input digital image corresponding to a perspective of a scene. In an embodiment, the method includes: estimating a set of lighting parameters using a lighting neural network; estimating a scene layout using a layout neural network; generating an environment texture map using a texture neural network whose input includes the input digital image, the lighting parameters, and the scene layout; rendering the virtual object in a virtual scene constructed using the estimated lighting parameters, the scene layout, and the environment texture map; and compositing the rendered virtual object on the input digital image at the designated position. Corresponding systems and non-transitory computer-readable media are also described. (A hedged illustrative sketch follows this entry.)
    Type: Application
    Filed: May 12, 2023
    Publication date: November 16, 2023
    Inventors: Mathieu GARON, Henrique WEBER, Jean-Francois LALONDE
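
A hedged sketch of the compositing pipeline in publication 20230368459 above, assuming toy stand-ins for the lighting, layout, and texture networks and for the renderer; none of these placeholders reflect the actual trained models.

```python
# Hedged sketch of the compositing pipeline described above; every network and
# renderer here is a hypothetical placeholder, not the patented implementation.
import numpy as np

def lighting_net(image):   return {"ambient": 0.2, "direction": (0.0, 1.0, 0.0)}
def layout_net(image):     return {"floor_y": -1.0, "walls": 4}
def texture_net(image, lighting, layout):
    return np.random.rand(64, 128, 3)               # toy environment texture map

def render_object(lighting, layout, env_map, position):
    """Stand-in renderer: returns an RGBA layer of the virtual object.
    (A real implementation would also use the layout for placement and shadows.)"""
    shade = lighting["ambient"] + 0.5 * float(env_map.mean())   # toy shading term
    layer = np.zeros((128, 128, 4))
    y, x = position
    layer[y - 5:y + 5, x - 5:x + 5] = [shade] * 3 + [1.0]       # RGB + alpha
    return layer

def composite(background, rgba_layer):
    alpha = rgba_layer[..., 3:4]
    return background * (1 - alpha) + rgba_layer[..., :3] * alpha

image = np.random.rand(128, 128, 3)
lighting = lighting_net(image)
layout = layout_net(image)
env_map = texture_net(image, lighting, layout)
obj_layer = render_object(lighting, layout, env_map, position=(64, 64))
result = composite(image, obj_layer)
print(result.shape)                                  # (128, 128, 3)
```
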
  • Publication number: 20230360170
    Abstract: Embodiments are disclosed for generating 360-degree panoramas from input narrow field of view images. A method of generating 360-degree panoramas may include obtaining an input image and a guide, generating a panoramic projection of the input image, and generating, by a panorama generator, a 360-degree panorama based on the panoramic projection and the guide, wherein the panorama generator is a guided co-modulation generator network trained to generate a 360-degree panorama from the input image based on the guide. (A hedged illustrative sketch follows this entry.)
    Type: Application
    Filed: November 15, 2022
    Publication date: November 9, 2023
    Applicant: Adobe Inc.
    Inventors: Mohammad Reza KARIMI DASTJERDI, Yannick Hold-Geoffroy, Vladimir KIM, Jonathan EISENMANN, Jean-François LALONDE
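
The guided panorama generation described in publication 20230360170 above is sketched below under strong simplifications: the panoramic projection just centres the input on an empty equirectangular canvas, and the placeholder generator fills unknown pixels from a toy guide signal rather than a trained guided co-modulation network.

```python
# Illustrative sketch only: a narrow-FOV image is warped onto an equirectangular
# canvas and a placeholder "generator" fills the rest. The guided co-modulation
# generator itself is not reproduced here.
import numpy as np

def panoramic_projection(image, pano_hw=(256, 512), fov_deg=90.0):
    """Place the input at the centre of an empty equirectangular canvas."""
    H, W = pano_hw
    pano = np.zeros((H, W, 3), dtype=np.float32)
    mask = np.zeros((H, W), dtype=bool)
    w = int(W * fov_deg / 360.0)                    # columns covered by the FOV
    h = int(H * fov_deg / 180.0)                    # rows covered by the FOV
    y0, x0 = (H - h) // 2, (W - w) // 2
    patch = image[
        np.linspace(0, image.shape[0] - 1, h).astype(int)[:, None],
        np.linspace(0, image.shape[1] - 1, w).astype(int)[None, :],
    ]
    pano[y0:y0 + h, x0:x0 + w] = patch
    mask[y0:y0 + h, x0:x0 + w] = True
    return pano, mask

def generator(pano, mask, guide):
    """Stand-in for the guided generator: fills unknown pixels from the guide."""
    out = pano.copy()
    out[~mask] = guide                              # e.g. a mean-colour guide
    return out

image = np.random.rand(128, 128, 3).astype(np.float32)
guide = image.reshape(-1, 3).mean(axis=0)           # toy guide signal
pano, mask = panoramic_projection(image)
full_pano = generator(pano, mask, guide)
print(full_pano.shape)                              # (256, 512, 3)
```
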
  • Publication number: 20230245382
    Abstract: An automated and dynamic method and system are provided for estimating lighting conditions of a scene captured from a plurality of digital images. The method comprises generating 3D-source-specific-lighting parameters of the scene using a lighting-estimation neural network configured for: extracting from the plurality of images a corresponding number of latent feature vectors; transforming the latent feature vectors into common-coordinates latent feature vectors; merging the plurality of common-coordinates latent feature vectors into a single latent feature vector; and extracting, from the single latent feature vector, 3D-source-specific-lighting parameters of the scene. (A hedged illustrative sketch follows this entry.)
    Type: Application
    Filed: June 14, 2021
    Publication date: August 3, 2023
    Inventors: Marc-Andre GARDNER, Jean-François LALONDE, Christian GAGNE
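
The multi-image flow of publication 20230245382 above (encode each image, transform to common coordinates, merge, decode lighting) is sketched below with invented placeholder functions; the real networks and parameterization are not reproduced.

```python
# Hedged sketch of the multi-view flow described above: per-image latent vectors
# are mapped to a shared coordinate frame, merged, and decoded into lighting
# parameters. Every function here is a hypothetical placeholder.
import numpy as np

def encode(image) -> np.ndarray:
    return image.reshape(-1, 3).mean(axis=0)                # toy latent vector

def to_common_coordinates(latent, camera_pose) -> np.ndarray:
    rotation = camera_pose["R"]                             # 3x3 camera rotation
    return rotation @ latent

def merge(latents) -> np.ndarray:
    return np.mean(latents, axis=0)                         # e.g. average pooling

def decode_lighting(latent) -> dict:
    direction = latent / (np.linalg.norm(latent) + 1e-8)
    return {"direction": direction, "intensity": float(np.linalg.norm(latent))}

images = [np.random.rand(64, 64, 3) for _ in range(3)]
poses = [{"R": np.eye(3)} for _ in images]
common = [to_common_coordinates(encode(img), pose) for img, pose in zip(images, poses)]
lighting = decode_lighting(merge(common))
print(lighting["intensity"])
```
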
  • Publication number: 20230098115
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps. (A hedged illustrative sketch follows this entry.)
    Type: Application
    Filed: December 6, 2022
    Publication date: March 30, 2023
    Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
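
One way to picture the "common layers plus parameter-specific layers" structure described in publication 20230098115 above is a shared convolutional trunk feeding separate heads. The sketch below is an assumed architecture with invented layer sizes and output groups, not the network disclosed in the patent.

```python
# Hedged architectural sketch: a shared trunk followed by separate heads for
# different lighting parameters, loosely mirroring the "common layers plus
# parameter-specific layers" idea in the abstract. Layer sizes are invented.
import torch
import torch.nn as nn

class SourceSpecificLightingNet(nn.Module):
    def __init__(self, feat_dim: int = 128, num_sources: int = 3):
        super().__init__()
        self.trunk = nn.Sequential(                  # common layers
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # parameter-specific heads (one group of outputs per light source)
        self.position = nn.Linear(feat_dim, 3 * num_sources)
        self.colour = nn.Linear(feat_dim, 3 * num_sources)
        self.size = nn.Linear(feat_dim, num_sources)

    def forward(self, x):
        h = self.trunk(x)
        return {"position": self.position(h),
                "colour": self.colour(h),
                "size": self.size(h)}

net = SourceSpecificLightingNet()
params = net(torch.randn(1, 3, 128, 128))
print({k: v.shape for k, v in params.items()})
```
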
  • Patent number: 11538216
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: December 27, 2022
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
  • Patent number: 10957026
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images. (A hedged illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
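
A toy illustration of the parameterization named in patent 10957026 above (sky color, turbidity, sun color, sun shape, sun position) and of applying estimated parameters to an inserted object. The estimator and the Lambertian-plus-ambient shading are placeholder assumptions, not the patented network or renderer.

```python
# Hedged sketch: a made-up parameter container and a toy shading step showing
# how estimated sun/sky parameters could relight an inserted object.
# The estimator is a placeholder, not the trained network from the patent.
import numpy as np
from dataclasses import dataclass

@dataclass
class SkyLighting:
    sky_color: np.ndarray      # RGB
    turbidity: float
    sun_color: np.ndarray      # RGB
    sun_shape: float           # angular size proxy
    sun_position: np.ndarray   # unit direction toward the sun

def estimate_lighting(ldr_image: np.ndarray) -> SkyLighting:
    """Placeholder estimator: derives crude values from image statistics."""
    mean_rgb = ldr_image.reshape(-1, 3).mean(axis=0)
    return SkyLighting(sky_color=mean_rgb, turbidity=3.0,
                       sun_color=np.array([1.0, 0.95, 0.9]), sun_shape=0.02,
                       sun_position=np.array([0.0, 0.7071, 0.7071]))

def shade(normals: np.ndarray, light: SkyLighting) -> np.ndarray:
    """Toy shading of an inserted object: Lambertian sun term plus a constant sky term."""
    ndotl = np.clip(normals @ light.sun_position, 0.0, 1.0)[..., None]
    return ndotl * light.sun_color + 0.3 * light.sky_color

ldr = np.random.rand(64, 64, 3)
light = estimate_lighting(ldr)
object_normals = np.tile(np.array([0.0, 1.0, 0.0]), (16, 16, 1))  # flat, up-facing
print(shade(object_normals, light).shape)    # (16, 16, 3)
```
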
  • Publication number: 20210073955
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 11, 2021
    Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
  • Publication number: 20210065440
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
    Type: Application
    Filed: September 3, 2019
    Publication date: March 4, 2021
    Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
  • Patent number: 10665011
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to render a virtual object in a digital scene by analyzing both global and local features of the digital scene and generating location-specific-lighting parameters for a designated position within the digital scene. For example, the disclosed systems extract and combine such global and local features from a digital scene using global network layers and local network layers of the local-lighting-estimation-neural network. In certain implementations, the disclosed systems can generate location-specific-lighting parameters using a neural-network architecture that combines global and local feature vectors to spatially vary lighting for different positions within a digital scene. (A hedged illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: May 26, 2020
    Assignees: ADOBE INC., UNIVERSITÉ LAVAL
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Jean-Francois Lalonde, Mathieu Garon
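
The global-plus-local feature combination in patent 10665011 above can be pictured as two encoders whose outputs are concatenated before a prediction head. The sketch below assumes invented layer sizes and a simple centre crop as the "local" input; it is not the disclosed architecture.

```python
# Hedged architectural sketch of combining a global scene feature with a local
# feature around the query position to predict location-specific lighting.
# Layer shapes and the patch-cropping scheme are invented for illustration.
import torch
import torch.nn as nn

class LocalLightingNet(nn.Module):
    def __init__(self, feat_dim: int = 64, out_dim: int = 12):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim), nn.ReLU())
        self.global_enc = encoder()                  # whole-image pathway
        self.local_enc = encoder()                   # patch-around-position pathway
        self.head = nn.Linear(2 * feat_dim, out_dim)

    def forward(self, image, patch):
        g = self.global_enc(image)
        l = self.local_enc(patch)
        return self.head(torch.cat([g, l], dim=1))   # location-specific parameters

image = torch.randn(1, 3, 128, 128)
patch = image[:, :, 32:96, 32:96]                    # crop around the query position
print(LocalLightingNet()(image, patch).shape)        # torch.Size([1, 12])
```
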
  • Patent number: 9860453
    Abstract: Methods and systems for estimating HDR sky light probes for outdoor images are disclosed. A pre-captured sky light probe database is leveraged. The database includes a plurality of HDR sky light probes captured under a plurality of different illumination conditions. An HDR sky light probe is estimated from an outdoor image by fitting a three-dimensional model to an object of interest in the image and solving an inverse optimization lighting problem for the 3D model, where the space of possible HDR sky light probes is constrained by the HDR sky light probes of the database. (A hedged illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 4, 2015
    Date of Patent: January 2, 2018
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Iain Matthews, Jean-Francois Lalonde
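
A much-simplified sketch of the constrained inverse problem in patent 9860453 above: here the "optimization" is reduced to searching a synthetic database of sky probes for the one whose toy rendering best matches the observed object appearance. The renderer and probe format are invented for illustration.

```python
# Hedged sketch: the inverse lighting problem is illustrated as a search over
# database sky probes for the one whose (toy) rendering of the object best
# matches its observed appearance. The real method solves a constrained
# optimization; this placeholder only conveys the structure.
import numpy as np

def render_with_probe(object_normals: np.ndarray, probe) -> np.ndarray:
    """Toy renderer: treats the probe as a few directional lights (direction, RGB)."""
    shading = np.zeros(object_normals.shape[:1] + (3,))
    for direction, colour in probe:
        shading += np.clip(object_normals @ direction, 0, None)[:, None] * colour
    return shading

def estimate_probe(observed: np.ndarray, normals: np.ndarray, database: list):
    """Pick the database probe minimising the appearance error."""
    errors = [np.mean((render_with_probe(normals, p) - observed) ** 2)
              for p in database]
    return database[int(np.argmin(errors))]

rng = np.random.default_rng(0)
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
database = [[(rng.normal(size=3), rng.random(3)) for _ in range(4)] for _ in range(10)]
truth = database[7]
observed = render_with_probe(normals, truth)
print(estimate_probe(observed, normals, database) is truth)   # True
```
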
  • Publication number: 20170235898
    Abstract: In a system and method of patient flow and treatment management, information regarding a patient admitted to a first unit of a patient treatment facility that is received into a first one of a number of user devices is dispatched to a server computer. Upon receipt of this patient information the server computer runs (desirably in real-time) a prediction application/algorithm that predicts an estimate of (1) the patient needing a resource in the first unit or a second unit of the facility, (2) a length of time before the patient needs the resource, and/or (3) an identity of the unit that has the needed resource. The server computer then dispatches (again, desirably in real-time) one or more of the predictions to one or more of the user devices, each of which receives and displays the prediction on a display thereof.
    Type: Application
    Filed: February 13, 2017
    Publication date: August 17, 2017
    Inventors: Robert Craig Coulter, Ralph Gross, Jean-Francois Lalonde, Barbara Anne-Marie Simard
  • Patent number: 9639773
    Abstract: Methods and systems for predicting light probes for outdoor images are disclosed. A light probe database is created to learn a mapping from the outdoor image's features to predicted outdoor light probe illumination parameters. The database includes a plurality of images, image features for each of the plurality of images, and a captured light probe for each of the plurality of images. A light probe illumination model based on a sun model and sky model is fitted to the captured light probes. The light probe for the outdoor image may be predicted based on the database and the fitted light probe models. (A hedged illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: May 2, 2017
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Jean-Francois Lalonde, Iain Matthews
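
The learned mapping from image features to fitted sun/sky parameters in patent 9639773 above is sketched below as a k-nearest-neighbour lookup over synthetic database entries; the actual learning method and feature set are not reproduced.

```python
# Hedged sketch: the mapping from image features to illumination parameters is
# illustrated as a k-nearest-neighbour lookup over database entries
# (feature vector -> fitted sun/sky parameters). All data are synthetic.
import numpy as np

def predict_light_probe(query_features, db_features, db_params, k: int = 3):
    """Average the fitted parameters of the k most similar database images."""
    distances = np.linalg.norm(db_features - query_features, axis=1)
    nearest = np.argsort(distances)[:k]
    return db_params[nearest].mean(axis=0)

rng = np.random.default_rng(1)
db_features = rng.random((100, 16))      # per-image features (e.g. sky statistics)
db_params = rng.random((100, 5))         # fitted sun/sky model parameters
query = rng.random(16)
print(predict_light_probe(query, db_features, db_params))
```
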
  • Publication number: 20160150143
    Abstract: Methods and systems for estimating HDR sky light probes for outdoor images are disclosed. A pre-captured sky light probe database is leveraged. The database includes a plurality of HDR sky light probes captured under a plurality of different illumination conditions. An HDR sky light probe is estimated from an outdoor image by fitting a three-dimensional model to an object of interest in the image and solving an inverse optimization lighting problem for the 3D model, where the space of possible HDR sky light probes is constrained by the HDR sky light probes of the database.
    Type: Application
    Filed: February 4, 2015
    Publication date: May 26, 2016
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: Iain Matthews, Jean-Francois Lalonde
  • Patent number: 9275445
    Abstract: Algorithms for improving the performance of conventional tone mapping operators (TMO) by calculating both a contrast waste score and a contrast loss score for a first tone-mapped image produced by the TMO. The two contrast scores can be used to optimize the performance of the TMO by reducing noise and improving contrast. Algorithms for generating an HDR image by converting non-linear color space images into linear color space format, aligning the images to a reference, de-ghosting the aligned images if necessary, and merging the aligned (and potentially de-ghosted) images to create an HDR image. The merging can be performed with exposure fusion, HDR reconstruction, or other suitable techniques. (A hedged illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 26, 2014
    Date of Patent: March 1, 2016
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Miguel Granados, Jose Rafael Tena, Tunc Ozan Aydin, Jean Francois Lalonde, Christian Theobalt, Iain Matthews
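
Patent 9275445 above defines contrast waste and contrast loss scores for tone-mapped images. The sketch below uses invented proxy formulas (unused output range and fraction of clipped pixels) purely to illustrate the idea of scoring a TMO result; these are not the patented definitions.

```python
# Hedged sketch: toy proxies for a "contrast waste" score (unused output range)
# and a "contrast loss" score (pixels crushed to the extremes) of a tone-mapped
# image. These formulas are invented stand-ins, not the patented definitions.
import numpy as np

def contrast_waste(tone_mapped: np.ndarray) -> float:
    """Fraction of the display range the tone-mapped image leaves unused."""
    used = tone_mapped.max() - tone_mapped.min()
    return float(1.0 - used)                        # assumes values in [0, 1]

def contrast_loss(tone_mapped: np.ndarray, eps: float = 1e-3) -> float:
    """Fraction of pixels clipped near black or white."""
    clipped = (tone_mapped <= eps) | (tone_mapped >= 1.0 - eps)
    return float(clipped.mean())

tm = np.clip(np.random.rand(64, 64) * 1.2 - 0.1, 0.0, 1.0)   # toy tone-mapped image
print(contrast_waste(tm), contrast_loss(tm))
```
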
  • Publication number: 20150146972
    Abstract: Methods and systems for predicting light probes for outdoor images are disclosed. A light probe database is created to learn a mapping from the outdoor image's features to predicted outdoor light probe illumination parameters. The database includes a plurality of images, image features for each of the plurality of images, and a captured light probe for each of the plurality of images. A light probe illumination model based on a sun model and sky model is fitted to the captured light probes. The light probe for the outdoor image may be predicted based on the database and the fitted light probe models.
    Type: Application
    Filed: November 26, 2013
    Publication date: May 28, 2015
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: Jean-Francois Lalonde, Iain Matthews
  • Publication number: 20150078661
    Abstract: Algorithms for improving the performance of conventional tone mapping operators (TMO) by calculating both a contrast waste score and a contrast loss score for a first tone-mapped image produced by the TMO. The two contrast scores can be used to optimize the performance of the TMO by reducing noise and improving contrast. Algorithms for generating an HDR image by converting non-linear color space images into linear color space format, aligning the images to a reference, de-ghosting the aligned images if necessary, and merging the aligned (and potentially de-ghosted) images to create an HDR image. The merging can be performed with exposure fusion, HDR reconstruction, or other suitable techniques.
    Type: Application
    Filed: August 26, 2014
    Publication date: March 19, 2015
    Inventors: Miguel Granados, Jose Rafael Tena, Tunc Ozan Aydin, Jean Francois Lalonde, Christian Theobalt, Iain Matthews
  • Patent number: 8983183
    Abstract: In a first exemplary embodiment, an automated, computerized method is provided for processing an image. The method includes the steps of providing, in a computer memory, an image file depicting an image defined by image locations; generating a bi-illuminant chromaticity plane in a log color space for representing the image locations of the image in a log-chromaticity representation; providing a set of estimates for an orientation of the bi-illuminant chromaticity plane; and calculating a single orientation for each of the image locations as a function of the set of estimates. (A hedged illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: March 17, 2015
    Assignee: Tandent Vision Science, Inc.
    Inventor: Jean-Francois Lalonde
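
A hedged sketch of the log-chromaticity projection and per-location orientation combination described in patent 8983183 above. The projection is standard; the per-pixel weighting of the candidate orientation estimates is an invented stand-in for the patented rule.

```python
# Hedged sketch: pixels are mapped to log-chromaticity space and a single
# orientation per pixel is formed as a weighted blend of several candidate
# orientation estimates. The weighting rule here is invented for illustration.
import numpy as np

def log_chromaticity(image: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Project RGB pixels into 2-D log-chromaticity coordinates (log R/G, log B/G)."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return np.stack([np.log((r + eps) / (g + eps)),
                     np.log((b + eps) / (g + eps))], axis=-1)

def per_pixel_orientation(chroma: np.ndarray, estimates: np.ndarray) -> np.ndarray:
    """Blend candidate plane orientations (angles, radians) per image location.
    The softmax weighting over a made-up affinity is purely illustrative."""
    magnitude = np.linalg.norm(chroma, axis=-1, keepdims=True)         # (H, W, 1)
    weights = np.exp(-np.abs(magnitude - estimates))                   # (H, W, K)
    weights /= weights.sum(axis=-1, keepdims=True)
    return (weights * estimates).sum(axis=-1)                          # (H, W)

image = np.random.rand(32, 32, 3)
chroma = log_chromaticity(image)
candidate_orientations = np.linspace(0.0, np.pi / 4, 5)   # 5 global estimates
print(per_pixel_orientation(chroma, candidate_orientations).shape)    # (32, 32)
```
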
  • Patent number: 8934735
    Abstract: In a first exemplary embodiment of the present invention, an automated, computerized method is provided for processing an image. According to a feature of the present invention, the method comprises the steps of providing an image represented in a spatial plane; organizing spatio-spectral information for the image in a matrix equation expressed by normal and tangential constraints determined as a function of a spatial orientation of selected image locations relative to the spatial plane of the image; and utilizing the matrix equation to solve for an intrinsic image corresponding to the image. (A hedged illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 7, 2012
    Date of Patent: January 13, 2015
    Assignee: Tandent Vision Science, Inc.
    Inventors: Andrew Neil Stein, Jean-Francois Lalonde
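
The matrix-equation formulation in patent 8934735 above is illustrated below in a drastically simplified one-dimensional form: least-squares rows either force the solution's gradient to match the image (at marked edges) or to stay flat elsewhere, standing in for the patented normal and tangential constraints.

```python
# Hedged sketch: a tiny least-squares system solved for a 1-D intrinsic
# (reflectance-like) signal. The constraint choices are invented stand-ins for
# the patented normal/tangential constraint formulation.
import numpy as np

def solve_intrinsic(log_image: np.ndarray, edge_mask: np.ndarray) -> np.ndarray:
    """Recover a signal whose gradient matches the image only across marked
    edges and is flat (shading-only) elsewhere."""
    n = log_image.size
    rows, rhs = [], []
    for i in range(n - 1):
        row = np.zeros(n)
        row[i], row[i + 1] = -1.0, 1.0               # finite-difference constraint
        rows.append(row)
        rhs.append(log_image[i + 1] - log_image[i] if edge_mask[i] else 0.0)
    anchor = np.zeros(n)
    anchor[0] = 1.0                                  # fix the unknown offset
    rows.append(anchor)
    rhs.append(log_image[0])
    A, b = np.stack(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]

log_image = np.array([0.0, 0.0, 1.0, 1.0, 1.1, 1.1])   # step edge + gentle shading
edge_mask = np.array([False, True, False, False, False])
print(np.round(solve_intrinsic(log_image, edge_mask), 2))
```
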
  • Publication number: 20140294296
    Abstract: In a first exemplary embodiment, an automated, computerized method is provided for processing an image. The method includes the steps of providing, in a computer memory, an image file depicting an image defined by image locations; generating a bi-illuminant chromaticity plane in a log color space for representing the image locations of the image in a log-chromaticity representation; providing a set of estimates for an orientation of the bi-illuminant chromaticity plane; and calculating a single orientation for each of the image locations as a function of the set of estimates.
    Type: Application
    Filed: June 12, 2014
    Publication date: October 2, 2014
    Inventor: Jean-Francois LALONDE