Patents by Inventor Yannick Hold-Geoffroy
Yannick Hold-Geoffroy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240144586
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
Type: Application
Filed: April 20, 2023
Publication date: May 2, 2024
Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
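The shadow generation this abstract mentions builds on shadow maps. A minimal sketch of the classic shadow-map depth test is below; it illustrates the general technique, not the patented system, and the function name, parameters, and coordinate conventions are illustrative assumptions.

```python
import numpy as np

def in_shadow(depth_from_light, shadow_map, uv, bias=1e-3):
    """Classic shadow-map test: a surface point is shadowed if it lies
    farther from the light than the nearest occluder recorded in the
    shadow map at the point's light-space coordinates."""
    h, w = shadow_map.shape
    u, v = uv  # light-space coordinates in [0, 1]
    nearest = shadow_map[int(v * (h - 1)), int(u * (w - 1))]
    # A small depth bias avoids self-shadowing artifacts ("shadow acne").
    return depth_from_light - bias > nearest
```

A 3D representation of a 2D image makes such a test possible because it supplies per-pixel depth from the light's point of view.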
-
Publication number: 20240143835
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating anonymized digital images utilizing a face anonymization neural network. In some embodiments, the disclosed systems utilize a face anonymization neural network to extract or encode a face anonymization guide that encodes face attribute features, such as gender, ethnicity, age, and expression. In some cases, the disclosed systems utilize the face anonymization guide to inform the face anonymization neural network in generating synthetic face pixels for anonymizing a digital image while retaining attributes, such as gender, ethnicity, age, and expression. The disclosed systems learn parameters for a face anonymization neural network for preserving face attributes, accounting for multiple faces in digital images, and generating synthetic face pixels for faces in profile poses.
Type: Application
Filed: November 2, 2022
Publication date: May 2, 2024
Inventors: Siavash Khodadadeh, Ratheesh Kalarot, Shabnam Ghadar, Yannick Hold-Geoffroy
-
Patent number: 11972534
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
Type: Grant
Filed: November 5, 2021
Date of Patent: April 30, 2024
Assignee: Adobe Inc.
Inventors: Maxine Perroni-Scharf, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann
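The retrieval step described here, comparing deep visual features with a visual similarity metric, can be sketched with cosine similarity. This is a generic illustration under assumed conventions: the feature vectors would come from the visual neural network, and the function names are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    # A common visual similarity metric between two deep feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_source(scene_feature, source_features):
    # Index of the source material whose deep visual feature is closest
    # to the scene material's feature.
    scores = [cosine_similarity(scene_feature, f) for f in source_features]
    return int(np.argmax(scores))
```

The selected source material's texture maps would then replace the corresponding maps in the digital scene.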
-
Publication number: 20240135612
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
Type: Application
Filed: April 20, 2023
Publication date: April 25, 2024
Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
-
Publication number: 20240127509
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
Type: Application
Filed: April 20, 2023
Publication date: April 18, 2024
Inventors: Yannick Hold-Geoffroy, Jianming Zhang, Byeonguk Lee
-
Publication number: 20240127402
Abstract: In some examples, a computing system accesses a field of view (FOV) image that has a field of view less than 360 degrees and has low dynamic range (LDR) values. The computing system estimates lighting parameters from a scene depicted in the FOV image and generates a lighting image based on the lighting parameters. The computing system further generates lighting features from the lighting image and image features from the FOV image. These features are aggregated into aggregated features, and a machine learning model is applied to the image features and the aggregated features to generate a panorama image having high dynamic range (HDR) values.
Type: Application
Filed: August 25, 2023
Publication date: April 18, 2024
Inventors: Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Sai Bi, Jonathan Eisenmann, Jean-François Lalonde
-
Patent number: 11887241
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using the second neural network to generate a 3D appearance representation of the object.
Type: Grant
Filed: December 22, 2021
Date of Patent: January 30, 2024
Assignee: Adobe Inc.
Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
-
Patent number: 11854115
Abstract: A vectorized caricature avatar generator receives a user image from which face parameters are generated. Segments of the user image including certain facial features (e.g., hair, facial hair, eyeglasses) are also identified. Segment parameter values are also determined, the segment parameter values being those parameter values from a set of caricature avatars that correspond to the segments of the user image. The face parameter values and the segment parameter values are used to generate a caricature avatar of the user in the user image.
Type: Grant
Filed: November 4, 2021
Date of Patent: December 26, 2023
Assignee: Adobe Inc.
Inventors: Daichi Ito, Yijun Li, Yannick Hold-Geoffroy, Koki Madono, Jose Ignacio Echevarria Vallespi, Cameron Younger Smith
-
Publication number: 20230360170
Abstract: Embodiments are disclosed for generating 360-degree panoramas from input narrow field of view images. A method of generating 360-degree panoramas may include obtaining an input image and a guide, generating a panoramic projection of the input image, and generating, by a panorama generator, a 360-degree panorama based on the panoramic projection and the guide, wherein the panorama generator is a guided co-modulation generator network trained to generate a 360-degree panorama from the input image based on the guide.
Type: Application
Filed: November 15, 2022
Publication date: November 9, 2023
Applicant: Adobe Inc.
Inventors: Mohammad Reza Karimi Dastjerdi, Yannick Hold-Geoffroy, Vladimir Kim, Jonathan Eisenmann, Jean-François Lalonde
-
Publication number: 20230306637
Abstract: Systems and methods for image dense field based view calibration are provided. In one embodiment, an input image is applied to a dense field machine learning model that generates a vertical vector dense field (VVF) and a latitude dense field (LDF) from the input image. The VVF comprises a vertical vector of a projected vanishing point direction for each of the pixels of the input image. The LDF comprises a projected latitude value for each of the pixels of the input image. A dense field map for the input image comprising the VVF and the LDF can be directly or indirectly used for a variety of image processing manipulations. The VVF and LDF can optionally be used to derive traditional camera calibration parameters from uncontrolled images that have undergone undocumented or unknown manipulations.
Type: Application
Filed: March 28, 2022
Publication date: September 28, 2023
Inventors: Jianming Zhang, Linyi Jin, Kevin Matzen, Oliver Wang, Yannick Hold-Geoffroy
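One simple use of a latitude dense field is locating the horizon: the horizon passes through pixels whose projected latitude is zero. The sketch below assumes a hypothetical per-pixel latitude array and is only an illustration of reading such a field, not the patented calibration method.

```python
import numpy as np

def horizon_rows(latitude_field):
    # latitude_field: (H, W) projected latitude per pixel, in degrees
    # (an assumed LDF layout). For each image column, return the row
    # whose projected latitude is closest to zero, i.e. the horizon.
    return np.argmin(np.abs(latitude_field), axis=0)
```

Deriving full camera parameters (roll, pitch, field of view) would additionally use the vertical vector field, as the abstract notes.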
-
Publication number: 20230244940
Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
Type: Application
Filed: April 6, 2023
Publication date: August 3, 2023
Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
-
Patent number: 11663467
Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
Type: Grant
Filed: November 21, 2019
Date of Patent: May 30, 2023
Assignee: Adobe Inc.
Inventors: Long Mai, Yannick Hold-Geoffroy, Naoto Inoue, Daichi Ito, Brian Lynn Price
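Combining an AO map with a 2D image, as these two related filings describe, amounts to darkening pixels in proportion to their estimated occlusion. The sketch below is a minimal illustration under assumed conventions (AO value 1 means fully occluded); it is not the patented compositing.

```python
import numpy as np

def apply_ao(image, ao_map, strength=1.0):
    # image: (H, W, 3) in [0, 1]; ao_map: (H, W) in [0, 1], where 1 means
    # fully occluded (an assumed convention). Scaling each pixel by the
    # unoccluded fraction deepens shadows based on scene geometry.
    shading = 1.0 - strength * ao_map
    return np.clip(image * shading[..., None], 0.0, 1.0)
```

A `strength` below 1.0 would apply the contrast adjustment more subtly.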
-
Patent number: 11663775
Abstract: Methods, systems, and computer storage media are provided for generating physically based materials for rendering digital objects with an appearance of a real-world material. Images depicting the real-world material, including diffuse component images and specular component images, are captured using different lighting patterns, which may include area lights. From the captured images, approximations of one or more material maps are determined using a photometric stereo technique. Based on the approximations and the captured images, a neural network system generates a set of material maps, such as a diffuse albedo material map, a normal material map, a specular albedo material map, and a roughness material map. The material maps from the neural network may be optimized based on a comparison of the input images of the real-world material and images rendered from the material maps.
Type: Grant
Filed: April 19, 2021
Date of Patent: May 30, 2023
Assignee: Adobe Inc.
Inventors: Akshat Dave, Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan
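The photometric stereo step mentioned in the abstract has a well-known Lambertian form: with known light directions, per-pixel intensities yield the surface normal and albedo by least squares. This sketch shows the textbook technique, not the patent's specific approximation.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    # Lambertian photometric stereo for one pixel: solve
    # light_dirs @ (albedo * normal) = intensities in the least-squares
    # sense, then split the result into a unit normal and an albedo.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```

Running this per pixel gives the approximate normal and diffuse albedo maps that the neural network then refines.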
-
Publication number: 20230141395
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
Type: Application
Filed: November 5, 2021
Publication date: May 11, 2023
Inventors: Maxine Perroni-Scharf, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann
-
Publication number: 20230140146
Abstract: A vectorized caricature avatar generator receives a user image from which face parameters are generated. Segments of the user image including certain facial features (e.g., hair, facial hair, eyeglasses) are also identified. Segment parameter values are also determined, the segment parameter values being those parameter values from a set of caricature avatars that correspond to the segments of the user image. The face parameter values and the segment parameter values are used to generate a caricature avatar of the user in the user image.
Type: Application
Filed: November 4, 2021
Publication date: May 4, 2023
Applicant: Adobe Inc.
Inventors: Daichi Ito, Yijun Li, Yannick Hold-Geoffroy, Koki Madono, Jose Ignacio Echevarria Vallespi, Cameron Younger Smith
-
Publication number: 20230098115
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Application
Filed: December 6, 2022
Publication date: March 30, 2023
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
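Projecting parametric light-source estimates into an environment map, as the differentiable-projection layer mentioned above does, can be illustrated by splatting a directional light as a smooth lobe on an equirectangular grid. This is a simplified, hypothetical stand-in for that layer; all parameter names and the lobe shape are assumptions.

```python
import numpy as np

def light_to_envmap(direction, intensity, sharpness, height=16, width=32):
    # Splat one parametric light (direction, intensity, angular sharpness)
    # onto an equirectangular environment map.
    theta = (np.arange(height) + 0.5) / height * np.pi       # polar angle
    phi = (np.arange(width) + 0.5) / width * 2 * np.pi       # azimuth
    sin_t = np.sin(theta)[:, None]
    # Unit direction for every pixel of the map.
    dirs = np.stack([sin_t * np.cos(phi)[None, :],
                     sin_t * np.sin(phi)[None, :],
                     np.repeat(np.cos(theta)[:, None], width, axis=1)], axis=-1)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    cos_ang = dirs @ d
    # Smooth lobe peaked at the light direction; differentiable in all parameters.
    return intensity * np.exp(sharpness * (cos_ang - 1.0))
```

Because every operation is differentiable, a loss between this map and a ground-truth environment map can be backpropagated to the lighting parameters.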
-
Patent number: 11568642
Abstract: Methods and systems are provided for facilitating large-scale augmented reality in relation to outdoor scenes using estimated camera pose information. In particular, camera pose information for an image can be estimated by matching the image to a rendered ground-truth terrain model with known camera pose information. To match images with such renders, data driven cross-domain feature embedding can be learned using a neural network. Cross-domain feature descriptors can be used for efficient and accurate feature matching between the image and the terrain model renders. This feature matching allows images to be localized in relation to the terrain model, which has known camera pose information. This known camera pose information can then be used to estimate camera pose information in relation to the image.
Type: Grant
Filed: October 12, 2020
Date of Patent: January 31, 2023
Assignee: Adobe Inc.
Inventors: Michal Lukáč, Oliver Wang, Jan Brejcha, Yannick Hold-Geoffroy, Martin Čadík
-
Patent number: 11538216
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional ("3D") lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
Type: Grant
Filed: September 3, 2019
Date of Patent: December 27, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Yannick Hold-Geoffroy, Christian Gagne, Marc-Andre Gardner, Jean-Francois Lalonde
-
Patent number: 11488342
Abstract: Embodiments of the technology described herein make unknown material maps in a Physically Based Rendering (PBR) asset usable through an identification process that relies, at least in part, on image analysis. In addition, when a desired material-map type is completely missing from a PBR asset, the technology described herein may generate a suitable synthetic material map for use in rendering. In one aspect, the correct map type is assigned using a machine classifier, such as a convolutional neural network, which analyzes image content of the unknown material map and produces a classification. The technology described herein also correlates material maps into material definitions using a combination of the material-map type and similarity analysis. The technology described herein may generate synthetic maps to be used in place of the missing material maps. The synthetic maps may be generated using a Generative Adversarial Network (GAN).
Type: Grant
Filed: May 27, 2021
Date of Patent: November 1, 2022
Assignee: Adobe Inc.
Inventors: Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan, Zexiang Xu, Yu-Ying Yeh, Stefano Corazza
-
Publication number: 20220335636
Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
Type: Application
Filed: April 15, 2021
Publication date: October 20, 2022
Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Milos Hasan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
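Rendering from a learned volume representation, as this filing describes, rests on standard volume-rendering compositing along each camera ray. The sketch below shows that general quadrature rule, not this patent's specific formulation; the sample colors and densities would come from the trained network.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    # Volume-rendering quadrature along one ray:
    # alpha_i = 1 - exp(-sigma_i * delta_i), and each sample is weighted
    # by the transmittance accumulated before it.
    alphas = 1.0 - np.exp(-densities * deltas)
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * transmittance
    return (weights[:, None] * colors).sum(axis=0)
```

Because geometry (density) and reflectance are separate parameters in the learned volume, the same compositing supports relighting and reflectance edits by recomputing the per-sample colors.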