Patents by Inventor Julien Philip

Julien Philip has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139883
    Abstract: Embodiments are configured to render 3D models using an importance sampling method. First, embodiments obtain a 3D model including a plurality of density values corresponding to a plurality of locations in a 3D space, respectively. Embodiments then sample color information from within a random subset of the plurality of locations using a probability distribution based on the plurality of density values. Each location within the random subset has a higher probability of being sampled if it has a higher density value. Embodiments then render an image depicting a view of the 3D model based on the sampling within the random subset of the plurality of locations.
    Type: Application
    Filed: November 1, 2023
    Publication date: May 1, 2025
    Inventors: Milos Hasan, Iliyan Georgiev, Sai Bi, Julien Philip, Kalyan K. Sunkavalli, Xin Sun, Fujun Luan, Kevin James Blackburn-Matzen, Zexiang Xu, Kai Zhang
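The density-proportional sampling this abstract describes can be sketched minimally as follows. This is an illustrative stand-in, not the patented method; the function name and the direct use of normalized densities as a categorical distribution are assumptions.

```python
import numpy as np

def importance_sample(densities, num_samples, rng=None):
    """Pick location indices with probability proportional to density."""
    rng = np.random.default_rng(rng)
    densities = np.asarray(densities, dtype=float)
    probs = densities / densities.sum()   # normalize densities into a distribution
    return rng.choice(len(densities), size=num_samples, p=probs)

# Locations with higher density are sampled far more often.
densities = np.array([0.1, 0.1, 5.0, 0.1])
picks = importance_sample(densities, num_samples=1000, rng=0)
```

Color would then be queried only at the sampled indices, concentrating rendering effort where the 3D model is actually occupied.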
  • Publication number: 20250078408
    Abstract: Implementations of systems and methods for determining viewpoints suitable for performing one or more digital operations on a three-dimensional object are disclosed. Accordingly, a set of candidate viewpoints is established. The set of candidate viewpoints provides views of an outer surface of a three-dimensional object, and those views provide overlapping surface data. A subset of activated viewpoints is determined from the set of candidate viewpoints, the subset of activated viewpoints providing less of the overlapping surface data. The subset of activated viewpoints is used to perform one or more digital operations on the three-dimensional object.
    Type: Application
    Filed: August 29, 2023
    Publication date: March 6, 2025
    Applicant: Adobe Inc.
    Inventors: Valentin Mathieu Deschaintre, Vladimir Kim, Thibault Groueix, Julien Philip
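Selecting a subset of viewpoints with less overlapping surface data can be illustrated with a greedy set-cover sketch: each candidate viewpoint sees a set of surface points, and viewpoints are activated only while they add unseen surface. The function name and the set-cover formulation are assumptions, not the claimed method.

```python
def activate_viewpoints(visible_sets, surface_points):
    """Greedy cover: activate viewpoints that add the most unseen surface,
    so redundant (overlapping) viewpoints are never activated."""
    uncovered = set(surface_points)
    activated = []
    while uncovered:
        best = max(visible_sets, key=lambda v: len(visible_sets[v] & uncovered))
        gain = visible_sets[best] & uncovered
        if not gain:
            break  # remaining surface is not visible from any candidate
        activated.append(best)
        uncovered -= gain
    return activated

views = {
    "front": {1, 2, 3, 4},
    "back":  {5, 6, 7, 8},
    "top":   {3, 4, 5, 6},   # fully redundant with front + back
}
print(activate_viewpoints(views, range(1, 9)))  # → ['front', 'back']
```

The redundant "top" view is skipped because every surface point it sees is already covered, which is the "less overlapping surface data" property the abstract targets.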
  • Publication number: 20240412444
    Abstract: Methods and systems disclosed herein relate generally to radiance field gradient scaling for unbiased near-camera training. In a method, a processing device accesses an input image of a three-dimensional environment comprising a plurality of pixels, each pixel comprising a pixel color. The processing device determines a camera location based on the input image and a ray from the camera location in a direction of a pixel. The processing device integrates sampled information from a volumetric representation along the ray from the camera location to obtain an integrated color. The processing device trains a machine learning model configured to predict a density and a color, comprising minimizing a loss function using a scaling factor that is determined based on a distance between the camera location and a point along the ray. The processing device outputs the trained machine learning model for use in rendering an output image.
    Type: Application
    Filed: June 9, 2023
    Publication date: December 12, 2024
    Inventors: Julien Philip, Valentin Deschaintre
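The distance-based scaling factor in the loss can be sketched as follows. The exact schedule (squared distance, clamped at 1 beyond a near radius) is an assumption for illustration, not the formula claimed in the application.

```python
import numpy as np

def gradient_scale(distances, near=1.0):
    """Per-sample scaling factor: grows with squared camera-to-sample
    distance and is clamped at 1 beyond `near` (schedule is assumed)."""
    return np.minimum((np.asarray(distances, dtype=float) / near) ** 2, 1.0)

def scaled_color_loss(pred, target, distances, near=1.0):
    """L2 color loss whose contribution is down-weighted for samples close
    to the camera, reducing near-camera artifacts during training."""
    scale = gradient_scale(distances, near)
    diff = np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
    return float(np.mean(scale * diff ** 2))
```

Because the scale multiplies each sample's squared error, it also scales that sample's gradient, which is the mechanism the abstract describes.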
  • Publication number: 20240404181
    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
    Type: Application
    Filed: August 9, 2024
    Publication date: December 5, 2024
    Inventors: Zexiang Xu, Zhixin Shu, Sai Bi, Qiangeng Xu, Kalyan Sunkavalli, Julien Philip
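The per-pixel color determination by a volume rendering process can be illustrated with the standard alpha-compositing quadrature used by NeRF-style renderers; in the claimed system, the neural point cloud would supply the per-sample colors and densities. The function below is a generic sketch, not the patented model.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Alpha-composite per-sample colors along one ray.
    colors: (N, 3); densities, deltas: (N,) per-sample values."""
    alphas = 1.0 - np.exp(-densities * deltas)                       # segment opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# One opaque red sample in front fully determines the pixel color.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
densities = np.array([1e4, 1e4])
deltas = np.array([1.0, 1.0])
print(composite_ray(colors, densities, deltas))  # ≈ [1, 0, 0]
```

Running this once per pixel yields the output 2D image described in the abstract, with each pixel holding its composited color value.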
  • Patent number: 12073507
    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
    Type: Grant
    Filed: July 9, 2022
    Date of Patent: August 27, 2024
    Assignee: Adobe Inc.
    Inventors: Zexiang Xu, Zhixin Shu, Sai Bi, Qiangeng Xu, Kalyan Sunkavalli, Julien Philip
  • Publication number: 20240273813
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generates object shadows for digital images utilizing corresponding geometry-aware buffer channels. For instance, in one or more embodiments, the disclosed systems generate, utilizing a height prediction neural network, an object height map for a digital object portrayed in a digital image and a background height map for a background portrayed in the digital image. The disclosed systems also generate, from the digital image, a plurality of geometry-aware buffer channels using the object height map and the background height map. Further, the disclosed systems modify the digital image to include a soft object shadow for the digital object using the plurality of geometry-aware buffer channels.
    Type: Application
    Filed: February 14, 2023
    Publication date: August 15, 2024
    Inventors: Jianming Zhang, Yichen Sheng, Julien Philip, Yannick Hold-Geoffroy, Xin Sun, He Zhang
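How height maps can drive soft shadows is sketched below in a toy 1-D form: each object column is projected onto the ground along the light direction, and the shadow's intensity falls off with distance from the caster, so taller objects cast longer, softer shadows. This is a hand-written stand-in for the learned geometry-aware buffer channels in the abstract; all names and the falloff rule are assumptions.

```python
def soft_shadow_1d(object_height, light_slope=1.0, softness=1.0):
    """Toy sketch: project each column's height onto the ground and feather
    the shadow linearly toward its tip (taller caster => longer penumbra)."""
    n = len(object_height)
    shadow = [0.0] * n
    for x, h in enumerate(object_height):
        if h <= 0:
            continue
        reach = int(round(h * light_slope))        # where the shadow tip lands
        for dx in range(reach + 1):
            if x + dx < n:
                falloff = 1.0 - softness * dx / max(reach, 1)
                shadow[x + dx] = max(shadow[x + dx], falloff)
    return shadow

# A height-2 column casts a shadow that fades over two ground pixels.
print(soft_shadow_1d([0, 2, 0, 0, 0]))
```

The real system replaces this hand-crafted projection with buffer channels derived from predicted object and background height maps, but the intuition (shadow geometry driven by height) is the same.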
  • Patent number: 11972512
    Abstract: Directional propagation editing techniques are described. In one example, a digital image, a depth map, and a direction are obtained by an image editing system. The image editing system then generates features. To do so, the image editing system generates features from the digital image and the depth map for each pixel based on the direction, e.g., until an edge of the digital image is reached. In an implementation, instead of storing a value of the depth directly, a ratio is stored based on a depth in the depth map and a depth of a point along the direction. The image editing system then forms a feature volume using the features, e.g., as three-dimensionally stacked features. The feature volume is employed by the image editing system as part of editing the digital image to form an edited digital image.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Julien Philip, David Nicholson Griffiths
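The depth-ratio feature described in this abstract can be sketched as a walk from a pixel along a direction to the image edge, storing at each step the ratio of the visited depth to the starting pixel's depth. The function name and the integer-step walk are illustrative assumptions.

```python
import numpy as np

def directional_depth_ratios(depth, start, direction, max_steps=None):
    """Walk from `start` along `direction` until leaving the image,
    recording depth(visited) / depth(start) at each step."""
    h, w = depth.shape
    y, x = start
    dy, dx = direction
    d0 = depth[y, x]
    ratios = []
    while 0 <= y < h and 0 <= x < w:
        ratios.append(depth[y, x] / d0)
        y, x = y + dy, x + dx
        if max_steps is not None and len(ratios) >= max_steps:
            break
    return ratios

depth = np.array([[1.0, 2.0, 4.0]])
print(directional_depth_ratios(depth, (0, 0), (0, 1)))  # → [1.0, 2.0, 4.0]
```

Stacking such per-pixel ratio sequences over all pixels gives a feature volume of the "three-dimensionally stacked" kind the abstract mentions.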
  • Publication number: 20240013477
    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
    Type: Application
    Filed: July 9, 2022
    Publication date: January 11, 2024
    Inventors: Zexiang Xu, Zhixin Shu, Sai Bi, Qiangeng Xu, Kalyan Sunkavalli, Julien Philip
  • Publication number: 20230237718
    Abstract: Directional propagation editing techniques are described. In one example, a digital image, a depth map, and a direction are obtained by an image editing system. The image editing system then generates features. To do so, the image editing system generates features from the digital image and the depth map for each pixel based on the direction, e.g., until an edge of the digital image is reached. In an implementation, instead of storing a value of the depth directly, a ratio is stored based on a depth in the depth map and a depth of a point along the direction. The image editing system then forms a feature volume using the features, e.g., as three-dimensionally stacked features. The feature volume is employed by the image editing system as part of editing the digital image to form an edited digital image.
    Type: Application
    Filed: January 25, 2022
    Publication date: July 27, 2023
    Applicant: Adobe Inc.
    Inventors: Julien Philip, David Nicholson Griffiths