Patents by Inventor Julien Pascal Christophe VALENTIN
Julien Pascal Christophe VALENTIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240265610
Abstract: A cage of primitive 3D elements and associated animation data is received. A ray is computed from a virtual camera through a pixel into the cage, which is animated according to the animation data, and a plurality of samples is computed along the ray. A transformation of the samples into a canonical cage is computed. For each transformed sample, a plurality of learnt radiance field parameterizations, each learnt on a different deformed state of the 3D scene, is queried to obtain a color value from each learnt radiance field, and a learnt radiance field parameterization of the 3D scene is queried to obtain an opacity value. For each transformed sample, a weighted combination of the color values is computed, wherein the weights are related to local features. A volume rendering method is applied to the weighted color combinations and the opacity values to produce a pixel value.
Type: Application
Filed: February 3, 2023
Publication date: August 8, 2024
Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Virginia ESTELLERS CASAS, Julien Pascal Christophe VALENTIN, Kacper KANIA
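The blending and rendering steps in this abstract can be pictured with a short sketch. The NumPy snippet below is illustrative only: the array shapes, the helper name `blend_and_render`, and the normalization of the blend weights are assumptions, not the patented implementation.

```python
import numpy as np

def blend_and_render(colors, weights, opacities, deltas):
    """Sketch of the per-pixel blending and volume-rendering step.

    colors:    (S, K, 3) color from each of K learnt radiance fields
               for S samples along the ray (in the canonical cage).
    weights:   (S, K)    blend weights derived from local features.
    opacities: (S,)      opacity (density) per sample.
    deltas:    (S,)      distance between consecutive samples.
    Returns a single RGB pixel value.
    """
    # Weighted combination of the per-field colors at each sample.
    w = weights / np.clip(weights.sum(axis=1, keepdims=True), 1e-8, None)
    blended = (w[..., None] * colors).sum(axis=1)              # (S, 3)

    # Standard volume rendering: alpha compositing along the ray.
    alpha = 1.0 - np.exp(-opacities * deltas)                  # (S,)
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    contrib = transmittance * alpha                            # (S,)
    return (contrib[:, None] * blended).sum(axis=0)            # (3,)
```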
-
Publication number: 20240037829
Abstract: To compute an image of a dynamic 3D scene comprising a 3D object, a description of a deformation of the 3D object is received, the description comprising a cage of primitive 3D elements and associated animation data from a physics engine or an articulated object model. For a pixel of the image, the method computes a ray from a virtual camera through the pixel into the cage animated according to the animation data and computes a plurality of samples on the ray. Each sample is a 3D position and view direction in one of the 3D elements. The method computes a transformation of the samples into a canonical cage. For each transformed sample, the method queries a learnt radiance field parameterization of the 3D scene to obtain a color value and an opacity value. A volume rendering method is applied to the color and opacity values to produce a pixel value of the image.
Type: Application
Filed: September 19, 2022
Publication date: February 1, 2024
Inventors: Julien Pascal Christophe VALENTIN, Virginia ESTELLERS CASAS, Shideh REZAEIFAR, Jingjing SHEN, Stanislaw Kacper SZYMANOWICZ, Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON
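A minimal per-pixel sketch of the pipeline named in the abstract (ray, samples, canonical transform, radiance-field query, volume rendering). Here `to_canonical` and `radiance_field` are caller-supplied callables standing in for the cage transformation and the learnt parameterization, and the sampling range and count are arbitrary; this is a reading aid, not the disclosed method.

```python
import numpy as np

def render_pixel(ray_origin, ray_dir, to_canonical, radiance_field,
                 near=0.1, far=5.0, num_samples=64):
    """Render one pixel of the deformed scene (illustrative sketch)."""
    t = np.linspace(near, far, num_samples)                  # sample depths
    points = ray_origin + t[:, None] * ray_dir               # (S, 3) in the animated cage

    # Map each sample (position, view direction) into the canonical cage.
    dirs = np.broadcast_to(ray_dir, points.shape)
    canon_pts, canon_dirs = to_canonical(points, dirs)

    # Query the learnt radiance field: color and opacity per sample.
    colors, sigmas = radiance_field(canon_pts, canon_dirs)   # (S, 3), (S,)

    # Volume rendering: alpha compositing along the ray.
    deltas = np.append(np.diff(t), t[-1] - t[-2])
    alpha = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    return ((trans * alpha)[:, None] * colors).sum(axis=0)   # RGB pixel value
```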
-
Publication number: 20230326238
Abstract: A neural optimizer is disclosed that is easily applicable to different fitting problems, can run at interactive rates without requiring significant effort, does not require hand-crafted priors, carries over information about previous iterations of the solve, controls the learning rate of each parameter independently for robustness and convergence speed, and combines updates from gradient descent and from a method capable of very quickly reducing the fitting energy. A neural fitter estimates the values of the parameters θ by iteratively updating an initial estimate θ0.
Type: Application
Filed: April 12, 2022
Publication date: October 12, 2023
Inventors: Julien Pascal Christophe VALENTIN, Federica BOGO, Vasileios CHOUTAS, Jingjing SHEN
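As a rough illustration of the kind of learned-optimizer loop such a neural fitter implies, the PyTorch sketch below starts from θ0 and lets a small network turn the current gradient and a carried-over state into a per-parameter step. The `update_net` interface and the loop structure are assumptions made for illustration, not the disclosed optimizer.

```python
import torch

def neural_fit(energy_fn, theta0, update_net, num_iters=10):
    """Iteratively refine theta from an initial estimate theta0 (sketch).

    energy_fn:  callable mapping theta -> scalar fitting energy.
    update_net: hypothetical torch.nn.Module taking (theta, grad, hidden)
                and returning (per-parameter step, next hidden state).
    """
    theta = theta0.clone().requires_grad_(True)
    hidden = None
    for _ in range(num_iters):
        energy = energy_fn(theta)                      # fitting energy E(theta)
        (grad,) = torch.autograd.grad(energy, theta)   # gradient-descent signal
        # The network combines the gradient with state carried over from
        # previous iterations and outputs an independent step per parameter.
        step, hidden = update_net(theta.detach(), grad, hidden)
        theta = (theta - step).detach().requires_grad_(True)
    return theta.detach()
```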
-
Publication number: 20230316552
Abstract: The techniques described herein disclose a system that is configured to detect and track the three-dimensional pose of an object (e.g., a head-mounted display device) in a color image using an accessible three-dimensional model of the object. The system uses the three-dimensional pose of the object to repair pixel depth values associated with a region (e.g., a surface) of the object that is composed of material that absorbs light emitted by a time-of-flight depth sensor to determine depth. Consequently, a color-depth image (e.g., a Red-Green-Blue-Depth image, or RGB-D image) can be produced that does not include dark holes on and around the region of the object that is composed of the light-absorbing material.
Type: Application
Filed: April 4, 2022
Publication date: October 5, 2023
Inventors: JingJing SHEN, Erroll William WOOD, Toby SHARP, Ivan RAZUMENIC, Tadas BALTRUSAITIS, Julien Pascal Christophe VALENTIN, Predrag JOVANOVIC
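The repair step essentially substitutes model-derived depth wherever the sensor returned holes. The sketch below assumes a `rendered_depth` map already produced by rendering the tracked 3D model at the detected pose; the function name and the invalid-value convention are illustrative, not the patented system.

```python
import numpy as np

def repair_depth(depth, rendered_depth, invalid_value=0.0):
    """Fill time-of-flight holes with depth rendered from the tracked model.

    depth:          (H, W) sensor depth map with holes (invalid_value)
                    caused by light-absorbing material.
    rendered_depth: (H, W) depth of the 3D model rendered at its estimated
                    pose (zero where the model does not cover the pixel).
    """
    holes = (depth == invalid_value) & (rendered_depth > 0.0)
    repaired = depth.copy()
    repaired[holes] = rendered_depth[holes]   # substitute model depth in the holes
    return repaired
```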
-
Publication number: 20230281945
Abstract: Keypoints are predicted in an image. A neural network is executed that is configured to predict each of the keypoints as a 2D random variable, normally distributed with a 2D position and a 2×2 covariance matrix. The neural network is trained to maximize the log-likelihood that samples from each of the predicted keypoints equal the ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
Type: Application
Filed: June 28, 2022
Publication date: September 7, 2023
Inventors: Thomas Joseph CASHMAN, Erroll William WOOD, Martin DE LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Julien Pascal Christophe VALENTIN
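The training objective amounts to a Gaussian negative log-likelihood over the predicted mean and 2×2 covariance. A minimal PyTorch sketch follows, assuming the network already outputs `mu` and a symmetric positive-definite `cov` (in practice the covariance would typically be parameterized, e.g. via a Cholesky factor, to guarantee this); it is an illustration of the stated objective, not the patented training code.

```python
import torch

def keypoint_nll(mu, cov, target):
    """Negative log-likelihood of ground-truth keypoints under the
    predicted 2D Gaussians (minimizing this maximizes the log-likelihood).

    mu:     (N, 2)    predicted keypoint positions
    cov:    (N, 2, 2) predicted covariance matrices (assumed SPD)
    target: (N, 2)    ground-truth keypoint positions
    """
    diff = (target - mu).unsqueeze(-1)                       # (N, 2, 1)
    # Mahalanobis term (x - mu)^T Sigma^{-1} (x - mu), batched.
    maha = (diff.transpose(-1, -2) @ torch.linalg.solve(cov, diff)).squeeze()
    log_det = torch.logdet(cov)                              # (N,)
    return 0.5 * (maha + log_det).mean()                     # additive constants dropped
```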
-
Publication number: 20230281863
Abstract: Keypoints are predicted in an image. Predictions are generated for each of the keypoints of an image as a 2D random variable, normally distributed with location (x, y) and standard deviation sigma. A neural network is trained to maximize the log-likelihood that samples from each of the predicted keypoints equal the ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
Type: Application
Filed: June 28, 2022
Publication date: September 7, 2023
Inventors: Julien Pascal Christophe VALENTIN, Erroll William WOOD, Thomas Joseph CASHMAN, Martin de LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Charles Thomas HEWITT, Nikola MILOSAVLJEVIC, Stephan Joachim GARBIN, Toby SHARP, Ivan STOJILJKOVIC
-
Patent number: 11710025
Abstract: A system receives input from a user to initiate a process of generating a holodouble of the user. The system obtains image data of the user and deconstructs the image data to obtain a set of sparse data that identifies one or more attributes associated with the image data of the user. The system uses a holodouble training model to generate and train the holodouble of the user based on the set of sparse data and the obtained image data. The system renders a representation of the holodouble to the user concurrently while capturing new image data of the user, receives input from the user comprising approval of the holodouble, and completes training of the holodouble by saving the holodouble for subsequent use. The subsequent use includes one or more remote visual communication sessions.
Type: Grant
Filed: July 12, 2022
Date of Patent: July 25, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julien Pascal Christophe Valentin, Erik Alexander Hill
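The abstract describes a capture-train-review loop. The Python sketch below only mirrors that sequence of steps; every parameter (camera stream, sparse-feature extractor, training model, storage) is a hypothetical interface invented for illustration, not part of the patented system.

```python
def create_holodouble(user, camera, extract_sparse, training_model, store):
    """High-level sketch of the capture-and-train flow (illustrative only)."""
    holodouble = training_model.initialize(user)
    while not user.has_approved():
        image = camera.capture()                          # obtain image data of the user
        sparse = extract_sparse(image)                    # deconstruct into sparse attributes
        training_model.update(holodouble, sparse, image)  # train on sparse + image data
        user.show(holodouble.render())                    # render concurrently during capture
    store.save(user.id, holodouble)                       # complete training by saving
    return holodouble
```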
-
Publication number: 20220343133
Abstract: A system receives input from a user to initiate a process of generating a holodouble of the user. The system obtains image data of the user and deconstructs the image data to obtain a set of sparse data that identifies one or more attributes associated with the image data of the user. The system uses a holodouble training model to generate and train the holodouble of the user based on the set of sparse data and the obtained image data. The system renders a representation of the holodouble to the user concurrently while capturing new image data of the user, receives input from the user comprising approval of the holodouble, and completes training of the holodouble by saving the holodouble for subsequent use. The subsequent use includes one or more remote visual communication sessions.
Type: Application
Filed: July 12, 2022
Publication date: October 27, 2022
Inventors: Julien Pascal Christophe VALENTIN, Erik Alexander HILL
-
Patent number: 11429835
Abstract: A system receives input from a user to initiate a process of generating a holodouble of the user. The system obtains image data of the user and deconstructs the image data to obtain a set of sparse data that identifies one or more attributes associated with the image data of the user. The system uses a holodouble training model to generate and train the holodouble of the user based on the set of sparse data and the obtained image data. The system renders a representation of the holodouble to the user concurrently while capturing new image data of the user, receives input from the user comprising approval of the holodouble, and completes training of the holodouble by saving the holodouble for subsequent use. The subsequent use includes one or more remote visual communication sessions.
Type: Grant
Filed: February 12, 2021
Date of Patent: August 30, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julien Pascal Christophe Valentin, Erik Alexander Hill
-
Patent number: 11037026
Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
Type: Grant
Filed: January 22, 2020
Date of Patent: June 15, 2021
Assignee: Google LLC
Inventors: Sean Ryan Fanello, Julien Pascal Christophe Valentin, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
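A minimal sketch of the inference scheme described above, assuming the binary codes from the first function are already available: each pixel's label is updated in parallel by scoring only its nearest neighbors' labels, so the cost per iteration scales with the neighborhood size K rather than with the full label space. The scoring function `compat_fn` is a hypothetical stand-in for the second, learnt function.

```python
import numpy as np

def assign_labels(binary_codes, neighbors, compat_fn, labels_init, num_iters=5):
    """Parallel nearest-neighbor label inference (illustrative sketch).

    binary_codes: (P, B) 0/1 code per pixel from the first (learnt) mapping
    neighbors:    (P, K) indices of each pixel's K nearest-neighbor pixels
    compat_fn:    callable scoring each pixel's code against candidate labels,
                  returning a (P, K) score array (stand-in for the second function)
    labels_init:  (P,) initial label per pixel
    """
    labels = labels_init.copy()
    for _ in range(num_iters):
        # Each pixel considers only the labels currently held by its neighbors.
        neighbor_labels = labels[neighbors]                   # (P, K)
        scores = compat_fn(binary_codes, neighbor_labels)     # (P, K)
        best = scores.argmax(axis=1)
        labels = neighbor_labels[np.arange(len(labels)), best]
    return labels
```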
-
Patent number: 10997457
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
Type: Grant
Filed: October 16, 2019
Date of Patent: May 4, 2021
Assignee: Google LLC
Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
-
Publication number: 20200372284
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
Type: Application
Filed: October 16, 2019
Publication date: November 26, 2020
Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
-
Patent number: 10839539
Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other, in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
Type: Grant
Filed: May 31, 2018
Date of Patent: November 17, 2020
Assignee: Google LLC
Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim
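The matching-cost step can be pictured as a Hamming-style comparison of binary codes across candidate disparities. The sketch below assumes the spatio-temporal patches have already been converted to per-pixel binary codes, and omits the outlier rejection and subpixel refinement mentioned in the abstract; shapes and names are illustrative.

```python
import numpy as np

def disparity_from_binary_codes(codes_left, codes_right, max_disparity):
    """Pick the per-pixel disparity minimizing the binary-code difference.

    codes_left, codes_right: (H, W, B) binary codes (0/1) per pixel
    Returns an (H, W) integer disparity map.
    """
    h, w, _ = codes_left.shape
    cost = np.full((h, w, max_disparity), np.inf)
    for d in range(max_disparity):
        # Cost between left pixel x and right pixel x - d: count of
        # differing bits (Hamming distance) between their codes.
        diff = codes_left[:, d:, :] != codes_right[:, : w - d, :]
        cost[:, d:, d] = diff.sum(axis=-1)
    return cost.argmin(axis=-1)
```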
-
Patent number: 10824226
Abstract: An electronic device estimates a pose of a face by fitting a generative face model mesh to a depth map based on vertices of the face model mesh that are estimated to be visible from the point of view of a depth camera. A face tracking module of the electronic device receives a depth image of a face from a depth camera and generates a depth map of the face based on the depth image. The face tracking module identifies a pose of the face by fitting a face model mesh to the pixels of a depth map that correspond to the vertices of the face model mesh that are estimated to be visible from the point of view of the depth camera.
Type: Grant
Filed: June 7, 2018
Date of Patent: November 3, 2020
Assignee: Google LLC
Inventors: Julien Pascal Christophe Valentin, Jonathan James Taylor, Shahram Izadi
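The fitting described above can be framed as minimizing, over candidate poses, the distance between the visible face-model vertices and the back-projected depth pixels. The sketch below is a simplified stand-in (rigid pose, brute-force nearest-point correspondences), not the patented fitter; the pose representation and distance measure are assumptions.

```python
import numpy as np

def pose_energy(pose, model_vertices, visible_mask, depth_points):
    """Energy of a candidate face pose against back-projected depth (sketch).

    pose:           (rotation (3, 3), translation (3,)) candidate rigid pose
    model_vertices: (N, 3) face-model mesh vertices
    visible_mask:   (N,)   True for vertices estimated visible from the camera
    depth_points:   (M, 3) 3D points back-projected from the depth map
    """
    rotation, translation = pose
    visible = model_vertices[visible_mask] @ rotation.T + translation   # (V, 3)

    # Nearest depth point for each visible vertex (brute force for clarity).
    d2 = ((visible[:, None, :] - depth_points[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean()   # mean distance; minimize over pose
```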
-
Publication number: 20200160109
Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
Type: Application
Filed: January 22, 2020
Publication date: May 21, 2020
Inventors: Sean Ryan FANELLO, Julien Pascal Christophe VALENTIN, Adarsh Prakash Murthy KOWDLE, Christoph RHEMANN, Vladimir TANKOVICH, Philip L. DAVIDSON, Shahram IZADI
-
Patent number: 10579905
Abstract: Values of pixels in an image are mapped to a binary space using a first function that preserves characteristics of values of the pixels. Labels are iteratively assigned to the pixels in the image in parallel based on a second function. The label assigned to each pixel is determined based on values of a set of nearest-neighbor pixels. The first function is trained to map values of pixels in a set of training images to the binary space and the second function is trained to assign labels to the pixels in the set of training images. Considering only the nearest neighbors in the inference scheme results in a computational complexity that is independent of the size of the solution space and produces sufficient approximations of the true distribution when the solution for each pixel is most likely found in a small subset of the set of potential solutions.
Type: Grant
Filed: March 19, 2018
Date of Patent: March 3, 2020
Assignee: Google LLC
Inventors: Sean Ryan Fanello, Julien Pascal Christophe Valentin, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
-
Patent number: 10554957
Abstract: First and second images of a scene are captured. Each of a plurality of pixels in the first image is associated with a disparity value. An image patch associated with each of the plurality of pixels of the first image and the second image is mapped into a binary vector. Thus, values of pixels in an image are mapped to a binary space using a function that preserves characteristics of values of the pixels. The difference between the binary vector associated with each of the plurality of pixels of the first image and its corresponding binary vector in the second image, designated by the disparity value associated with each of the plurality of pixels of the first image, is determined. Based on the determined difference between binary vectors, correspondence between the plurality of pixels of the first image and the second image is established.
Type: Grant
Filed: June 4, 2018
Date of Patent: February 4, 2020
Assignee: Google LLC
Inventors: Julien Pascal Christophe Valentin, Sean Ryan Fanello, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
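The correspondence check reduces to looking up, for each left-image pixel, the right-image binary vector designated by its disparity and measuring how many bits differ. A minimal NumPy sketch, assuming the binary vectors are already computed; the acceptance threshold `max_hamming` is purely illustrative.

```python
import numpy as np

def verify_correspondences(vec_left, vec_right, disparity, max_hamming=8):
    """Establish correspondences from binary-vector differences (sketch).

    vec_left, vec_right: (H, W, B) per-pixel binary vectors (0/1)
    disparity:           (H, W)    disparity associated with each left pixel
    Returns (difference, valid): per-pixel bit difference and acceptance mask.
    """
    h, w, _ = vec_left.shape
    xs = np.clip(np.arange(w)[None, :] - disparity, 0, w - 1)   # designated right column
    ys = np.repeat(np.arange(h)[:, None], w, axis=1)
    matched = vec_right[ys, xs]                                 # (H, W, B) designated vectors
    difference = (vec_left != matched).sum(axis=-1)             # per-pixel binary difference
    valid = difference <= max_hamming                           # accepted correspondences
    return difference, valid
```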
-
Publication number: 20180356883
Abstract: An electronic device estimates a pose of a face by fitting a generative face model mesh to a depth map based on vertices of the face model mesh that are estimated to be visible from the point of view of a depth camera. A face tracking module of the electronic device receives a depth image of a face from a depth camera and generates a depth map of the face based on the depth image. The face tracking module identifies a pose of the face by fitting a face model mesh to the pixels of a depth map that correspond to the vertices of the face model mesh that are estimated to be visible from the point of view of the depth camera.
Type: Application
Filed: June 7, 2018
Publication date: December 13, 2018
Inventors: Julien Pascal Christophe VALENTIN, Jonathan James TAYLOR, Shahram IZADI
-
Publication number: 20180352213
Abstract: First and second images of a scene are captured. Each of a plurality of pixels in the first image is associated with a disparity value. An image patch associated with each of the plurality of pixels of the first image and the second image is mapped into a binary vector. Thus, values of pixels in an image are mapped to a binary space using a function that preserves characteristics of values of the pixels. The difference between the binary vector associated with each of the plurality of pixels of the first image and its corresponding binary vector in the second image, designated by the disparity value associated with each of the plurality of pixels of the first image, is determined. Based on the determined difference between binary vectors, correspondence between the plurality of pixels of the first image and the second image is established.
Type: Application
Filed: June 4, 2018
Publication date: December 6, 2018
Inventors: Julien Pascal Christophe Valentin, Sean Ryan Fanello, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Vladimir Tankovich, Philip L. Davidson, Shahram Izadi
-
Publication number: 20180350087
Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other, in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
Type: Application
Filed: May 31, 2018
Publication date: December 6, 2018
Inventors: Adarsh Prakash Murthy KOWDLE, Vladimir TANKOVICH, Danhang TANG, Cem KESKIN, Jonathan James Taylor, Philip L. DAVIDSON, Shahram IZADI, Sean Ryan FANELLO, Julien Pascal Christophe VALENTIN, Christoph RHEMANN, Mingsong DOU, Sameh KHAMIS, David KIM