Patents by Inventor Stephan Joachim GARBIN
Stephan Joachim GARBIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240320498
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. The first embedding and the second embedding are blended to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Application
Filed: May 23, 2024
Publication date: September 26, 2024
Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO, Jamie Daniel Joseph SHOTTON
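The blending step this abstract describes can be sketched in a few lines; the linear blend below is one plausible reading, and every name in it is illustrative rather than taken from the patent:

```python
# Hypothetical sketch of blending two latent embeddings before decoding.
# A real system would decode the result with the trained neural image
# generator; here the embeddings are plain lists of floats.

def blend_embeddings(e_photoreal, e_match, alpha):
    """Element-wise linear blend of two latent embeddings.

    e_photoreal: embedding whose decoded image keeps photorealistic appearance.
    e_match:     embedding whose decoded image matches the synthetic image.
    alpha:       blend weight in [0, 1]; 0 returns e_photoreal, 1 returns e_match.
    """
    return [(1.0 - alpha) * a + alpha * b
            for a, b in zip(e_photoreal, e_match)]

# Halfway blend of two 4-dimensional embeddings.
blended = blend_embeddings([0.0, 2.0, 4.0, 6.0], [2.0, 0.0, 4.0, 2.0], 0.5)
```

A per-dimension or spatially varying blend would work the same way, with the scalar `alpha` replaced by a vector of weights.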
-
Publication number: 20240265610
Abstract: A cage of primitive 3D elements and associated animation data is received. A ray is computed from a virtual camera through a pixel into the cage, animated according to the animation data, and a plurality of samples is computed on the ray. A transformation of the samples into a canonical cage is computed. For each transformed sample, a plurality of learnt radiance field parameterizations is queried, each learnt on a different deformed state of the 3D scene, to obtain color values for each learnt radiance field. For each transformed sample, a learnt radiance field parameterization of the 3D scene is queried to obtain an opacity value. For each transformed sample, a weighted combination of the color values is computed, wherein the weights are related to local features. A volume rendering method is applied to the weighted combinations of the color values and the opacity values, producing a pixel value.
Type: Application
Filed: February 3, 2023
Publication date: August 8, 2024
Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Virginia ESTELLERS CASAS, Julien Pascal Christophe VALENTIN, Kacper KANIA
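The per-sample weighted combination of color values from several radiance fields reduces to a convex combination per channel; a minimal sketch with illustrative names, simplified to fixed scalar weights (in the patent the weights are related to local features):

```python
# Hypothetical sketch: combine the color values returned by several learnt
# radiance fields for one ray sample into a single RGB value.

def weighted_color(colors, weights):
    """Convex combination of per-field RGB colors for one ray sample."""
    total = sum(weights)
    ws = [w / total for w in weights]  # normalise so the weights sum to 1
    return tuple(sum(w * c[i] for w, c in zip(ws, colors)) for i in range(3))

# Two radiance fields, equal weights: the result is the per-channel mean.
rgb = weighted_color([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [1.0, 1.0])
```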
-
Patent number: 12045925
Abstract: In various examples there is an apparatus for computing an image depicting a face of a wearer of a head mounted display (HMD), as if the wearer were not wearing the HMD. An input image depicts a partial view of the wearer's face captured from at least one face-facing capture device in the HMD. A machine learning apparatus is available which has been trained to compute expression parameters from the input image. A 3D face model that has expression parameters is accessible, as well as a photorealiser, being a machine learning model trained to map images rendered from the 3D face model to photorealistic images. The apparatus computes expression parameter values from the image using the machine learning apparatus. The apparatus drives the 3D face model with the expression parameter values to produce a 3D model of the face of the wearer and then renders the 3D model from a specified viewpoint to compute a rendered image. The rendered image is upgraded to a photorealistic image using the photorealiser.
Type: Grant
Filed: June 11, 2020
Date of Patent: July 23, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Matthew Alastair Johnson, Marta Malgorzata Wilczkowiak, Daniel Stephen Wilde, Paul Malcolm McIlroy, Tadas Baltrusaitis, Virginia Estellers Casas, Marek Adam Kowalski, Christopher Maurice Mei, Stephan Joachim Garbin
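The stages this abstract names compose into a straightforward pipeline; the skeleton below shows only the data flow, with every function a hypothetical stub standing in for a trained component (none of these names come from the patent):

```python
# Hypothetical data-flow skeleton for the HMD-removal pipeline described in
# the abstract. The strings and dictionaries are placeholders, not real
# model outputs.

def estimate_expression(partial_face_image):
    # stand-in for the machine learning apparatus that maps an input image
    # to expression parameter values
    return {"smile": 0.8, "brow_raise": 0.1}

def drive_face_model(expression_params):
    # stand-in for posing the 3D face model with the expression parameters
    return ("face_mesh", expression_params)

def render(mesh, viewpoint):
    # stand-in for rendering the 3D model from a specified viewpoint
    return f"render@{viewpoint}"

def photorealise(rendered_image):
    # stand-in for the photorealiser that upgrades the render
    return f"photo({rendered_image})"

def remove_hmd(partial_face_image, viewpoint="front"):
    params = estimate_expression(partial_face_image)
    mesh = drive_face_model(params)
    return photorealise(render(mesh, viewpoint))

result = remove_hmd("ir_camera_crop")
```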
-
Patent number: 12033084
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. The first embedding and the second embedding are blended to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Grant
Filed: May 23, 2022
Date of Patent: July 9, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De La Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio, Jamie Daniel Joseph Shotton
-
Publication number: 20240037829
Abstract: To compute an image of a dynamic 3D scene comprising a 3D object, a description of a deformation of the 3D object is received, the description comprising a cage of primitive 3D elements and associated animation data from a physics engine or an articulated object model. For a pixel of the image the method computes a ray from a virtual camera through the pixel into the cage animated according to the animation data and computes a plurality of samples on the ray. Each sample is a 3D position and view direction in one of the 3D elements. The method computes a transformation of the samples into a canonical cage. For each transformed sample, the method queries a learnt radiance field parameterization of the 3D scene to obtain a color value and an opacity value. A volume rendering method is applied to the color and opacity values producing a pixel value of the image.
Type: Application
Filed: September 19, 2022
Publication date: February 1, 2024
Inventors: Julien Pascal Christophe VALENTIN, Virginia ESTELLERS CASAS, Shideh REZAEIFAR, Jingjing SHEN, Stanislaw Kacper SZYMANOWICZ, Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON
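The final step, applying a volume rendering method to per-sample color and opacity values, is the standard emission-absorption integral used by radiance-field methods; a pure-Python sketch under the usual discretisation (names illustrative, not the patent's implementation):

```python
import math

# Standard emission-absorption volume rendering along one ray:
# alpha-composite per-sample colors using densities and the distances
# between consecutive samples.

def volume_render(colors, densities, deltas):
    """colors: list of (r, g, b) per sample; densities: opacity sigma per
    sample; deltas: distance between consecutive samples along the ray."""
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light surviving to the camera so far
    for c, sigma, dt in zip(colors, densities, deltas):
        alpha = 1.0 - math.exp(-sigma * dt)   # opacity of this segment
        weight = transmittance * alpha
        for i in range(3):
            pixel[i] += weight * c[i]
        transmittance *= 1.0 - alpha
    return tuple(pixel)

# A single fully opaque red sample dominates the pixel.
pixel = volume_render([(1.0, 0.0, 0.0)], [1000.0], [1.0])
```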
-
Publication number: 20230360309
Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
Type: Application
Filed: July 18, 2023
Publication date: November 9, 2023
Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO
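Treating the factorized embedding as a set of named factors, one per controllable attribute, the "modified with the computed embedding factor" step amounts to swapping one factor out; a deliberately small sketch with invented names:

```python
# Hypothetical sketch: a factorized embedding as a dict of named factors.
# Modifying the embedding replaces the factor for the attribute that the
# rendering parameter controls, leaving the other factors untouched.

def modify_embedding(factors, attribute, new_factor):
    """Return a copy of the factorized embedding with one factor replaced."""
    modified = dict(factors)
    modified[attribute] = new_factor
    return modified

embedding = {"identity": [0.1, 0.2], "lighting": [0.9, 0.9]}
relit = modify_embedding(embedding, "lighting", [0.0, 0.0])
```

A decoder conditioned on `relit` would then output substantially the same object with only the lighting attribute changed, which is the behaviour the abstract describes.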
-
Publication number: 20230281863
Abstract: Keypoints are predicted in an image. Predictions are generated for each of the keypoints of an image as a 2D random variable, normally distributed with location (x, y) and standard deviation sigma. A neural network is trained to maximize a log-likelihood that samples from each of the predicted keypoints equal a ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
Type: Application
Filed: June 28, 2022
Publication date: September 7, 2023
Inventors: Julien Pascal Christophe VALENTIN, Erroll William WOOD, Thomas Joseph CASHMAN, Martin de LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Charles Thomas HEWITT, Nikola MILOSAVLJEVIC, Stephan Joachim GARBIN, Toby SHARP, Ivan STOJILJKOVIC
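The training objective this abstract describes, maximizing the log-likelihood of the ground truth under each predicted 2D Gaussian, has a closed form; a sketch for one isotropic keypoint (illustrative, not necessarily the patent's exact loss):

```python
import math

# Negative log-likelihood of a ground-truth keypoint under an isotropic
# 2D Gaussian prediction N((x, y), sigma^2 I). Minimising this NLL is the
# same as maximising the log-likelihood in the abstract.

def keypoint_nll(pred_xy, pred_sigma, true_xy):
    dx = true_xy[0] - pred_xy[0]
    dy = true_xy[1] - pred_xy[1]
    return ((dx * dx + dy * dy) / (2.0 * pred_sigma ** 2)
            + 2.0 * math.log(pred_sigma)   # penalises over-wide sigma
            + math.log(2.0 * math.pi))     # normalisation constant

perfect = keypoint_nll((0.0, 0.0), 1.0, (0.0, 0.0))
off_by_one = keypoint_nll((0.0, 0.0), 1.0, (1.0, 0.0))
```

Note the `2 log sigma` term: the network cannot drive the loss down by simply inflating its predicted uncertainty, since that term grows as sigma does.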
-
Patent number: 11748932
Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
Type: Grant
Filed: June 29, 2020
Date of Patent: September 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marek Adam Kowalski, Stephan Joachim Garbin, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De la Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio
-
Patent number: 11640690
Abstract: Methods and systems are provided for training a machine learning model to generate density values and radiance components based on positional data, along with a weighting scheme associated with a particular view direction based on directional data, to compute a final RGB value for each point along a plurality of camera rays. The positional data and directional data are extracted from a set of training images of a particular static scene. The radiance components, density values, and weighting schemes are cached for efficient image data processing to perform volume rendering for each point sampled. A novel viewpoint of a static scene is generated based on the volume rendering for each point sampled.
Type: Grant
Filed: May 17, 2021
Date of Patent: May 2, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson
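The caching this abstract relies on works because the final RGB factorises into position-dependent radiance components and view-direction-dependent weights, rgb(p, d) = sum_i w_i(d) * c_i(p), so each factor can be cached independently; a sketch of that inner product (names illustrative):

```python
# Hypothetical sketch of the factorised colour computation: radiance
# components depend only on position and can be cached per point, while
# the weighting scheme depends only on view direction and can be cached
# per direction. The final RGB is their inner product.

def final_rgb(components, weights):
    """components: cached per-position RGB components c_i(p);
    weights: cached per-view-direction weights w_i(d)."""
    return tuple(
        sum(w * comp[i] for w, comp in zip(weights, components))
        for i in range(3)
    )

# Two cached components, blended by view-dependent weights.
rgb = final_rgb([(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)], [0.25, 0.75])
```

Because the factors are cached separately, changing the view direction only re-reads `weights`; the per-position components are untouched, which is what makes novel-view lookups cheap.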
-
Publication number: 20220301257
Abstract: Methods and systems are provided for training a machine learning model to generate density values and radiance components based on positional data, along with a weighting scheme associated with a particular view direction based on directional data, to compute a final RGB value for each point along a plurality of camera rays. The positional data and directional data are extracted from a set of training images of a particular static scene. The radiance components, density values, and weighting schemes are cached for efficient image data processing to perform volume rendering for each point sampled. A novel viewpoint of a static scene is generated based on the volume rendering for each point sampled.
Type: Application
Filed: May 17, 2021
Publication date: September 22, 2022
Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON
-
Publication number: 20220284655
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. The first embedding and the second embedding are blended to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Application
Filed: May 23, 2022
Publication date: September 8, 2022
Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO, Jamie Daniel Joseph SHOTTON
-
Patent number: 11354846
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. The first embedding and the second embedding are blended to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Grant
Filed: June 29, 2020
Date of Patent: June 7, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De La Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio, Jamie Daniel Joseph Shotton
-
Publication number: 20210390767
Abstract: In various examples there is an apparatus for computing an image depicting a face of a wearer of a head mounted display (HMD), as if the wearer were not wearing the HMD. An input image depicts a partial view of the wearer's face captured from at least one face-facing capture device in the HMD. A machine learning apparatus is available which has been trained to compute expression parameters from the input image. A 3D face model that has expression parameters is accessible, as well as a photorealiser, being a machine learning model trained to map images rendered from the 3D face model to photorealistic images. The apparatus computes expression parameter values from the image using the machine learning apparatus. The apparatus drives the 3D face model with the expression parameter values to produce a 3D model of the face of the wearer and then renders the 3D model from a specified viewpoint to compute a rendered image. The rendered image is upgraded to a photorealistic image using the photorealiser.
Type: Application
Filed: June 11, 2020
Publication date: December 16, 2021
Inventors: Matthew Alastair JOHNSON, Marta Malgorzata WILCZKOWIAK, Daniel Stephen WILDE, Paul Malcolm MCILROY, Tadas BALTRUSAITIS, Virginia ESTELLERS CASAS, Marek Adam KOWALSKI, Christopher Maurice MEI, Stephan Joachim GARBIN
-
Publication number: 20210343063
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. The first embedding and the second embedding are blended to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Application
Filed: June 29, 2020
Publication date: November 4, 2021
Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO, Jamie Daniel Joseph SHOTTON
-
Publication number: 20210335029
Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
Type: Application
Filed: June 29, 2020
Publication date: October 28, 2021
Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO