Patents by Inventor Marek Adam Kowalski
Marek Adam Kowalski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240037829
Abstract: To compute an image of a dynamic 3D scene comprising a 3D object, a description of a deformation of the 3D object is received, the description comprising a cage of primitive 3D elements and associated animation data from a physics engine or an articulated object model. For a pixel of the image, the method computes a ray from a virtual camera through the pixel into the cage animated according to the animation data and computes a plurality of samples on the ray. Each sample is a 3D position and view direction in one of the 3D elements. The method computes a transformation of the samples into a canonical cage. For each transformed sample, the method queries a learnt radiance field parameterization of the 3D scene to obtain a color value and an opacity value. A volume rendering method is applied to the color and opacity values, producing a pixel value of the image.
Type: Application
Filed: September 19, 2022
Publication date: February 1, 2024
Inventors: Julien Pascal Christophe VALENTIN, Virginia ESTELLERS CASAS, Shideh REZAEIFAR, Jingjing SHEN, Stanislaw Kacper SZYMANOWICZ, Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON
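The rendering loop described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`canonical_transform`, `radiance_field`, `render_pixel`) are hypothetical, the deformation is reduced to a simple translation, and a closed-form expression stands in for the learnt radiance field.

```python
import numpy as np

def canonical_transform(sample, offset):
    """Stand-in for the cage-based deformation: a simple translation per
    primitive element; a real system would invert the element's animated
    deformation to reach the canonical cage."""
    return sample - offset

def radiance_field(position, view_dir):
    """Toy learnt radiance field: returns an RGB colour and an opacity
    value for a canonical-space position and view direction."""
    colour = 0.5 + 0.5 * np.tanh(position)              # colour in [0, 1]^3
    sigma = float(np.exp(-np.dot(position, position)))  # opacity/density
    return colour, sigma

def render_pixel(ray_origin, ray_dir, offsets, n_samples=8, step=0.25):
    """One pixel value: sample along the ray, map each sample to the
    canonical cage, query the field, and alpha-composite the results."""
    colour = np.zeros(3)
    transmittance = 1.0
    for i in range(n_samples):
        sample = ray_origin + (i + 0.5) * step * ray_dir
        canonical = canonical_transform(sample, offsets[i % len(offsets)])
        c, sigma = radiance_field(canonical, ray_dir)
        alpha = 1.0 - np.exp(-sigma * step)   # standard volume-rendering weight
        colour += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return colour

pixel = render_pixel(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                     offsets=[np.array([0.1, 0.0, 0.0])])
```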
-
Publication number: 20230360309
Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
Type: Application
Filed: July 18, 2023
Publication date: November 9, 2023
Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO
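The pipeline this abstract describes can be sketched schematically: a first encoder produces a factorized embedding of a real image, a second encoder embeds a renderer parameter value, one factor of the embedding is replaced, and a decoder produces the edited image. In this toy version random linear maps stand in for the learnt networks; every name and dimension is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D, F = 16, 4                                   # image dim, per-factor embedding dim

enc1 = rng.standard_normal((2 * F, D)) * 0.1   # first encoder (two factors)
enc2 = rng.standard_normal((F, 1)) * 0.1       # second encoder (parameter value)
dec = rng.standard_normal((D, 2 * F)) * 0.1    # decoder

def edit_image(real_image, param_value):
    z = enc1 @ real_image                      # factorized embedding [z_attr, z_rest]
    z_param = enc2 @ np.array([param_value])   # embedding factor of the value
    z_mod = z.copy()
    z_mod[:F] = z_param                        # modify only the attribute factor
    return dec @ z_mod                         # image with the attribute changed

img = rng.standard_normal(D)
out_a = edit_image(img, param_value=0.7)
out_b = edit_image(img, param_value=0.1)       # same image, different attribute
```

Because only the attribute factor is swapped, the rest of the embedding, and hence the rest of the image content, is preserved.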
-
Patent number: 11748932
Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
Type: Grant
Filed: June 29, 2020
Date of Patent: September 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marek Adam Kowalski, Stephan Joachim Garbin, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De la Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio
-
Patent number: 11640690
Abstract: Methods and systems are provided for training a machine learning model to generate density values and radiance components based on positional data, along with a weighting scheme associated with a particular view direction based on directional data, to compute a final RGB value for each point along a plurality of camera rays. The positional data and directional data are extracted from a set of training images of a particular static scene. The radiance components, density values, and weighting schemes are cached for efficient image data processing to perform volume rendering for each point sampled. A novel viewpoint of the static scene is generated based on the volume rendering for each point sampled.
Type: Grant
Filed: May 17, 2021
Date of Patent: May 2, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson
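The factorization described above can be illustrated with a small sketch: one function of position yields a density plus several radiance components, a function of view direction yields a weighting over those components, and the final RGB is their weighted sum. Because the position-dependent part no longer depends on view direction, its outputs can be cached. The functions below (`position_net`, `direction_net`, `query`) are hypothetical stand-ins for the trained model, not the patented system.

```python
import numpy as np

K = 3  # number of radiance components

def position_net(pos):
    """Stand-in: density plus K RGB radiance components for a 3D position."""
    density = float(np.exp(-np.dot(pos, pos)))
    components = np.stack([0.5 + 0.5 * np.sin(pos + k) for k in range(K)])
    return density, components            # (scalar, K x 3)

def direction_net(view_dir):
    """Stand-in: softmax weighting over the K components for a direction."""
    logits = np.array([np.dot(view_dir, np.ones(3)) * (k + 1) for k in range(K)])
    w = np.exp(logits - logits.max())
    return w / w.sum()

cache = {}

def query(pos, view_dir):
    key = tuple(np.round(pos, 3))         # quantized cache key
    if key not in cache:
        cache[key] = position_net(pos)    # cache the expensive position pass
    density, components = cache[key]
    rgb = direction_net(view_dir) @ components   # view-weighted combination
    return rgb, density

rgb, density = query(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0]))
n_cached = len(cache)
rgb2, _ = query(np.array([0.1, 0.2, 0.3]), np.array([1.0, 0.0, 0.0]))  # cache hit
```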
-
Publication number: 20230116250
Abstract: Computing an output image of a dynamic scene. A value of E is selected, which is a parameter describing the desired dynamic content of the scene in the output image. Using selected intrinsic camera parameters and a selected viewpoint, for individual pixels of the output image to be generated, the method computes a ray that goes from a virtual camera through the pixel into the dynamic scene. For individual ones of the rays, the method samples at least one point along the ray. For individual ones of the sampled points, the method queries a machine learning model with the sampled point, a viewing direction (the direction of the corresponding ray), and E, to produce colour and opacity values at the sampled point with the dynamic content of the scene as specified by E. For individual ones of the rays, a volume rendering method is applied to the colour and opacity values computed along that ray to produce a pixel value of the output image.
Type: Application
Filed: December 13, 2022
Publication date: April 13, 2023
Inventors: Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Jamie Daniel Joseph SHOTTON
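The per-pixel procedure in this abstract can be sketched as follows: cast a ray, sample points along it, query a model with (point, view direction, E), and volume-render the results. The closed-form `model` below is a hypothetical stand-in for the machine learning model; in this toy version E simply shifts the scene content so that different values of E yield different pixel values.

```python
import numpy as np

def model(point, view_dir, E):
    """Returns (colour, opacity) conditioned on the dynamic-content parameter E."""
    colour = 0.5 + 0.5 * np.cos(point + E)
    opacity = float(np.exp(-np.dot(point - E, point - E)))
    return colour, opacity

def render_ray(origin, direction, E, n=8, step=0.3):
    """Alpha-composites model outputs along one ray into a pixel value."""
    colour, transmittance = np.zeros(3), 1.0
    for i in range(n):
        p = origin + (i + 0.5) * step * direction
        c, sigma = model(p, direction, E)
        alpha = 1.0 - np.exp(-sigma * step)
        colour += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return colour

# Same camera ray, two values of E: the dynamic content changes the pixel.
px_a = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), E=0.0)
px_b = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), E=1.0)
```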
-
Patent number: 11551405
Abstract: Computing an output image of a dynamic scene. A value of E is selected, which is a parameter describing the desired dynamic content of the scene in the output image. Using selected intrinsic camera parameters and a selected viewpoint, for individual pixels of the output image to be generated, the method computes a ray that goes from a virtual camera through the pixel into the dynamic scene. For individual ones of the rays, the method samples at least one point along the ray. For individual ones of the sampled points, the method queries a machine learning model with the sampled point, a viewing direction (the direction of the corresponding ray), and E, to produce colour and opacity values at the sampled point with the dynamic content of the scene as specified by E. For individual ones of the rays, a volume rendering method is applied to the colour and opacity values computed along that ray to produce a pixel value of the output image.
Type: Grant
Filed: July 13, 2020
Date of Patent: January 10, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marek Adam Kowalski, Matthew Alastair Johnson, Jamie Daniel Joseph Shotton
-
Publication number: 20220301257
Abstract: Methods and systems are provided for training a machine learning model to generate density values and radiance components based on positional data, along with a weighting scheme associated with a particular view direction based on directional data, to compute a final RGB value for each point along a plurality of camera rays. The positional data and directional data are extracted from a set of training images of a particular static scene. The radiance components, density values, and weighting schemes are cached for efficient image data processing to perform volume rendering for each point sampled. A novel viewpoint of the static scene is generated based on the volume rendering for each point sampled.
Type: Application
Filed: May 17, 2021
Publication date: September 22, 2022
Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON
-
Publication number: 20220284655
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. Blending of the first embedding and the second embedding is done to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Application
Filed: May 23, 2022
Publication date: September 8, 2022
Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO, Jamie Daniel Joseph SHOTTON
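The blending step described above can be sketched in a few lines: one latent embedding favours photorealism, another matches the synthetic image, and a convex combination of the two drives the generator. The linear-plus-tanh "generator", the dimensions, and the blend weight below are all hypothetical stand-ins for the trained neural image generator.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT, IMG = 8, 32
G = rng.standard_normal((IMG, LATENT)) * 0.1   # stand-in generator weights

def generate(z):
    """Maps a latent embedding to an image (values in (-1, 1))."""
    return np.tanh(G @ z)

z_photoreal = rng.standard_normal(LATENT)  # first embedding: photorealistic match
z_synthetic = rng.standard_normal(LATENT)  # second embedding: matches the synthetic image

alpha = 0.6                                # illustrative blend weight
z_blend = alpha * z_photoreal + (1 - alpha) * z_synthetic
output = generate(z_blend)                 # image from the blended embedding
```

The blend trades off fidelity to the synthetic image against photorealism; in practice the weight would be tuned or learnt rather than fixed.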
-
Patent number: 11354846
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. Blending of the first embedding and the second embedding is done to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Grant
Filed: June 29, 2020
Date of Patent: June 7, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De La Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio, Jamie Daniel Joseph Shotton
-
Publication number: 20210390767
Abstract: In various examples there is an apparatus for computing an image depicting the face of a wearer of a head mounted display (HMD), as if the wearer were not wearing the HMD. An input image depicts a partial view of the wearer's face captured from at least one face-facing capture device in the HMD. A machine learning apparatus is available which has been trained to compute expression parameters from the input image. A 3D face model that has expression parameters is accessible, as well as a photorealiser, being a machine learning model trained to map images rendered from the 3D face model to photorealistic images. The apparatus computes expression parameter values from the image using the machine learning apparatus. The apparatus drives the 3D face model with the expression parameter values to produce a 3D model of the face of the wearer and then renders the 3D model from a specified viewpoint to compute a rendered image. The rendered image is upgraded to a photorealistic image using the photorealiser.
Type: Application
Filed: June 11, 2020
Publication date: December 16, 2021
Inventors: Matthew Alastair JOHNSON, Marta Malgorzata WILCZKOWIAK, Daniel Stephen WILDE, Paul Malcolm MCILROY, Tadas BALTRUSAITIS, Virginia ESTELLERS CASAS, Marek Adam KOWALSKI, Christopher Maurice MEI, Stephan Joachim GARBIN
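The pipeline above composes four stages: regress expression parameters from the HMD capture, drive a parametric 3D face model, render it from a viewpoint, and upgrade the render with the photorealiser. The schematic sketch below uses toy placeholder functions for each trained component; none of the names, shapes, or formulas come from the patent.

```python
import numpy as np

N_PARAMS = 5

def regress_expression(capture_image):
    """Stand-in for the trained regressor: capture image -> expression params."""
    return np.tanh(capture_image[:N_PARAMS])

def drive_face_model(expression_params):
    """Stand-in parametric 3D face model: params -> per-vertex positions."""
    return np.outer(expression_params, np.ones(3))   # N_PARAMS x 3 "vertices"

def render(vertices, viewpoint):
    """Stand-in renderer: projects vertices for a viewpoint into image values."""
    return (vertices @ viewpoint).clip(0.0, 1.0)

def photorealise(rendered):
    """Stand-in photorealiser: maps the synthetic render toward a photo."""
    return 0.5 * (rendered + rendered ** 2)          # keeps values in [0, 1]

capture = np.linspace(-1.0, 1.0, 16)                 # fake face-facing capture
params = regress_expression(capture)
image = photorealise(render(drive_face_model(params), np.array([0.3, 0.3, 0.4])))
```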
-
Publication number: 20210390761
Abstract: Computing an output image of a dynamic scene. A value of E is selected, which is a parameter describing the desired dynamic content of the scene in the output image. Using selected intrinsic camera parameters and a selected viewpoint, for individual pixels of the output image to be generated, the method computes a ray that goes from a virtual camera through the pixel into the dynamic scene. For individual ones of the rays, the method samples at least one point along the ray. For individual ones of the sampled points, the method queries a machine learning model with the sampled point, a viewing direction (the direction of the corresponding ray), and E, to produce colour and opacity values at the sampled point with the dynamic content of the scene as specified by E. For individual ones of the rays, a volume rendering method is applied to the colour and opacity values computed along that ray to produce a pixel value of the output image.
Type: Application
Filed: July 13, 2020
Publication date: December 16, 2021
Inventors: Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Jamie Daniel Joseph SHOTTON
-
Publication number: 20210343063
Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. Blending of the first embedding and the second embedding is done to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
Type: Application
Filed: June 29, 2020
Publication date: November 4, 2021
Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO, Jamie Daniel Joseph SHOTTON
-
Publication number: 20210335029
Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
Type: Application
Filed: June 29, 2020
Publication date: October 28, 2021
Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO
-
Patent number: 11127225
Abstract: A method of fitting a three dimensional (3D) model to input data is described. Input data comprises a 3D scan and associated appearance information. The 3D scan depicts a composite object having elements from at least two classes. A texture model is available which, given an input vector, computes, for each of the classes, a texture and a mask. A joint optimization is computed to find values of the input vector and values of parameters of the 3D model, where the optimization enforces that the 3D model, instantiated by the values of the parameters, gives a simulated texture which agrees with the input data in the region specified by the mask associated with the 3D model, such that the 3D model is fitted to the input data.
Type: Grant
Filed: June 1, 2020
Date of Patent: September 21, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marek Adam Kowalski, Virginia Estellers Casas, Thomas Joseph Cashman, Charles Thomas Hewitt, Matthew Alastair Johnson, Tadas Baltrušaitis
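The masked fitting objective described above can be illustrated with a toy least-squares example: a "texture model" maps an input vector to a texture and a soft mask, a "3D model" instantiated by its parameters produces a simulated texture, and gradient descent drives the simulated texture toward the observed appearance where the mask is active. This is a heavy simplification for illustration only: the input vector is held fixed rather than optimized jointly, and every name and formula is hypothetical.

```python
import numpy as np

T = 6                                    # number of texels
observed = np.linspace(0.2, 0.8, T)      # appearance information from the 3D scan

def texture_model(z):
    """Input vector -> (texture, soft mask) for one class."""
    texture = np.full(T, z[0])
    mask = np.full(T, 1.0 / (1.0 + np.exp(-z[1])))   # sigmoid gate in (0, 1)
    return texture, mask

def simulate(params):
    """Stand-in for the 3D model instantiated by its parameters."""
    return np.full(T, params[0])

z = np.array([0.5, 0.0])                 # input vector (held fixed in this toy)
params = np.array([0.0])                 # 3D model parameter being fitted
lr = 0.1
for _ in range(300):
    _, mask = texture_model(z)
    diff = mask * (simulate(params) - observed)      # residual in masked region
    # gradient of 0.5 * sum(diff^2) with respect to params[0]
    params[0] -= lr * float(np.sum(diff * mask))
```

With a uniform mask the masked least-squares fit simply pulls the constant simulated texture toward the mean of the observed values.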