Patents by Inventor Matthew Alastair Johnson

Matthew Alastair Johnson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240037829
    Abstract: To compute an image of a dynamic 3D scene comprising a 3D object, a description of a deformation of the 3D object is received, the description comprising a cage of primitive 3D elements and associated animation data from a physics engine or an articulated object model. For a pixel of the image the method computes a ray from a virtual camera through the pixel into the cage animated according to the animation data and computes a plurality of samples on the ray. Each sample is a 3D position and view direction in one of the 3D elements. The method computes a transformation of the samples into a canonical cage. For each transformed sample, the method queries a learnt radiance field parameterization of the 3D scene to obtain a color value and an opacity value. A volume rendering method is applied to the color and opacity values producing a pixel value of the image.
    Type: Application
    Filed: September 19, 2022
    Publication date: February 1, 2024
    Inventors: Julien Pascal Christophe VALENTIN, Virginia ESTELLERS CASAS, Shideh REZAEIFAR, Jingjing SHEN, Stanislaw Kacper SZYMANOWICZ, Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON
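The volume rendering step this abstract ends with can be illustrated with a short sketch. The alpha-compositing rule below is the standard emission-absorption model commonly used with learned radiance fields; the function name and argument layout are illustrative, not taken from the patent.

```python
import math

def composite_ray(colors, densities, deltas):
    """Alpha-composite (color, density) samples along one ray into a pixel value.

    colors:    list of (r, g, b) tuples, one per sample.
    densities: list of non-negative density values, one per sample.
    deltas:    list of distances between consecutive samples.
    """
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light surviving to the camera so far
    for color, sigma, delta in zip(colors, densities, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this ray segment
        weight = transmittance * alpha
        for c in range(3):
            pixel[c] += weight * color[c]
        transmittance *= 1.0 - alpha
    return tuple(pixel)
```

A single fully opaque red sample yields a red pixel; zero density everywhere yields black, since no sample contributes weight.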
  • Publication number: 20230360309
    Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
    Type: Application
    Filed: July 18, 2023
    Publication date: November 9, 2023
    Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO
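The embedding edit at the heart of this abstract can be sketched as a factor replacement. The toy function below assumes the factorized embedding is a list of per-attribute factors and that the parameter's embedding factor simply replaces the factor controlling that attribute; the real encoders and decoder are learned networks the abstract does not detail.

```python
def edit_attribute(factorized_embedding, factor_index, param_embedding):
    """Replace one factor of a factorized embedding with the embedding of a
    renderer parameter value, leaving all other factors (identity, pose,
    lighting, ...) untouched. A decoder applied to the result reconstructs
    the image with only that attribute changed."""
    edited = list(factorized_embedding)
    edited[factor_index] = param_embedding
    return edited
```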
  • Publication number: 20230281863
    Abstract: Keypoints are predicted in an image. Predictions are generated for each of the keypoints of an image as a 2D random variable, normally distributed with location (x, y) and standard deviation sigma. A neural network is trained to maximize a log-likelihood that samples from each of the predicted keypoints equal a ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
    Type: Application
    Filed: June 28, 2022
    Publication date: September 7, 2023
    Inventors: Julien Pascal Christophe VALENTIN, Erroll William WOOD, Thomas Joseph CASHMAN, Martin DE LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Charles Thomas HEWITT, Nikola MILOSAVLJEVIC, Stephan Joachim GARBIN, Toby SHARP, Ivan STOJILJKOVIC
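The training objective this abstract describes, maximising the log-likelihood of the ground truth under a predicted 2D Gaussian, can be written out directly. The sketch below assumes an isotropic Gaussian with scalar sigma, matching the abstract's "(x, y) and standard deviation sigma"; minimising the negative log-likelihood is equivalent to the maximisation described.

```python
import math

def keypoint_nll(pred_x, pred_y, sigma, gt_x, gt_y):
    """Negative log-likelihood of a ground-truth keypoint under an isotropic
    2D Gaussian prediction N((pred_x, pred_y), sigma^2 I). Minimising this
    trains the network to centre its mean on the keypoint while sigma acts
    as a calibrated uncertainty, with no heatmap needed at inference time."""
    dx, dy = gt_x - pred_x, gt_y - pred_y
    # log N(gt | mean, sigma^2 I) = -log(2*pi*sigma^2) - (dx^2 + dy^2) / (2*sigma^2)
    log_likelihood = (-math.log(2.0 * math.pi * sigma * sigma)
                      - (dx * dx + dy * dy) / (2.0 * sigma * sigma))
    return -log_likelihood
```

The loss grows as the predicted mean drifts from the ground truth, as expected of a likelihood-based objective.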
  • Publication number: 20230281945
    Abstract: Keypoints are predicted in an image. A neural network is executed that is configured to predict each of the keypoints as a 2D random variable, normally distributed with a 2D position and 2×2 covariance matrix. The neural network is trained to maximize a log-likelihood that samples from each of the predicted keypoints equal a ground truth. The trained neural network is used to predict keypoints of an image without generating a heatmap.
    Type: Application
    Filed: June 28, 2022
    Publication date: September 7, 2023
    Inventors: Thomas Joseph CASHMAN, Erroll William WOOD, Martin DE LA GORCE, Tadas BALTRUSAITIS, Daniel Stephen WILDE, Jingjing SHEN, Matthew Alastair JOHNSON, Julien Pascal Christophe VALENTIN
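This variant predicts a full 2×2 covariance matrix rather than a scalar sigma, letting the network express direction-dependent uncertainty (for instance, along an occluding edge). A hedged sketch of the corresponding negative log-likelihood, with the 2×2 determinant and inverse written out explicitly:

```python
import math

def keypoint_nll_full(mean, cov, gt):
    """Negative log-likelihood of ground-truth keypoint gt under N(mean, cov)
    with a full 2x2 covariance matrix cov = ((a, b), (c, d))."""
    (a, b), (c, d) = cov
    det = a * d - b * c  # must be positive for a valid covariance
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx, dy = gt[0] - mean[0], gt[1] - mean[1]
    # Squared Mahalanobis distance: [dx dy] cov^-1 [dx dy]^T
    maha = (dx * (inv[0][0] * dx + inv[0][1] * dy)
            + dy * (inv[1][0] * dx + inv[1][1] * dy))
    return 0.5 * (math.log((2.0 * math.pi) ** 2 * det) + maha)
```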
  • Patent number: 11748932
    Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: September 5, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marek Adam Kowalski, Stephan Joachim Garbin, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De la Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio
  • Patent number: 11640690
    Abstract: Methods and systems are provided for training a machine learning model to generate density values and radiance components based on positional data, along with a weighting scheme associated with a particular view direction based on directional data to compute a final RGB value for each point along a plurality of camera rays. The positional data and directional data are extracted from a set of training images of a particular static scene. The radiance components, density values, and weighting schemes are cached for efficient image data processing to perform volume rendering for each point sampled. A novel viewpoint of a static scene is generated based on the volume rendering for each point sampled.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: May 2, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson
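The cached factorization this abstract relies on can be illustrated as an inner product between position-dependent radiance components and view-direction-dependent weights. The function below sketches only the combination step; the number of components and the cache layout are assumptions, since the abstract does not specify them.

```python
def final_rgb(radiance_components, weights):
    """Combine cached position-dependent radiance components u_i(pos) with
    cached view-direction weights w_i(dir): rgb = sum_i w_i * u_i. Because
    both factors are precomputed and cached, evaluating a new view
    direction needs only this inner product, not a network evaluation."""
    rgb = [0.0, 0.0, 0.0]
    for w, component in zip(weights, radiance_components):
        for c in range(3):
            rgb[c] += w * component[c]
    return tuple(rgb)
```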
  • Publication number: 20230116250
    Abstract: Computing an output image of a dynamic scene. A value of E is selected which is a parameter describing desired dynamic content of the scene in the output image. Using selected intrinsic camera parameters and a selected viewpoint, for individual pixels of the output image to be generated, the method computes a ray that goes from a virtual camera through the pixel into the dynamic scene. For individual ones of the rays, sample at least one point along the ray. For individual ones of the sampled points, a viewing direction being a direction of the corresponding ray, and E, query a machine learning model to produce colour and opacity values at the sampled point with the dynamic content of the scene as specified by E. For individual ones of the rays, apply a volume rendering method to the colour and opacity values computed along that ray, to produce a pixel value of the output image.
    Type: Application
    Filed: December 13, 2022
    Publication date: April 13, 2023
    Inventors: Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Jamie Daniel Joseph SHOTTON
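The per-pixel procedure in this abstract (sample the ray, query the model at each point with the viewing direction and E, then volume render) can be sketched as a loop over samples. `query_model` and `composite` are hypothetical stand-ins for the trained network and the volume rendering step, which the abstract does not specify further.

```python
def render_pixel(ray_samples, view_dir, E, query_model, composite):
    """Render one pixel of a dynamic scene: query the learned model at each
    sampled 3D point with the viewing direction and the dynamic-content
    parameter E, then volume-render the resulting colour/opacity values
    into a single pixel value."""
    colours, opacities = [], []
    for point in ray_samples:
        colour, opacity = query_model(point, view_dir, E)
        colours.append(colour)
        opacities.append(opacity)
    return composite(colours, opacities)
```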
  • Patent number: 11551405
    Abstract: Computing an output image of a dynamic scene. A value of E is selected which is a parameter describing desired dynamic content of the scene in the output image. Using selected intrinsic camera parameters and a selected viewpoint, for individual pixels of the output image to be generated, the method computes a ray that goes from a virtual camera through the pixel into the dynamic scene. For individual ones of the rays, sample at least one point along the ray. For individual ones of the sampled points, a viewing direction being a direction of the corresponding ray, and E, query a machine learning model to produce colour and opacity values at the sampled point with the dynamic content of the scene as specified by E. For individual ones of the rays, apply a volume rendering method to the colour and opacity values computed along that ray, to produce a pixel value of the output image.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: January 10, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marek Adam Kowalski, Matthew Alastair Johnson, Jamie Daniel Joseph Shotton
  • Publication number: 20220301257
    Abstract: Methods and systems are provided for training a machine learning model to generate density values and radiance components based on positional data, along with a weighting scheme associated with a particular view direction based on directional data to compute a final RGB value for each point along a plurality of camera rays. The positional data and directional data are extracted from set of training images of a particular static scene. The radiance components, density values, and weighting schemes are cached for efficient image data processing to perform volume rendering for each point sampled. A novel viewpoint of a static scene is generated based on the volume rendering for each point sampled.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 22, 2022
    Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON
  • Publication number: 20220284655
    Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. Blending of the first embedding and the second embedding is done to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO, Jamie Daniel Joseph SHOTTON
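The blending step in this abstract can be sketched as interpolation between the two embeddings. Linear interpolation with a single weight is an assumption here; the abstract states only that the two embeddings are blended.

```python
def blend_embeddings(e_photo, e_match, weight):
    """Blend the photorealism-preserving embedding with the embedding that
    matches the synthetic image. weight = 0 keeps full photorealism;
    weight = 1 keeps the exact synthetic match."""
    return [(1.0 - weight) * a + weight * b
            for a, b in zip(e_photo, e_match)]
```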
  • Publication number: 20220222531
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Application
    Filed: March 28, 2022
    Publication date: July 14, 2022
    Inventors: Ryota TOMIOKA, Matthew Alastair JOHNSON, Daniel Stefan TARLOW, Samuel Alexander WEBSTER, Dimitrios VYTINIOTIS, Alexander Lloyd GAUNT, Maik RIECHERT
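The asynchronous update scheme this abstract describes can be sketched with a worker that owns one subgraph's parameters and applies gradient messages as they arrive, with no global synchronisation barrier. The serial dispatch loop below is a stand-in for the parallelized message passing of the actual apparatus; the learning rate and message format are assumptions.

```python
class WorkerNode:
    """Holds the parameters of one neural-network subgraph and updates them
    independently whenever an update message arrives."""
    def __init__(self, params):
        self.params = dict(params)

    def on_message(self, grads, lr=0.1):
        # Apply a gradient step to this subgraph only; other workers'
        # parameters are untouched, hence the updates are asynchronous.
        for name, g in grads.items():
            self.params[name] -= lr * g

def dispatch(workers, routed_grads):
    """Control-node stand-in: route per-subgraph gradient messages to the
    worker that owns each subgraph. Serial here for clarity."""
    for worker_id, grads in routed_grads.items():
        workers[worker_id].on_message(grads)
```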
  • Patent number: 11354846
    Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. Blending of the first embedding and the second embedding is done to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: June 7, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De La Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio, Jamie Daniel Joseph Shotton
  • Patent number: 11288575
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: March 29, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryota Tomioka, Matthew Alastair Johnson, Daniel Stefan Tarlow, Samuel Alexander Webster, Dimitrios Vytiniotis, Alexander Lloyd Gaunt, Maik Riechert
  • Publication number: 20210390767
    Abstract: In various examples there is an apparatus for computing an image depicting a face of a wearer of a head mounted display (HMD), as if the wearer were not wearing the HMD. An input image depicts a partial view of the wearer's face captured from at least one face facing capture device in the HMD. A machine learning apparatus is available which has been trained to compute expression parameters from the input image. A 3D face model that has expression parameters is accessible, as well as a photorealiser: a machine learning model trained to map images rendered from the 3D face model to photorealistic images. The apparatus computes expression parameter values from the image using the machine learning apparatus. The apparatus drives the 3D face model with the expression parameter values to produce a 3D model of the face of the wearer and then renders the 3D model from a specified viewpoint to compute a rendered image. The rendered image is upgraded to a photorealistic image using the photorealiser.
    Type: Application
    Filed: June 11, 2020
    Publication date: December 16, 2021
    Inventors: Matthew Alastair JOHNSON, Marta Malgorzata WILCZKOWIAK, Daniel Stephen WILDE, Paul Malcolm MCILROY, Tadas BALTRUSAITIS, Virginia ESTELLERS CASAS, Marek Adam KOWALSKI, Christopher Maurice MEI, Stephan Joachim GARBIN
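The four-stage pipeline in this abstract composes cleanly as functions: predict expression parameters, drive the face model, render from a viewpoint, photorealise. Every callable below is a hypothetical stand-in for a trained component the abstract only names.

```python
def reconstruct_face(input_image, predict_expression, face_model,
                     render, photorealise, viewpoint):
    """Compute a photorealistic image of an HMD wearer's face, as if the
    HMD were absent, from a partial face view captured inside the HMD."""
    expression = predict_expression(input_image)   # expression parameter values
    mesh = face_model(expression)                  # 3D model of the wearer's face
    rendered = render(mesh, viewpoint)             # render from the chosen viewpoint
    return photorealise(rendered)                  # upgrade render to photorealism
```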
  • Publication number: 20210390761
    Abstract: Computing an output image of a dynamic scene. A value of E is selected which is a parameter describing desired dynamic content of the scene in the output image. Using selected intrinsic camera parameters and a selected viewpoint, for individual pixels of the output image to be generated, the method computes a ray that goes from a virtual camera through the pixel into the dynamic scene. For individual ones of the rays, sample at least one point along the ray. For individual ones of the sampled points, a viewing direction being a direction of the corresponding ray, and E, query a machine learning model to produce colour and opacity values at the sampled point with the dynamic content of the scene as specified by E. For individual ones of the rays, apply a volume rendering method to the colour and opacity values computed along that ray, to produce a pixel value of the output image.
    Type: Application
    Filed: July 13, 2020
    Publication date: December 16, 2021
    Inventors: Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Jamie Daniel Joseph SHOTTON
  • Publication number: 20210343063
    Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. Blending of the first embedding and the second embedding is done to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
    Type: Application
    Filed: June 29, 2020
    Publication date: November 4, 2021
    Inventors: Stephan Joachim GARBIN, Marek Adam KOWALSKI, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO, Jamie Daniel Joseph SHOTTON
  • Publication number: 20210335029
    Abstract: In various examples there is a method of image processing comprising: storing a real image of an object in memory, the object being a specified type of object. The method involves computing, using a first encoder, a factorized embedding of the real image. The method receives a value of at least one parameter of a synthetic image rendering apparatus for rendering synthetic images of objects of the specified type. The parameter controls an attribute of synthetic images of objects rendered by the rendering apparatus. The method computes an embedding factor of the received value using a second encoder. The factorized embedding is modified with the computed embedding factor. The method computes, using a decoder with the modified embedding as input, an output image of an object which is substantially the same as the real image except for the attribute controlled by the parameter.
    Type: Application
    Filed: June 29, 2020
    Publication date: October 28, 2021
    Inventors: Marek Adam KOWALSKI, Stephan Joachim GARBIN, Matthew Alastair JOHNSON, Tadas BALTRUSAITIS, Martin DE LA GORCE, Virginia ESTELLERS CASAS, Sebastian Karol DZIADZIO
  • Patent number: 11127225
    Abstract: A method of fitting a three dimensional (3D) model to input data is described. Input data comprises a 3D scan and associated appearance information. The 3D scan depicts a composite object having elements from at least two classes. A texture model is available which, given an input vector, computes, for each of the classes, a texture and a mask. A joint optimization is computed to find values of the input vector and values of parameters of the 3D model, where the optimization enforces that the 3D model, instantiated by the values of the parameters, gives a simulated texture which agrees with the input data in a region specified by the mask associated with the 3D model; such that the 3D model is fitted to the input data.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: September 21, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marek Adam Kowalski, Virginia Estellers Casas, Thomas Joseph Cashman, Charles Thomas Hewitt, Matthew Alastair Johnson, Tadas Baltrušaitis
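The data term of the joint optimisation described here, where the simulated texture must agree with the observed appearance only inside the region selected by a class mask, can be sketched as a masked squared residual. The exact loss is not specified in the abstract; a plain per-texel squared error is an assumption.

```python
def masked_residual(simulated_texture, observed_texture, mask):
    """Masked data term: the simulated texture is penalised for disagreeing
    with the observation only where the mask is on, so each class in the
    composite object (e.g. face vs. glasses) is explained by its own
    texture and mask."""
    return sum(m * (s - o) ** 2
               for s, o, m in zip(simulated_texture, observed_texture, mask))
```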
  • Patent number: 10462421
    Abstract: A projection unit has a rotating capture module and a rotating projection module. The capture module has at least one color camera, at least one microphone and at least one depth camera and is configured to capture images of an environment. The rotating projection module is configured to project images onto at least one surface in the environment. The projection unit has a processor configured to use data captured by the rotating capture module to select the at least one surface in the environment, the selection being dependent on a field of view of at least one user in the environment and dependent on characteristics of surfaces in the environment. The processor is configured to control rotation of the rotating capture module such that the data captured by the rotating capture module is suitable for computing the field of view of the user, and for determining characteristics of the surfaces.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: October 29, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Edward Sean Lloyd Rintel, Matthew Alastair Johnson
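Surface selection as described here can be sketched as scoring candidate surfaces and picking the best. The particular score below (visibility gating a weighted mix of flatness and size) is an assumption, since the abstract states only that selection depends on the user's field of view and on surface characteristics.

```python
def select_surface(surfaces, score):
    """Pick the projection surface with the highest score."""
    return max(surfaces, key=score)

def example_score(surface):
    # surface: dict with 'in_view' (0 or 1, from the user's field of view)
    # and 'flatness' / 'area' characteristics normalised to [0, 1].
    return surface["in_view"] * (0.6 * surface["flatness"] + 0.4 * surface["area"])
```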
  • Publication number: 20180336458
    Abstract: A neural network training apparatus is described which has a network of worker nodes each having a memory storing a subgraph of a neural network to be trained. The apparatus has a control node connected to the network of worker nodes. The control node is configured to send training data instances into the network to trigger parallelized message passing operations which implement a training algorithm which trains the neural network. At least some of the message passing operations asynchronously update parameters of individual subgraphs of the neural network at the individual worker nodes.
    Type: Application
    Filed: May 18, 2017
    Publication date: November 22, 2018
    Inventors: Ryota TOMIOKA, Matthew Alastair JOHNSON, Daniel Stefan TARLOW, Samuel Alexander WEBSTER, Dimitrios VYTINIOTIS, Alexander Lloyd GAUNT, Maik RIECHERT