Patents by Inventor Tobias B. Schmidt

Tobias B. Schmidt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11710247
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: July 25, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
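    The granted claims above rest on a per-pixel depth comparison: wherever the live-action pixel is nearer the camera than the computer-generated one, the live-action pixel is kept, and otherwise the CG pixel is. Below is a minimal, illustrative NumPy sketch of that selection step; the function name, array shapes, and value ranges are assumptions, not the patented implementation.

        import numpy as np

        def depth_composite(live_rgb, live_depth, cg_rgb, cg_depth):
            # live_rgb, cg_rgb: (H, W, 3) float arrays in [0, 1];
            # live_depth, cg_depth: (H, W) distances from the camera.
            # All names, shapes, and ranges are illustrative assumptions.
            live_in_front = live_depth <= cg_depth     # per-pixel occlusion test
            mask = live_in_front[..., None]            # broadcast over RGB channels
            return np.where(mask, live_rgb, cg_rgb)    # nearer source wins per pixel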
  • Publication number: 20220245870
    Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG), are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
    Type: Application
    Filed: April 20, 2022
    Publication date: August 4, 2022
    Applicant: Unity Technologies SF
    Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
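    The abstract above hinges on aligning data streams that arrive with different capture (e.g., exposure) and processing latencies. A hedged sketch of one way such timestamp matching could look follows; the function, data layout, and 8 ms tolerance are assumptions for illustration, not the claimed mechanism.

        from bisect import bisect_left

        def align(frames, depth_packets, tolerance=0.008):
            # frames: sorted list of (capture_time_s, image) tuples;
            # depth_packets: sorted list of (capture_time_s, depth_map) tuples.
            # The 8 ms tolerance is an assumed value.
            times = [t for t, _ in depth_packets]
            pairs = []
            for t, image in frames:
                i = bisect_left(times, t)
                candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
                if not candidates:
                    continue
                j = min(candidates, key=lambda k: abs(times[k] - t))
                if abs(times[j] - t) <= tolerance:
                    pairs.append((image, depth_packets[j][1]))  # matched image + depth
            return pairs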
  • Patent number: 11335039
    Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG), are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
    Type: Grant
    Filed: October 8, 2021
    Date of Patent: May 17, 2022
    Assignee: Unity Technologies SF
    Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
  • Patent number: 11328436
    Abstract: Embodiments allow camera effects, such as imaging noise, to be included in a generation of a synthetic data set for use in training an artificial intelligence model to produce an image depth map. The image depth map can then be employed to assist in compositing live action images from an image capture device with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: May 10, 2022
    Assignee: Unity Technologies SF
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
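    This patent's abstract centres on injecting camera effects such as imaging noise into clean synthetic renders so the training data better resembles real footage. The sketch below applies a generic shot-noise plus read-noise model; the model and its parameter values are assumptions, not the specific camera characterization claimed in the patent.

        import numpy as np

        def add_camera_noise(clean_rgb, photons_per_unit=400.0, read_noise_std=0.01, rng=None):
            # clean_rgb: (H, W, 3) float render in [0, 1]. Noise model and
            # parameters are illustrative assumptions.
            rng = rng or np.random.default_rng()
            # Photon shot noise: Poisson-distributed, brightness-dependent.
            shot = rng.poisson(clean_rgb * photons_per_unit) / photons_per_unit
            # Read noise: additive Gaussian, brightness-independent.
            noisy = shot + rng.normal(0.0, read_noise_std, size=clean_rgb.shape)
            return np.clip(noisy, 0.0, 1.0)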
  • Publication number: 20220028132
    Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG), are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
    Type: Application
    Filed: October 8, 2021
    Publication date: January 27, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
  • Publication number: 20210366138
    Abstract: Embodiments allow camera effects, such as imaging noise, to be included in a generation of a synthetic data set for use in training an artificial intelligence model to produce an image depth map. The image depth map can then be employed to assist in compositing live action images from an image capture device with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training.
    Type: Application
    Filed: August 6, 2021
    Publication date: November 25, 2021
    Applicant: Weta Digital Limited
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Patent number: 11176716
    Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG), are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: November 16, 2021
    Assignee: WETA DIGITAL LIMITED
    Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
  • Patent number: 11158073
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: October 26, 2021
    Assignee: WETA DIGITAL LIMITED
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Publication number: 20210272334
    Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG), are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
    Type: Application
    Filed: February 17, 2021
    Publication date: September 2, 2021
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
  • Publication number: 20210241474
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
    Type: Application
    Filed: December 23, 2020
    Publication date: August 5, 2021
    Applicant: Weta Digital Limited
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Publication number: 20210241473
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
    Type: Application
    Filed: October 27, 2020
    Publication date: August 5, 2021
    Applicant: Weta Digital Limited
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
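    The applications in this family also describe training the depth network on a combination of computer-generated ("synthetic") and live-action ("recorded") data. A minimal sketch of forming such mixed training batches is below; the batch size and mixing fraction are illustrative assumptions rather than values from the filings.

        import random

        def mixed_batches(synthetic_pool, recorded_pool, batch_size=16, synthetic_fraction=0.5):
            # synthetic_pool: rendered examples with exact ground-truth depth;
            # recorded_pool: live-action examples with measured depth.
            # Batch size and mixing fraction are assumed values.
            n_syn = int(batch_size * synthetic_fraction)
            n_rec = batch_size - n_syn
            while True:
                batch = random.sample(synthetic_pool, n_syn) + random.sample(recorded_pool, n_rec)
                random.shuffle(batch)
                yield batch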