Patents by Inventor Josh Hardgrave

Josh Hardgrave has filed for patents on the following inventions. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches of the two techniques described in the abstracts appear after the listing.

  • Patent number: 11710247
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, one or more steps of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a trained deep neural network. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network, improving the accuracy or usefulness of the depth map and, in turn, the quality of the compositing.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: July 25, 2023
    Assignee: Unity Technologies SF
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Patent number: 11328436
    Abstract: Embodiments allow camera effects, such as imaging noise, to be included in the generation of a synthetic data set used to train an artificial intelligence model to produce an image depth map. The image depth map can then be employed to assist in compositing live action images from an image capture device with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, one or more steps of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a trained deep neural network.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: May 10, 2022
    Assignee: Unity Technologies SF
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Publication number: 20210366138
    Abstract: Embodiments allow camera effects, such as imaging noise, to be included in the generation of a synthetic data set used to train an artificial intelligence model to produce an image depth map. The image depth map can then be employed to assist in compositing live action images from an image capture device with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, one or more steps of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a trained deep neural network.
    Type: Application
    Filed: August 6, 2021
    Publication date: November 25, 2021
    Applicant: Weta Digital Limited
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Patent number: 11158073
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, one or more steps of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a trained deep neural network. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network, improving the accuracy or usefulness of the depth map and, in turn, the quality of the compositing.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: October 26, 2021
    Assignee: Weta Digital Limited
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Publication number: 20210241473
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, one or more steps of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a trained deep neural network. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network, improving the accuracy or usefulness of the depth map and, in turn, the quality of the compositing.
    Type: Application
    Filed: October 27, 2020
    Publication date: August 5, 2021
    Applicant: Weta Digital Limited
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
  • Publication number: 20210241474
    Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, one or more steps of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a trained deep neural network. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network, improving the accuracy or usefulness of the depth map and, in turn, the quality of the compositing.
    Type: Application
    Filed: December 23, 2020
    Publication date: August 5, 2021
    Applicant: Weta Digital Limited
    Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
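
The four compositing filings above (11710247, 11158073, 20210241473, 20210241474) all describe the same core step: at each pixel, compare the live-action depth value against the computer-generated element's depth and keep whichever source is nearer the camera. A minimal sketch of that comparison in Python follows; the function and array names (composite_by_depth, live_rgb, live_depth, cg_rgb, cg_depth) are hypothetical, since the patents do not publish an implementation.

    import numpy as np

    def composite_by_depth(live_rgb, live_depth, cg_rgb, cg_depth):
        """Per-pixel depth compositing: keep whichever source is nearer.

        live_rgb, cg_rgb: (H, W, 3) float images.
        live_depth, cg_depth: (H, W) per-pixel distances from the camera.
        """
        # True where the CG element sits in front of the live action.
        cg_in_front = cg_depth < live_depth
        # Broadcast the (H, W) mask over the colour channels and select.
        return np.where(cg_in_front[..., None], cg_rgb, live_rgb)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        live_rgb = rng.random((4, 6, 3))
        cg_rgb = rng.random((4, 6, 3))
        live_depth = rng.random((4, 6)) * 10.0  # hypothetical depth scale
        cg_depth = rng.random((4, 6)) * 10.0
        print(composite_by_depth(live_rgb, live_depth, cg_rgb, cg_depth).shape)  # (4, 6, 3)

The quality of the result depends entirely on the per-pixel depth map, which is why the abstracts centre on AI refinement of that map rather than on the selection step itself.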
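Patent 11328436 and publication 20210366138 address the training-data side: injecting camera effects such as imaging noise into clean synthetic renders so the depth network learns from data that resembles a real capture device. The sketch below uses a common signal-dependent shot-noise plus read-noise model; both the model and its parameters are illustrative assumptions, not taken from the patents.

    import numpy as np

    def add_camera_noise(clean, shot_scale=0.01, read_sigma=0.005, seed=None):
        """Return a noisy copy of a [0, 1] float image.

        shot_scale: strength of the signal-dependent (shot) noise.
        read_sigma: standard deviation of the signal-independent read noise.
        """
        rng = np.random.default_rng(seed)
        # Shot noise: variance grows with pixel intensity.
        shot = rng.standard_normal(clean.shape) * np.sqrt(np.clip(clean, 0.0, 1.0) * shot_scale)
        # Read noise: constant-variance Gaussian across the whole frame.
        read = rng.normal(0.0, read_sigma, clean.shape)
        return np.clip(clean + shot + read, 0.0, 1.0)

In a pipeline built on this idea, each clean synthetic render would pass through a function like add_camera_noise before being paired with its ground-truth depth, so the network never trains on implausibly clean inputs.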