Patents by Inventor Erik B. Edlund
Erik B. Edlund has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11710247
Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map and, in turn, the compositing.
Type: Grant
Filed: October 27, 2020
Date of Patent: July 25, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
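The core idea in the abstract above — a per-pixel depth value deciding whether the live-action or the computer-generated pixel wins — can be sketched as follows. This is a minimal illustration of depth-based compositing in general, not the patented method; the function name and the NumPy representation are assumptions for the example.

```python
import numpy as np

def composite_with_depth(live_rgb, live_depth, cg_rgb, cg_depth):
    """Per pixel, keep whichever source is nearer the camera.

    live_rgb, cg_rgb: (H, W, 3) float color arrays.
    live_depth, cg_depth: (H, W) arrays holding one depth value per pixel,
    as the abstract describes for the live action image.
    """
    nearer_live = live_depth < cg_depth        # True where live action occludes CG
    mask = nearer_live[..., np.newaxis]        # broadcast mask over color channels
    return np.where(mask, live_rgb, cg_rgb)

# Tiny 1x2 frame: live pixel is nearer on the left, CG is nearer on the right.
live = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])   # red
cg   = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])   # blue
out = composite_with_depth(live, np.array([[1.0, 5.0]]),
                           cg,   np.array([[2.0, 3.0]]))
```

A noisy or inaccurate depth map makes this per-pixel decision flicker at object edges, which is why the abstract emphasizes using a trained network to refine the map before compositing.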
-
Publication number: 20220245870
Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG) items, are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
Type: Application
Filed: April 20, 2022
Publication date: August 4, 2022
Applicant: Unity Technologies SF
Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
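The last sentence of the abstract above concerns tracking varying capture and processing times so that each image is paired with the supplementary data (depth, marker detections) that actually describes it. A generic way to do this is nearest-timestamp matching between two sorted streams; the sketch below is an assumed illustration of that general technique, not the patent's mechanism, and the timestamps are invented.

```python
from bisect import bisect_left

def align_nearest(frame_times, data_times):
    """For each frame timestamp, return the index of the nearest data timestamp.

    frame_times, data_times: sorted lists of times in seconds. Exposure and
    processing delays shift the streams relative to each other, so each frame
    is matched to the closest supplementary sample rather than by arrival order.
    """
    matches = []
    for t in frame_times:
        i = bisect_left(data_times, t)
        # Candidates: the sample just before and just after t, where they exist.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(data_times)]
        matches.append(min(candidates, key=lambda j: abs(data_times[j] - t)))
    return matches

# Frames at ~24 fps; depth samples offset by a small processing lag.
frames = [0.000, 0.0417, 0.0833]
depth_times = [0.010, 0.050, 0.095]
idx = align_nearest(frames, depth_times)   # pairs each frame with its nearest sample
```

In a live pipeline the comparison point would more likely be the exposure midpoint than the frame start, but the matching logic is the same.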
-
Patent number: 11335039
Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG) items, are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
Type: Grant
Filed: October 8, 2021
Date of Patent: May 17, 2022
Assignee: Unity Technologies SF
Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
-
Patent number: 11328436
Abstract: Embodiments allow camera effects, such as imaging noise, to be included in the generation of a synthetic data set for use in training an artificial intelligence model to produce an image depth map. The image depth map can then be employed to assist in compositing live action images from an image capture device with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training.
Type: Grant
Filed: August 6, 2021
Date of Patent: May 10, 2022
Assignee: Unity Technologies SF
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
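The abstract above describes adding camera effects such as imaging noise to synthetic training images so the trained depth model generalizes to real footage. A common, generic way to simulate sensor noise is a shot-plus-read model; the sketch below illustrates that idea only — the function, parameter values, and noise model are assumptions for the example, not details from the patent.

```python
import numpy as np

def add_camera_noise(clean, gain=0.01, read_sigma=0.005, rng=None):
    """Apply a simple shot-plus-read noise model to a clean synthetic image.

    clean: float array in [0, 1], shape (H, W) or (H, W, C).
    Shot noise scales with signal level; read noise is signal-independent
    Gaussian. `gain` and `read_sigma` are illustrative values only.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    shot = rng.normal(0.0, np.sqrt(np.clip(clean, 0.0, 1.0) * gain))
    read = rng.normal(0.0, read_sigma, size=clean.shape)
    return np.clip(clean + shot + read, 0.0, 1.0)

# Augment a flat mid-gray synthetic patch before adding it to the training set.
noisy = add_camera_noise(np.full((4, 4), 0.5))
```

Training on both clean renders and noise-augmented copies narrows the domain gap between synthetic and recorded imagery, which is the motivation stated in the abstract.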
-
Publication number: 20220028132
Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG) items, are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
Type: Application
Filed: October 8, 2021
Publication date: January 27, 2022
Applicant: Weta Digital Limited
Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
-
Publication number: 20210366138
Abstract: Embodiments allow camera effects, such as imaging noise, to be included in the generation of a synthetic data set for use in training an artificial intelligence model to produce an image depth map. The image depth map can then be employed to assist in compositing live action images from an image capture device with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training.
Type: Application
Filed: August 6, 2021
Publication date: November 25, 2021
Applicant: Weta Digital Limited
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
-
Patent number: 11176716
Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG) items, are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
Type: Grant
Filed: February 17, 2021
Date of Patent: November 16, 2021
Assignee: WETA DIGITAL LIMITED
Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
-
Patent number: 11158073
Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map and, in turn, the compositing.
Type: Grant
Filed: December 23, 2020
Date of Patent: October 26, 2021
Assignee: WETA DIGITAL LIMITED
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
-
Publication number: 20210272334
Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG) items, are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
Type: Application
Filed: February 17, 2021
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
-
Publication number: 20210241474
Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map and, in turn, the compositing.
Type: Application
Filed: December 23, 2020
Publication date: August 5, 2021
Applicant: Weta Digital Limited
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
-
Publication number: 20210241473
Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering, or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map and, in turn, the compositing.
Type: Application
Filed: October 27, 2020
Publication date: August 5, 2021
Applicant: Weta Digital Limited
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave