Patents Assigned to Weta Digital Limited
-
Patent number: 11127185
Abstract: An animation system is provided for generating an animation control rig configured to manipulate a skeleton of an animated object. A partition separation process enables software changes to be inserted into uncompiled computer code associated with the animation control rig. Analysis of the uncompiled computer code is implemented relative to a performance metric. Based on the analysis in view of the performance metric, one or more partitions are determined in the uncompiled computer code to partition the code into separate code blocks. The uncompiled code is separated at the partition and updated with the software change. The updated code is compiled to generate the animation control rig.
Type: Grant
Filed: March 19, 2021
Date of Patent: September 21, 2021
Assignee: WETA DIGITAL LIMITED
Inventors: Thomas Stevenson, Edward Sun
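As an illustration of the kind of partitioning described above, the following Python sketch splits uncompiled rig code into blocks using a stand-in per-line cost as the performance metric and inserts a change into one block before recompilation. The metric, function names, and sample rig lines are hypothetical and not taken from the patent.

```python
# Illustrative sketch only (not the patented method): splitting uncompiled rig code
# into blocks at partition points chosen from a simple per-line cost metric,
# then inserting a software change into the affected block before compiling.

from typing import List

def partition_code(lines: List[str], max_cost_per_block: int = 50) -> List[List[str]]:
    """Group source lines into blocks whose estimated cost stays under a budget."""
    blocks, current, cost = [], [], 0
    for line in lines:
        line_cost = len(line)          # stand-in performance metric: line length
        if current and cost + line_cost > max_cost_per_block:
            blocks.append(current)     # close the block at the partition point
            current, cost = [], 0
        current.append(line)
        cost += line_cost
    if current:
        blocks.append(current)
    return blocks

def apply_change(blocks: List[List[str]], block_index: int, new_line: str) -> List[str]:
    """Insert a software change into one block and re-join the code for compiling."""
    blocks[block_index].append(new_line)
    return [line for block in blocks for line in block]

if __name__ == "__main__":
    source = ["joint = make_joint('elbow')", "limb = attach(joint)", "rig = build(limb)"]
    updated = apply_change(partition_code(source, max_cost_per_block=40), 0, "joint.limit = 120")
    print("\n".join(updated))
```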
-
Publication number: 20210287424
Abstract: A main video sequence of a live action scene is captured along with ancillary device data to provide corresponding volumetric information about the scene. The volumetric data can then be used to visually remove or replace objects in the main video sequence. A removed object is replaced by the view that would have been captured by the main video sequence had the removed object not been present in the live action scene at the time of capturing.
Type: Application
Filed: May 21, 2021
Publication date: September 16, 2021
Applicant: Weta Digital Limited
Inventors: Kimball D. Thurston, III, Peter M. Hillman
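A minimal sketch of the removal step described above, assuming a per-pixel object mask and a background plate already reconstructed from the ancillary volumetric data; the function name and stand-in imagery are illustrative only.

```python
# Minimal illustrative sketch (not the patented method): visually "removing" an
# object from a frame by replacing its masked pixels with a background plate
# reconstructed from ancillary volumetric data.

import numpy as np

def remove_object(frame: np.ndarray, mask: np.ndarray, reconstructed_bg: np.ndarray) -> np.ndarray:
    """Where mask is True (object pixels), substitute the reconstructed background."""
    out = frame.copy()
    out[mask] = reconstructed_bg[mask]
    return out

# Hypothetical usage with random stand-in imagery.
frame = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)
bg = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                      # pretend these pixels belong to the object
print(remove_object(frame, mask, bg).shape)
```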
-
Publication number: 20210274092
Abstract: An imagery processing system determines alternative pixel color values for pixels of captured imagery, where the alternative pixel color values are obtained from alternative sources. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and that alternative imagery is processed to provide alternatives for pixel ranges from the main imagery.
Type: Application
Filed: September 11, 2020
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Kimball D. Thurston, III, Peter M. Hillman
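The following sketch illustrates one way such alternatives could be gathered and applied, assuming the alternative captures are already aligned to the main camera's pixel grid; the source names (infrared_cam, witness_cam), the region handling, and the averaging step are hypothetical, not the patented system.

```python
# Illustrative sketch only: collecting candidate pixel values for a region of the
# main image from several alternative captures, then applying one chosen source.

import numpy as np

def alternative_values(region: tuple, alt_captures: dict) -> dict:
    """Return, per alternative source, the mean color it proposes for a pixel region."""
    ys, xs = region                      # region given as (slice_y, slice_x)
    return {name: img[ys, xs].reshape(-1, 3).mean(axis=0) for name, img in alt_captures.items()}

def apply_choice(main: np.ndarray, region: tuple, alt_captures: dict, choice: str) -> np.ndarray:
    """Replace the region in the main image with pixels from the chosen alternative source."""
    out = main.copy()
    ys, xs = region
    out[ys, xs] = alt_captures[choice][ys, xs]
    return out

main = np.zeros((8, 8, 3), dtype=np.float32)
alts = {"infrared_cam": np.full((8, 8, 3), 0.8, dtype=np.float32),
        "witness_cam": np.full((8, 8, 3), 0.3, dtype=np.float32)}
region = (slice(2, 5), slice(2, 5))
print(alternative_values(region, alts))            # candidates for the pixel range
result = apply_choice(main, region, alts, "witness_cam")
```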
-
Publication number: 20210272307
Abstract: Installation of active marker light components onto a wearable article of an object in a performance capture system is provided. The active marker light components are inserted into a receptacle to position the active marker light components onto the wearable article. The active marker light component, and a strand to which it is coupled, are inserted into the receptacle. The strand may be engaged into a channel in the receptacle. A top portion of the receptacle is introduced through an opening in the wearable article from the inside of the wearable article to the outside, and a base portion catches an inner side of the wearable article. A fitting may be rotatably secured around the top portion at the outer side of the wearable article to press against the base portion through the wearable article. The active marker light components may be removed from the wearable article by rotating the fitting in a counter-direction and guiding the top portion back through the opening.
Type: Application
Filed: November 30, 2020
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Dejan Momcilovic, Jake Botting
-
Publication number: 20210274094
Abstract: An imagery processing system determines alternative pixel color values for pixels of captured imagery, where the alternative pixel color values are obtained from alternative sources. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and that alternative imagery is processed to provide user-selectable alternatives for pixel ranges from the main imagery.
Type: Application
Filed: January 14, 2021
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Kimball D. Thurston, III, Peter M. Hillman
-
Publication number: 20210270923
Abstract: An active marker apparatus is provided for securely affixing active markers to a wearable article of an object in a performance capture system. The active marker light components, coupled to a strand, are inserted into a receptacle to position the active marker light components onto the wearable article. A gap is provided in a chamber between the active marker light component and an interior surface of a protrusion portion of the receptacle. At least a section of the protrusion portion permits light emitted from the active marker light component into the chamber to diffuse in a manner that allows the light to be easily detected by a camera in the live action scene. Each active marker light component is locked into place in a respective receptacle by one or more channels that receive the strand. A cap fitting may further assist in securing the receptacle to the wearable article and optionally aid in visual detection of the receptacle.
Type: Application
Filed: November 30, 2020
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Dejan Momcilovic, Jake Botting
-
Publication number: 20210274093
Abstract: An imagery processing system obtains capture inputs from capture devices that might have capture parameters and characteristics that differ from those of a main imagery capture device. By normalizing outputs of those capture devices, potentially arbitrary capture devices could be used for reconstructing portions of a scene captured by the main imagery capture device when reconstructing a plate of the scene to replace an object in the scene with what the object obscured in the scene. Reconstruction could be of one main image, a stereo pair of images, or some number, N, of images where N>2.
Type: Application
Filed: January 11, 2021
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Kimball D. Thurston, III, Peter M. Hillman
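As a rough illustration of the normalization idea, the sketch below resamples a hypothetical witness-camera frame to the main camera's resolution and scales it for an assumed exposure difference; the nearest-neighbour resampling and the gain value are stand-ins, not the patented approach.

```python
# Minimal sketch under assumed parameters: normalizing an arbitrary capture
# device's frame to the main camera's resolution and exposure so it can
# contribute to plate reconstruction.

import numpy as np

def normalize_capture(frame: np.ndarray, target_hw: tuple, exposure_gain: float) -> np.ndarray:
    """Resample to the main camera's resolution and scale for exposure differences."""
    h, w = frame.shape[:2]
    th, tw = target_hw
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    resized = frame[rows][:, cols]                 # nearest-neighbour resize
    return np.clip(resized * exposure_gain, 0.0, 1.0)

witness = np.random.rand(120, 160, 3).astype(np.float32)   # hypothetical witness camera
plate_input = normalize_capture(witness, target_hw=(240, 320), exposure_gain=1.5)
print(plate_input.shape)                                    # (240, 320, 3), main-camera sized
```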
-
Publication number: 20210272334
Abstract: Compositing is provided in which visual elements from different sources, including live action objects and computer graphics (CG) items, are merged in a constant feed. Representative output images are produced during a live action shoot. The compositing system uses supplementary data, such as depth data of the live action objects for integration with CG items, and light marker detection data for device calibration and performance capture. Varying capture times (e.g., exposure times) and processing times are tracked to align with corresponding incoming images and data.
Type: Application
Filed: February 17, 2021
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Dejan Momcilovic, Erik B. Edlund, Tobias B. Schmidt
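A small sketch of the timing-alignment idea, assuming each incoming frame carries a capture timestamp and an exposure (or processing) duration; the frame rate, delay values, and nearest-timestamp pairing rule are assumptions for illustration, not the patented pipeline.

```python
# Illustrative sketch: pairing each computer-generated frame with the live-action
# frame whose (capture time + exposure midpoint) is closest, so delayed data
# streams stay aligned in the composite feed. All timing values are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    source: str          # "live" or "cg"
    capture_time: float  # seconds since start of shoot
    exposure: float      # exposure (or render) duration in seconds

def effective_time(f: Frame) -> float:
    return f.capture_time + f.exposure / 2.0

def align(live: List[Frame], cg: List[Frame]) -> List[tuple]:
    """For each CG frame, pick the live frame with the nearest effective timestamp."""
    pairs = []
    for c in cg:
        best = min(live, key=lambda l: abs(effective_time(l) - effective_time(c)))
        pairs.append((best, c))
    return pairs

live = [Frame("live", t * (1 / 24), 1 / 48) for t in range(5)]
cg = [Frame("cg", t * (1 / 24) + 0.01, 0.0) for t in range(5)]   # slight processing delay
for l, c in align(live, cg):
    print(round(effective_time(l), 3), "<->", round(effective_time(c), 3))
```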
-
Publication number: 20210267493
Abstract: The present description relates to light patterns used in a live action scene of a visual production to encode information associated with objects in the scene, such as movement and position of the objects. A data capture system is enabled to differentiate between various groups of active markers attached to the objects in the scene. The groups of active markers emit light of a particular wavelength in strobing patterns predefined for the various groups. In some implementations, each group is instructed to emit its assigned signature pattern of light through a signal controller transmitting an initial key signature predefined for the group, followed by pattern signals to a control unit. The data representing the pattern is captured in illuminated and blank frames. Frames showing the light pattern are analyzed to extract information about the groups of active markers, such as distinguishing the groups and identifying the objects to which they are attached.
Type: Application
Filed: January 29, 2021
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Dejan Momcilovic, Jake Botting
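The sketch below shows, in simplified form, how a strobing pattern read out of illuminated and blank frames could be matched against per-group signature patterns; the bit patterns, threshold, and group names are invented for illustration and are not the patented encoding.

```python
# Simplified sketch: recovering a marker group's identity by thresholding its
# per-frame brightness into an on/off pattern and matching it against known
# signature patterns. Patterns and threshold are hypothetical.

from typing import Dict, List

SIGNATURES: Dict[str, List[int]] = {
    "group_a": [1, 0, 1, 1, 0, 1],
    "group_b": [1, 1, 0, 0, 1, 0],
}

def decode_bits(brightness_per_frame: List[float], threshold: float = 0.5) -> List[int]:
    """Threshold per-frame marker brightness into illuminated (1) / blank (0) bits."""
    return [1 if b >= threshold else 0 for b in brightness_per_frame]

def identify_group(brightness_per_frame: List[float]) -> str:
    bits = decode_bits(brightness_per_frame)
    for name, pattern in SIGNATURES.items():
        if bits[: len(pattern)] == pattern:
            return name
    return "unknown"

# Brightness of one marker measured over six consecutive capture frames.
samples = [0.9, 0.1, 0.8, 0.95, 0.05, 0.7]
print(identify_group(samples))   # -> "group_a"
```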
-
Publication number: 20210274091
Abstract: An imagery processing system obtains capture inputs from capture devices that might have capture parameters and characteristics that differ from those of a main imagery capture device. By normalizing outputs of those capture devices, potentially arbitrary capture devices could be used for reconstructing portions of a scene captured by the main imagery capture device when reconstructing a plate of the scene to replace an object in the scene with what the object obscured in the scene. Reconstruction could be of one main image, a stereo pair of images, or some number, N, of images where N>2.
Type: Application
Filed: September 11, 2020
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Kimball D. Thurston, III, Peter M. Hillman
-
Publication number: 20210272351
Abstract: An imagery processing system determines pixel color values for pixels of captured imagery from volumetric data, providing alternative pixel color values. A main imagery capture device, such as a camera, captures main imagery, such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and capture information related to pixel color values for multiple depths of a scene, which can be processed to provide reconstruction.
Type: Application
Filed: September 11, 2020
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Kimball D. Thurston, III, Peter M. Hillman
-
Publication number: 20210270924
Abstract: A sealed active marker apparatus of a performance capture system is described to provide protective housing for active marker light components coupled to a strand and attached via a receptacle to an object, such as via a wearable article, in a live action scene. The receptacle includes a protrusion portion that permits at least one particular wavelength range of light emitted from the enclosed active marker light component to diffuse in a manner that enables easy detection by a sensor device. A base portion interlocks with a bottom plate of the receptacle to secure the strand within one or more channels. A sealant material coating portions of the apparatus promotes an insulating environment for the active marker light component.
Type: Application
Filed: November 30, 2020
Publication date: September 2, 2021
Applicant: Weta Digital Limited
Inventors: Dejan Momcilovic, Jake Botting
-
Publication number: 20210241474
Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
Type: Application
Filed: December 23, 2020
Publication date: August 5, 2021
Applicant: Weta Digital Limited
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
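As a bare-bones illustration of depth-map compositing, the sketch below keeps whichever of the live-action or CG pixel is nearer to the camera; the neural-network refinement of the depth map described in the abstract is not shown, and all values are stand-ins.

```python
# Minimal illustrative sketch: compositing a CG layer with a live-action frame by
# comparing per-pixel depth, so whichever element is closer to the camera wins.

import numpy as np

def depth_composite(live_rgb, live_depth, cg_rgb, cg_depth):
    """Per pixel, keep the live pixel where it is nearer than the CG pixel."""
    live_wins = live_depth < cg_depth                  # smaller depth = closer to camera
    return np.where(live_wins[..., None], live_rgb, cg_rgb)

h, w = 4, 4
live_rgb = np.zeros((h, w, 3)); live_depth = np.full((h, w), 2.0)
cg_rgb = np.ones((h, w, 3));    cg_depth = np.full((h, w), 3.0)
cg_depth[0, 0] = 1.0                                   # CG element pokes in front here
print(depth_composite(live_rgb, live_depth, cg_rgb, cg_depth)[0, 0])  # CG color [1. 1. 1.]
```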
-
Publication number: 20210241422
Abstract: An image processor eliminates a character or object from a sequence of frames and then merges the resulting images with those of nearby frames, both preceding and succeeding, to synthesize the background of the sequence of frames.
Type: Application
Filed: September 11, 2020
Publication date: August 5, 2021
Applicant: Weta Digital Limited
Inventors: Sebastian R. F. Burke, III, Pravin P. Bhat
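A minimal sketch of the background-synthesis idea, assuming a mask of the eliminated character and a set of already-aligned neighboring frames; taking a per-pixel temporal median is one simple way to merge preceding and succeeding frames, not necessarily the patented method.

```python
# Illustrative sketch only: after masking a character out of a frame, fill each
# masked pixel with the per-pixel median of nearby preceding and succeeding
# frames, which approximates the static background.

import numpy as np

def synthesize_background(frame, mask, neighbor_frames):
    """Replace masked pixels with the temporal median over neighbouring frames."""
    stack = np.stack(neighbor_frames, axis=0)          # (num_frames, H, W, 3)
    background = np.median(stack, axis=0)
    out = frame.astype(np.float32).copy()
    out[mask] = background[mask]
    return out

h, w = 6, 6
neighbors = [np.full((h, w, 3), 10.0) for _ in range(4)]   # stand-in nearby frames
frame = np.full((h, w, 3), 200.0)                           # character brightens these pixels
mask = np.zeros((h, w), dtype=bool); mask[2:4, 2:4] = True  # where the character was
print(synthesize_background(frame, mask, neighbors)[2, 2])  # -> [10. 10. 10.]
```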
-
Publication number: 20210241486
Abstract: Embodiments provide multi-angle screen coverage analysis. In some embodiments, a system obtains at least one image, where the at least one image is a computer graphics generated image, and where the at least one image comprises at least one target object. The system determines screen coverage information for the at least one target object, where the screen coverage information is based on a portion of the screen that is covered by the at least one target object. The system determines depth information for the at least one target object. The system determines an asset detail level for the at least one target object based on the screen coverage information and the depth information. The system then stores the asset detail level in a database.
Type: Application
Filed: February 25, 2021
Publication date: August 5, 2021
Applicant: Weta Digital Limited
Inventor: Kenneth Gimpelson
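To make the coverage-plus-depth decision concrete, the sketch below maps the two measurements to a coarse detail level and records it in a stand-in store; the thresholds and level names are assumptions, not values from the patent.

```python
# Hedged sketch: choosing an asset detail level from how much of the screen a
# target object covers and how far it sits from the camera, then storing the
# result. Thresholds and level names are illustrative assumptions.

def asset_detail_level(coverage_fraction: float, depth_m: float) -> str:
    """Map screen coverage (0..1) and camera distance to a coarse detail level."""
    if coverage_fraction > 0.25 and depth_m < 10.0:
        return "hero"        # large on screen and close: highest detail
    if coverage_fraction > 0.05:
        return "medium"
    return "background"

detail_db = {}                                   # stand-in for the patent's database
detail_db["castle_tower"] = asset_detail_level(coverage_fraction=0.30, depth_m=8.0)
detail_db["distant_tree"] = asset_detail_level(coverage_fraction=0.01, depth_m=400.0)
print(detail_db)       # {'castle_tower': 'hero', 'distant_tree': 'background'}
```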
-
Publication number: 20210241473
Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a “depth value” for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated (“synthetic”) and live-action (“recorded”) training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
Type: Application
Filed: October 27, 2020
Publication date: August 5, 2021
Applicant: Weta Digital Limited
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
-
Publication number: 20210241485
Abstract: Embodiments provide multi-angle screen coverage analysis. In some embodiments, a system obtains a computer graphics generated image having at least one target object for analysis. The system determines screen coverage information and depth information for the at least one target object. The system then determines an asset detail level for the at least one target object based on the screen coverage information and the depth information. The system then stores the asset detail level in a database, and makes the asset detail level available to users.
Type: Application
Filed: October 15, 2020
Publication date: August 5, 2021
Applicant: Weta Digital Limited
Inventor: Kenneth Gimpelson
-
Patent number: 11074738
Abstract: In an embodiment, an animator is provided with an indication when a model's component, such as a joint or limb, is being moved or twisted in a way that would be unnatural and cause unusual stress on the model component. For example, as a shoulder joint is stressed by moving an arm into an extreme position, a yellow bar or coloring of the shoulder, arm, or other component can grow increasingly bright and shift to red just before a breaking point is reached. An animator can choose to go past the breaking point, and the breaking can be modeled and incorporated into the animation.
Type: Grant
Filed: January 28, 2021
Date of Patent: July 27, 2021
Assignee: WETA DIGITAL LIMITED
Inventors: Thomas Stevenson, Andrew R. Phillips, Edward Sun
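A small sketch of how such an indicator could be driven, assuming a normalized stress value per component; the yellow-to-red color ramp and the breaking-point scale are illustrative assumptions, not the patented rig feature.

```python
# Illustrative sketch: mapping a joint's stress ratio to an indicator color that
# shades from yellow toward red as the component nears its breaking point.

def stress_color(stress: float, breaking_point: float = 1.0) -> tuple:
    """Return an RGB indicator: yellow at low stress, red just before breaking."""
    t = max(0.0, min(stress / breaking_point, 1.0))   # normalized 0..1
    red, green, blue = 1.0, 1.0 - t, 0.0              # yellow (1,1,0) -> red (1,0,0)
    return (red, green, blue)

for s in (0.2, 0.6, 0.95, 1.2):                        # 1.2 is past the breaking point
    print(s, stress_color(s))
```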
-
Patent number: 11055519
Abstract: The present description relates to light patterns used in a live action scene of a visual production to encode information associated with objects in the scene, such as movement and position of the objects. A data capture system includes active markers that emit light of a particular wavelength in predefined strobing patterns. In some implementations, the active markers are instructed to emit an assigned signature pattern of light through a signal controller sending signals to a control unit. Various components are synchronized such that pulsing of light corresponds to time slices and particular frames captured by the performance capture system. The data representing the pattern is embedded in illuminated and blank frames. Frames showing the light pattern are analyzed to extract information about the active markers, such as identification of the active markers and objects to which they are attached.
Type: Grant
Filed: January 29, 2021
Date of Patent: July 6, 2021
Assignee: WETA DIGITAL LIMITED
Inventors: Dejan Momcilovic, Jake Botting
-
Patent number: 11055892
Abstract: An animation system is provided wherein a machine learning model is adopted to generate animated facial actions based on parameters obtained from a live actor. Specifically, an anatomical structure, such as a facial muscle topology and a skull surface, that is specific to the live actor may be used. A skull surface that is specific to the live actor may be generated based on facial scans of the live actor and generic tissue depth data. For example, the facial scans of the live actor may provide a skin surface topology of the live actor, based on which the skull surface underneath the skin surface can be derived by “offsetting” the skin surface with corresponding soft tissue depth at different sampled points on the skin surface.
Type: Grant
Filed: January 20, 2021
Date of Patent: July 6, 2021
Assignee: Weta Digital Limited
Inventor: Byung Kuk Choi
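The "offsetting" step lends itself to a short sketch: each skin vertex is pushed inward along its outward normal by the soft-tissue depth sampled there. The vertex positions, normals, and depth values below are made-up illustrations, not data from the patent.

```python
# Minimal sketch under stated assumptions: deriving a skull surface by offsetting
# each scanned skin vertex inward along its normal by the soft-tissue depth
# sampled at that point.

import numpy as np

def derive_skull_surface(skin_vertices, skin_normals, tissue_depths):
    """Offset each skin vertex against its (unit, outward) normal by tissue depth."""
    normals = skin_normals / np.linalg.norm(skin_normals, axis=1, keepdims=True)
    return skin_vertices - normals * tissue_depths[:, None]

skin = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])     # two sample skin points
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])  # pointing out of the face
depths = np.array([0.012, 0.008])                        # metres of soft tissue
print(derive_skull_surface(skin, normals, depths))       # points moved beneath the skin
```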