Patents Assigned to Weta Digital Limited
  • Patent number: 11328437
    Abstract: Methods and systems for defocusing a rendered computer-generated image are presented. Pixel values for a pixel array are determined from a scene description. A blur amount for each pixel is determined based on a lens function representing a lens shape and/or effect. A blur amount and blur transparency value are determined for the pixel based on the lens function and pixel depth. A convolution range comprising pixels adjacent to the pixel is determined based on the blur amount. A blend color value is determined for the pixel based on the color value of the pixel, color values of pixels in the convolution range, and the blur transparency value. The blend color value is scaled based on the blend color value and a modified pixel color value is determined from scaled blend color values.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: May 10, 2022
    Assignee: Weta Digital Limited
    Inventor: Peter Hillman
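As an illustration only (not the patented implementation), the sketch below shows the general idea the abstract describes: a per-pixel blur radius derived from a hypothetical lens function of depth, a convolution over the surrounding pixels, and a blur-transparency weight that blends the result back with the original color. The function names, the focal-plane parameter, and the simple box average are all assumptions.

```python
# Minimal sketch, assuming a simple box blur and a made-up lens function of depth.
import numpy as np

def lens_blur_radius(depth, focus_depth=5.0, max_radius=4):
    """Hypothetical lens function: blur grows with distance from the focal plane."""
    return int(min(max_radius, abs(depth - focus_depth)))

def defocus(color, depth, focus_depth=5.0):
    """color: (H, W, 3) float array, depth: (H, W) float array."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            r = lens_blur_radius(depth[y, x], focus_depth)
            if r == 0:
                out[y, x] = color[y, x]
                continue
            # Convolution range: pixels adjacent to (y, x) within radius r.
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            neighborhood = color[y0:y1, x0:x1].reshape(-1, 3)
            blend = neighborhood.mean(axis=0)
            # Blur transparency: how strongly the blended color replaces the original.
            alpha = min(1.0, r / 4.0)
            out[y, x] = (1.0 - alpha) * color[y, x] + alpha * blend
    return out
```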
  • Patent number: 11308644
    Abstract: The present description relates to recalibration of a sensor device for performance capture by detecting a miscalibration problem with a sensor device and assessing the problem. A recalibration system includes sensor devices initially calibrated at a recording site. A recording site change occurs and afterwards, a failure to match virtual rays projected from one sensor device with virtual rays projected from the active marker is detected. In response to determining the failure, the active marker is signaled to emit a unique display of light. The failure of the rays to match is assessed based on whether sensor devices capture the unique display of light. Three-dimensional (3-D) coordinates of an active marker are reconstructed from marker data of the calibrated sensor devices. A problematic sensor device is recalibrated based on the assessment, using the 3-D coordinates of the active marker from marker data of the remaining calibrated sensor devices, without stopping the recording.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: April 19, 2022
    Assignee: WETA DIGITAL LIMITED
    Inventors: Dejan Momcilovic, Jake Botting
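The abstract above involves reconstructing a marker's 3-D position from rays projected by several calibrated sensors. A common way to do that, shown below purely as a sketch under assumed inputs (known sensor origins and unit ray directions), is a least-squares intersection of the rays; a sensor whose ray misses the reconstructed point by more than a tolerance could then be flagged for recalibration. This is not claimed to be the patented method.

```python
# Least-squares intersection of rays: a sketch, not the patented procedure.
import numpy as np

def triangulate_rays(origins, directions):
    """origins, directions: (N, 3) arrays; each ray is o_i + t * d_i.
    Returns the point minimizing the sum of squared distances to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two rays that intersect at (0, 0, 1):
origins = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
directions = np.array([[0.0, 0.0, 1.0], [-1.0, 0.0, 1.0]])
print(triangulate_rays(origins, directions))
```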
  • Patent number: 11302052
    Abstract: An aspect provides a computer-implemented method for constructing evaluation logic associated with an animation software package. The method comprises receiving at least one software module, the at least one software module including at least one evaluator; writing the at least one software module to at least one executable code object; and maintaining data for the at least one software module in a contiguous block of memory for use by the software module.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: April 12, 2022
    Assignee: WETA DIGITAL LIMITED
    Inventors: Niall J. Lenihan, Richard Chi Lei, Sander van der Steen
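To make the "contiguous block of memory" idea concrete, here is a small sketch of one possible layout: all of a module's evaluator data lives in a single contiguous buffer, and evaluators receive views into it rather than separate allocations. The class name, field layout, and float64 storage are assumptions for illustration, not the patented design.

```python
# Sketch of a contiguous data block shared by a module's evaluators.
import numpy as np

class ModuleDataBlock:
    def __init__(self, fields):
        """fields: list of (name, element_count) pairs, all stored as float64."""
        self.offsets = {}
        total = 0
        for name, count in fields:
            self.offsets[name] = (total, count)
            total += count
        self.buffer = np.zeros(total, dtype=np.float64)  # one contiguous allocation

    def view(self, name):
        start, count = self.offsets[name]
        return self.buffer[start:start + count]  # a view into the block, no copy

# Example: two evaluators share one cache-friendly block.
block = ModuleDataBlock([("joint_angles", 64), ("blend_weights", 32)])
block.view("joint_angles")[:] = 0.5
```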
  • Publication number: 20220103750
    Abstract: An imagery processing system determines alternative pixel color values for pixels of captured imagery where the alternative pixel color values are obtained from alternative sources. A main imagery capture device, such as a camera, captures main imagery such as still images and/or video sequences, of a live action scene. Alternative devices capture imagery of the live action scene, in some spectra and form, and that alternative imagery is processed to provide user-selectable alternatives for pixel ranges from the main imagery.
    Type: Application
    Filed: December 9, 2021
    Publication date: March 31, 2022
    Applicant: Weta Digital Limited
    Inventors: Kimball D. Thurston, III, Peter M. Hillman
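A minimal sketch of the substitution step described above, assuming the alternative imagery has already been aligned to the main plate: a user-selected pixel range (here a boolean mask, a hypothetical interface) is filled from the alternative source while the rest of the main image is kept.

```python
# Sketch only: swap a selected pixel range of the main plate for an aligned alternative.
import numpy as np

def substitute_region(main_image, alt_image, mask):
    """main_image, alt_image: (H, W, 3) arrays assumed to be pre-aligned;
    mask: (H, W) boolean array marking the user-selected pixel range."""
    out = main_image.copy()
    out[mask] = alt_image[mask]
    return out
```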
  • Patent number: 11288496
    Abstract: The present description relates to light patterns used in a live action scene of a visual production to encode information associated with objects in the scene, such as movement and position of the objects. A data capture system includes active markers that emit light of a particular wavelength in predefined strobing patterns. In some implementations, the active markers are instructed to emit an assigned signature pattern of light through a signal controller sending signals to a control unit. Various components are synchronized such that pulsing of light corresponds to time slices and particular frames captured by the performance capture system. The data representing the pattern is embedded in illuminated and blank frames. Frames showing the light pattern are analyzed to extract information about the active markers, such as identification of the active markers and objects to which they are attached.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: March 29, 2022
    Assignee: WETA DIGITAL LIMITED
    Inventors: Dejan Momcilovic, Jake Botting
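The identification step described above can be pictured as matching an observed on/off sequence across synchronized frames against each marker's assigned signature. The sketch below is illustrative only; the marker names, window length, and patterns are invented, and the patent's actual encoding may differ.

```python
# Sketch: identify an active marker from its strobing signature over a frame window.
SIGNATURES = {            # hypothetical assignments: marker id -> per-frame pattern
    "marker_A": (1, 1, 0, 1),
    "marker_B": (1, 0, 1, 0),
}

def identify_marker(observed):
    """observed: tuple of 1/0 values, one per frame in the window
    (1 = lit in that frame, 0 = blank)."""
    for marker_id, signature in SIGNATURES.items():
        if observed == signature:
            return marker_id
    return None

print(identify_marker((1, 0, 1, 0)))  # -> "marker_B"
```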
  • Publication number: 20220092301
    Abstract: The present description relates to light patterns used in a live action scene of a visual production to encode information associated with objects in the scene, such as movement and position of the objects. A data capture system includes active markers that emit light of a particular wavelength in predefined strobing patterns. In some implementations, the active markers are instructed to emit an assigned signature pattern of light through a signal controller sending signals to a control unit. Various components are synchronized such that pulsing of light corresponds to time slices and particular frames captured by the performance capture system. The data representing the pattern is embedded in illuminated and blank frames. Frames showing the light pattern are analyzed to extract information about the active markers, such as identification of the active markers and objects to which they are attached.
    Type: Application
    Filed: December 2, 2021
    Publication date: March 24, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11281477
    Abstract: An example method facilitates adjusting or enhancing performance of a process of a graphics program or animation software package, and includes providing a first User Interface (UI) control for allowing user assignment or selection of one or more processor types, e.g., Graphics Processing Unit (GPU) or Central Processing Unit (CPU), and/or associated memory types, e.g., GPU memory and/or CPU memory, to one or more computing resources, such as variables and/or associated functions or evaluators. A drop-down menu or other control may be provided in a first UI to allow for user specification or assignment of one or more computing resources, e.g., CPU or GPU processors and/or memory, to one or more variables, data structures, associated functions, or other executable code. In a specific implementation, one UI control facilitates user specification of one or more evaluators of a plugin, wherein the one or more evaluators are usable by a host application of the plugin.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: March 22, 2022
    Assignee: WETA DIGITAL LIMITED
    Inventors: Niall J. Lenihan, Richard Chi Lei, Sander van der Steen
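One way to picture the assignment the UI control records is a simple mapping from variables or evaluators to processor and memory types, consulted at evaluation time. The names and default below are hypothetical; this is only a data-structure sketch, not the patented mechanism.

```python
# Sketch: per-evaluator processor/memory assignments captured from a UI control.
ASSIGNMENTS = {
    "deformer_eval": {"processor": "GPU", "memory": "GPU"},
    "rig_solver":    {"processor": "CPU", "memory": "CPU"},
}

def placement_for(name):
    # Default to CPU when the user has not made an explicit selection.
    return ASSIGNMENTS.get(name, {"processor": "CPU", "memory": "CPU"})

print(placement_for("deformer_eval"))  # {'processor': 'GPU', 'memory': 'GPU'}
```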
  • Patent number: 11282233
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on at least three reference points, where the at least three reference points are positioned within the live action scene, and where distances between the at least three reference points are predetermined. The system further determines a location and orientation of each camera based on the reference point data.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: March 22, 2022
    Assignee: WETA DIGITAL LIMITED
    Inventors: Dejan Momcilovic, Jake Botting
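Estimating a camera's location and orientation from a handful of reference points with known positions is a standard pose-from-points problem. The sketch below uses OpenCV's solvePnP as a stand-in for the patent's own calibration procedure; the reference-point coordinates, pixel observations, and camera intrinsics are all made-up values.

```python
# Sketch: camera pose from four coplanar reference points with known 3-D positions.
import numpy as np
import cv2

object_points = np.array([[0.0, 0.0, 0.0],      # known reference-point positions (metres)
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float64)
image_points = np.array([[220.0, 340.0],        # where this camera saw them (pixels)
                         [420.0, 345.0],
                         [425.0, 145.0],
                         [215.0, 150.0]], dtype=np.float64)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion for the sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)                 # camera orientation
camera_position = (-R.T @ tvec).ravel()    # camera location in scene coordinates
```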
  • Publication number: 20220086308
    Abstract: Implementations provide a wearable article for a performance capture system. In some implementations, a wearable article includes one or more regions, where the one or more regions are configured to be worn on at least a portion of a body of a user, and where at least one of the one or more regions is configured to hold performance capture equipment in predetermined positions. In some implementations, the wearable article also includes a plurality of mounting mechanisms coupled to the one or more regions for mounting reference markers to be used for position determination. In some implementations, the wearable article also includes a plurality of fastening mechanisms coupled to the one or more regions for fastening devices and accessories for controlling the reference markers.
    Type: Application
    Filed: November 24, 2021
    Publication date: March 17, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Publication number: 20220076450
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on at least three reference points, where the at least three reference points are positioned within the live action scene, and where distances between the at least three reference points are predetermined. The system further determines a location and orientation of each camera based on the reference point data.
    Type: Application
    Filed: February 25, 2021
    Publication date: March 10, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Publication number: 20220076452
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on at least three reference points, where the at least three reference points are attached to a linear form, and where distances between the at least three reference points are predetermined. The system further locates the at least three reference points in one or more images of the images. The system further computes one or more ratios of the distances between each adjacent pair of reference points of the at least three reference points in the one or more images. The system further determines a location and orientation of each camera based on the reference point data.
    Type: Application
    Filed: February 25, 2021
    Publication date: March 10, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
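The ratio computation described above can be sketched directly: since perspective projection approximately preserves the ratios of short collinear segments, the image-space ratios of adjacent reference-point distances on a linear wand can be compared against the known physical ratios. The coordinates below are invented; this is an illustration, not the patented calibration method.

```python
# Sketch: ratios of adjacent reference-point distances, in 3-D and in the image.
import numpy as np

def adjacent_ratios(points):
    """points: (N, 2) or (N, 3) array of at least three reference points
    ordered along the wand. Returns ratios of consecutive segment lengths."""
    pts = np.asarray(points, dtype=np.float64)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return seg[1:] / seg[:-1]

known = adjacent_ratios([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.5, 0.0, 0.0]])
seen = adjacent_ratios([[101.0, 50.0], [141.0, 50.0], [201.0, 50.0]])
print(known, seen)  # the ratios should agree for a correctly detected wand
```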
  • Publication number: 20220076451
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on a plurality of reference points coupled to a plurality of extensions coupled to a base, where the plurality of reference points are in a non-linear arrangement, and where distances between reference points are predetermined. The system further computes reference point data generated from a performance capture system and based on the distances. The system further computes a location and orientation of each camera in the live action scene based on the reference point data.
    Type: Application
    Filed: February 25, 2021
    Publication date: March 10, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11270490
    Abstract: A method for generating one or more visual representations of an object colliding with an interface between a simulated fluid and a material. The method includes obtaining shape and movement data of a bulk fluid and an object, identifying an interface where the bulk fluid covers a portion of the object, generating an emitted fluid at the interface, and generating shape and movement data of the emitted fluid interacting with the object.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: March 8, 2022
    Assignee: Weta Digital Limited
    Inventor: Alexey Stomakhin
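As a rough sketch of the emission step, the snippet below finds the object surface samples covered by the bulk fluid (a flat fluid surface is assumed for simplicity) and seeds emitted particles there, with velocities taken from the relative motion of fluid and object. The data layout and parameters are hypothetical and greatly simplified relative to the patented simulation.

```python
# Sketch: emit secondary fluid particles where the bulk fluid covers the object.
import numpy as np

def emit_at_interface(object_points, object_velocity, fluid_level, fluid_velocity,
                      particles_per_point=4, jitter=0.02, seed=0):
    """object_points: (N, 3) sample points on the object surface;
    fluid_level: height of the bulk fluid surface (assumed flat for the sketch)."""
    rng = np.random.default_rng(seed)
    covered = object_points[:, 2] <= fluid_level            # submerged surface samples
    seeds = np.repeat(object_points[covered], particles_per_point, axis=0)
    positions = seeds + rng.normal(scale=jitter, size=seeds.shape)
    relative = np.asarray(fluid_velocity, float) - np.asarray(object_velocity, float)
    velocities = np.tile(relative, (len(positions), 1))
    return positions, velocities
```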
  • Publication number: 20220067970
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene using fixed cameras and drones. In some embodiments, a method configures a plurality of reference cameras to observe at least three known reference points located in the live action scene and to observe one or more reference points associated with one or more moving cameras having unconstrained motion. The method further configures the one or more moving cameras to observe one or more moving objects in the live action scene. The method further receives reference point data in association with one or more reference cameras of the plurality of reference cameras, where the reference point data is based on the at least three known reference points and the one or more reference points associated with the one or more moving cameras.
    Type: Application
    Filed: December 11, 2020
    Publication date: March 3, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Publication number: 20220067968
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene using drones with multiple cameras. In some embodiments, a method configures a plurality of reference cameras to observe at least one portion of the live action scene. The method further configures at least one first camera coupled to an apparatus to observe one or more moving objects in the live action scene. The method further configures at least one second camera coupled to the at least one apparatus to observe at least three known reference points located in the live action scene. The method further receives reference point data in association with the at least one second camera, where the reference point data is based on the at least three known reference points. The method further computes a location and an orientation of the at least one first camera and the at least one second camera based on the reference point data.
    Type: Application
    Filed: December 11, 2020
    Publication date: March 3, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Publication number: 20220067972
    Abstract: The present description relates to recalibration of a sensor device for performance capture by detecting a miscalibration problem with a sensor device and assessing the problem. A recalibration system includes sensor devices initially calibrated at a recording site. A recording site change occurs and afterwards, a failure to match virtual rays projected from one sensor device with virtual rays projected from the active marker is detected. In response to determining the failure, the active marker is signaled to emit a unique display of light. The failure of the rays to match is assessed based on whether sensor devices capture the unique display of light. Three-dimensional (3-D) coordinates of an active marker are reconstructed from marker data of the calibrated sensor devices. A problematic sensor device is recalibrated based on the assessment, using the 3-D coordinates of the active marker from marker data of the remaining calibrated sensor devices, without stopping the recording.
    Type: Application
    Filed: July 16, 2021
    Publication date: March 3, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Publication number: 20220067969
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene using drones. In some embodiments, a method configures a plurality of reference cameras to observe at least one portion of the live action scene. The method further configures one or more moving cameras having unconstrained motion to observe one or more moving objects in the live action scene and to observe at least three known reference points associated with the plurality of reference cameras. The method further receives reference point data in association with the one or more moving cameras, where the reference point data is based on the at least three known reference points. The method further computes a location and an orientation of each moving camera of the one or more moving cameras based on one or more of the reference point data and one or more locations of one or more reference cameras of the plurality of reference cameras.
    Type: Application
    Filed: December 11, 2020
    Publication date: March 3, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Publication number: 20220053108
    Abstract: Embodiments provide a wearable article for a performance capture system. In some embodiments, a wearable article includes one or more regions, where the one or more regions are configured to be worn on at least a portion of a body of a user, where the one or more regions have a first pliability and a second pliability, where the first pliability and the second pliability are different pliabilities, and where at least one of the one or more regions is configured to hold devices in predetermined positions while maintaining shape and respective pliability. In some embodiments, the wearable article also includes a plurality of mounting mechanisms coupled to the one or more regions for mounting one or more reference markers to be used for position determination.
    Type: Application
    Filed: April 30, 2021
    Publication date: February 17, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Publication number: 20220050497
    Abstract: Embodiments provide a wearable article with channels for a performance capture system. In some embodiments, a wearable article includes one or more regions of the wearable article configured to be worn on at least a portion of a body of a user. In some embodiments, the wearable article also includes at least one of the one or more regions comprising at least one base layer and at least one secondary layer configured to form at least one connection passage between the at least one base layer and the at least one secondary layer. In some embodiments, the at least one connection passage is configured to provide access for flexible cable connections between at least one reference marker and one or more other reference markers or a control unit. In some embodiments, the at least one connection passage is configured to allow movement of a flexible cable within the connection passage in response to movement of the user.
    Type: Application
    Filed: April 30, 2021
    Publication date: February 17, 2022
    Applicant: Weta Digital Limited
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11250609
    Abstract: Realistic feather growth may be represented between two surface manifolds in a modeling system. To perform the feather growth, a feather groom for a plurality of feathers between an inner shell of a creature and an outer shell of the creature is received. An inner manifold for the inner shell and an outer manifold for the outer shell are determined with a plurality of follicle points and a plurality of tip points. A first surface contour definition for the inner manifold and a second surface contour definition for the outer manifold are determined and used to determine a volumetric vector field between the inner manifold and the outer manifold. Thereafter, the plurality of feathers is generated between the inner manifold and the outer manifold using the follicle points, the tip points, and the volumetric vector field.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: February 15, 2022
    Assignee: Weta Digital Limited
    Inventor: Christoph Sprenger
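As a heavily simplified sketch of growing a feather between the two manifolds, the snippet below steps from a follicle point toward a tip point through a blended direction field, standing in for the patent's volumetric vector field. The inputs (surface normals at the follicle and tip, step count) and the blending scheme are assumptions for illustration only.

```python
# Sketch: grow a feather shaft from an inner-manifold follicle toward an outer-manifold tip.
import numpy as np

def grow_feather(follicle, tip, inner_normal, outer_normal, steps=10):
    """Returns (steps + 1, 3) points from the follicle toward the tip. The direction
    blends the inner-surface normal near the follicle into the follicle-to-tip
    direction near the tip, a crude stand-in for a volumetric vector field."""
    follicle, tip = np.asarray(follicle, float), np.asarray(tip, float)
    shaft = [follicle.copy()]
    p = follicle.copy()
    step_len = np.linalg.norm(tip - follicle) / steps
    for i in range(steps):
        t = i / steps
        to_tip = tip - p
        to_tip /= np.linalg.norm(to_tip)
        n = (1 - t) * np.asarray(inner_normal, float) + t * np.asarray(outer_normal, float)
        d = (1 - t) * (n / np.linalg.norm(n)) + t * to_tip
        p = p + step_len * d / np.linalg.norm(d)
        shaft.append(p.copy())
    return np.array(shaft)

print(grow_feather([0, 0, 0], [0.1, 0, 0.5], [0, 0, 1], [0.2, 0, 1]))
```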