Patents Assigned to UNITY TECHNOLOGIES SF
  • Patent number: 11677928
    Abstract: A captured scene of a live action scene, recorded while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: June 13, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
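    Illustrative sketch (not from the patent): the abstract can be read as using a per-pixel matte to separate the live actor from the display wall and then filtering only the display-wall pixels toward a chosen quality level. The Python below is a minimal stand-in with hypothetical names (adjust_display_wall_quality, a blur as the "quality" adjustment), not the patented method.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adjust_display_wall_quality(frame, matte, blur_size=5):
        """Blend a filtered copy of the frame into the display-wall region.

        frame: HxWx3 float image of the captured live action scene.
        matte: HxW float matte, 1.0 where the display wall is visible,
               0.0 where the live actor is (hypothetical convention).
        blur_size: stand-in control for the target image quality level.
        """
        # Filter each channel; this plays the role of adjusting wall pixels
        # to the determined image quality level.
        filtered = np.stack(
            [uniform_filter(frame[..., c], size=blur_size) for c in range(3)],
            axis=-1,
        )
        # Actor pixels stay untouched; wall pixels take the filtered values.
        m = matte[..., None]
        return (1.0 - m) * frame + m * filtered

    # Synthetic example: right half of the frame is the display wall.
    frame = np.random.rand(480, 640, 3)
    matte = np.zeros((480, 640))
    matte[:, 320:] = 1.0
    adjusted = adjust_display_wall_quality(frame, matte)
    ```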
  • Patent number: 11657563
    Abstract: A computer-implemented method for generating a mask for a light source in a virtual scene includes determining a bounding box for the scene based on a frustum of a virtual camera and generating a path-traced image of the scene within the bounding box. Light paths emitted by the camera and exiting at the light source are stored, and objects poorly sampled by the light source are removed from the scene. An initial mask for the light source is generated from the density of light paths exiting at each position on the light source. The initial mask is refined by averaging in the light path density at each point on the light source for subsequent images.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: May 23, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventor: Jiří Vorba
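    Illustrative sketch (hypothetical, not the patented implementation): the abstract amounts to histogramming, per position on the light source, the density of camera paths that exit there, and then refining the mask by averaging in the densities observed for subsequent images. A minimal numpy version with invented names:

    ```python
    import numpy as np

    def density_to_mask(exit_uvs, resolution=(64, 64)):
        """Histogram light-path exit points (u, v in [0, 1)) into a density mask."""
        hist, _, _ = np.histogram2d(
            exit_uvs[:, 0], exit_uvs[:, 1],
            bins=resolution, range=[[0.0, 1.0], [0.0, 1.0]],
        )
        peak = hist.max()
        return hist / peak if peak > 0 else hist

    def refine_mask(mask, new_mask, frame_index):
        """Average in the light-path density from a subsequent image."""
        return (mask * frame_index + new_mask) / (frame_index + 1)

    # Initial mask from one path-traced image, refined with a second one.
    mask = density_to_mask(np.random.rand(10_000, 2))
    mask = refine_mask(mask, density_to_mask(np.random.rand(10_000, 2)), frame_index=1)
    ```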
  • Patent number: 11636621
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene using fixed cameras and drones. In some embodiments, a method configures a plurality of reference cameras to observe at least three known reference points located in the live action scene and to observe one or more reference points associated with one or more moving cameras having unconstrained motion. The method further configures the one or more moving cameras to observe one or more moving objects in the live action scene. The method further receives reference point data in association with one or more reference cameras of the plurality of reference cameras, where the reference point data is based on the at least three known reference points and the one or more reference points associated with the one or more moving cameras.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: April 25, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11627297
    Abstract: A captured scene of a live action scene, recorded while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. Further, a first portion of the stereoscopic image data comprising a stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene with the display wall displaying the precursor image is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: April 11, 2023
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
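    Illustrative sketch (one plausible reading, not the patented method): stereo disparity gives per-pixel depth, and pixels whose depth matches the display wall's known distance (from the display wall metadata) form one portion of the matte while nearer stage-element pixels form the other. A synthetic depth map stands in for a real stereo pipeline; names are hypothetical.

    ```python
    import numpy as np

    def matte_from_depth(depth, wall_distance, tolerance=0.1):
        """Binary matte: 1 where depth matches the display wall's known
        distance, 0 for nearer stage elements and live actors."""
        return (np.abs(depth - wall_distance) <= tolerance).astype(np.float32)

    # Synthetic example: a display wall at 6.0 m and a stage element at 2.5 m.
    depth = np.full((480, 640), 6.0)
    depth[100:300, 200:400] = 2.5
    matte = matte_from_depth(depth, wall_distance=6.0)
    ```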
  • Patent number: 11625848
    Abstract: Embodiments provide multi-angle screen coverage analysis. In some embodiments, a system obtains a computer graphics generated image having at least one target object for analysis. The system determines screen coverage information and depth information for the at least one target object. The system then determines an asset detail level for the at least one target object based on the screen coverage information and the depth information. The system then stores the asset detail level in a database, and makes the asset detail level available to users.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: April 11, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventor: Kenneth Gimpelson
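    Illustrative sketch (thresholds invented, not specified by the patent): screen coverage (fraction of the frame covered by the target object) and depth are combined into a discrete asset detail level, which is then stored for later lookup.

    ```python
    def asset_detail_level(coverage, depth, far_plane=100.0):
        """Pick a level of detail from screen coverage (0..1) and depth."""
        score = coverage * (1.0 - min(depth / far_plane, 1.0))
        if score > 0.25:
            return "high"
        if score > 0.05:
            return "medium"
        return "low"

    detail_db = {}  # stand-in for the database mentioned in the abstract
    detail_db["hero_tree"] = asset_detail_level(coverage=0.3, depth=12.0)
    ```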
  • Patent number: 11625882
    Abstract: A method for generating one or more visual representations of a porous media submerged in a fluid is provided. The method can be performed using a computing device operated by a computer user or artist. The method includes defining a field comprising fluid parameter values for the fluid, the fluid parameter values comprising fluid velocity values and pore pressures. The method includes generating a plurality of particles that model a plurality of objects of the porous media, the plurality of objects being independently movable with respect to one another, determining values of motion parameters based at least in part on the field when the plurality of particles are submerged in the fluid, buoyancy and drag forces being used to determine relative motion of the plurality of particles and the fluid, and generating the one or more visual representations of the plurality of objects submerged in the fluid based on the values of the motion parameters.
    Type: Grant
    Filed: November 10, 2021
    Date of Patent: April 11, 2023
    Assignee: Unity Technologies SF
    Inventors: Alexey Stomakhin, Joel Wretborn, Gilles Daviet
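    Illustrative sketch (toy integrator, invented constants): the particles model independently movable objects of the porous media, and buoyancy plus drag against a sampled fluid velocity field determine their motion relative to the fluid, as described in the abstract.

    ```python
    import numpy as np

    GRAVITY = np.array([0.0, 0.0, -9.81])

    def step_particles(pos, vel, fluid_velocity_at, dt=1e-2,
                       rho_fluid=1000.0, rho_particle=2500.0,
                       volume=1e-6, drag_coeff=0.5):
        """Advance submerged porous-media particles by one time step.

        fluid_velocity_at(pos) samples the fluid parameter field (hypothetical API).
        """
        mass = rho_particle * volume
        f_buoyancy = -rho_fluid * volume * GRAVITY            # displaced-fluid weight
        f_drag = drag_coeff * (fluid_velocity_at(pos) - vel)  # pull toward fluid velocity
        accel = (mass * GRAVITY + f_buoyancy + f_drag) / mass
        vel = vel + dt * accel
        pos = pos + dt * vel
        return pos, vel

    # Synthetic field: a uniform horizontal current.
    pos, vel = np.random.rand(100, 3), np.zeros((100, 3))
    pos, vel = step_particles(pos, vel, lambda p: np.array([0.1, 0.0, 0.0]))
    ```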
  • Patent number: 11625900
    Abstract: A first set of instance layer data that describes a scene to be represented by one or more computer-generated images is obtained. The set of instance layer data specifies a plurality of object instances within the scene, with each instance of the plurality of object instances corresponding to a position that an instance of a digital object is to appear in the scene. The set of instance layer data further specifies a first set of characteristics of the plurality of object instances that includes the position. A second set of instance layer data that indicates changes to be made to the scene described by the first set of instance layer data is obtained. A third set of instance layer data is generated to include the changes to the scene by overlaying the second set of instance layer data onto the first set of instance layer data. The scene is caused to be rendered by providing the third set of instance layer data to an instancing service.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: April 11, 2023
    Assignee: Unity Technologies SF
    Inventors: Nick S. Shore, Oliver M. Castle, Timothy E. Murphy
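    Illustrative sketch (hypothetical data layout): each set of instance layer data can be pictured as a mapping from instance id to its characteristics; overlaying applies the second layer's updates, additions, and removals on top of the first to produce the third layer that is handed to the instancing service.

    ```python
    REMOVE = object()  # sentinel marking an instance removed by the overlay layer

    def overlay_instance_layers(base_layer, change_layer):
        """Generate a third instance layer by overlaying changes onto a base layer.

        Each layer maps instance_id -> {"asset": ..., "position": (x, y, z), ...}.
        """
        result = dict(base_layer)
        for instance_id, change in change_layer.items():
            if change is REMOVE:
                result.pop(instance_id, None)
            else:
                merged = dict(result.get(instance_id, {}))
                merged.update(change)  # override selected characteristics
                result[instance_id] = merged
        return result

    base = {"rock_001": {"asset": "rock", "position": (0, 0, 0)}}
    changes = {"rock_001": {"position": (1, 0, 0)},
               "tree_042": {"asset": "tree", "position": (5, 2, 0)}}
    final_layer = overlay_instance_layers(base, changes)  # passed to the instancing service
    ```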
  • Patent number: 11620765
    Abstract: Embodiments provide for automated detection of a calibration object within a recorded image. In some embodiments, a system receives an original image from a camera, wherein the original image includes at least a portion of a calibration chart. The system further derives a working image from the original image. The system further determines regions in the working image, wherein each region comprises a group of pixels having values satisfying a predetermined criterion. The system further analyzes two or more of the regions to identify a candidate calibration chart in the working image. The system further identifies at least one region within the candidate calibration chart as a patch. The system further predicts a location of one or more additional patches based on at least the identified patch.
    Type: Grant
    Filed: October 7, 2020
    Date of Patent: April 4, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventor: Peter Hillman
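    Illustrative sketch (region-finding step only, invented criterion): a working image is derived from the original, pixels are quantized into bands of similar value, and connected components of each band become candidate regions that later stages would test as chart patches. scipy connected-component labelling is used as a stand-in.

    ```python
    import numpy as np
    from scipy.ndimage import label

    def find_regions(original, n_levels=8, min_pixels=50):
        """Derive a working image and group pixels into candidate regions.

        original: HxW grayscale image in [0, 1]. Pixels are quantized into
        n_levels bands (the "predetermined criterion" here), and connected
        components of each band become regions.
        """
        working = np.clip(original, 0.0, 1.0)  # derived working image
        quantized = np.floor(working * n_levels).astype(int)
        regions = []
        for level in range(n_levels + 1):
            labeled, count = label(quantized == level)
            for region_id in range(1, count + 1):
                mask = labeled == region_id
                if mask.sum() >= min_pixels:
                    regions.append(mask)
        return regions

    image = np.random.rand(120, 160)
    candidate_regions = find_regions(image)
    ```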
  • Patent number: 11615755
    Abstract: The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: March 28, 2023
    Assignee: Unity Technologies SF
    Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Carter Bart Sullivan
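    Illustrative sketch (criterion invented): one reading of the abstract is that regions of the input image with little detail (here, low local gradient) are detected, mapped to the corresponding regions of the selective screen, and the screen is disabled there so the display runs brighter.

    ```python
    import numpy as np

    def selective_screen_disable_mask(image, gradient_threshold=0.02):
        """Boolean mask of screen regions to disable (True = disable).

        image: HxW grayscale input image in [0, 1]. Regions with small local
        gradients are treated as needing no extra detail, so the selective
        screen can be disabled there to raise the effective luminance.
        """
        gy, gx = np.gradient(image)
        detail = np.hypot(gx, gy)
        return detail < gradient_threshold

    image = np.random.rand(270, 480) * 0.01 + 0.5   # mostly flat input image
    disable = selective_screen_disable_mask(image)  # True where the screen is bypassed
    ```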
  • Patent number: 11605171
    Abstract: A compositor generates a representation of a gradient image corresponding to a replaceable background object in a scene. The representation can be generated from a user-specified set of reference points in the scene.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: March 14, 2023
    Assignee: Unity Technologies SF
    Inventor: Peter Hillman
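    Illustrative sketch (a simple choice of representation, not necessarily the patented one): a planar gradient fitted by least squares to the user's reference points is one compact representation of a gradient image for a replaceable background object.

    ```python
    import numpy as np

    def fit_gradient(points, values, height, width):
        """Fit value = a*x + b*y + c to reference points, then evaluate per pixel.

        points: (N, 2) array of (x, y) pixel coordinates picked by the user.
        values: (N,) background values sampled at those points.
        """
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, c), *_ = np.linalg.lstsq(A, values, rcond=None)
        ys, xs = np.mgrid[0:height, 0:width]
        return a * xs + b * ys + c  # the gradient image representation

    pts = np.array([[0, 0], [639, 0], [0, 479], [639, 479]], dtype=float)
    vals = np.array([0.2, 0.25, 0.6, 0.7])
    gradient_image = fit_gradient(pts, vals, height=480, width=640)
    ```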
  • Patent number: 11600022
    Abstract: Embodiments facilitate the calibration of cameras in a live action scene using drones. In some embodiments, a method configures a plurality of reference cameras to observe at least one portion of the live action scene. The method further configures one or more moving cameras having unconstrained motion to observe one or more moving objects in the live action scene and to observe at least three known reference points associated with the plurality of reference cameras. The method further receives reference point data in association with the one or more moving cameras, where the reference point data is based on the at least three known reference points. The method further computes a location and an orientation of each moving camera of the one or more moving cameras based on one or more of the reference point data and one or more locations of one or more reference cameras of the plurality of reference cameras.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: March 7, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting
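    Illustrative sketch (a generic solver, not necessarily the patented computation): with at least three reference points known both in the stage (world) frame and in the moving camera's frame, a rigid transform recovered by the Kabsch/Procrustes method yields the camera's orientation and location.

    ```python
    import numpy as np

    def camera_pose_from_points(points_world, points_camera):
        """Recover R, t such that points_world ≈ R @ points_camera + t.

        points_world / points_camera: (N, 3) matched reference points, N >= 3.
        """
        cw, cc = points_world.mean(axis=0), points_camera.mean(axis=0)
        H = (points_camera - cc).T @ (points_world - cw)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = cw - R @ cc
        return R, t  # camera orientation and location in the stage frame

    world = np.array([[0, 0, 0], [4, 0, 0], [0, 3, 0], [1, 1, 2]], dtype=float)
    rotation = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    camera_frame = (world - np.array([2.0, 1.0, 5.0])) @ rotation
    R, t = camera_pose_from_points(world, camera_frame)  # t ≈ (2, 1, 5)
    ```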
  • Patent number: 11600041
    Abstract: The system obtains an indication of a shape of a cross-section of an elongated shape, and an orientation of the shape. Based on the shape of the cross-section of the elongated shape and the orientation of the shape, the system creates a nonuniform distribution of random numbers mapping uniformly distributed input values to multiple sample points on the surface of the elongated shape. The system provides an input value randomly selected from a uniform distribution of random numbers to the nonuniform distribution of random numbers to obtain a sample point among the multiple sample points on the surface of the elongated shape. The system applies a function to the input value to obtain an indication of a normal associated with the sample point among the multiple sample points. Finally, the system computes an illumination of the elongated shape using the normal.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: March 7, 2023
    Assignee: Unity Technologies SF
    Inventor: Andrea Weidlich
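    Illustrative sketch (mechanics only, with an elliptical cross-section standing in for the patented distribution): a numerically built inverse CDF maps a uniform input value to a point on the cross-section that is uniform in arc length, the same input gives the surface normal, and the normal feeds a simple Lambertian illumination term.

    ```python
    import numpy as np

    def build_inverse_cdf(a=1.0, b=0.3, samples=2048):
        """Nonuniform mapping from u in [0, 1) to an angle on an elliptical
        cross-section (semi-axes a, b), uniform in arc length."""
        theta = np.linspace(0.0, 2.0 * np.pi, samples)
        ds = np.hypot(-a * np.sin(theta), b * np.cos(theta))  # arc-length element
        cdf = np.cumsum(ds)
        cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
        return lambda u: np.interp(u, cdf, theta)

    def sample_and_shade(u, inv_cdf, a=1.0, b=0.3,
                         light_dir=np.array([0.0, 1.0, 0.0])):
        """Map a uniform input value to a sample point and its normal, then
        compute a simple Lambertian illumination term."""
        theta = inv_cdf(u)
        point = np.array([a * np.cos(theta), b * np.sin(theta), 0.0])
        normal = np.array([b * np.cos(theta), a * np.sin(theta), 0.0])
        normal /= np.linalg.norm(normal)
        return point, max(float(np.dot(normal, light_dir)), 0.0)

    inv_cdf = build_inverse_cdf()
    point, radiance = sample_and_shade(np.random.rand(), inv_cdf)
    ```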
  • Patent number: 11593584
    Abstract: A computer-implemented method for processing a set of virtual fibers into a set of clusters of virtual fibers, usable for manipulation on a cluster basis in a computer graphics generation system, may include determining aspects for virtual fibers in the set of virtual fibers, determining similarity scores between the virtual fibers based on their aspects, and determining an initial cluster comprising the virtual fibers of the set of virtual fibers. The method may further include instantiating a cluster list in at least one memory, adding the initial cluster to the cluster list, partitioning the initial cluster into a first subsequent cluster and a second subsequent cluster based on similarity scores among fibers in the initial cluster, adding the first subsequent cluster and the second subsequent cluster to the cluster list, and testing whether a number of clusters in the cluster list is below a predetermined threshold.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: February 28, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventor: Olivier Gourmel
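    Illustrative sketch (the per-fibre aspect and similarity measure are invented stand-ins): each fibre gets an aspect (here a root-to-tip direction), similarity is a dot product, and the cluster list grows by repeatedly splitting a cluster around its two least-similar members until the requested cluster count is reached.

    ```python
    import numpy as np

    def fiber_aspect(fiber_points):
        """Invented aspect: unit vector from a fibre's root to its tip."""
        d = fiber_points[-1] - fiber_points[0]
        return d / (np.linalg.norm(d) + 1e-12)

    def split_cluster(aspects, cluster):
        """Partition a cluster in two, seeded by its least-similar pair of fibres."""
        sims = aspects[cluster] @ aspects[cluster].T
        i, j = np.unravel_index(np.argmin(sims), sims.shape)
        first, second = [], []
        for k in cluster:
            closer_to_i = aspects[k] @ aspects[cluster[i]] >= aspects[k] @ aspects[cluster[j]]
            (first if closer_to_i else second).append(k)
        return first, second

    def cluster_fibers(fibers, max_clusters=4):
        aspects = np.array([fiber_aspect(f) for f in fibers])
        cluster_list = [list(range(len(fibers)))]      # the initial cluster
        while len(cluster_list) < max_clusters:        # threshold test on cluster count
            cluster = max(cluster_list, key=len)       # split the largest cluster
            if len(cluster) < 2:
                break
            cluster_list.remove(cluster)
            cluster_list.extend(split_cluster(aspects, cluster))
        return cluster_list

    fibers = [np.cumsum(np.random.randn(10, 3) * 0.1, axis=0) for _ in range(200)]
    clusters = cluster_fibers(fibers)
    ```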
  • Patent number: 11593993
    Abstract: A captured scene of a live action scene, recorded while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: February 28, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
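    Illustrative sketch (a minimal compositing step with hypothetical names): given the matte separating the live actor from the display wall, the determined pixel display values for the effect are blended only into the display-wall portion of the image data.

    ```python
    import numpy as np

    def apply_effect_with_matte(frame, effect_pixels, matte, strength=1.0):
        """Blend effect pixel values into the display-wall portion of the frame.

        frame, effect_pixels: HxWx3 float images.
        matte: HxW float matte, 1.0 over the display wall, 0.0 over the live actor.
        """
        m = strength * matte[..., None]
        return (1.0 - m) * frame + m * effect_pixels

    frame = np.random.rand(480, 640, 3)
    effect = np.zeros_like(frame)
    effect[..., 2] = 1.0                 # e.g. a blue atmosphere pass
    matte = np.zeros((480, 640))
    matte[:, 320:] = 1.0                 # right half = display wall
    adjusted = apply_effect_with_matte(frame, effect, matte, strength=0.4)
    ```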
  • Patent number: 11587278
    Abstract: Embodiments described herein provide an approach for animating a character face of an artificial character based on facial poses performed by a live actor. Geometric characteristics of the facial surface corresponding to each facial pose performed by the live actor may be learned by a machine learning system, which in turn builds a mesh of a facial rig with an array of controllable elements applicable to the character face of the artificial character.
    Type: Grant
    Filed: August 16, 2021
    Date of Patent: February 21, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Wan-duo Kurt Ma, Muhammad Ghifary
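    Illustrative sketch (a deliberately simplified stand-in for the machine learning system): a linear least-squares map from flattened facial-surface geometry to rig control values, fit from captured poses and then used to drive the controllable elements of the character face. Shapes and names are hypothetical.

    ```python
    import numpy as np

    def fit_geometry_to_controls(pose_vertices, pose_controls):
        """Fit controls ≈ [vertices, 1] @ W from captured facial poses.

        pose_vertices: (num_poses, num_vertices * 3) flattened facial geometry.
        pose_controls: (num_poses, num_controls) rig control values per pose.
        """
        X = np.hstack([pose_vertices, np.ones((pose_vertices.shape[0], 1))])
        W, *_ = np.linalg.lstsq(X, pose_controls, rcond=None)
        return W  # (num_vertices * 3 + 1, num_controls)

    def drive_rig(W, vertices):
        """Map a new facial-surface capture to controllable-element values."""
        return np.append(vertices, 1.0) @ W

    poses = np.random.rand(50, 300)     # 50 captured poses, 100 vertices each
    controls = np.random.rand(50, 12)   # 12 rig controls per pose
    W = fit_geometry_to_controls(poses, controls)
    rig_values = drive_rig(W, np.random.rand(300))
    ```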
  • Patent number: 11562522
    Abstract: An aspect provides a computer-implemented method for compiling software code. The method comprises: receiving software code to compile; receiving a set of parameters associated with settings and software employed to compile the software code; forming a first hash of the set of parameters to establish a unique identification of the set of parameters used to compile the software code; and associating the first hash with the compiled code. A further aspect provides a computer-implemented method of checking compatibility of compiled software code.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: January 24, 2023
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Florian Deconinck, Sander van der Steen, Richard Chi Lei, Adam Christensen, Niall J. Lenihan
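    Illustrative sketch (hypothetical parameter names): a canonical serialization of the compile settings is hashed, the hash is associated with the compiled artifact, and a later compatibility check recomputes the hash from the current settings and compares.

    ```python
    import hashlib
    import json

    def hash_build_parameters(params: dict) -> str:
        """First hash: a unique identification of the settings and software used."""
        canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def compile_with_hash(source: str, params: dict) -> dict:
        """Associate the parameter hash with the (stand-in) compiled code."""
        return {"object_code": source.encode(), "param_hash": hash_build_parameters(params)}

    def is_compatible(artifact: dict, current_params: dict) -> bool:
        """Compatibility check: does the compiled code match the current settings?"""
        return artifact["param_hash"] == hash_build_parameters(current_params)

    params = {"compiler": "clang", "version": "15.0.0", "flags": ["-O2", "-g"]}
    artifact = compile_with_hash("int main() { return 0; }", params)
    assert is_compatible(artifact, params)
    assert not is_compatible(artifact, {**params, "flags": ["-O3"]})
    ```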
  • Patent number: 11537162
    Abstract: Embodiments provide a wearable article for a performance capture system. In some embodiments, a wearable article includes one or more regions, where the one or more regions are configured to be worn on at least a portion of a body of a user, where the one or more regions have a first pliability and a second pliability, where the first pliability and the second pliability are different pliabilities, and where at least one of the one or more regions are configured to hold devices in predetermined positions while maintaining shape and respective pliability. In some embodiments, the wearable article also includes a plurality of mounting mechanisms coupled to the one or more regions for mounting one or more reference markers to be used for position determination.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: December 27, 2022
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11514654
    Abstract: Methods and systems are presented for determining a virtual focus model for a camera apparatus, the camera apparatus comprising one or more image capture elements and one or more optics devices through which light in an optical path passes from a stage environment to at least one of the one or more image capture elements, the stage environment including a virtual scene display for displaying a virtual scene.
    Type: Grant
    Filed: December 9, 2021
    Date of Patent: November 29, 2022
    Assignee: Unity Technologies SF
    Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
  • Patent number: 11508081
    Abstract: A sealed active marker apparatus of a performance capture system is described that provides protective housing for active marker light components coupled to a strand and attached, via a receptacle, to an object in a live action scene, such as via a wearable article. The receptacle includes a protrusion portion that permits at least one particular wavelength range of light emitted from the enclosed active marker light component to diffuse in a manner that enables easy detection by a sensor device. A base portion interlocks with a bottom plate of the receptacle to secure the strand within one or more channels. A sealant material coating portions of the apparatus promotes an insulating environment for the active marker light component.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: November 22, 2022
    Assignee: UNITY TECHNOLOGIES SF
    Inventors: Dejan Momcilovic, Jake Botting
  • Patent number: 11508108
    Abstract: A method for generating one or more visual representations of a porous media submerged in a fluid is provided. The method can be performed using a computing device operated by a computer user or artist. The method includes defining a field comprising fluid parameter values for the fluid, the fluid parameter values comprising fluid velocity values and pore pressures. The method includes generating a plurality of particles that model a plurality of objects of the porous media, the plurality of objects being independently movable with respect to one another, determining values of motion parameters based at least in part on the field when the plurality of particles are submerged in the fluid, buoyancy and drag forces being used to determine relative motion of the plurality of particles and the fluid, and generating the one or more visual representations of the plurality of objects submerged in the fluid based on the values of the motion parameters.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: November 22, 2022
    Assignee: Unity Technologies SF
    Inventors: Alexey Stomakhin, Joel Wretborn, Gilles Daviet