Patents Assigned to UNITY TECHNOLOGIES SF
-
Patent number: 11677928
Abstract: A scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
Type: Grant
Filed: December 10, 2021
Date of Patent: June 13, 2023
Assignee: Unity Technologies SF
Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
-
Patent number: 11657563
Abstract: A computer-implemented method for generating a mask for a light source in a virtual scene includes determining a bounding box for the scene based on a frustum of a virtual camera and generating a path-traced image of the scene within the bounding box. Light paths emitted by the camera and exiting at the light source are stored, and objects poorly sampled by the light source are removed from the scene. An initial mask for the light source is generated from the density of light paths exiting at that position on the light source. The initial mask is refined by averaging in the light path density at each point on the light source for subsequent images.
Type: Grant
Filed: June 29, 2021
Date of Patent: May 23, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventor: Jiří Vorba
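The mask-generation flow this abstract describes (histogram the density of light-path exit points over the light source, then refine by averaging in subsequent frames) can be illustrated with a minimal sketch. This is not the patented implementation; the unit-square light parameterization, the resolution, and the function names are assumptions for illustration:

```python
# Illustrative sketch only: build a light-source mask from the density of
# light-path exit points, then refine it with a running average over frames.

def build_initial_mask(exit_points, resolution=8):
    """Histogram of path-exit density over a unit-square light source,
    normalized so the densest cell is 1.0. Points are (u, v) in [0, 1)."""
    counts = [[0] * resolution for _ in range(resolution)]
    for u, v in exit_points:
        counts[int(v * resolution)][int(u * resolution)] += 1
    peak = max(max(row) for row in counts) or 1
    return [[c / peak for c in row] for row in counts]

def refine_mask(mask, new_mask, frame_index):
    """Running average: fold each subsequent frame's density into the mask."""
    w = 1.0 / (frame_index + 1)
    return [[(1 - w) * m + w * n for m, n in zip(mr, nr)]
            for mr, nr in zip(mask, new_mask)]
```

The running-average weight `1 / (frame_index + 1)` makes the refined mask converge toward the mean density across all frames seen so far.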
-
Patent number: 11636621
Abstract: Embodiments facilitate the calibration of cameras in a live action scene using fixed cameras and drones. In some embodiments, a method configures a plurality of reference cameras to observe at least three known reference points located in the live action scene and to observe one or more reference points associated with one or more moving cameras having unconstrained motion. The method further configures the one or more moving cameras to observe one or more moving objects in the live action scene. The method further receives reference point data in association with one or more reference cameras of the plurality of reference cameras, where the reference point data is based on the at least three known reference points and the one or more reference points associated with the one or more moving cameras.
Type: Grant
Filed: December 11, 2020
Date of Patent: April 25, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventors: Dejan Momcilovic, Jake Botting
-
Patent number: 11627297
Abstract: A scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. Further, a first portion of the stereoscopic image data comprising a stage element in the live action scene is determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data comprising the display wall in the live action scene with the display wall displaying the precursor image is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
Type: Grant
Filed: December 10, 2021
Date of Patent: April 11, 2023
Assignee: Unity Technologies SF
Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
-
Patent number: 11625848
Abstract: Embodiments provide multi-angle screen coverage analysis. In some embodiments, a system obtains a computer graphics generated image having at least one target object for analysis. The system determines screen coverage information and depth information for the at least one target object. The system then determines an asset detail level for the at least one target object based on the screen coverage information and the depth information. The system then stores the asset detail level in a database, and makes the asset detail level available to users.
Type: Grant
Filed: October 15, 2020
Date of Patent: April 11, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventor: Kenneth Gimpelson
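A mapping from screen coverage and depth to an asset detail level could look something like the following sketch. The tier thresholds, the near/far planes, and the function name are invented for illustration and are not taken from the patent:

```python
# Illustrative sketch only: combine screen coverage and depth into a
# coarse level-of-detail tier for an asset.

def asset_detail_level(coverage_fraction, depth, near=1.0, far=100.0):
    """coverage_fraction: fraction of the screen the object covers (0..1).
    depth: distance from camera; closer objects weigh more."""
    depth_weight = 1.0 - min(max((depth - near) / (far - near), 0.0), 1.0)
    score = coverage_fraction * depth_weight
    if score > 0.25:
        return "high"
    if score > 0.05:
        return "medium"
    return "low"
```

A large object near the camera scores "high"; a small object at the far plane scores "low", so distant background assets can be built at reduced detail.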
-
Patent number: 11625882
Abstract: A method for generating one or more visual representations of a porous media submerged in a fluid is provided. The method can be performed using a computing device operated by a computer user or artist. The method includes defining a field comprising fluid parameter values for the fluid, the fluid parameter values comprising fluid velocity values and pore pressures. The method includes generating a plurality of particles that model a plurality of objects of the porous media, the plurality of objects being independently movable with respect to one another, determining values of motion parameters based at least in part on the field when the plurality of particles are submerged in the fluid, buoyancy and drag forces being used to determine relative motion of the plurality of particles and the fluid, and generating the one or more visual representations of the plurality of objects submerged in the fluid based on the values of the motion parameters.
Type: Grant
Filed: November 10, 2021
Date of Patent: April 11, 2023
Assignee: Unity Technologies SF
Inventors: Alexey Stomakhin, Joel Wretborn, Gilles Daviet
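The buoyancy-and-drag coupling the abstract describes can be sketched as one explicit integration step for a single particle in one dimension. The linear drag model and all parameter names are illustrative assumptions, not the patented formulation:

```python
# Illustrative sketch only: one explicit Euler step for a particle of the
# porous media, driven by gravity, Archimedes buoyancy, and linear drag
# toward the local fluid velocity sampled from the field.

def step_particle(vel_p, vel_fluid, mass, volume, rho_fluid,
                  drag_coeff, dt, g=-9.81):
    """Return the particle's updated velocity after one time step dt."""
    f_gravity = mass * g
    f_buoyancy = -rho_fluid * volume * g       # opposes gravity
    f_drag = drag_coeff * (vel_fluid - vel_p)  # pulls particle toward fluid
    accel = (f_gravity + f_buoyancy + f_drag) / mass
    return vel_p + accel * dt
```

A neutrally buoyant particle moving with the fluid feels zero net force; a denser particle sinks relative to the fluid, which is what makes the particles "independently movable with respect to one another."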
-
Patent number: 11625900
Abstract: A first set of instance layer data that describes a scene to be represented by one or more computer-generated images is obtained. The set of instance layer data specifies a plurality of object instances within the scene, with each instance of the plurality of object instances corresponding to a position that an instance of a digital object is to appear in the scene. The set of instance layer data further specifies a first set of characteristics of the plurality of object instances that includes the position. A second set of instance layer data that indicates changes to be made to the scene described by the first set of instance layer data is obtained. A third set of instance layer data is generated to include the changes to the scene by overlaying the second set of instance layer data onto the first set of instance layer data. The scene is caused to be rendered by providing the third set of instance layer data to an instancing service.
Type: Grant
Filed: October 22, 2020
Date of Patent: April 11, 2023
Assignee: Unity Technologies SF
Inventors: Nick S. Shore, Oliver M. Castle, Timothy E. Murphy
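The overlay operation (a third layer produced by applying a change layer on top of a base layer) can be sketched with plain dictionaries keyed by instance id. The data shapes and the function name are assumptions for illustration:

```python
# Illustrative sketch only: overlay a second instance layer onto a first,
# producing a third layer. Per-instance attribute dicts from the override
# layer replace or extend matching entries in the base layer; instances
# that appear only in the override layer are added.

def overlay_layers(base_layer, override_layer):
    merged = {iid: dict(attrs) for iid, attrs in base_layer.items()}
    for iid, attrs in override_layer.items():
        merged.setdefault(iid, {}).update(attrs)
    return merged
```

The base layer is left untouched, so the same base scene can be combined with many alternative change layers before handing the result to the instancing service.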
-
Patent number: 11620765
Abstract: Embodiments provide for automated detection of a calibration object within a recorded image. In some embodiments, a system receives an original image from a camera, wherein the original image includes at least a portion of a calibration chart. The system further derives a working image from the original image. The system further determines regions in the working image, wherein each region comprises a group of pixels having values within a predetermined criterion. The system further analyzes two or more of the regions to identify a candidate calibration chart in the working image. The system further identifies at least one region within the candidate calibration chart as a patch. The system further predicts a location of one or more additional patches based on at least the identified patch.
Type: Grant
Filed: October 7, 2020
Date of Patent: April 4, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventor: Peter Hillman
-
Patent number: 11615755
Abstract: The disclosed system modifies luminance of a display associated with a selective screen. The display provides a camera with an image having resolution higher than the resolution of the display by presenting multiple images while the selective screen enables light from different portions of the multiple images to reach the camera. The resulting luminance of the recorded image is lower than a combination of luminance values of the multiple images. The processor obtains a criterion indicating a property of the input image where image detail is unnecessary. The processor detects a region of the input image satisfying the criterion, and determines a region of the selective screen corresponding to the region of the input image. The processor increases the luminance of the display by disabling the region of the selective screen corresponding to the region of the input image.
Type: Grant
Filed: August 8, 2022
Date of Patent: March 28, 2023
Assignee: Unity Technologies SF
Inventors: Joseph W. Marks, Luca Fascione, Kimball D. Thurston, III, Millie Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman, Jonathan S. Swartz, Carter Bart Sullivan
-
Patent number: 11605171
Abstract: A compositor generates a representation of a gradient image corresponding to a replaceable background object in a scene. The representation can be generated from a user-selected set of reference points in the scene.
Type: Grant
Filed: March 5, 2021
Date of Patent: March 14, 2023
Assignee: Unity Technologies SF
Inventor: Peter Hillman
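One simple way to represent a gradient from user reference points is to interpolate between two picked scanlines. The abstract does not specify the representation, so this vertical-gradient sketch and its names are purely illustrative:

```python
# Illustrative sketch only: a per-scanline gradient value interpolated
# linearly between two user-picked reference scanlines (y0, y1) with
# values (value0, value1), clamped outside the picked range.

def build_gradient_rows(y0, value0, y1, value1, height):
    out = []
    for y in range(height):
        t = (y - y0) / (y1 - y0)
        t = min(max(t, 0.0), 1.0)
        out.append(value0 + t * (value1 - value0))
    return out
```

Such a compact representation can stand in for a smoothly varying background (e.g. a green screen with uneven lighting) when generating a matte.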
-
Patent number: 11600022
Abstract: Embodiments facilitate the calibration of cameras in a live action scene using drones. In some embodiments, a method configures a plurality of reference cameras to observe at least one portion of the live action scene. The method further configures one or more moving cameras having unconstrained motion to observe one or more moving objects in the live action scene and to observe at least three known reference points associated with the plurality of reference cameras. The method further receives reference point data in association with the one or more moving cameras, where the reference point data is based on the at least three known reference points. The method further computes a location and an orientation of each moving camera of the one or more moving cameras based on one or more of the reference point data and one or more locations of one or more reference cameras of the plurality of reference cameras.
Type: Grant
Filed: December 11, 2020
Date of Patent: March 7, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventors: Dejan Momcilovic, Jake Botting
-
Patent number: 11600041
Abstract: The system obtains an indication of a shape of a cross-section of an elongated shape, and an orientation of the shape. Based on the shape of the cross-section of the elongated shape and the orientation of the shape, the system creates a nonuniform distribution of random numbers mapping uniformly distributed input values to multiple sample points on the surface of the elongated shape. The system provides an input value randomly selected from a uniform distribution of random numbers to the nonuniform distribution of random numbers to obtain a point among the multiple sample points on the surface of the elongated shape. The system applies a function to the input value to obtain an indication of a normal associated with the sample point. Finally, the system computes an illumination of the elongated shape using the normal.
Type: Grant
Filed: December 8, 2021
Date of Patent: March 7, 2023
Assignee: Unity Technologies SF
Inventor: Andrea Weidlich
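The idea of warping a uniform input value into a nonuniform distribution of surface points, with a function of the same input yielding the normal, can be sketched for a circular cross-section. The particular warp, the viewer convention, and the Lambertian shading are assumptions for illustration, not the patented mapping:

```python
# Illustrative sketch only: map a uniform u in [0, 1) nonuniformly onto the
# visible half of a unit circular cross-section, and derive the normal from
# the same input value. On a unit circle the normal equals the position.
import math

def sample_cross_section(u):
    theta = math.asin(2.0 * u - 1.0)        # inverse-CDF style warp
    point = (math.cos(theta), math.sin(theta))
    normal = point
    return point, normal

def illuminate(normal, light_dir):
    """Simple Lambertian term using the sampled normal."""
    return max(0.0, normal[0] * light_dir[0] + normal[1] * light_dir[1])
```

Because the same input value drives both the sample position and the normal, no separate surface lookup is needed to shade the sample.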
-
Patent number: 11593584
Abstract: A computer-implemented method for processing a set of virtual fibers into a set of clusters of virtual fibers, usable for manipulation on a cluster basis in a computer graphics generation system, may include determining aspects for virtual fibers in the set of virtual fibers, determining similarity scores between the virtual fibers based on their aspects, and determining an initial cluster comprising the virtual fibers of the set of virtual fibers. The method may further include instantiating a cluster list in at least one memory, adding the initial cluster to the cluster list, partitioning the initial cluster into a first subsequent cluster and a second subsequent cluster based on similarity scores among fibers in the initial cluster, adding the first subsequent cluster and the second subsequent cluster to the cluster list, and testing whether a number of clusters in the cluster list is below a predetermined threshold.
Type: Grant
Filed: November 13, 2020
Date of Patent: February 28, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventor: Olivier Gourmel
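The partition-until-threshold loop the abstract describes can be sketched as a simple recursive bisection driven by a similarity score. The seed choice, the 0.5 similarity cutoff, and the function names are illustrative assumptions:

```python
# Illustrative sketch only: start with one cluster holding all fibers and
# repeatedly split the largest cluster in two, by similarity to its first
# fiber, until the cluster list reaches the requested size.

def cluster_fibers(fibers, similarity, max_clusters):
    clusters = [list(fibers)]
    while len(clusters) < max_clusters:
        clusters.sort(key=len, reverse=True)
        big = clusters.pop(0)
        if len(big) < 2:
            clusters.append(big)
            break
        seed = big[0]
        near = [f for f in big if similarity(seed, f) >= 0.5]
        far = [f for f in big if similarity(seed, f) < 0.5]
        if not far:                  # cannot split further
            clusters.append(big)
            break
        clusters.extend([near, far])
    return clusters
```

For hair or fur grooming, fibers here would be curve aspects (root position, direction, length); this sketch just uses whatever the supplied similarity function compares.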
-
Patent number: 11593993
Abstract: A scene captured of a live action scene while a display wall is positioned to be part of the live action scene may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
Type: Grant
Filed: December 10, 2021
Date of Patent: February 28, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventors: Kimball D. Thurston, III, Peter M. Hillman, Joseph W. Marks, Luca Fascione, Millicent Lillian Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller
-
Patent number: 11587278
Abstract: Embodiments described herein provide an approach of animating a character face of an artificial character based on facial poses performed by a live actor. Geometric characteristics of the facial surface corresponding to each facial pose performed by the live actor may be learnt by a machine learning system, which in turn builds a mesh of a facial rig comprising an array of controllable elements applicable to a character face of an artificial character.
Type: Grant
Filed: August 16, 2021
Date of Patent: February 21, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventors: Wan-duo Kurt Ma, Muhammad Ghifary
-
Patent number: 11562522
Abstract: An aspect provides a computer-implemented method for compiling software code. The method comprises: receiving software code to compile; receiving a set of parameters associated with settings and software employed to compile the software code; forming a first hash of the set of parameters to establish a unique identification of the set of parameters used to compile the software code; and associating the first hash with the compiled code. A further aspect provides a computer-implemented method of checking compatibility of compiled software code.
Type: Grant
Filed: April 15, 2021
Date of Patent: January 24, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventors: Florian Deconinck, Sander van der Steen, Richard Chi Lei, Adam Christensen, Niall J. Lenihan
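Hashing a set of compile parameters into a unique identification, and comparing hashes to check compatibility, can be sketched as follows. The canonicalization via sorted JSON and the SHA-256 choice are assumptions for illustration, not necessarily what the patent specifies:

```python
# Illustrative sketch only: derive a stable hash from a set of compile
# parameters so identical configurations always yield the same identifier,
# then compare hashes to check compatibility of compiled artifacts.
import hashlib
import json

def parameter_hash(params):
    """Canonicalize (sorted keys, fixed separators) before hashing, so
    key order in the input dict does not change the identifier."""
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def compatible(hash_a, hash_b):
    """Artifacts built with identical parameter sets are compatible."""
    return hash_a == hash_b
```

The canonicalization step is the important part: without it, two builds with the same settings listed in different order would hash differently and be flagged incompatible.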
-
Patent number: 11537162
Abstract: Embodiments provide a wearable article for a performance capture system. In some embodiments, a wearable article includes one or more regions, where the one or more regions are configured to be worn on at least a portion of a body of a user, where the one or more regions have a first pliability and a second pliability, where the first pliability and the second pliability are different pliabilities, and where at least one of the one or more regions are configured to hold devices in predetermined positions while maintaining shape and respective pliability. In some embodiments, the wearable article also includes a plurality of mounting mechanisms coupled to the one or more regions for mounting one or more reference markers to be used for position determination.
Type: Grant
Filed: April 30, 2021
Date of Patent: December 27, 2022
Assignee: UNITY TECHNOLOGIES SF
Inventors: Dejan Momcilovic, Jake Botting
-
Patent number: 11514654
Abstract: Methods and systems are presented for determining a virtual focus model for a camera apparatus, the camera apparatus comprising one or more image capture elements and one or more optics devices through which light in an optical path passes from a stage environment to at least one of the one or more image capture elements, the stage environment including a virtual scene display for displaying a virtual scene.
Type: Grant
Filed: December 9, 2021
Date of Patent: November 29, 2022
Assignee: Unity Technologies SF
Inventors: Kimball D. Thurston, III, Joseph W. Marks, Luca Fascione, Millicent Maier, Kenneth Gimpelson, Dejan Momcilovic, Keith F. Miller, Peter M. Hillman
-
Patent number: 11508081
Abstract: A sealed active marker apparatus of a performance capture system is described that provides protective housing for active marker light components coupled to a strand and attached via a receptacle to an object, such as via a wearable article, in a live action scene. The receptacle includes a protrusion portion that permits at least one particular wavelength range of light emitted from the enclosed active marker light component to diffuse in a manner that enables easy detection by a sensor device. A base portion interlocks with a bottom plate of the receptacle to secure the strand within one or more channels. A sealant material coating portions of the apparatus promotes an insulating environment for the active marker light component.
Type: Grant
Filed: November 30, 2020
Date of Patent: November 22, 2022
Assignee: UNITY TECHNOLOGIES SF
Inventors: Dejan Momcilovic, Jake Botting
-
Patent number: 11508108
Abstract: A method for generating one or more visual representations of a porous media submerged in a fluid is provided. The method can be performed using a computing device operated by a computer user or artist. The method includes defining a field comprising fluid parameter values for the fluid, the fluid parameter values comprising fluid velocity values and pore pressures. The method includes generating a plurality of particles that model a plurality of objects of the porous media, the plurality of objects being independently movable with respect to one another, determining values of motion parameters based at least in part on the field when the plurality of particles are submerged in the fluid, buoyancy and drag forces being used to determine relative motion of the plurality of particles and the fluid, and generating the one or more visual representations of the plurality of objects submerged in the fluid based on the values of the motion parameters.
Type: Grant
Filed: June 15, 2021
Date of Patent: November 22, 2022
Assignee: Unity Technologies SF
Inventors: Alexey Stomakhin, Joel Wretborn, Gilles Daviet