Contour lines for volumetric objects
Systems and methods automatically generate contours on an illustrated object for performing an animation. Contour lines are generated on the surface of the object according to criteria related to the shape of the surface of the object. Points of the contour lines that are occluded from a virtual camera are identified. The occluded points are removed to generate visible lines. The visible lines are extruded to define a three-dimensional volume defining contours of the object. The object itself, along with the three-dimensional volume, is illuminated and rendered. The parameters defining the opacity and color of the contour may differ from corresponding parameters of the rest of the object, so that the contours stand out and define portions of the object. The contours are useful in contexts such as defining areas of an object that is fuzzy or cloudy in appearance, as well as creating certain artistic effects.
With the widespread availability of computers, computer graphics artists and animators can rely upon computers to assist in the production process for creating animations and computer-generated imagery (CGI). This may include using computers to represent physical models as virtual models in computer memory. Typically, two-dimensional (2D) or three-dimensional (3D) computer-aided animation combines 2D/3D models of objects and programmed movement of one or more of the models. In 3D computer animation, the first step is typically the object modeling process. Virtual objects can be sculpted much like real clay or plaster, working from general forms to specific details, for example, with various sculpting tools. Models may then be constructed, for example, out of geometrical vertices, faces, and edges in a 3D coordinate system to represent the virtual objects. These models can then be manipulated using computers to, for example, simulate physics, design aesthetic actions such as poses or other deformations, create lighting, coloring and paint, or the like, of characters or other elements of a computer animation display.
One core functional aspect of computer graphics is to convert geometric and/or mathematical descriptions of objects into images. This process is known in the industry as “rendering.” For movies, other animated features, shorts, and special effects, a user (e.g., a skilled computer graphics artist) can specify the geometric or mathematical description of objects such as characters, props, backgrounds, or the like, as models to be used in the rendered image or animation sequence. In some instances, the geometric description of an object may include a number of animation control variables (avars) and values for the avars. An animator may also pose the objects within the image or sequence and specify motions and positions of the objects over time to create an animation.
As such, the production of CGI and computer-aided animation may involve the extensive use of various computer graphics techniques to produce a visually appealing image from the geometric description of an object that may be used to convey an essential element of a story or provide a desired special effect. One of the challenges in creating these visually appealing images can be the balancing of a desire for a highly-detailed image of a character or other object with the practical issues involved in allocating the resources (both human and computational) required to produce those visually appealing images.
In particular, there are situations where objects are not clearly defined. For example, certain objects have an opacity level or texture that is similar to a cloud, where a surface is not well-defined. In such instances, certain features of the objects cannot always be seen clearly as a result of illumination being a statistical process (e.g., light scattering may not always define an overlapping portion of an object clearly). In other cases, an artist may wish to create an artistic effect with line work on the object. Such line work can be a laborious task to manually replicate as the object moves in the animation process. Accordingly, there is a need to overcome the drawbacks and deficiencies in the art.
BRIEF SUMMARY

The following portion of this disclosure presents a simplified summary of one or more innovations, embodiments, and/or examples found within this disclosure for at least the purpose of providing a basic understanding of the subject matter. This summary does not attempt to provide an extensive overview of any particular embodiment or example. Additionally, this summary is not intended to identify key/critical elements of an embodiment or example or to delineate the scope of the subject matter of this disclosure. Accordingly, one purpose of this summary may be to present some innovations, embodiments, and/or examples found within this disclosure in a simplified form as a prelude to a more detailed description presented later.
Systems, devices, and methods are provided for rendering volumetric objects with contours. As part of generating an animated object, an artist can automatically generate contours so as to define and provide a uniform look to the object, even when the object does not have a well-defined surface (e.g., like a cloud). Then, during animation, the rig of the object can be used to automatically move the object, and the contours can be automatically generated.
In some embodiments, a method for performing animation includes performing steps by a computer system including receiving a polygonal mesh defining a surface of an object. The computer system generates contour lines on the surface of the object according to one or more criteria. The computer system identifies one or more points of the contour lines that are occluded from a virtual camera. The computer system removes the one or more points from the contour lines, thereby generating visible lines. The computer system generates contour ribbons by extending the visible lines in a first direction relative to the virtual camera. The computer system extrudes faces of the contour ribbons in a second direction relative to the virtual camera, thereby generating a first three-dimensional volume. The computer system generates a texture for the surface of the object, the texture defining a second three-dimensional volume that has a density gradient. The computer system renders, using a virtual light source, the object by ray tracing between the virtual light source and the first three-dimensional volume and by ray tracing between the virtual light source and the second three-dimensional volume.
In some aspects, the contour lines are generated based on a location of the virtual camera relative to the object. In some aspects, the identifying a first point as occluded includes shifting the first point from the surface by a specified distance, tracing a ray from the first point to the virtual camera, and identifying the first point as occluded when the ray hits the surface of the object. In some aspects, the removing the one or more points from the contour lines generates contour fragments and the method further includes projecting the contour fragments onto a focal plane of the virtual camera and merging any contour fragments that are within a specified distance of each other.
In some aspects, the first direction is lateral relative to the virtual camera and the second direction defines a depth relative to the virtual camera. In some aspects, the method includes tapering ends of the contour ribbons. In some aspects, the one or more criteria include one or more of identifying points on the surface of the object that have a surface normal perpendicular to a viewing vector or identifying points of self-intersection of the surface of the object.
In some aspects, the second three-dimensional volume is fuzzy and the first three-dimensional volume defines a shape of the second three-dimensional volume. In some aspects, identifying the one or more points of the contour lines as occluded includes identifying configured surface attributes. In some aspects, the object is a character and the first three-dimensional volume defines body parts of the character.
These and other embodiments of the invention are described in detail below. For example, other embodiments are directed to systems, devices, and computer readable media associated with methods described herein.
A further understanding of the nature of and equivalents to the subject matter of this disclosure (as well as any inherent or express advantages and improvements provided) should be realized in addition to the above section by reference to the remaining portions of this disclosure, any accompanying drawings, and the claims.
In order to reasonably describe and illustrate those innovations, embodiments, and/or examples found within this disclosure, reference may be made to one or more accompanying drawings. The additional details or examples used to describe the one or more accompanying drawings should not be considered as limitations to the scope of any of the claimed inventions, any of the presently described embodiments and/or examples, or the presently understood best mode of any innovations presented within this disclosure.
Techniques are provided to automatically generate contours on an object for performing an animation. Contour lines can be automatically drawn according to certain criteria. Points of the contour lines that are occluded from a virtual camera can be identified. The occluded points can be removed to generate visible lines. The visible lines can be extruded to define a three-dimensional volume defining contours of the object. The object itself, along with the three-dimensional volume, can then be illuminated and rendered. The contours are useful in contexts such as defining areas of an object (e.g., if the object is cloudy or not opaque) and creating certain artistic effects (e.g., a halo effect). During animation, the rig of the character can be used to automatically move the character and the contour is automatically generated.
I. Example Volumetric Objects with Contours
To better define the boundaries of different parts of the volumetric characters 102 and 104, contours 106 can be placed in strategic positions on or around the volumetric characters 102 and 104. As shown, contours 106 are placed around the hands of volumetric character 104, which helps define the hands and arms when overlapping other parts of the body. Contours 106 can also be placed around other regions for aesthetic purposes. For example, in the volumetric characters 102 and 104, contours 108 are placed around the face to provide a halo effect. Thus, the contours 106 and 108 can help define an object and/or provide an artistic effect.
II. Computer System for Contour Volume Modeling
The processor 202 may receive as input an object model 201. The object model 201 can include a polygonal mesh defining a surface of an object. The object to be modeled can be a human or animal character or another object such as a car, a tree, a piece of furniture, etc. The object model 201 may be received from another component that is included in, or communicatively coupled to, the computer system 200 (e.g., the object modeling system 1530, as described in further detail below with respect to
The contour generation component 204 can be configured to generate contours based on the received object model 201. The contour generation component 204 generates contour lines on the surface of the object according to one or more criteria. The criteria can include tracing contours along a surface based on a surface normal perpendicular to a viewing vector. Alternatively, or additionally, the criteria can include generating contours based on identified points of self-intersection of the surface of an object, or other suitable techniques.
The contour modification component 206 can be configured to modify contours as generated by the contour generation component 204. In some implementations, the contour modification component 206 modifies a contour by generating visible lines. Portions of the contour lines may or may not be visible based on the positioning of different parts of the object with respect to a virtual camera and light source. For example, in
The contour modification component 206 may further modify the contour lines by defragmenting visible lines. In generating visible lines, the contour lines may be broken into contour fragments, as only certain portions of a given contour line are flagged as visible. The visible lines are defragmented based on a distance between the contour fragments. The defragmentation process can include projecting fragments to the camera canvas and determining a two-dimensional distance between the fragments. If the two-dimensional distance is less than a threshold, then the fragments are merged.
The ribbon generation component 208 can be configured to generate a two-dimensional ribbon along visible lines. Line segments can be duplicated along the defragmented visible lines produced by the contour modification component 206. The line segments can vary in length based on configured tapering parameters. The ribbon generation component 208 can connect the segments to create a flat ribbon. The ribbons are two-dimensional thickened lines that extend through three-dimensional space. The ribbons may have artistic tapering effects.
The volume generation component 210 can be configured to generate three-dimensional volumes based on the two-dimensional ribbons generated by the ribbon generation component 208. The ribbons may be extruded based on density and tapering values of the ribbons. The volume generation component 210 may convert the extruded geometry into a volumetric density field.
The rendering component 212 can be configured to render a three-dimensional object with contour lines, according to the present disclosure. The rendering component 212 is configured to render or generate computer-generated images. In some aspects, the rendering component 212 may include, or be communicatively coupled to, the object rendering system(s) 1570 described below with respect to
The rendering component 212 is configured to render an object with contour lines. In some embodiments, the object has some degree of transparency, translucency, and/or fogging. The contour lines may have different levels of reflectivity and transparency, to help define a translucent or foggy object. The rendering component 212 may apply different rendering parameters to the contour volumes and the rest of the object to create this effect, as described herein.
III. Methods for Generating and Rendering Contour Volumes
A. Generating a Contour Volume
In some embodiments, the method 300 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 300 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 300.
At step 302, the computer system receives a polygonal mesh defining a surface of an object. In some embodiments, the computer system 200 depicted in
In some implementations, the mesh is initialized by generating a recursively generated B-spline surface, also known as a Catmull-Clark mesh surface. (See Catmull, E. and Clark, J., “Recursively Generated B-Spline Surfaces on Arbitrary Topological Meshes,” Computer-Aided Design 10, 6 (1978)). The base mesh may then be subdivided into polygons, such as triangles or quadrilaterals, and vertices of the polygons are moved to their limit positions. (See Halstead, M., Kass, M., and Derose, T., “Efficient, Fair Interpolation Using Catmull-Clark Surfaces,” in Proc. SIGGRAPH, 35-44 (1993)).
At step 304, the computer system (e.g., the contour generation component 204 depicted in
In some implementations, the contour lines are silhouette contours and the criteria include whether a given point on the surface of the mesh has a surface normal perpendicular to a viewing vector. Such a geometry corresponds to a point orthogonal to a virtual camera from which the viewing vector may originate. In some aspects, a point p is a vertex of the mesh. The computer system may identify vertices of the mesh. As illustrated in
If the surface normal n is perpendicular to the viewing vector v for a given point p, then that point p is selected as a point on a contour line. The computer system may determine whether the surface normal n is perpendicular to v by computing the dot product n·v, which provides a measure of how aligned these two vectors are. When n·v is equal to zero, then n is normal to v. These points at which the dot product is equal to zero represent points at which the camera is orthogonal to the edge of the object.
The computer system may identify this n·v value at a series of points on the surface of the object (e.g., at each vertex on the mesh). Each vertex will then have an associated value. If the value is equal to zero, then the point is selected for generating the contour line. This process may be repeated for each vertex on the mesh, resulting in a set of selected points. The computer system traces contour lines along the surface of the object by connecting the selected points, whose surface normals are perpendicular to their respective viewing vectors. In other words, the system traces the zero-level set of the function describing the dot product n·v across the surface of the object. The resulting contour lines separate regions of the object that are facing the camera from regions of the object that are not facing the camera. In other words, the contour lines divide the regions that are and are not visible to the camera.
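As a concrete illustration of this step, the following Python sketch evaluates the dot product n·v at the mesh vertices and interpolates the zero crossings along mesh edges; the array layout and the function name are assumptions for illustration, not part of the described system.

```python
import numpy as np

def silhouette_points(vertices, normals, edges, cam_pos):
    """Sketch: find points where n . v crosses zero along mesh edges.

    vertices : (N, 3) array of vertex positions
    normals  : (N, 3) array of unit vertex normals
    edges    : (E, 2) integer array of vertex-index pairs
    cam_pos  : (3,) position of the virtual camera
    Returns an (M, 3) array of interpolated silhouette points.
    """
    # Viewing vector from each vertex toward the camera.
    view = cam_pos[None, :] - vertices
    view /= np.linalg.norm(view, axis=1, keepdims=True)

    # g(p) = n(p) . v(p); the contour is the zero-level set of g.
    g = np.einsum("ij,ij->i", normals, view)

    i, j = edges[:, 0], edges[:, 1]
    crossing = g[i] * g[j] < 0.0               # sign change along the edge
    i, j = i[crossing], j[crossing]

    # Linear interpolation parameter where g == 0 on each crossing edge.
    t = g[i] / (g[i] - g[j])
    return vertices[i] + t[:, None] * (vertices[j] - vertices[i])
```

In practice, the zero crossings found on neighboring edges would then be connected into polylines to form the contour lines traced at step 304.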
Alternatively, or additionally, the contour lines are Boolean curves and the criteria include identifying points at which the surface of the object self-intersects. The computer system may identify points at which the surface self-intersects by computing a parametric expression for the surface and solving an equation to identify the points of self-intersection, as described in Pekerman et al., “Self-Intersection Detection and Elimination in Freeform Curves and Surfaces,” Computer-Aided Design 40, 150-159 (2008).
The silhouette contours and Boolean curves may be used individually or in combination to generate the contour lines. Alternatively, or additionally, contour generation techniques such as mesh contours, ray-tracing contours, or planar map methods may be used. Contour generation techniques are described in further detail in Benard et al., “Computing Smooth Surface Contours with Accurate Topology,” ACM Transactions on Graphics, Vol. 33, No. 2 (2014).
At step 306, the computer system (e.g., the contour modification component 206 shown in
In some aspects, each ray is traced from a point offset from the surface of the object. The system shifts the point identified at step 304 from the surface by a specified distance. The specified distance may be some small amount, so that the ray originates just off of the surface of the object. The offset may be along the outer surface normal of the object. The ray is then traced from this offset point to the camera location, and the original point identified at step 304 is flagged as occluded if the ray hits the surface of the object. By biasing the origin of the ray by an offset from the surface of the object, the computer system can avoid having the ray hit the surface due to originating on the surface, and avoid unwanted gaps in the resulting visible lines.
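A minimal sketch of this offset occlusion test is shown below. The `ray_hits_surface` callback stands in for whatever ray tracer is available; the callback, parameter names, and offset value are illustrative assumptions, not an existing API.

```python
import numpy as np

def flag_occluded(points, normals, cam_pos, ray_hits_surface, offset=1e-3):
    """Sketch: mark contour points that are hidden from the virtual camera.

    `ray_hits_surface(origin, direction, max_dist)` is an assumed callback that
    returns True if a ray hits the object's surface before reaching the camera.
    """
    occluded = np.zeros(len(points), dtype=bool)
    for k, (p, n) in enumerate(zip(points, normals)):
        # Bias the ray origin slightly off the surface along the outward normal
        # so the ray does not immediately hit the surface it starts on.
        origin = p + offset * n
        to_cam = cam_pos - origin
        dist = np.linalg.norm(to_cam)
        occluded[k] = ray_hits_surface(origin, to_cam / dist, dist)
    return occluded
```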
Alternatively, or additionally, identifying points of the contour lines as occluded includes identifying configured surface attributes. Points may be flagged as occluded based on surface attributes painted by an artist. For example, an artist may configure parameters establishing how high up a character's arm a contour line extends; these parameters are then applied to occlude points on the arm above the configured level.
In some implementations, the occluded points can be modified based on additional illumination sources. A virtual spotlight is established. The flag values marking a particular point as occluded are overwritten if the points are illuminated by the virtual spotlight. For example, as shown in
At step 308, the computer system (e.g., the contour modification component 206 shown in
In some implementations, the contour fragments are merged based on certain criteria. For example, the computer system can project the contour fragments onto a focal plane of the virtual camera. A two-dimensional distance between the endpoints of the fragments is determined based on the projection. This determined distance is compared to a specified distance, which may be user configured or automatically selected. The suitable specified distance may vary depending on the size of the object and the desired appearance. The computer system can merge any contour fragments that are within the specified distance of each other. This process can be used to ensure that, even if contour lines are separated in three dimensions, the contour lines are not fragmented in two dimensions. This improves the appearance of the contours, particularly when the contours are tapered as described below with respect to step 310.
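The sketch below illustrates one way to carry out this merge under simplified assumptions: fragments are assumed to be ordered along the original contour, endpoints are projected through a pinhole camera model, and consecutive fragments are joined when the projected gap is below the threshold. The camera conventions and function names are assumptions for illustration.

```python
import numpy as np

def project_2d(points, cam_pos, cam_rot, focal_length=1.0):
    """Project world-space points onto the camera focal plane (pinhole model).
    cam_rot is an assumed world-to-camera rotation with +z pointing forward."""
    p_cam = (cam_rot @ (points - cam_pos).T).T
    return focal_length * p_cam[:, :2] / p_cam[:, 2:3]

def merge_fragments(fragments, cam_pos, cam_rot, max_gap=0.01):
    """Greedy sketch: join consecutive fragments whose projected endpoints are
    within `max_gap` of each other. Each fragment is an (Ni, 3) array."""
    merged = [fragments[0]]
    for frag in fragments[1:]:
        tail = project_2d(merged[-1][-1:], cam_pos, cam_rot)[0]
        head = project_2d(frag[:1], cam_pos, cam_rot)[0]
        if np.linalg.norm(tail - head) < max_gap:
            merged[-1] = np.vstack([merged[-1], frag])   # close the small gap
        else:
            merged.append(frag)
    return merged
```

A production implementation would compare all pairs of fragment endpoints rather than only consecutive ones, but the distance test and merge step are the same.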
At step 310, the computer system (e.g., the ribbon generation component 208 depicted in
In some implementations, the visible lines are duplicated point-by-point. For each point in the visible line, the point is copied, and the copy is placed in a position according to the first direction. The size of the displacement may correspond to a desired thickness for the ribbon. When repeated across the points in the visible line, this results in two lines. The two lines are then connected to generate a ribbon. In some implementations, each point is connected to a copied point using a polygonal mesh, resulting in a mesh for the contour ribbon.
In some embodiments, the computer system tapers ends of the contour ribbons. The length of each of the contour ribbons may be modified based on configured tapering parameters. For example, an illustrator may configure that the end of each contour ribbon, or some of the contour ribbons, taper from the width of the ribbon to a point. A desired falloff may be added along the width of the contour line. Examples of tapered ribbons are shown in
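A rough sketch of ribbon generation with end tapering follows, assuming the first (lateral) direction is the camera's right vector and that a simple linear taper profile is acceptable; the names and parameters are illustrative, not part of the described system.

```python
import numpy as np

def build_ribbon(line, cam_right, width=0.05, taper=0.2):
    """Sketch: thicken a visible line into a flat ribbon facing the camera.

    line      : (N, 3) ordered points of one visible line
    cam_right : (3,) lateral (right) direction of the virtual camera
    taper     : fraction of the ribbon length over which the width falls to zero
    Returns (verts, quads): ribbon vertices and quad faces as index tuples.
    """
    n = len(line)
    t = np.linspace(0.0, 1.0, n)
    # Taper profile: full width in the middle, falling to a point at both ends.
    profile = np.minimum(1.0, np.minimum(t, 1.0 - t) / max(taper, 1e-6))
    offsets = (0.5 * width * profile)[:, None] * cam_right[None, :]

    verts = np.vstack([line - offsets, line + offsets])   # two parallel rails
    quads = [(k, k + 1, n + k + 1, n + k) for k in range(n - 1)]
    return verts, quads
```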
At step 312, the computer system (e.g., the volume generation component 210 depicted in
In some implementations, the contour ribbons are extruded based on density and tapering values that are configured by an artist. The extruded geometry may then be converted into a volumetric density field. The result is a set of volume contours, as illustrated in
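As a simplified illustration of this step, the sketch below extrudes the ribbon vertices along the camera depth direction and splats the result into a coarse voxel grid standing in for the volumetric density field. The grid layout, constant density value, and sampling count are assumptions; a production pipeline would use a proper volume format and anti-aliased rasterization.

```python
import numpy as np

def ribbon_to_density(verts, cam_forward, grid_origin, grid_shape, voxel_size,
                      depth=0.1, density=1.0, samples=8):
    """Sketch: extrude ribbon vertices in depth and voxelize into a density grid."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    for s in np.linspace(0.0, depth, samples):
        pts = verts + s * cam_forward[None, :]            # extrude in depth
        idx = np.floor((pts - grid_origin) / voxel_size).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
        for i, j, k in idx[ok]:
            grid[i, j, k] = max(grid[i, j, k], density)   # mark occupied voxels
    return grid
```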
B. Rendering an Object with Contours
In some embodiments, the method 400 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of method 400 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 400.
The method of
At step 402, the computer system (e.g., the rendering component 212 depicted in
In some applications, the three-dimensional volume of the object may further have a density that varies within the object. In some aspects, a density parameter p of the three-dimensional volume of the object varies such that the three-dimensional volume of the object has a density gradient.
In some embodiments, another set of material parameters is assigned to the first three-dimensional volume corresponding to the contours. For example, the parameters defining the second three-dimensional volume correspond to a fuzzy or cloudy object, and the first three-dimensional volume corresponding to the contours is relatively opaque, so that the contours can define edges of the more foggy object.
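As an illustration of this split, the following parameter sets show the intent: a low-density, soft object volume and a much denser, relatively opaque contour volume. The parameter names and values are assumptions for illustration and do not correspond to any particular renderer's API.

```python
# Illustrative material split (parameter names are assumptions, not a renderer API).
object_volume_params = {
    "base_color": (0.85, 0.80, 0.95),
    "density_scale": 0.6,          # low density -> soft, fuzzy interior
    "density_falloff": "gradient", # density decreases toward the boundary
    "scatter_anisotropy": 0.3,
}

contour_volume_params = {
    "base_color": (1.0, 1.0, 1.0),
    "density_scale": 25.0,         # much denser -> nearly opaque line work
    "emission": 0.2,               # optional glow for halo-style contours
}
```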
At step 404, the computer system (e.g., the rendering component 212 depicted in
In the ray tracing process, each ray is followed from a point in the first three-dimensional volume, and the system tracks if and where the ray interacts with other points before reaching the virtual light source. If the ray strikes an established volume, or if the ray reflects or refracts between multiple points in a volume, that data is represented in the light and color assigned to that point (e.g., to a pixel in the ultimate generated image). The system calculates a color of the contour volume at each point for use in rendering the contour volume. This ray tracing process establishes how to illuminate the volume by identifying how light refracts and reflects from the volume. Techniques for volumetric rendering are described in detail in Fong et al., “Production Volume Rendering,” SIGGRAPH 2017.
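A minimal single-scattering sketch of this computation is shown below, assuming a `density_at` callback that samples the combined density of the contour and object volumes; the callback, step size, and extinction coefficient are illustrative assumptions rather than the described implementation.

```python
import numpy as np

def light_transmittance(sample_pos, light_pos, density_at, step=0.05, sigma_t=4.0):
    """Sketch: march from a point inside a volume toward the light and
    accumulate Beer-Lambert extinction.

    `density_at(p)` is an assumed callback returning the volume density at p.
    """
    to_light = light_pos - sample_pos
    dist = np.linalg.norm(to_light)
    direction = to_light / dist

    optical_depth = 0.0
    t = 0.0
    while t < dist:
        p = sample_pos + t * direction
        optical_depth += sigma_t * density_at(p) * step
        t += step
    return np.exp(-optical_depth)   # fraction of light reaching the sample
```

The returned transmittance would then be combined with the light's intensity and the volume's color parameters to shade each sample point of the contour volume.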
At step 406, the computer system (e.g., the rendering component 212 depicted in
In some cases, the ray tracing on the first and second three-dimensional volumes is performed substantially simultaneously. For example, a ray is projected, and the ray hits the object volume first and then the contour volume, or vice versa. The ray may disperse within the object volume. Some of the dispersed rays will leave the surface, while others will scatter back into the second volume defining the object, and still others will hit the contour volumes and then disperse from them.
In some implementations, the second three-dimensional volume is fuzzy. A fuzzy volume may have a blurry, soft appearance, such that edges are not well-defined, particularly in overlapping regions. The fuzzy appearance can be a result of transparency, texture, variations in the density gradient, and/or filters. Alternatively or additionally, the fuzzy appearance can be created with a particle approach for foggy objects, cloudy objects, or other types of volumetric fluids such as water. (See, e.g., U.S. Pat. No. 10,282,885, which is incorporated by reference). The second three-dimensional volume defines the object. For example, as shown in
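For illustration only, the sketch below builds a simple fuzzy density field with a radial gradient and random jitter so that edges remain ill-defined; the shape, falloff, and jitter parameters are assumptions, and this is not the particle-based approach referenced above.

```python
import numpy as np

def fuzzy_density_field(shape=(64, 64, 64), falloff=4.0, jitter=0.15, seed=0):
    """Sketch: a soft density gradient, densest at the center and falling off
    toward the boundary, with random jitter to keep the edge ill-defined."""
    rng = np.random.default_rng(seed)
    axes = [np.linspace(-1.0, 1.0, s) for s in shape]
    coords = np.stack(np.meshgrid(*axes, indexing="ij"))
    r = np.linalg.norm(coords, axis=0)                    # radial distance
    density = np.exp(-falloff * r * r)                    # smooth gradient
    density *= 1.0 + jitter * rng.standard_normal(shape)  # break up the edge
    return np.clip(density, 0.0, None)
```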
At step 408, the computer system renders the object using the virtual light source. The computer system (e.g., the rendering component 212 depicted in
The methods 300 and 400 described above may be repeated across multiple images which are animated. During animation, the rig of the character can be used to automatically move the character. For each animation frame, the contour volumes are automatically generated. Thus, these techniques can be used to automatically render contours for an animation.
IV. Contour Rendering Pipeline
A. Contour Line Generation
B. Visible Line Generation and Modification
Alternatively, or additionally, the visible lines may be modified using a defragmentation process to omit small inadvertent gaps in the visible lines.
C. Contour Ribbon Generation
As shown in
D. Contour Volume Rendering
E. Halo Examples
One example of an artistic application of contours is creating a halo effect. This can be achieved by generating halo ribbons as depicted in
As described above with respect to
V. Computer Systems
The one or more design computers 1510 can include hardware and software elements configured for designing CGI and assisting with computer-aided animation. Each of the one or more design computers 1510 may be embodied as a single computing device or a set of one or more computing devices. Some examples of computing devices are PCs, laptops, workstations, mainframes, cluster computing systems, grid computing systems, cloud computing systems, embedded devices, computer graphics devices, gaming devices and consoles, consumer electronic devices having programmable processors, or the like. The one or more design computers 1510 may be used at various stages of a production process (e.g., pre-production, designing, creating, editing, simulating, animating, rendering, post-production, etc.) to produce images, image sequences, motion pictures, video, audio, or associated effects related to CGI and animation.
In one example, a user of the one or more design computers 1510 acting as a modeler may employ one or more systems or tools to design, create, or modify objects within a computer-generated scene. The modeler may use modeling software to sculpt and refine a neutral 3D model to fit predefined aesthetic needs of one or more character designers. The modeler may design and maintain a modeling topology conducive to a storyboarded range of deformations. In another example, a user of the one or more design computers 1510 acting as an articulator may employ one or more systems or tools to design, create, or modify controls or animation variables (avars) of models. In general, rigging is a process of giving an object, such as a character model, controls for movement, therein “articulating” its ranges of motion. The articulator may work closely with one or more animators in rig building to provide and refine an articulation of the full range of expressions and body movement needed to support a character's acting range in an animation. In a further example, a user of design computer 1510 acting as an animator may employ one or more systems or tools to specify motion and position of one or more objects over time to produce an animation.
Object library 1520 can include elements configured for storing and accessing information related to objects used by the one or more design computers 1510 during the various stages of a production process to produce CGI and animation. Some examples of object library 1520 can include a file, a database, or other storage devices and mechanisms. Object library 1520 may be locally accessible to the one or more design computers 1510 or hosted by one or more external computer systems.
Some examples of information stored in object library 1520 can include an object itself, metadata, object geometry, object topology, rigging, control data, animation data, animation cues, simulation data, texture data, lighting data, shader code, or the like. An object stored in object library 1520 can include any entity that has an n-dimensional (e.g., 2D or 3D) surface geometry. The shape of the object can include a set of points or locations in space (e.g., object space) that make up the object's surface. Topology of an object can include the connectivity of the surface of the object (e.g., the genus or number of holes in an object) or the vertex/edge/face connectivity of an object.
The one or more object modeling systems 1530 can include hardware and/or software elements configured for modeling one or more objects. Modeling can include the creating, sculpting, and editing of an object. In various embodiments, the one or more object modeling systems 1530 may be configured to generate a model to include a description of the shape of an object. The one or more object modeling systems 1530 can be configured to facilitate the creation and/or editing of features, such as non-uniform rational B-splines or NURBS, polygons and subdivision surfaces (or SubDivs), that may be used to describe the shape of an object. In general, polygons are a widely used model medium due to their relative stability and functionality. Polygons can also act as the bridge between NURBS and SubDivs. NURBS are used mainly for their ready-smooth appearance and generally respond well to deformations. SubDivs are a combination of both NURBS and polygons representing a smooth surface via the specification of a coarser piecewise linear polygon mesh. A single object may have several different models that describe its shape.
The one or more object modeling systems 1530 may further generate model data (e.g., 2D and 3D model data) for use by other elements of system 1500 or that can be stored in object library 1520. The one or more object modeling systems 1530 may be configured to allow a user to associate additional information, metadata, color, lighting, rigging, controls, or the like, with all or a portion of the generated model data.
The one or more object articulation systems 1540 can include hardware and/or software elements configured for articulating one or more computer-generated objects. Articulation can include the building or creation of rigs, the rigging of an object, and the editing of rigging. In various embodiments, the one or more articulation systems 1540 can be configured to enable the specification of rigging for an object, such as for internal skeletal structures or external features, and to define how input motion deforms the object. One technique is called “skeletal animation,” in which a character can be represented in at least two parts: a surface representation used to draw the character (called the skin) and a hierarchical set of bones used for animation (called the skeleton).
The one or more object articulation systems 1540 may further generate articulation data (e.g., data associated with controls or animation variables) for use by other elements of system 1500 or that can be stored in object library 1520. The one or more object articulation systems 1540 may be configured to allow a user to associate additional information, metadata, color, lighting, rigging, controls, or the like, with all or a portion of the generated articulation data.
The one or more object animation systems 1550 can include hardware and/or software elements configured for animating one or more computer-generated objects. Animation can include the specification of motion and position of an object over time. The one or more object animation systems 1550 may be invoked by or used directly by a user of the one or more design computers 1510 and/or automatically invoked by or used by one or more processes associated with the one or more design computers 1510.
In various embodiments, the one or more animation systems 1550 may be configured to enable users to manipulate controls or animation variables or utilize character rigging to specify one or more key frames of an animation sequence. The one or more animation systems 1550 generate intermediary frames based on the one or more key frames. In some embodiments, the one or more animation systems 1550 may be configured to enable users to specify animation cues, paths, or the like according to one or more predefined sequences. The one or more animation systems 1550 generate frames of the animation based on the animation cues or paths. In further embodiments, the one or more animation systems 1550 may be configured to enable users to define animations using one or more animation languages, morphs, deformations, or the like.
The one or more object animation systems 1550 may further generate animation data (e.g., inputs associated with controls or animation variables) for use by other elements of system 1500 or that can be stored in object library 1520. The one or more object animation systems 1550 may be configured to allow a user to associate additional information, metadata, color, lighting, rigging, controls, or the like, with all or a portion of the generated animation data.
The one or more object simulation systems 1560 can include hardware and/or software elements configured for simulating one or more computer-generated objects. Simulation can include determining motion and position of an object over time in response to one or more simulated forces or conditions. The one or more object simulation systems 1560 may be invoked by or used directly by a user of the one or more design computers 1510 and/or automatically invoked by or used by one or more processes associated with the one or more design computers 1510.
In various embodiments, the one or more object simulation systems 1560 may be configured to enable users to create, define, or edit simulation engines, such as a physics engine or physics processing unit (PPU/GPGPU), using one or more physically-based numerical techniques. In general, a physics engine can include a computer program that simulates one or more physics models (e.g., a Newtonian physics model), using variables such as mass, velocity, friction, wind resistance, or the like. The physics engine may simulate and predict effects under different conditions that would approximate what happens to an object according to the physics model. The one or more object simulation systems 1560 may be used to simulate the behavior of objects, such as hair, fur, and cloth, in response to a physics model and/or animation of one or more characters and objects within a computer-generated scene.
The one or more object simulation systems 1560 may further generate simulation data (e.g., motion and position of an object over time) for use by other elements of system 1500 or that can be stored in object library 1520. The generated simulation data may be combined with or used in addition to animation data generated by the one or more object animation systems 1550. The one or more object simulation systems 1560 may be configured to allow a user to associate additional information, metadata, color, lighting, rigging, controls, or the like, with all or a portion of the generated simulation data.
The one or more object rendering systems 1570 can include hardware and/or software elements configured for “rendering” or generating one or more images of one or more computer-generated objects. “Rendering” can include generating an image from a model based on information such as geometry, viewpoint, texture, lighting, and shading information. The one or more object rendering systems 1570 may be invoked by or used directly by a user of the one or more design computers 1510 and/or automatically invoked by or used by one or more processes associated with the one or more design computers 1510. One example of a software program embodied as the one or more object rendering systems 1570 can include PhotoRealistic RenderMan, or PRMan, produced by Pixar Animation Studios of Emeryville, Calif.
In various embodiments, the one or more object rendering systems 1570 can be configured to render one or more objects to produce one or more computer-generated images or a set of images over time that provide an animation. The one or more object rendering systems 1570 may generate digital images or raster graphics images.
In various embodiments, a rendered image can be understood in terms of a number of visible features. Some examples of visible features that may be considered by the one or more object rendering systems 1570 may include shading (e.g., techniques relating to how the color and brightness of a surface varies with lighting), texture-mapping (e.g., techniques relating to applying detail information to surfaces or objects using maps), bump-mapping (e.g., techniques relating to simulating small-scale bumpiness on surfaces), fogging/participating medium (e.g., techniques relating to how light dims when passing through non-clear atmosphere or air), shadows (e.g., techniques relating to effects of obstructing light), soft shadows (e.g., techniques relating to varying darkness caused by partially obscured light sources), reflection (e.g., techniques relating to mirror-like or highly glossy reflection), transparency or opacity (e.g., techniques relating to sharp transmissions of light through solid objects), translucency (e.g., techniques relating to highly scattered transmissions of light through solid objects), refraction (e.g., techniques relating to bending of light associated with transparency), diffraction (e.g., techniques relating to bending, spreading and interference of light passing by an object or aperture that disrupts the ray), indirect illumination (e.g., techniques relating to surfaces illuminated by light reflected off other surfaces, rather than directly from a light source, also known as global illumination), caustics (e.g., a form of indirect illumination with techniques relating to reflections of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object), depth of field (e.g., techniques relating to how objects appear blurry or out of focus when too far in front of or behind the object in focus), motion blur (e.g., techniques relating to how objects appear blurry due to high-speed motion, or the motion of the camera), non-photorealistic rendering (e.g., techniques relating to rendering of scenes in an artistic style, intended to look like a painting or drawing), or the like.
The one or more object rendering systems 1570 may further render images (e.g., motion and position of an object over time) for use by other elements of system 1500 or that can be stored in object library 1520. The one or more object rendering systems 1570 may be configured to allow a user to associate additional information or metadata with all or a portion of the rendered image.
Computer system 1600 may include familiar computer components, such as one or more data processors or central processing units (CPUs) 1605, one or more graphics processors or graphical processing units (GPUs) 1610, memory subsystem 1615, storage subsystem 1620, one or more input/output (I/O) interfaces 1625, communications interface 1630, or the like. Computer system 1600 can include system bus 1635 interconnecting the above components and providing functionality, such as connectivity and inter-device communication.
The one or more data processors or central processing units (CPUs) 1605 can execute logic or program code for providing application-specific functionality. Some examples of CPU(s) 1605 can include one or more microprocessors (e.g., single core and multi-core) or micro-controllers, one or more field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs). As used herein, a processor includes a multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked.
The one or more graphics processors or graphical processing units (GPUs) 1610 can execute logic or program code associated with graphics or for providing graphics-specific functionality. GPUs 1610 may include any conventional graphics processing unit, such as those provided by conventional video cards. In various embodiments, GPUs 1610 may include one or more vector or parallel processing units. These GPUs may be user programmable, and include hardware elements for encoding/decoding specific types of data (e.g., video data) or for accelerating 2D or 3D drawing operations, texturing operations, shading operations, or the like. The one or more graphics processors or graphical processing units (GPUs) 1610 may include any number of registers, logic units, arithmetic units, caches, memory interfaces, or the like.
Memory subsystem 1615 can store information, e.g., using machine-readable articles, information storage devices, or computer-readable storage media. Some examples can include random access memories (RAM), read-only-memories (ROMS), volatile memories, non-volatile memories, and other semiconductor memories. Memory subsystem 1615 can include data and program code 1640.
Storage subsystem 1620 can also store information using machine-readable articles, information storage devices, or computer-readable storage media. Storage subsystem 1620 may store information using storage media 1645. Some examples of storage media 1645 used by storage subsystem 1620 can include floppy disks, hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, removable storage devices, networked storage devices, or the like. In some embodiments, all or part of data and program code 1640 may be stored using storage subsystem 1620.
The one or more input/output (I/O) interfaces 1625 can perform I/O operations. One or more input devices 1650 and/or one or more output devices 1655 may be communicatively coupled to the one or more I/O interfaces 1625. The one or more input devices 1650 can receive information from one or more sources for computer system 1600. Some examples of the one or more input devices 1650 may include a computer mouse, a trackball, a track pad, a joystick, a wireless remote, a drawing tablet, a voice command system, an eye tracking system, external storage systems, a monitor appropriately configured as a touch screen, a communications interface appropriately configured as a transceiver, or the like. In various embodiments, the one or more input devices 1650 may allow a user of computer system 1600 to interact with one or more non-graphical or graphical user interfaces to enter a comment, select objects, icons, text, user interface widgets, or other user interface elements that appear on a monitor/display device via a command, a click of a button, or the like.
The one or more output devices 1655 can output information to one or more destinations for computer system 1600. Some examples of the one or more output devices 1655 can include a printer, a fax, a feedback device for a mouse or joystick, external storage systems, a monitor or other display device, a communications interface appropriately configured as a transceiver, or the like. The one or more output devices 1655 may allow a user of computer system 1600 to view objects, icons, text, user interface widgets, or other user interface elements. A display device or monitor may be used with computer system 1600 and can include hardware and/or software elements configured for displaying information.
Communications interface 1630 can perform communications operations, including sending and receiving data. Some examples of communications interface 1630 may include a network communications interface (e.g., Ethernet, Wi-Fi, etc.). For example, communications interface 1630 may be coupled to communications network/external bus 1660, such as a computer network, a USB hub, or the like. A computer system can include a plurality of the same components or subsystems, e.g., connected together by communications interface 1630 or by an internal interface. In some embodiments, computer systems, subsystems, or apparatuses can communicate over a network. In such instances, one computer can be considered a client and another computer a server, where each can be part of a same computer system. A client and a server can each include multiple systems, subsystems, or components.
Computer system 1600 may also include one or more applications (e.g., software components or functions) to be executed by a processor to execute, perform, or otherwise implement techniques disclosed herein. These applications may be embodied as data and program code 1640. Additionally, computer programs, executable computer code, human-readable source code, shader code, rendering engines, or the like, and data, such as image files, models including geometrical descriptions of objects, ordered geometric descriptions of objects, procedural descriptions of models, scene descriptor files, or the like, may be stored in memory subsystem 1615 and/or storage subsystem 1620.
Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium according to an embodiment of the present invention may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer product (e.g. a hard drive, a CD, or an entire computer system), and may be present on or within different computer products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
Any of the methods described herein may be totally or partially performed with a computer system including one or more processors, which can be configured to perform the steps. Thus, embodiments can be directed to computer systems configured to perform the steps of any of the methods described herein, potentially with different components performing a respective step or a respective group of steps. Although presented as numbered steps, steps of methods herein can be performed at a same time or in a different order. Additionally, portions of these steps may be used with portions of other steps from other methods. Also, all or portions of a step may be optional. Additionally, any of the steps of any of the methods can be performed with modules, circuits, or other means for performing these steps.
The specific details of particular embodiments may be combined in any suitable manner without departing from the spirit and scope of embodiments of the invention. However, other embodiments of the invention may be directed to specific embodiments relating to each individual aspect, or specific combinations of these individual aspects.
The above description of exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.
A recitation of “a”, “an” or “the” is intended to mean “one or more” unless specifically indicated to the contrary.
All patents, patent applications, publications, and descriptions mentioned here are incorporated by reference in their entirety for all purposes. None is admitted to be prior art.
Claims
1. A method for performing animation, the method comprising performing, by a computer system:
- receiving a polygonal mesh defining a surface of an object;
- generating contour lines on the surface of the object according to one or more criteria;
- identifying one or more points of the contour lines that are occluded from a virtual camera;
- removing the one or more points from the contour lines, thereby generating visible lines;
- generating contour ribbons by extending the visible lines in a first direction relative to the virtual camera;
- extruding faces of the contour ribbons in a second direction relative to the virtual camera, thereby generating a first three-dimensional volume;
- generating a texture for the surface of the object, the texture defining a second three-dimensional volume that has a density gradient; and
- rendering, using a virtual light source, the object by ray tracing between the virtual light source and the first three-dimensional volume and by ray tracing between the virtual light source and the second three-dimensional volume.
2. The method of claim 1, wherein the contour lines are generated based on a location of the virtual camera relative to the object.
3. The method of claim 1, wherein identifying a first point as occluded includes:
- shifting the first point from the surface by a specified distance;
- tracing a ray from the first point to the virtual camera; and
- identifying the first point as occluded when the ray hits the surface of the object.
4. The method of claim 1, wherein removing the one or more points from the contour lines generates contour fragments, the method further comprising:
- projecting the contour fragments onto a focal plane of the virtual camera; and
- merging any contour fragments that are within a specified distance of each other.
5. The method of claim 1, wherein the first direction is lateral relative to the virtual camera, and wherein the second direction defines a depth relative to the virtual camera.
6. The method of claim 1, further comprising tapering ends of the contour ribbons.
7. The method of claim 1, wherein generating the contour lines includes one or more of:
- identifying points on the surface of the object that have a surface normal perpendicular to a viewing vector; and
- identifying points of self-intersection of the surface of the object.
8. The method of claim 1, wherein:
- the second three-dimensional volume is fuzzy; and
- the first three-dimensional volume defines a shape of the second three-dimensional volume.
9. The method of claim 1, wherein identifying the one or more points of the contour lines as occluded includes identifying configured surface attributes.
10. The method of claim 1, wherein the object is a character and wherein the first three-dimensional volume defines body parts of the character.
11. A system for performing animation, the system comprising one or more processors configured to:
- receive a polygonal mesh defining a surface of an object;
- generate contour lines on the surface of the object according to one or more criteria;
- identify one or more points of the contour lines that are occluded from a virtual camera;
- remove the one or more points from the contour lines, thereby generating visible lines;
- generate contour ribbons by extending the visible lines in a first direction relative to the virtual camera;
- extrude faces of the contour ribbons in a second direction relative to the virtual camera, thereby generating a first three-dimensional volume;
- generate a texture for the surface of the object, the texture defining a second three-dimensional volume that has a density gradient; and
- render, using a virtual light source, the object by ray tracing between the virtual light source and the first three-dimensional volume and by ray tracing between the virtual light source and the second three-dimensional volume.
12. The system of claim 11, wherein the contour lines are generated based on a location of the virtual camera relative to the object.
13. The system of claim 11, wherein identifying a first point as occluded includes:
- shifting the first point from the surface by a specified distance;
- tracing a ray from the first point to the virtual camera; and
- identifying the first point as occluded when the ray hits the surface of the object.
14. The system of claim 11, wherein removing the one or more points from the contour lines generates contour fragments, the processors further configured to:
- project the contour fragments onto a focal plane of the virtual camera; and
- merge any contour fragments that are within a specified distance of each other.
15. The system of claim 11, wherein the first direction is lateral relative to the virtual camera, and wherein the second direction defines a depth relative to the virtual camera.
16. The system of claim 11, wherein the processors are further configured to taper ends of the contour ribbons.
17. The system of claim 11, wherein generating the contour lines includes one or more of:
- identifying points on the surface of the object that have a surface normal perpendicular to a viewing vector; and
- identifying points of self-intersection of the surface of the object.
18. The system of claim 11, wherein:
- the second three-dimensional volume is fuzzy; and
- the first three-dimensional volume defines a shape of the second three-dimensional volume.
19. The system of claim 11, wherein identifying the one or more points of the contour lines as occluded includes identifying configured surface attributes.
20. The system of claim 11, wherein the object is a character and wherein the first three-dimensional volume defines body parts of the character.
- Benard et al., “Computing Smooth Surface Contours with Accurate Topology”, ACM Transactions on Graphics, vol. 33, No. 2, Mar. 2014, pp. 1-20.
- Bénard et al., “Line Drawings From 3D Models: A Tutorial”, Available Online at: https://arxiv.org/pdf/1810.01175.pdf, May 13, 2019, pp. 1-126.
- Catmull et al., “Recursively Generated B-Spline Surfaces on Arbitrary Topological Meshes”, Seminal graphics: pioneering efforts that shaped the field, vol. 10, No. 6, Jul. 1978, pp. 183-188.
- Coleman et al., “Making Souls: Methods and a Pipeline for Volumetric Characters”, SIGGRAPH '20: ACM SIGGRAPH 2020 Talks, Article No. 28, Aug. 17, 2020, 2 pages.
- Fong et al., “Production Volume Rendering”, SIGGRAPH '17: ACM SIGGRAPH 2017 Courses, Jul. 2017, 97 pages.
- Halstead et al., “Efficient, Fair Interpolation using Catmull-Clark Surfaces”, SIGGRAPH '93: Proceedings of the 20th annual conference on Computer graphics and interactive techniques, Sep. 1993, 11 pages.
- Pekerman et al., “Self-Intersection Detection and Elimination in Freeform Curves and Surfaces”, Computer-Aided Design, vol. 40, No. 2, Feb. 2008, pp. 150-159.
Type: Grant
Filed: Oct 27, 2021
Date of Patent: Aug 15, 2023
Patent Publication Number: 20230125292
Assignee: PIXAR (Emeryville, CA)
Inventors: Fernando Ferrari de Goes (Kensington, CA), Junyi Ling (San Francisco, CA), George Binh Hiep Nguyen (Oakland, CA), Markus Heinz Kranzler (Alameda, CA)
Primary Examiner: Phuc N Doan
Application Number: 17/512,378
International Classification: G06T 13/20 (20110101); G06T 17/20 (20060101); G06T 15/08 (20110101); G06T 15/04 (20110101); G06T 15/06 (20110101);