MESH TRANSFER FOR SHAPE BLENDING

- Pixar

Techniques are disclosed that may assist animators or other artists working with models. Information from a plurality of meshes in a collection may be blended or combined using correspondences between pairs of the meshes. Meshes in the collection may include different topologies and geometries. The combined information can be used to create combinations of data that reflect new topologies, geometries, scalar fields, hair styles, or the like that may be transferred to a mesh of new or existing models.

Description
CROSS-REFERENCES TO RELATED APPLICATIONS

This application claims priority to and the benefit of U.S. Provisional Patent Application No. 61/030,796, filed Feb. 22, 2008 and entitled “Transfer of Rigs with Temporal Coherence,” the entire disclosure of which is incorporated herein by reference for all purposes.

This application may be related to the following commonly assigned applications:

U.S. patent application Ser. No. ______ (Atty. Dkt. No. 021751-018800US), filed ______ and entitled “Mesh Transfer.”

U.S. patent application Ser. No. ______ (Atty. Dkt. No. 021751-018900US), filed ______ and entitled “Mesh Transfer Using UV-Space.”

U.S. patent application Ser. No. ______ (Atty. Dkt. No. 021751-019000US), filed ______ and entitled “Mesh Transfer in N-D Space.”

The respective disclosures of these applications are incorporated herein by reference in their entirety for all purposes.

BACKGROUND

This disclosure relates to computer animation and computer generated imagery. More specifically, this disclosure relates to techniques for sharing shape information between computer models.

With the wide-spread availability of computers, animators and computer graphics artists can rely upon computers to assist in the animation and computer generated imagery process. This may include using computers to represent physical models as virtual models in computer memory. This may also include using computers to facilitate animation, for example, by the designing, posing, deforming, coloring, painting, or the like, of characters or other elements of a computer animation display.

Pioneering companies in the computer-aided animation/computer generated imagery (CGI) industry can include Pixar. Pixar is more widely known as Pixar Animation Studios, the creators of animated features such as “Toy Story” (1995) and “Toy Story 2” (1999), “A Bug's Life” (1998), “Monsters, Inc.” (2001), “Finding Nemo” (2003), “The Incredibles” (2004), “Cars” (2006), “Ratatouille” (2007), and others. In addition to creating animated features, Pixar develops computing platforms specially designed for computer animation and CGI, now known as RenderMan®. RenderMan® is now widely used in the film industry and the inventors have been recognized for their contributions to RenderMan® with multiple Academy Awards®.

One core functional aspect of RenderMan® software can include the use of a “rendering engine” to convert geometric and/or mathematical descriptions of objects or other models into images. This process is known in the industry as “rendering.” For movies or other features, a user (e.g., an animator or other skilled artist) specifies the geometric description of a model or other objects, such as characters, props, background, or the like that may be rendered into images. An animator may also specify poses and motions for objects or portions of the objects. In some instances, the geometric description of objects may include a number of animation variables (avars), and values for the avars.

The production of animated features and CGI may involve the extensive use of computer graphics techniques to produce a visually appealing image from the geometric description of an object or model that can be used to convey an element of a story. One of the challenges in creating models for use in animated features can be balancing the desire for a visually appealing image of a character or other object with the practical issues involved in allocating the computational resources required to produce those visually appealing images. Often the geometric descriptions of objects or models at various stages in a feature film production environment may be rough and coarse, lacking the realism and detail that would be expected of the final production.

One issue with the production process is the time and effort involved when an animator undertakes to create the geometric description of a model and the model's associated avars, rigging, shader variables, paint data, or the like. Even with models that lack the detail and realism expected of the final production, it may take several hours to several days for an animator to design, rig, pose, paint, or otherwise prepare the model for a given state of the production process. Further, although the model need not be fully realistic at all stages of the production process, it can be desirable that the animator or artist producing the model be able to modify certain attributes of the model at any stage. However, modifying the model during the production process may also involve significant time and effort. Often, there may not be sufficient time for desired modifications in order to maintain a release schedule.

Accordingly, what is desired is to solve problems relating to transferring information between meshes, some of which may be discussed herein. Additionally, what is desired is to reduce drawbacks related to transferring information between meshes, some of which may be discussed herein.

SUMMARY

In various embodiments, data and other information of models can be shared and combined to create new models or update features of existing models. A correspondence between pairs of meshes in a collection of meshes can be created. The correspondences may enable an animator or artist to share, blend, or combine information from a plurality of meshes. Mesh information and other data at, near, or otherwise associated with the models can be “pushed through” the correspondences and combined or blended with information from other models.

The correspondence between each pair of models can enable animators and other digital artists to create new characters from existing characters that may have different topologies and geometries. Additionally, the correspondence may be created between different versions of the same character, thereby allowing the animator to implement changes to characters at later stages of the production process and transfer information from prior versions, preserving previous work product and reducing the time and cost of updating the characters.

In some embodiments, a correspondence for sharing or transferring information between models can be generated based on a pair of “feature curve networks.” A correspondence can be generated using one or more geometric primitives (e.g., points, lines, curves, volumes, etc.) associated with a source surface, such as a portion of a source mesh, and corresponding geometric primitives associated with a destination surface. For example, a collection of “feature curves” may be created to partition the source and destination surfaces into a collection of “feature regions” at “features” or other prominent aspects of a model. The resulting collections of partitions or “feature curve networks” can be used to construct a full surface correspondence between all points of the source mesh and all points of the destination mesh.

The information sharing between two or more meshes may be unidirectional or bidirectional based on the correspondences. Thereby, information may be shared between two or more meshes, such as scalar fields, variables, controls, avars, articulation data, character rigging, shader data, lighting data, paint data, simulation data, topology and/or geometry, re-meshing information, map information, or the like.

In various embodiments, difference information between a plurality of meshes may be determined based on the correspondence. The difference information may be stored. For example, the difference information may be generated and stored as a bump map. Alternatively, the difference between a set of meshes may be determined and information indicative of the difference may be generated and stored as a set of wavelet coefficients.
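As a concrete illustration of the idea, the following Python sketch computes per-vertex difference information between two corresponding point sets and encodes one component of it with a single-level Haar wavelet transform. The function names, the hand-rolled Haar step, and the assumption that the correspondence has already resampled both meshes over common sample points are all illustrative, not part of the disclosure.

    import numpy as np

    def difference_field(src_points, dst_points):
        # Per-vertex displacement between corresponding points; assumes the
        # correspondence has already resampled both meshes over a common
        # set of sample points (an illustrative simplification).
        return dst_points - src_points  # shape (n, 3)

    def haar_level(signal):
        # One level of a Haar wavelet transform: averages carry the coarse
        # shape, details carry the difference information to be stored.
        evens, odds = signal[0::2], signal[1::2]
        return (evens + odds) / np.sqrt(2.0), (evens - odds) / np.sqrt(2.0)

    src = np.zeros((8, 3))
    dst = src + np.array([0.0, 0.05, 0.0])    # a small uniform displacement
    diff = difference_field(src, dst)
    coarse, details = haar_level(diff[:, 1])  # encode the y-component
    # For this uniform displacement every detail coefficient is zero,
    # so the stored difference information compresses well.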

A further understanding of the nature, advantages, and improvements offered by those inventions disclosed herein may be realized by reference to remaining portions of this disclosure and any accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to better describe and illustrate embodiments and/or examples of any inventions presented within this disclosure, reference may be made to one or more accompanying drawings. The additional details or examples used to describe the accompanying drawings should not be considered as limitations to the scope of any of the disclosed inventions, any of the presently described embodiments and/or examples, or the presently understood best mode of any invention presented within this disclosure.

FIG. 1 is a simplified block diagram of a system for creating computer animations and computer graphics imagery that may implement or incorporate various embodiments of an invention whose teachings may be presented herein;

FIG. 2 is an illustration of a mesh for a head of a human character;

FIG. 3A is an illustration of a mesh including various pieces of associated information;

FIG. 3B is an illustration of a mesh in various embodiments with which information associated with the mesh of FIG. 3A may be shared;

FIG. 4 is a simplified flowchart of a method in various embodiments for shape blending;

FIG. 5 is a block diagram of a collection of meshes in one embodiment;

FIG. 6 is a block diagram illustrating blending of topology information and geometry information in one embodiment;

FIGS. 7A, 7B, and 7C illustrate a collection of meshes and a resultant blend in one embodiment; and

FIG. 8 is a block diagram of a computer system or information processing device that may be used to implement or practice various embodiments of an invention whose teachings may be presented herein.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Techniques and tools can be implemented that assist in the production of computer animation and computer graphics imagery. A mesh can be the structure that gives shape to a model. The mesh of a model may include, in addition to information specifying vertices and edges, various additional pieces of information. In various embodiments, point weight groups, shader variables, articulation controls, hair variables and styles, paint data, or the like, can be shared between meshes having different topologies and geometries. Information associated with a plurality of meshes can be blended for sharing with or transferring to the mesh of another character, even from characters with completely different topologies.

FIG. 1 is a simplified block diagram of system 100 for creating computer animations and computer graphics imagery that may implement or incorporate various embodiments of an invention whose teachings may be presented herein. In this example, system 100 includes design computer 110, object library 120, object modeler 130, object simulator 140, and object renderer 150.

Design computer 110 can be any PC, laptop, workstation, mainframe, cluster, or the like. Object library 120 can be any database configured to store information related to objects that may be designed, posed, animated, simulated, rendered, or the like.

Object modeler 130 can be any hardware and/or software configured to model objects. Object modeler 130 may generate 2-D and 3-D object data to be stored in object library 120. Object simulator 140 can be any hardware and/or software configured to simulate objects. Object simulator 140 may generate simulation data using physically-based numerical techniques. Object renderer 150 can be any hardware and/or software configured to render objects. For example, object renderer 150 may generate still images, animations, motion picture sequences, or the like of objects stored in object library 120.

FIG. 2 is an illustration of mesh 200 for a head of a human character model in one embodiment. Mesh 200 can be created or modeled as a collection of faces (e.g., triangles, quadrilaterals, or other polygons), formed by interconnecting a collection of vertices. In this example, a collection of polygons interconnect at vertex 210. Polygons may interconnect at vertex 210 to share an edge (e.g., edge 220). Any number of polygons and vertices may be used to form mesh 200. The number of polygons may be dependent on user preference, the desired topology, geometry, realism, detail, or the like.
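By way of a hedged illustration, the following Python sketch shows one simple way a polygon mesh like mesh 200 might be represented: a list of vertex positions and a list of faces given as vertex-index tuples, from which shared edges such as edge 220 can be recovered. The layout and names are assumptions for exposition, not the representation used by any particular system.

    from collections import defaultdict

    # Vertex positions (x, y, z) and faces as tuples of vertex indices.
    vertices = [
        (0.0, 0.0, 0.0),  # 0
        (1.0, 0.0, 0.0),  # 1
        (1.0, 1.0, 0.0),  # 2
        (0.0, 1.0, 0.0),  # 3
        (2.0, 0.5, 0.0),  # 4
    ]
    faces = [(0, 1, 2, 3), (1, 4, 2)]  # a quad and a triangle

    def shared_edges(faces):
        # Map each undirected edge to the faces that use it; an edge
        # listed by two faces is shared, like edge 220 in FIG. 2.
        edge_faces = defaultdict(list)
        for f, face in enumerate(faces):
            for i in range(len(face)):
                a, b = face[i], face[(i + 1) % len(face)]
                edge_faces[tuple(sorted((a, b)))].append(f)
        return edge_faces

    print(shared_edges(faces)[(1, 2)])  # -> [0, 1]: both polygons share it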

Motion of a model associated with mesh 200 may be realized by controlling mesh 200, for example by controlling vertices 230, 240, and 250. Polygons and vertices of mesh 200 may be individually animated by moving their location in space (x, y, z) for each displayed frame of a computer animation. Polygons and vertices of mesh 200 may also move together as a group, maintaining constant relative position. Thus, for example, by raising vertices of mesh 200 by appropriate amounts at the corners of lips on the head of the human character, a smiling expression can be formed. Similarly, vertices of mesh 200 located at or near features or other prominent aspects of the model created by mesh 200, such as eyebrows, cheeks, forehead, etc. may be moved to deform the head of the human character to form a variety of expressions.
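A minimal sketch of that kind of group control, assuming per-vertex positions in a NumPy array and a hypothetical set of lip-corner vertex indices: the group is translated by a common offset so its members keep their relative positions.

    import numpy as np

    def move_group(points, indices, offset):
        # Translate a group of vertices together, preserving their
        # relative positions (e.g., raising the corners of the lips).
        moved = points.copy()
        moved[indices] += offset
        return moved

    head = np.zeros((100, 3))     # placeholder positions for the head mesh
    lip_corners = [30, 40]        # hypothetical vertex indices
    smiling = move_group(head, lip_corners, np.array([0.0, 0.2, 0.0]))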

In addition to controlling character deformations, information can be “attached to” mesh 200 to provide other functional and/or decorative purposes. For example, mesh 200 may be connected to skeletons, character rigging, or other animation controls and avars used to animate, manipulate, or deform the model via mesh 200. Further, fields of data and/or variables specifying color, shading, paint, texture, etc. can be located at certain vertices or defined over surfaces of mesh 200. As discussed above, constructing mesh 200 and placing all of this information on mesh 200 can be a time consuming process. This process may limit how many characters or other objects may be created, the topologies and geometries of those models, and what changes can be made during various stages in the production of animations, such as feature-length films.

FIG. 3A is an illustration of mesh 310 including various pieces of associated information. Mesh 310 can include scalar field 320, animation controls 330, topology/geometry data 340, and painter data 350. Scalar field 320 may include a distribution of values or variables over a portion of mesh 310. The values or variables associated with scalar field 320 may include shader variables, point weight groups, the location of hair/fur objects, or the like. Topology/geometry data 340 may include information that defines or describes a locality in terms of its layout, structure, or level of detail. Painter data 350 may include information, such as coloring and textures, placed by an animator or designer at a specific location on mesh 310.
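A scalar field of the kind described can be pictured as values attached to a subset of vertices. The sketch below, with an invented field name and vertex indices, shows one plausible encoding and a lookup that defaults to zero outside the field's support.

    # Illustrative only: a named scalar field over part of a mesh,
    # such as a shader variable or a point weight group.
    scalar_field = {
        "name": "hair_density",                     # hypothetical field
        "values": {12: 0.80, 13: 0.75, 14: 0.90},   # vertex index -> value
    }

    def sample(field, vertex_index, default=0.0):
        # Read the field at a vertex; vertices outside the field's
        # support fall back to the default value.
        return field["values"].get(vertex_index, default)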

In various embodiments, new models can be created and existing models can be more readily updated using techniques of this disclosure that allow animators to overcome some of the timing constraints involved in creating models. Additionally, the time and effort put into designing one model can be preserved allowing the prior work and effort performed by the animator to be shared with or copied to another model. In some embodiments, a correspondence can be created that allows information present at or on a mesh to be shared with another mesh. The correspondence can reduce the time required to create new models, or to update existing models at later stages of the production process. Thus, animation controls, rigging, shader and paint data, etc. can be authored once on a character, and shared or transferred to a different version of the same character or to another character of completely different topology and geometry.

In the example of FIG. 3A, mesh 310 may represent an initial or preliminary version of a character. For example, mesh 310 may include a number of polygons that provide a character with just enough detail with which an animator, designer, or other graphics artist may work. The number of polygons may be relatively small compared to the number of polygons for a final or production version of the character having lifelike or the final desired detail and/or realism. The relatively small size of mesh 310 may allow the character associated with mesh 310 to be quickly posed, animated, rigged, painted, or rendered in real-time, allowing an animator to see quick results early in the production process.

Referring to FIG. 3B, mesh 360 may represent a production or final version of the character. Mesh 360 may include a relatively higher or larger number of polygons with respect to initial or preliminary versions of the character to provide more realistic detail in each rendered frame. In this example, mesh 360 can include scalar field 370. Scalar field 370 may be identical to, similar to, or otherwise include some relationship with scalar field 320. For example, both may represent how the head of the character is to be shaded or how hair is to be placed.

In various embodiments, one or more correspondences may be created that allow information associated with mesh 310 to be readily shared with or transferred to mesh 360. Scalar field 320, animation controls 330, topology/geometry data 340, and/or painter data 350 can be “pushed” through a correspondence between mesh 310 and mesh 360. For example, scalar field 320 can be transferred to mesh 360 to create scalar field 370. Thus, once correspondences are created between meshes, any information at or on one mesh may be shared with another mesh. This can allow sharing of information even if one mesh includes differing topologies and geometries from other meshes.

In further embodiments, correspondences may be created between pairs of meshes in a collection of meshes. Information associated with mesh 310 can be blended with information associated with mesh 360. For example, scalar field 320, animation controls 330, topology/geometry data 340, and/or painter data 350 can be “pushed” through the correspondence between mesh 310 and mesh 360 and blended or otherwise combined to create new data. The new data may be pushed back through the correspondence and used to update an existing mesh or pushed through another correspondence to create a variety of new characters. This can allow the combination and blending of information even if one mesh includes differing topologies and geometries from other meshes in the collection.

FIG. 4 is a simplified flowchart of method 400 in various embodiments for shape blending. The processing depicted in FIG. 4 may be performed by software modules (e.g., instructions or code) executed by a processor of a computer system, by hardware modules of an electronic device, or combinations thereof. FIG. 4 begins in step 410.

In step 420, a collection of meshes is received. The collection may include one or more meshes or references to a set of meshes. The collection may include meshes for models having identical, similar, or different topologies, geometries, or the like.

In step 430, correspondences between pairs of meshes are generated. Each correspondence between a pair of meshes can include functions, relationships, correlations, etc. between one or more points associated with a first mesh and one or more points associated with a second mesh. The correspondence may include a mapping from every location on or within a space near the first mesh to a unique location on or near the second mesh. The correspondence may map one or more points, curves, surfaces, regions, objects, or the like, associated with the first object to one or more points, curves, surfaces, regions, objects, or the like associated with the second object. The correspondence may include a surface correspondence and/or a volume correspondence.

In various embodiments, a parameterization is built for the pairs of meshes over a common domain. This common parameter domain can then be used to build a global and continuous correspondence between all points of the source and destination surfaces. The basic framework of the parameterization may rely on user-supplied points, user-supplied curves, inferred discontinuities, or the like. In some embodiments, the parameterization may include a set of feature curves defining a feature curve network.
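To make the mapping concrete, here is a minimal Python sketch of a pairwise correspondence interface. It assumes, purely for illustration, that the feature-curve-network construction has already aligned corresponding regions in the common domain, so the map reduces to a face-level lookup plus coordinates within the face; a real construction would interpolate across the parameterization rather than consult a table.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SurfacePoint:
        # A location on a mesh: a face index plus coordinates in the face.
        face: int
        u: float
        v: float

    class Correspondence:
        # Sketch of a pairwise correspondence: maps each source surface
        # point to a unique destination surface point. The lookup table
        # stands in for the common-domain parameterization described above.
        def __init__(self, face_map):
            self.face_map = face_map  # source face -> destination face

        def push(self, p):
            # "Push" a source point through to the destination surface.
            return SurfacePoint(self.face_map[p.face], p.u, p.v)

    c = Correspondence({0: 7, 1: 3})
    print(c.push(SurfacePoint(face=0, u=0.25, v=0.5)))  # lands on face 7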

FIG. 5 is a block diagram of collection 500 of meshes in one embodiment. In this example, collection 500 can include meshes 510, 520, and 530. Meshes 510 and 520 may have an identical or substantially similar topology (as indicated by the common rectangular shape). Mesh 530 may have a different topology than meshes 510 and 520 (as indicated by a circular shape).

A correspondence is generated for each pair in the collection. For example, correspondence 540 may be created between meshes 510 and 520, correspondence 550 may be created between meshes 510 and 530, and correspondence 560 may be created between meshes 520 and 530. The correspondences may be created using feature curve networks, in which one or more feature curves may be user authored or automatically determined in response to parameterization information associated with a mesh. The correspondences may include one or more surface correspondences and/or one or more volume correspondences.

Returning to FIG. 4, in step 440, information associated with a plurality of meshes is combined based on the correspondences. For example, information associated with meshes 510 and 520 in collection 500 may be combined based on correspondence 540. In various embodiments, information of type A from mesh 510 may be combined with information of type A from mesh 520 to create blended information of type A. The information from mesh 510 may be summed, averaged, or otherwise procedurally processed with the information from mesh 520 to generate the blended information of type A. In further embodiments, information of type A from mesh 510 may be combined with information of type B from mesh 520 to create blended information of a set of types A and B.
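As a hedged sketch of step 440, the snippet below blends two like-typed per-vertex fields that are assumed to have already been pushed through their correspondences onto a shared set of output vertices; the weighting is one of the procedural combinations the text mentions.

    import numpy as np

    def blend_fields(field_a, field_b, weight=0.5):
        # Weighted combination of like-typed information (type A with
        # type A) resampled onto common vertices; weight selects how
        # much of field_b appears in the result.
        return (1.0 - weight) * field_a + weight * field_b

    a = np.array([0.2, 0.4, 0.6])   # e.g., from mesh 510 via correspondence 540
    b = np.array([1.0, 0.0, 1.0])   # e.g., from mesh 520
    print(blend_fields(a, b, 0.5))  # -> [0.6, 0.2, 0.8]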

Accordingly, correspondences may be created between pairs of meshes in a collection of meshes. Information associated with a plurality of meshes can be “pushed” through the correspondences and blended or otherwise combined to create combinations of data that reflect new topologies, geometries, scalar fields, hair styles, or the like that may be transferred to a mesh of new or existing models. Thus, information can be shared, combined, and blended between meshes that may include differing topologies and geometries from other meshes in a collection. FIG. 4 ends in step 450.

FIG. 6 is a block diagram illustrating blending of topology information and geometry information in one embodiment. In this example, topology information 610 from mesh 510 of FIG. 5 is pushed through correspondence 540 with mesh 520. Geometry information 620 from mesh 530 is pushed through correspondence 560 with mesh 520.

Blending function 630 receives topology information 610 and geometry information 620 for application to mesh 520. Since correspondences 540 and 560 provide full correspondences between all points of meshes 510 and 520, and between meshes 530 and 520, respectively, blending function 630 can apply blended or combined information to corresponding points on mesh 520. Blending function 630 may include one or more values, parameters, attributes, or the like for controlling the weighting, scaling, or transformation of the blending or transfer of common types or different types of information from other meshes.
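The following sketch caricatures blending function 630 under a strong simplifying assumption: correspondences 540 and 560 are reduced to plain vertex-level lookup tables, so connectivity can be relabeled from one source while positions are pulled from another. A real blending function would operate on the full point-to-point correspondences rather than tables.

    def blend_onto_target(source_faces, src_to_tgt, geo_positions, tgt_to_geo):
        # Topology: faces from mesh 510, re-indexed through
        # correspondence 540 into the target's vertex space.
        faces = [tuple(src_to_tgt[v] for v in face) for face in source_faces]
        # Geometry: each target vertex pulls its position from mesh 530
        # through correspondence 560.
        positions = {v: geo_positions[s] for v, s in tgt_to_geo.items()}
        return faces, positions

    # Example with toy tables:
    faces, positions = blend_onto_target(
        source_faces=[(0, 1, 2)],
        src_to_tgt={0: 5, 1: 6, 2: 7},
        geo_positions={9: (0.0, 0.0, 0.0), 10: (1.0, 0.0, 0.0), 11: (0.0, 1.0, 0.0)},
        tgt_to_geo={5: 9, 6: 10, 7: 11},
    )
    # faces -> [(5, 6, 7)]; positions keyed by target vertex 5, 6, 7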

FIGS. 7A, 7B, and 7C illustrate a collection of meshes and a resultant blend in one embodiment. Referring to FIG. 7A, human character 705 can be represented using mesh 710. Mesh 710 may include a first topology and provide the geometry of character 705. For example, character 705 may appear to be tall and thin. Mesh 710 may include a feature curve network 715. Feature curve network 715 may include a set of feature curves (e.g., black lines with in-line arrows) that partition mesh 710 into a collection of feature regions.

Referring to FIG. 7B, human character 720 can be represented using mesh 725. Mesh 725 may include a second topology (i.e., a topology different from the first topology of character 705) and provide the geometry of character 720. For example, character 720 may appear to be stocky and over-weight. Mesh 725 may include a feature curve network 730. Feature curve network 730 may include a set of feature curves that partition mesh 725 into a collection of feature regions.

In FIG. 7C, human character 740 may be created using a blend of information from characters 705 and 720. Character 740 may be represented by mesh 745. In one example, a correspondence may be generated between mesh 710 and mesh 745 using feature curve network 715 and a corresponding feature curve network placed on mesh 745. Another correspondence may be generated between mesh 725 and mesh 745 using feature curve network 730 and a corresponding feature curve network placed on mesh 745. The same feature curve network placed on mesh 745 may be used to create both correspondences. Alternatively, different feature curve networks may be used.

Using one or more correspondences between mesh 710 and mesh 745, the first topology of character 705 may be transferred to mesh 745 of character 740. The first topology information of character 705 may be blended with geometry information transferred from character 720 using one or more correspondences to create character 740. For example, a user or animator may use a correspondence to blend the first topology of character 705 with 60% of the geometry of character 720 to create character 740.
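Numerically, that 60% blend is a per-vertex weighted average once both geometries have been resampled onto mesh 745 through their correspondences, as in this small illustration (the coordinates are invented for the example):

    import numpy as np

    p_tall_thin = np.array([0.0, 1.8, 0.0])  # from mesh 710 via its correspondence
    p_stocky    = np.array([0.3, 1.5, 0.1])  # from mesh 725 via its correspondence

    # "60% of the geometry of character 720":
    p_blend = 0.4 * p_tall_thin + 0.6 * p_stocky  # -> [0.18, 1.62, 0.06]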

In various embodiments, accordingly, information from a plurality of meshes in a collection may be blended or combined using correspondences between pairs of the meshes. The combined information can be used to create combinations of data that reflect new topologies, geometries, scalar fields, hair styles, or the like that may be transferred to a mesh of new or existing models. Thus, information can be shared, combined, and blended between meshes that may include differing topologies and geometries from other meshes in a collection.

FIG. 8 is a block diagram of computer system 800 that may be used to implement or practice various embodiments of an invention whose teachings may be presented herein. FIG. 8 is merely illustrative of a general-purpose computer system or specific information processing device for an embodiment incorporating an invention whose teachings may be presented herein and does not limit the scope of the invention as recited in the claims. One of ordinary skill in the art would recognize other variations, modifications, and alternatives.

In one embodiment, computer system 800 can include monitor 810, computer 820, keyboard 830, user input device 840, computer interfaces 850, or the like. Monitor 810 may typically include familiar display devices, such as a television monitor, a cathode ray tube (CRT), a liquid crystal display (LCD), or the like. Monitor 810 may provide an interface to user input device 840, such as incorporating touch screen technologies.

Computer 820 may typically include familiar computer components, such as processor 860 and one or more memories or storage devices, such as random access memory (RAM) 870, one or more disk drives 880, graphics processing unit (GPU) 885, or the like. Computer 820 may include system bus 890 interconnecting the above components and providing functionality, such as inter-device communication.

In further embodiments, computer 820 may include one or more microprocessors (e.g., single core and multi-core) or micro-controllers, such as PENTIUM, ITANIUM, or CORE 2 processors from Intel of Santa Clara, Calif. and ATHLON, ATHLON XP, and OPTERON processors from Advanced Micro Devices of Sunnyvale, Calif. Further, computer 820 may include one or more hypervisors or operating systems, such as WINDOWS, WINDOWS NT, WINDOWS XP, VISTA, or the like from Microsoft of Redmond, Wash., SOLARIS from Sun Microsystems, LINUX, UNIX, and other UNIX-based operating systems.

In various embodiments, user input device 840 may typically be embodied as a computer mouse, a trackball, a track pad, a joystick, a wireless remote, a drawing tablet, a voice command system, an eye tracking system, or the like. User input device 840 may allow a user of computer system 800 to select objects, icons, text, user interface widgets, or other user interface elements that appear on monitor 810 via a command, such as a click of a button or the like.

In some embodiments, computer interfaces 850 may typically include a communications interface, an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire interface, USB interface, or the like. For example, computer interfaces 850 may be coupled to a computer network, to a FireWire bus, a USB hub, or the like. In other embodiments, computer interfaces 850 may be physically integrated as hardware on the motherboard of computer 820, may be implemented as a software program, such as soft DSL or the like, or may be implemented as a combination thereof.

In various embodiments, computer system 800 may also include software that enables communications over a network, such as the Internet, using one or more communications protocols, such as the HTTP, TCP/IP, RTP/RTSP protocols, or the like. In some embodiments, other communications software and/or transfer protocols may also be used, for example IPX, UDP or the like, for communicating with hosts over the network or with a device directly connected to computer system 800.

RAM 870 and disk drive 880 are examples of machine-readable articles or computer-readable media configured to store information, such as computer programs, executable computer code, human-readable source code, shader code, rendering engines, or the like, and data, such as image files, models including geometrical descriptions of objects, ordered geometric descriptions of objects, procedural descriptions of models, scene descriptor files, or the like. Other types of computer-readable storage media or tangible machine-accessible media include floppy disks, removable hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, or the like.

In some embodiments, GPU 885 may include any conventional graphics processing unit. GPU 885 may include one or more vector or parallel processing units that may be user programmable. Such GPUs may be commercially available from NVIDIA, ATI, and other vendors. In this example, GPU 885 can include one or more graphics processors 893, a number of memories and/or registers 895, and a number of frame buffers 897.

As suggested, FIG. 8 is merely representative of a general-purpose computer system or specific data processing device capable of implementing or incorporating various embodiments of an invention presented within this disclosure. Many other hardware and/or software configurations may be apparent to the skilled artisan which are suitable for use in implementing an invention presented within this disclosure or with various embodiments of an invention presented within this disclosure. For example, a computer system or data processing device may include desktop, portable, rack-mounted, or tablet configurations. Additionally, a computer system or information processing device may include a series of networked computers or clusters/grids of parallel processing devices. In still other embodiments, a computer system or information processing device may include techniques described above implemented upon a chip or an auxiliary processing board.

Various embodiments of any of one or more inventions whose teachings may be presented within this disclosure can be implemented in the form of logic in software, firmware, hardware, or a combination thereof. The logic may be stored in or on a machine-accessible memory, a machine-readable article, a tangible computer-readable medium, a computer-readable storage medium, or other computer/machine-readable media as a set of instructions adapted to direct a central processing unit (CPU or processor) of a logic machine to perform a set of steps that may be disclosed in various embodiments of an invention presented within this disclosure. The logic may form part of a software program or computer program product as code modules become operational with a processor of a computer system or an information-processing device when executed to perform a method or process in various embodiments of an invention presented within this disclosure. Based on this disclosure and the teachings provided herein, a person of ordinary skill in the art will appreciate other ways, variations, modifications, alternatives, and/or methods for implementing in software, firmware, hardware, or combinations thereof any of the disclosed operations or functionalities of various embodiments of one or more of the presented inventions.

The disclosed examples, implementations, and various embodiments of any one of those inventions whose teachings may be presented within this disclosure are merely illustrative to convey with reasonable clarity to those skilled in the art the teachings of this disclosure. As these implementations and embodiments may be described with reference to exemplary illustrations or specific figures, various modifications or adaptations of the methods and/or specific structures described can become apparent to those skilled in the art. All such modifications, adaptations, or variations that rely upon this disclosure and these teachings found herein, and through which the teachings have advanced the art, are to be considered within the scope of the one or more inventions whose teachings may be presented within this disclosure. Hence, the present descriptions and drawings should not be considered in a limiting sense, as it is understood that an invention presented within a disclosure is in no way limited to those embodiments specifically illustrated.

Accordingly, the above description and any accompanying drawings, illustrations, and figures are intended to be illustrative but not restrictive. The scope of any invention presented within this disclosure should, therefore, be determined not with simple reference to the above description and those embodiments shown in the figures, but instead should be determined with reference to the pending claims along with their full scope or equivalents.

Claims

1. A computer-implemented method for generating correspondences for transferring information between collections of objects, the method comprising:

receiving a collection of meshes, the collection of meshes having at least 2 topologies;
generating a correspondence between all pairs in the collection of meshes; and
combining information associated with a plurality of meshes in the collection of meshes based on the correspondence.

2. The method of claim 1 wherein combining the information associated with the plurality of meshes in the collection of meshes comprises combining shape associated with two or more meshes in the collection of meshes.

3. The method of claim 1 wherein combining the information associated with the plurality of meshes in the collection of meshes comprises combining geometry associated with two or more meshes in the collection of meshes.

4. The method of claim 1 further comprising:

generating an output mesh based on the combined information.

5. The method of claim 1 wherein generating the correspondence between all pairs in the collection of meshes comprises generating the correspondence between each mesh in the collection of meshes and an output mesh in the collection of meshes.

6. The method of claim 1 wherein generating the correspondence between all pairs in the collection of meshes comprises generating the correspondence based on one or more harmonic functions.

7. The method of claim 1 wherein generating the correspondence between all pairs in the collection of meshes comprises generating the correspondence based on a set of feature curve networks associated with the meshes.

8. The method of claim 7 wherein the set of feature curve networks comprise at least one feature curve that is defined by at least one point that lies in the interior of a face associated with one of the meshes in the collection of meshes.

9. A computer readable medium configured to store a set of code modules which when executed by a processor of a computer system become operational with the processor generating correspondences for transferring information between collections of objects, the computer readable medium comprising:

code for receiving a collection of meshes, the collection of meshes having at least 2 topologies;
code for generating a correspondence between all pairs in the collection of meshes; and
code for combining information associated with a plurality of meshes in the collection of meshes based on the correspondence.

10. The computer readable medium of claim 9 wherein the code for combining the information associated with the plurality of meshes in the collection of meshes comprises code for combining shape associated with two or more meshes in the collection of meshes.

11. The computer readable medium of claim 9 wherein the code for combining the information associated with the plurality of meshes in the collection of meshes comprises code for combining geometry associated with two or more meshes in the collection of meshes.

12. The computer readable medium of claim 9 further comprising:

code for generating an output mesh based on the combined information.

13. The computer readable medium of claim 9 wherein the code for generating the correspondence between all pairs in the collection of meshes comprises code for generating the correspondence between each mesh in the collection of meshes and an output mesh in the collection of meshes.

14. The computer readable medium of claim 9 wherein the code for generating the correspondence between all pairs in the collection of meshes comprises code for generating the correspondence based on one or more harmonic functions.

15. The computer readable medium of claim 9 wherein the code for generating the correspondence between all pairs in the collection of meshes comprises code for generating the correspondence based on a set of feature curve networks associated with the meshes.

16. The computer readable medium of claim 15 further comprising:

code for generating a feature curve network in the set of feature curve networks in response to at least one feature curve that is defined by at least one point that lies in the interior of a face associated with one of the meshes in the collection of meshes.

17. A system for generating correspondences for transferring information between collections of objects, the system comprising:

a processor; and
a memory coupled to the processor, the memory configured to store a set of instructions which when executed by the processor become operational with the processor to: receive a collection of meshes, the collection of meshes having at least two topologies; generate a correspondence between each mesh in the collection of meshes and an output mesh; combine information associated with a plurality of meshes in the collection of meshes based on the correspondence; and generate the output mesh based on the combined information.

18. The system of claim 17 wherein the instructions become operational with the processor to combine shape associated with two or more meshes in the collection of meshes to generate the output mesh.

19. The system of claim 17 wherein the instructions become operational with the processor to combine geometry associated with two or more meshes in the collection of meshes to generate the output mesh.

20. The system of claim 17 wherein the instructions become operational with the processor to generate the correspondence based on one or more harmonic functions.

21. The method of claim 1 wherein generating the correspondence between all pairs in the collection of meshes comprises generating the correspondence based on a set of feature curve networks associated with the meshes.

22. The method of claim 7 wherein the set of feature curve networks comprise at least one feature curve that is defined by at least one point that lies in the interior of a face associated with one of the meshes in the collection of meshes.

23. A method for detailing differences between objects, the method comprising:

receiving a collection of meshes, the collection of meshes having at least 2 topologies;
generating a correspondence between all pairs in the collection of meshes;
combining information associated with a plurality of meshes in the collection of meshes based on the correspondence;
determining difference information based on the correspondence; and
storing the difference information.

24. The method of claim 23 wherein storing the difference information comprises storing the difference information as a bump map.

25. The method of claim 23 wherein storing the difference information comprises storing the difference information as a set of wavelet coefficients.

Patent History
Publication number: 20090213138
Type: Application
Filed: Aug 28, 2008
Publication Date: Aug 27, 2009
Applicant: Pixar (Emeryville, CA)
Inventors: Tony DeRose (San Rafael, CA), Mark Meyer (San Francisco, CA), Sanjay Bakshi (Oakland, CA), Brian Green (Walnut Creek, CA)
Application Number: 12/200,739
Classifications
Current U.S. Class: Graphic Manipulation (object Processing Or Display Attributes) (345/619)
International Classification: G09G 5/00 (20060101);