DATA-DRIVEN REPRESENTATION AND CLUSTERING DISCRETIZATION METHOD AND SYSTEM FOR DESIGN OPTIMIZATION AND/OR PERFORMANCE PREDICTION OF MATERIAL SYSTEMS AND APPLICATIONS OF SAME
A method and system for design optimization and/or performance prediction of a material system includes generating a representation of the material system at a number of scales, the representation at a scale comprising microstructure volume elements (MVE) of building blocks of the material system at said scale; providing inputs to the MVEs; collecting data of response fields of the MVE computed from a material model of the material system over a predefined set of material properties and boundary conditions; applying machine learning to the collected data to generate clusters; computing an interaction tensor of interactions of each cluster with each of the other clusters; and solving a governing partial differential equation using the generated clusters and the computed interactions to result in a response prediction usable in an iterative scheme in a multiscale model for the material system. The performance of each scale can be predicted for design optimization.
This application claims priority to and the benefit of, pursuant to 35 U.S.C. § 119(e), U.S. provisional patent application Ser. No. 62/731,381, filed Sep. 14, 2018, entitled “MULTISCALE MODELING PLATFORM AND APPLICATIONS OF SAME”, by Wing Kam Liu, Jiaying Gao, Cheng Yu and Orion L. Kafka, which is incorporated herein by reference in its entirety.
This application is related to a co-pending U.S. patent application, entitled “INTEGRATED PROCESS-STRUCTURE-PROPERTY MODELING FRAMEWORKS AND METHODS FOR DESIGN OPTIMIZATION AND/OR PERFORMANCE PREDICTION OF MATERIAL SYSTEMS AND APPLICATIONS OF SAME”, by Wing Kam Liu, Jiaying Gao, Cheng Yu, and Orion L. Kafka, with Attorney Docket No. 0116936.152US2, filed on the same day that this application is filed, and with the same assignee as that of this application, which is incorporated herein by reference in its entirety.
FIELD OF THE INVENTION

The invention relates generally to materials, and more particularly, to a specific method and system to aggregate dissimilar material geometry, properties, and interactions to predict combined properties and performance and applications of the same.
BACKGROUND OF THE INVENTION

The background description provided herein is for the purpose of generally presenting the context of the invention. The subject matter discussed in the background of the invention section should not be assumed to be prior art merely as a result of its mention in the background of the invention section. Similarly, a problem mentioned in the background of the invention section or associated with the subject matter of the background of the invention section should not be assumed to have been previously recognized in the prior art. The subject matter in the background of the invention section merely represents different approaches, which in and of themselves may also be inventions. Work of the presently named inventors, to the extent it is described in the background of the invention section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the invention.
Microstructured materials are aggregates of individual components. Although many approximations exist, high-fidelity modeling of a complex material typically involves building a numerical model of its microstructure. In the field of computational mechanics, virtual tests for the microstructure responses are often carried out using high-fidelity numerical methods, such as the Finite Element Method (FEM), Finite Volume Method (FVM), Fast Fourier Transform (FFT), and meshfree methods. This process usually requires building an explicit mesh in software for the microstructure, where the mesh is the virtual representation of the microstructure. This is typically a spatial decomposition where the elements that make up the discrete units of the decomposition follow specific rules (e.g., non-negative Jacobian for FEM, fixed-spacing grid for FFT). Although there are differences, these spatial discretizations are referred to simply as “meshes.” In order to capture all the details of the microstructure, the mesh usually requires very fine resolution. This discretization of the material based on space is similar to the concept of a Riemann integral, as illustrated in
Once the mesh for a microstructure is constructed, loads can be applied to the mesh to perform virtual testing. Various information can be obtained from this virtual testing, such as elastic properties of the material, effective strength of the microstructure, and local stress distributions to study possible microstructure failure. However, resolving a fine mesh requires significant computational resource, such as High Performance Cluster computing or GPU computing, and a long execution time for the finite element software.
For a Microstructure Volume Element (a representation of the complex material microstructure), or MVE for short, at any given scale, the local and homogenized behavior at the current scale might be computed as a function of the homogenized behavior at subscales using traditional numerical methods, such as finite element analysis. However, the computation time is proportional to the total number of elements across all scales, and each added scale compounds the difficulty. Therefore, the computation time can be high for even a simple 2-scale model if the MVE contains many elements: a typical MVE might contain O(1e6) elements, and an MVE would have to be solved for each element at the larger scale. When done with finite elements for two scales, this is sometimes called an “FE2” analysis. Because of this cost, such schemes have seen little practical use.
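By way of a non-limiting, back-of-the-envelope illustration (both element counts below are assumed for the example, not taken from any particular model), the cost of such an FE2 scheme grows with the product of the macroscale and subscale element counts:

```python
# Hypothetical illustration of FE2 cost scaling; both counts are assumed.
n_macro = 10_000    # elements in the macroscale mesh (assumed)
n_mve = 1_000_000   # elements in one MVE, i.e., O(1e6) (assumed)

# In an FE2 scheme, a full MVE must be solved for each macroscale element,
# so the element-level work per load increment multiplies:
total_elements = n_macro * n_mve
print(f"{total_elements:.1e} element-level solves per increment")
# → 1.0e+10 element-level solves per increment
```

This four-orders-of-magnitude multiplication over the macroscale mesh alone is what motivates the reduced-order clustering approach described herein.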
Therefore, a heretofore unaddressed need exists in the art to address the aforementioned deficiencies and inadequacies.
SUMMARY OF THE INVENTION

In order to counter the time-consuming microstructure simulation, one of the objectives of this invention is to develop a data-driven domain decomposition approach that is suitable for accelerating the numerical simulation of the microstructure responses.
In one aspect, the invention relates to a method for design optimization and/or performance prediction of a material system. In one embodiment, the method includes generating a representation of the material system at a number of scales, wherein the representation at a scale comprises microstructure volume elements (MVE) that are building blocks of the material system at said scale; collecting data of response fields of the MVE computed from a material model of the material system over a predefined set of material properties and boundary conditions; applying machine learning to the collected data of response fields to generate clusters that minimize a distance between points in a nominal response space within each cluster; computing an interaction tensor of interactions of each cluster with each of the other clusters; manipulating the governing partial differential equation (PDE) using Green's function to form a generalized Lippmann-Schwinger integral equation; and solving the integral equation using the generated clusters and the computed interactions to result in a response prediction that is usable for the design optimization and/or performance prediction of the material system.
In one embodiment, the method further comprises passing the resulted response prediction to a next coarser scale as an overall response of that building block, and iterating the process until a final scale is reached.
In one embodiment, the building blocks are defined by material properties and structural descriptors obtained by modeling or experimental observations and encoded in a domain decomposition of structures for identifying locations and properties of each phase within the building blocks.
In one embodiment, the structural descriptors comprise characteristic length and geometry.
In one embodiment, the boundary conditions are chosen to satisfy the Hill-Mandel condition.
In one embodiment, the collected data of response fields comprise a strain concentration tensor, a deformation concentration tensor, stress tensor (e.g., PK-I stress, Cauchy stress), plastic strain tensor, thermal gradient, or the like.
In one embodiment, the machine learning comprises unsupervised machine learning and/or supervised machine learning.
In one embodiment, the machine learning is performed with a self-organizing mapping (SOM) method, a k-means clustering method, or the like.
In one embodiment, the clusters are generated by marking all material points that have the same response field within the representation of the material system with a unique ID and grouping material points with the same ID.
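As a non-limiting sketch of this clustering step (the synthetic data and the one-dimensional k-means below are illustrative assumptions, not the patented implementation), material points whose nominal responses are similar receive the same cluster ID and are grouped accordingly:

```python
import numpy as np

# Minimal sketch (assumptions, not the claimed implementation): group
# material points by k-means on a scalar "nominal response" per point.
rng = np.random.default_rng(0)
# 200 synthetic material points whose responses fall near two levels,
# as if the MVE contained two phases with distinct response fields.
responses = np.concatenate([rng.normal(1.0, 0.05, 100),
                            rng.normal(3.0, 0.05, 100)])

def kmeans_1d(x, k=2, iters=20):
    """Plain 1-D k-means: assign each point the ID of its nearest center."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        ids = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(ids == c):
                centers[c] = x[ids == c].mean()
    return ids, centers

cluster_id, centers = kmeans_1d(responses)
# Each material point now carries a cluster ID; all points sharing an ID
# form one cluster of the reduced-order MVE.
print(np.bincount(cluster_id).tolist())  # → [100, 100]
```

In practice the clustered quantity would be a tensor-valued response field (e.g., the strain concentration tensor) rather than a scalar, and a method such as SOM or multi-dimensional k-means would be applied.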
In one embodiment, the generated clusters are a reduced representation of the material system, which reduces the number of degrees of freedom required to represent the material system.
In one embodiment, the generated clusters are a reduced order MVE of the material system.
In one embodiment, the computed interaction tensor is for all pairs of the clusters.
In one embodiment, said computing of the interaction tensor is performed with fast Fourier transform (FFT), numerical integration, or finite element method (FEM).
In one embodiment, the PDE is reformulated as a Lippmann-Schwinger equation. In one embodiment, said solving of the PDE is performed with arbitrary boundary conditions and/or material properties.
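The cluster-based solution of such an equation can be sketched, in a purely schematic scalar analog (the moduli and interaction coefficients below are assumed for illustration; a real implementation uses tensor quantities and interaction terms derived from the Green's function), as a fixed-point iteration over cluster strains:

```python
import numpy as np

# Schematic scalar analog of a clusterwise Lippmann-Schwinger fixed point:
#   eps_I = eps_macro - sum_J D[I, J] * (C[J] - C0) * eps_J
# All numbers below are assumed for illustration only.
C = np.array([1.0, 5.0])          # per-cluster moduli (assumed)
C0 = 3.0                          # reference-medium modulus (assumed)
D = np.array([[0.10, 0.05],       # assumed cluster-cluster interaction matrix
              [0.05, 0.10]])
eps_macro = 0.01                  # applied macroscopic strain

eps = np.full(2, eps_macro)       # initial guess: uniform strain
for _ in range(100):
    eps_new = eps_macro - D @ ((C - C0) * eps)
    if np.max(np.abs(eps_new - eps)) < 1e-12:   # converged
        eps = eps_new
        break
    eps = eps_new
```

Because the iteration runs over a handful of clusters rather than millions of elements, the online solve is inexpensive once the interaction matrix has been precomputed offline.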
In one embodiment, the collected data of response fields, the generated clusters, and/or the computed interaction tensor are saved in one or more material system databases.
In one embodiment, said solving of the PDE is performed in real-time by accessing the one or more material system databases for the generated clusters and the computed interactions.
In another aspect, the invention relates to a method for design optimization and/or performance prediction of a material system. In one embodiment, the method includes performing an offline data compression, wherein original MVE of building blocks of the material system are compressed into clusters, and an interaction tensor of interactions of each cluster with each of the other clusters is computed; and solving a governing PDE using the clusters and the computed interactions to result in a response prediction that is usable for the design optimization and/or performance prediction of the material system.
In one embodiment, the method further includes passing the resulting response prediction to a next coarser scale as an overall response of that building block, and iterating the process until a final scale is reached.
In one embodiment, the building blocks are defined by material properties and structural descriptors obtained by modeling or experimental observations and encoded in a domain decomposition of structures for identifying locations and properties of each phase within the building blocks.
In one embodiment, the structural descriptors comprise characteristic length and geometry.
In one embodiment, when more than one scale is involved with the reduced order response prediction, the method is named Multiresolution Clustering Analysis (MCA).
In one embodiment, the boundary conditions are chosen to satisfy the Hill-Mandel condition.
In one embodiment, said performing the offline data compression comprises collecting data of response fields of the MVE computed from a material model of the material system over a predefined set of material properties and boundary conditions; applying machine learning to the collected data of response fields to generate clusters that minimize a distance between points in a nominal response space within each cluster; and computing the interaction tensor for all pairs of the clusters.
In one embodiment, the collected data of response fields comprise a strain concentration tensor, a deformation concentration tensor, stress tensor (e.g., PK-I stress, Cauchy stress), plastic strain tensor, thermal gradient, or the like.
In one embodiment, the machine learning comprises unsupervised machine learning and/or supervised machine learning.
In one embodiment, the machine learning is performed with an SOM method, a k-means clustering method, or the like.
In one embodiment, the clusters are generated by marking all material points having the same response field within the representation of the material system with a unique ID and grouping material points with the same ID.
In one embodiment, the clusters are a reduced representation of the material system, which reduces the number of degrees of freedom required to represent the material system.
In one embodiment, the clusters are a reduced order MVE of the material system.
In one embodiment, said computing the interaction tensor is performed with FFT, numerical integration, or FEM.
In one embodiment, the PDE is a Lippmann-Schwinger equation. In one embodiment, said solving the PDE is performed with arbitrary boundary conditions and material properties.
In one embodiment, the collected data of response fields, the generated clusters, and/or the computed interaction tensor are saved in one or more material system databases.
In one embodiment, said solving the PDE is performed by online access to the one or more material system databases for the generated clusters and the computed interactions.
In yet another aspect, the invention relates to a non-transitory tangible computer-readable medium storing instructions which, when executed by one or more processors, cause a system to perform the above-disclosed method for design optimization and/or performance prediction of a material system.
In a further aspect, the invention relates to a computational system for design optimization and/or performance prediction of a material system. In one embodiment, the computational system includes one or more computing devices comprising one or more processors; and a non-transitory tangible computer-readable medium storing instructions which, when executed by the one or more processors, cause the one or more computing devices to perform the above-disclosed method for design optimization and/or performance prediction of a material system.
In one aspect, the invention relates to a material system database usable for conducting efficient and accurate multiscale modeling of a material system. In one embodiment, the material system database includes clusters for a plurality of material systems, each of which groups all material points having a same response field within MVE of a respective material system with a unique ID; interaction tensors, each of which represents interactions of all pairs of the clusters for the respective material system; and response predictions computed based on the clusters and the interaction tensors.
In one embodiment, the clusters are generated by applying machine learning to data of response fields of the MVE computed from a material model of the respective material system over a predefined set of material properties and boundary conditions.
In one embodiment, the interaction tensors are computed with FFT, numerical integration, or FEM.
In one embodiment, the response predictions are obtained by solving a governing PDE using the clusters and the computed interactions. In one embodiment, the response predictions comprise at least effective stiffness, yield strength, thermal conductivity, damage initiation, and a fatigue indicator parameter (FIP).
In one embodiment, the material system database is configured such that some of the response predictions are assigned as a training set for training a different machine learning that connects processes/structures to responses/properties of the material system directly without going through the clustering and interaction computing processes at all; and some or all of the remaining response predictions are assigned as a validation set for validating the efficiency and accuracy of the multiscale modeling of the material system.
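A minimal sketch of such a training/validation partition (the 80/20 split ratio and the record fields are assumptions for illustration, not requirements of the method) is:

```python
# Partition database response predictions into a training set for a
# surrogate model and a held-out validation set. The 80/20 ratio and
# the record structure are illustrative assumptions.
predictions = [{"id": i, "stiffness": 100.0 + i} for i in range(10)]  # dummy records

n_train = int(0.8 * len(predictions))
train_set = predictions[:n_train]       # trains the process-structure-property surrogate
validation_set = predictions[n_train:]  # validates efficiency and accuracy

print(len(train_set), len(validation_set))  # → 8 2
```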
In one embodiment, the material system database is generated with predictive reduced order models. In one embodiment, the predictive reduced order models comprise a self-consistent clustering analysis (SCA) model, a virtual clustering analysis (VCA) model, and/or an FEM clustering analysis (FCA) model.
In one embodiment, the material system database is updatable, editable, accessible, and searchable.
In another aspect, the invention relates to a method of applying the above-disclosed material system database for design optimization and/or performance prediction of a material system. In one embodiment, the method includes training a neural network with data of the material system database; and predicting, using the trained neural network, real-time responses during a loading process, wherein the real-time responses are used for the design optimization and/or performance prediction of a material system.
In one embodiment, the method further includes performing a topology optimization to design a structure with microstructure information.
In one embodiment, the neural network comprises a feed forward neural network (FFNN) and/or a convolutional neural network (CNN).
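A non-limiting sketch of a small FFNN surrogate follows; the architecture, random weights, and input/output meanings are all assumed for illustration (a deployed surrogate would use weights fitted to the database):

```python
import numpy as np

# Tiny feed-forward neural network (FFNN) surrogate sketch: maps a loading
# state to a scalar response prediction. Architecture and weights assumed.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(8, 3)) * 0.1, np.zeros(8)   # 3 load components -> 8 hidden
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)   # 8 hidden -> 1 response

def predict(load):
    h = np.tanh(W1 @ load + b1)   # hidden layer with tanh activation
    return (W2 @ h + b2)[0]       # scalar response prediction

# Evaluate once per load increment; this is the "real-time" online step.
response = predict(np.array([0.01, 0.0, 0.0]))  # e.g., a uniaxial strain state
```

Because a forward pass is a few small matrix products, evaluation is effectively instantaneous compared with solving the reduced-order PDE, let alone a full FE2 analysis.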
In yet another aspect, the invention relates to a non-transitory tangible computer-readable medium storing instructions which, when executed by one or more processors, cause a system to perform the above method for design optimization and/or performance prediction of a material system.
These and other aspects of the invention will become apparent from the following description of the preferred embodiment taken in conjunction with the following drawings, although variations and modifications therein may be effected without departing from the spirit and scope of the novel concepts of the invention.
The following drawings form part of the present specification and are included to further demonstrate certain aspects of the invention. The invention may be better understood by reference to one or more of these drawings in combination with the detailed description of specific embodiments presented herein. The drawings described below are for illustration purposes only. The drawings are not intended to limit the scope of the present teachings in any way.
FIG. shows rubber matrix material hidden according to embodiments of the invention.
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the present invention are shown. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like reference numerals refer to like elements throughout.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the invention, and in the specific context where each term is used. Certain terms that are used to describe the invention are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description of the invention. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting and/or capital letters has no influence on the scope and meaning of a term; the scope and meaning of a term are the same, in the same context, whether or not it is highlighted and/or in capital letters. It will be appreciated that the same thing can be said in more than one way. Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, nor is any special significance to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification, including examples of any terms discussed herein, is illustrative only and in no way limits the scope and meaning of the invention or of any exemplified term. Likewise, the invention is not limited to various embodiments given in this specification.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below can be termed a second element, component, region, layer or section without departing from the teachings of the present invention.
It will be understood that, as used in the description herein and throughout the claims that follow, the meaning of “a”, “an”, and “the” includes plural reference unless the context clearly dictates otherwise. Also, it will be understood that when an element is referred to as being “on,” “attached” to, “connected” to, “coupled” with, “contacting,” etc., another element, it can be directly on, attached to, connected to, coupled with or contacting the other element or intervening elements may also be present. In contrast, when an element is referred to as being, for example, “directly on,” “directly attached” to, “directly connected” to, “directly coupled” with or “directly contacting” another element, there are no intervening elements present. It will also be appreciated by those of skill in the art that references to a structure or feature that is disposed “adjacent” to another feature may have portions that overlap or underlie the adjacent feature.
It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” or “has” and/or “having” when used in this specification specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof.
Furthermore, relative terms, such as “lower” or “bottom” and “upper” or “top,” may be used herein to describe one element's relationship to another element as illustrated in the figures. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation shown in the figures. For example, if the device in one of the figures is turned over, elements described as being on the “lower” side of other elements would then be oriented on the “upper” sides of the other elements. The exemplary term “lower” can, therefore, encompass both an orientation of lower and upper, depending on the particular orientation of the figure. Similarly, if the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. The exemplary terms “below” or “beneath” can, therefore, encompass both an orientation of above and below.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used in this disclosure, “around”, “about”, “approximately” or “substantially” shall generally mean within 20 percent, preferably within 10 percent, and more preferably within 5 percent of a given value or range. Numerical quantities given herein are approximate, meaning that the term “around”, “about”, “approximately” or “substantially” can be inferred if not expressly stated.
As used in this disclosure, the phrase “at least one of A, B, and C” should be construed to mean a logical (A or B or C), using a non-exclusive logical OR. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The methods and systems will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, components, circuits, processes, algorithms, etc. (collectively referred to as “members”). These members may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. By way of example, a member, or any portion of a member, or any combination of members may be implemented as a “processing system” that includes one or more processors. Examples of processors include microprocessors, microcontrollers, graphics processing units (GPUs), central processing units (CPUs), application processors, digital signal processors (DSPs), reduced instruction set computing (RISC) processors, systems on a chip (SoC), baseband processors, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
Accordingly, in one or more example embodiments, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise a random-access memory (RAM), a read-only memory, an electrically erasable programmable read-only memory (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the aforementioned types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
The description below is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. The broad teachings of the invention can be implemented in a variety of forms. Therefore, while this invention includes particular examples, the true scope of the invention should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the invention.
Microstructured materials are aggregates of individual components. In multi-component materials, the components, which are considered as material building blocks, aggregate or self-assemble to form complex structures or conformations at multiple scales. Multiscale modeling methods attempt to faithfully capture the emergent complex behaviors on several length- and time-scales.
One of the objectives of this invention is to provide data-driven representation and clustering discretization methods and systems: a data-driven domain decomposition approach that is suitable for accelerating the numerical simulation of the microstructure responses. The concept here is similar to the Lebesgue integral, illustrated in
In one aspect of the invention, a method for design optimization and/or performance prediction of a material system includes generating a representation of the material system at a number of scales, wherein the representation at a scale comprises microstructure volume elements (MVE) that are building blocks of the material system at said scale; collecting data of response fields of the MVE computed from a material model of the material system over a predefined set of material properties and boundary conditions; applying machine learning to the collected data of response fields to generate clusters that minimize a distance between points in a nominal response space within each cluster; computing an interaction tensor of interactions of each cluster with each of the other clusters; and solving a governing partial differential equation (PDE) using the generated clusters and the computed interactions to result in a response prediction that is usable for the design optimization and/or performance prediction of the material system.
In one embodiment, the method further comprises passing the resulting response prediction to a next coarser scale as an overall response of that building block, and iterating the process until a final scale is reached.
In one embodiment, the building blocks are defined by material properties and structural descriptors obtained by modeling or experimental observations and encoded in a domain decomposition of structures for identifying locations and properties of each phase within the building blocks.
In one embodiment, the structural descriptors comprise characteristic length and geometry.
In one embodiment, the boundary conditions are chosen to satisfy the Hill-Mandel condition.
In one embodiment, the collected data of response fields comprise a strain concentration tensor, a deformation concentration tensor, stress tensor (e.g., PK-I stress, Cauchy stress), plastic strain tensor, thermal gradient, or the like.
In one embodiment, the machine learning comprises unsupervised machine learning and/or supervised machine learning.
In one embodiment, the machine learning is performed with a self-organizing mapping (SOM) method, a k-means clustering method, or the like.
In one embodiment, the clusters are generated by marking all material points that have the same response field within the representation of the material system with a unique ID and grouping material points with the same ID.
In one embodiment, the generated clusters are a reduced representation of the material system, which reduces the number of degrees of freedom required to represent the material system.
In one embodiment, the generated clusters are a reduced order MVE of the material system.
In one embodiment, the computed interaction tensor is for all pairs of the clusters.
In one embodiment, said computing of the interaction tensor is performed with fast Fourier transform (FFT), numerical integration, or finite element method (FEM).
In one embodiment, the PDE is a Lippmann-Schwinger equation. In one embodiment, said solving of the PDE is performed with arbitrary boundary conditions and/or material properties.
In one embodiment, the collected data of response fields, the generated clusters, and/or the computed interaction tensor are saved in one or more material system databases.
In one embodiment, said solving of the PDE is performed in real-time by accessing the one or more material system databases for the generated clusters and the computed interactions.
In another aspect of the invention, a method for design optimization and/or performance prediction of a material system includes performing an offline data compression, wherein original microstructure volume elements (MVE) of building blocks of the material system are compressed into clusters, and an interaction tensor of interactions of each cluster with each of the other clusters is computed; and solving a governing PDE using the clusters and the computed interactions to result in a response prediction that is usable for the design optimization and/or performance prediction of the material system.
In one embodiment, the method further includes passing the resulting response prediction to a next coarser scale as an overall response of that building block, and iterating the process until a final scale is reached.
In one embodiment, the building blocks are defined by material properties and structural descriptors obtained by modeling or experimental observations and encoded in a domain decomposition of structures for identifying locations and properties of each phase within the building blocks.
In one embodiment, the structural descriptors comprise characteristic length and geometry.
In one embodiment, when more than one scale is involved with the reduced order response prediction, the method is named Multiresolution Clustering Analysis (MCA).
In one embodiment, the boundary conditions are chosen to satisfy the Hill-Mandel condition.
In one embodiment, said performing the offline data compression comprises collecting data of response fields of the MVE computed from a material model of the material system over a predefined set of material properties and boundary conditions; applying machine learning to the collected data of response fields to generate clusters that minimize a distance between points in a nominal response space within each cluster; and computing the interaction tensor for all pairs of the clusters.
In one embodiment, the collected data of response fields comprise a strain concentration tensor, a deformation concentration tensor, stress tensor (e.g., PK-I stress, Cauchy stress), plastic strain tensor, thermal gradient, or the like.
In one embodiment, the machine learning comprises unsupervised machine learning and/or supervised machine learning.
In one embodiment, the machine learning is performed with an SOM method, a k-means clustering method, or the like.
In one embodiment, the clusters are generated by marking all material points having the same response field within the representation of the material system with a unique ID and grouping material points with the same ID.
In one embodiment, the clusters are a reduced representation of the material system, which reduces the number of degrees of freedom required to represent the material system.
In one embodiment, the clusters are a reduced order MVE of the material system.
In one embodiment, said computing the interaction tensor is performed with FFT, numerical integration, or FEM.
In one embodiment, the PDE is a Lippmann-Schwinger equation. In one embodiment, said solving the PDE is performed with arbitrary boundary conditions and material properties.
In one embodiment, the collected data of response fields, the generated clusters, and/or the computed interaction tensor are saved in one or more material system databases.
In one embodiment, said solving the PDE is performed with online accessing the one or more material system databases for the generated clusters and the computed interactions.
In yet another aspect, the invention relates to a material system database usable for conducting efficient and accurate multiscale modeling of a material system. In one embodiment, the material system database includes clusters for a plurality of material systems, each of which groups all material points having a same response field within MVE of a respective material system with a unique ID; interaction tensors, each of which represents interactions of all pairs of the clusters for the respective material system; and response predictions computed based on the clusters and the interaction tensors.
In one embodiment, the clusters are generated by applying machine learning to data of response fields of the MVE computed from a material model of the respective material system over a predefined set of material properties and boundary conditions.
In one embodiment, the interaction tensors are computed with FFT, numerical integration, or FEM.
In one embodiment, the response predictions are obtained by solving a governing PDE using the clusters and the computed interactions. In one embodiment, the response predictions comprise at least effective stiffness, yield strength, thermal conductivity, damage initiation, and FIP.
In one embodiment, the material system database is configured such that some of the response predictions are assigned as a training set for training a different machine learning model that connects processes/structures directly to responses/properties of the material system without going through the clustering and interaction computing processes at all; and some or all of the remaining response predictions are assigned as a validation set for validating the efficiency and accuracy of the multiscale modeling of the material system.
In one embodiment, the material system database is generated with predictive reduced order models. In one embodiment, the predictive reduced order models comprise a self-consistent clustering analysis (SCA) model, a virtual clustering analysis (VCA) model, and/or an FEM clustering analysis (FCA) model.
In one embodiment, the material system database is updatable, editable, accessible, and searchable.
In a further aspect, the invention relates to a method of applying the above-disclosed material system database for design optimization and/or performance prediction of a material system. In one embodiment, the method includes training a neural network with data of the material system database; and predicting, using the trained neural network, real-time responses during a loading process, wherein the real-time responses are used for the design optimization and/or performance prediction of the material system.
In one embodiment, the method further includes performing a topology optimization to design a structure with microstructure information.
In one embodiment, the neural network comprises a feed forward neural network (FFNN) and/or a convolutional neural network (CNN).
In one aspect, the invention relates to a non-transitory tangible computer-readable medium storing instructions which, when executed by one or more processors, cause a system to perform the above-disclosed methods for design optimization and/or performance prediction of a material system.
In another aspect, the invention relates to a computational system for design optimization and/or performance prediction of a material system. In one embodiment, the computational system includes one or more computing devices comprising one or more processors; and a non-transitory tangible computer-readable medium storing instructions which, when executed by the one or more processors, cause the one or more computing devices to perform the above-disclosed method for design optimization and/or performance prediction of a material system.
The advantages and specific applications of the invented methods and systems are summarized as follows, while their details are discussed in EXAMPLES 1-9 following this section.
A methodology for constructing microstructure material databases for fast microstructure-derived response prediction has been developed. The method takes materials data and compresses it using unsupervised machine learning to create a “clustering discretization,” in effect a specially designed database suitable for conducting efficient and accurate multiscale modeling of arbitrary material systems. Because the method described here relies on capturing and combining the fundamental building blocks of the material and its response, any complex and/or hierarchical material system can be accurately and efficiently modeled; thus, the method is material agnostic. The method described here is also applicable to the prediction of many physical phenomena that share similar underlying mathematical descriptions. While the examples provided are related to the material behavior of solids subjected to mechanical loads, this should not be thought of as a fundamental restriction; the method could well be applied to predict a range of effective physics, including, but not limited to, electric and magnetic properties. In effect, the method might be used to solve arbitrary computational homogenization problems, or problems involving the solution of partial differential equations.
A multiscale analysis is envisioned to include models that capture material behavior at each relevant length scale, passing information to the next higher length scale, until the final scale is reached. This appears on the far left of
The next step shown in
For an example in composite materials, in the simplest form, the final scale might be a component (scale 0, the largest) that is built with unidirectional carbon fiber reinforced composite (scale 1). At scale 1, a microstructure model to compute physical material responses is required. Once computed, this response is provided to scale 0 as the element-level behavior. This is illustrated in
Metals are another example of materials displaying functional and structural hierarchy. In one simple case, metal might be represented by a homogenized behavior at scale 0, while scale 1 describes the behavior at the granular level. The collective behavior of many grains works to generate the homogenized response observed in scale 0, see, e.g., EXAMPLE 2. Another might be that of defects (inclusions, voids) within each grain, and yet another the scale of precipitates and dislocations.
According to the invention, the core of the method is composed of the following three steps, outlined as the top row in
(1) Data collection using the high-fidelity MVE. This operation uses a high-fidelity approach with specifically crafted boundary conditions and properties, and gives an indicator of the material response for each MVE. The data collected for each MVE are called “response fields,” implying that there is not necessarily a unique choice of material response required. Often the elastic response is desirable for its simplicity, but the method can use any response field deemed appropriate (for example, plastic strain is another possibility). These data are saved in a database.
(2) Perform unsupervised learning on the response data obtained in step (1). This process marks all points within the MVE with a unique ID, such as 1, 2, 3, etc. The description of the MVE behavior can then rely on the groups of points with the same ID, rather than the points themselves. We call each group a “cluster.” This is illustrated in
(3) Compute the interaction tensor for all pairs of clusters (rather than points). This process computes pairwise interaction tensors for all clusters. This allows one to compute behavior of each cluster when some forcing term is applied on the MVE. At this step, the original MVE has been completely replaced by the “compressed” MVE mathematically described with only clusters, not spatial points/elements.
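By way of illustration and not limitation, step (2) above, which groups points sharing the same response into ID-labeled clusters, may be sketched as follows; the function name, array layout, and rounding tolerance are illustrative assumptions rather than part of the disclosed method:

```python
import numpy as np

def compress_to_clusters(response):
    # response: (n_points, n_features) sampled response field of the MVE.
    # Points whose responses agree (up to a rounding tolerance) receive the
    # same cluster ID; the clusters replace the points as unknowns.
    rounded = np.round(response, decimals=6)
    _, ids = np.unique(rounded, axis=0, return_inverse=True)
    return np.asarray(ids).ravel()
```

For instance, an MVE sampled at many points but exhibiting only a handful of distinct responses is compressed to that handful of cluster IDs, which is the reduction of degrees of freedom exploited in step (3).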
The above three-step approach is depicted in further detail in
The reduction of complexity is illustrated in
The superior efficiency can also be used to build a complex material microstructure response database using the reduced order model (ROM), e.g., EXAMPLE 1. One possible use for such a database is for training a physics-based neural network, which provides almost instantaneous microstructure responses under arbitrary external loading conditions, as shown schematically in FIG. 10. The physics-based neural network can be used for structure optimization and design, as suggested in EXAMPLE 1. Moreover, the methodology described is suitable for materials and systems design for desired performance indices, as illustrated in
In
Referring to
According to the invention, the process is fast, see, e.g., EXAMPLES 5-8, among others, as shown in Table 1.
It is also efficient, based on the following justification (EXAMPLES 5-8). The efficiency comes from the fact that the complexity of the problem is reduced significantly after the data compression. The results presented in EXAMPLE 5 illustrate the savings in computational cost, as shown in Table 5-4. The present method provides considerable reduction in the computer memory and computational time needed, compared to traditional approaches such as FEM and FFT.
Further, the method is also accurate, as demonstrated in Table 1 and shown in
Moreover, the method can be applied to materials with one or more length scales, as provided in EXAMPLES 8 and 9.
In addition, the method can provide multiscale modeling capability for an arbitrary number of scales. In EXAMPLE 9, the method has been applied to woven composite, which is made of yarn (with microstructure similar to UD composite illustrated in
In one embodiment, the woven microstructure database is used to perform a woven shear simulation; the loading direction is given in
As shown in
According to the invention, the method can also utilize arbitrary shapes within fixed bounds, as shown in EXAMPLES 8 and 9. As shown in EXAMPLE 9, the method can be applied to both fiber reinforced composite and woven composite, where the fiber reinforced composite MVE has been illustrated in
The method can be applied to an arbitrary number of phases/constituents (EXAMPLES 4-5). In certain embodiments, the method has been applied to 2-phase and 3-phase filled rubber composite, as illustrated in EXAMPLE 5 and
The method is predictive (solutions provided outside the bounds of known data), see EXAMPLES 3 and 9. In a certain embodiment, the method is implemented for concurrent modeling for UD Carbon Fiber Reinforced Polymer (CFRP) as described in EXAMPLES 4 and 11. The predictive capability is illustrated in
Additionally, the method is also descriptive (provides homogenized and full field information), as shown in EXAMPLES 3 and 8.
Furthermore, the method could be implemented in hierarchical modeling schemes, see EXAMPLE 6. The method could also be implemented for concurrent modeling schemes, as discussed in EXAMPLES 3 and 8, in which several case studies for concurrent capture of macroscale physical field evolution and microscale physical field evolution are presented, and for combined hierarchical and concurrent modeling schemes (EXAMPLE 7), as explained in
Other applications of the method include materials design (material phase selection), as illustrated in
EXAMPLES 3 and 8 disclose embodiments of the invention for microstructure design. In addition, the method can be used with any underlying data representation: images, particles, meshfree, finite element meshes, etc., see EXAMPLES 2 and 5-6. The case shown in
In one embodiment, the method can be used with machine learning for expedited efficiency (GPU and different NNs), e.g., EXAMPLE 1. The method can also be used to create a microstructure response database that contains many pairs of stress and strain states. The database can then be used as an input to a supervised learning algorithm, one type of machine learning method, to train a Feed Forward Neural Network (FFNN) that can further improve the efficiency. The FFNN can predict the stress state given an arbitrary strain state that is within the microstructure response database. As shown in Table 1-5, a speed-up of 10,000 can be achieved with the FFNN.
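By way of illustration and not limitation, the following sketch trains a small one-hidden-layer feed-forward network by plain gradient descent on a hypothetical one-dimensional strain-stress database standing in for ROM-generated data; the toy constitutive law, the normalization, and the network size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1D database of strain -> stress pairs standing in for ROM output.
eps = rng.uniform(-0.05, 0.05, size=(256, 1))
sig = 200.0 * eps + 5000.0 * eps**3
x, y = eps / 0.05, sig / np.abs(sig).max()      # normalize inputs and outputs

# One-hidden-layer FFNN trained with plain gradient descent on mean-squared error.
W1, b1 = rng.normal(0.0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 0.5, (16, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, p0 = forward(x)
loss0 = np.mean((p0 - y) ** 2)                  # loss before training
lr = 0.05
for _ in range(2000):
    h, p = forward(x)
    g = 2.0 * (p - y) / len(x)                  # dLoss / dprediction
    gh = (g @ W2.T) * (1.0 - h ** 2)            # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * x.T @ gh; b1 -= lr * gh.sum(0)
_, p1 = forward(x)
loss1 = np.mean((p1 - y) ** 2)                  # loss after training
```

Once trained, evaluating the network is a handful of matrix products, which is the source of the speed-up over re-running the ROM for each strain state.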
In certain embodiments, data-driven materials and structure design are achieved with machine learning techniques. As presented in EXAMPLE 1, a microstructure-based optimization is formulated. It allows one to perform a topology optimization to design a structure with microstructure damage information, as shown in
Process-structure-property-performance relationships for different processes (Additive Manufacturing (AM), composite, polymer matrix composite (PMC)), leading to system design from experiments to predictions, can be found in EXAMPLE 6.
In addition, multiscale structure-property materials and structures design are illustrated by two examples (composites and alloys) in
For composite materials, such as woven composites, the method can be used for fast prediction of the overall material yield surface, as described in EXAMPLE 7. The method can create a ROM for the woven MVE, which provides efficient prediction of woven composite elasto-plastic material responses. As shown in panel (a) of
In addition, the method can be used to generate a vast woven composite yield surfaces based on different microstructure and material properties of yarn and matrix, providing an enlarged material design space while minimizing computational cost.
Without intent to limit the scope of the invention, examples according to the embodiments of the present invention are given below. Note that titles or subtitles may be used in the examples for the convenience of a reader, which in no way should limit the scope of the invention. Moreover, certain theories are proposed and disclosed herein; however, in no way should they, whether right or wrong, limit the scope of the invention, so long as the invention is practiced according to the invention without regard for any particular theory or scheme of action.
Example 1 Clustering Discretization Methods for Generation of Material Performance Databases in Machine Learning and Design Optimization

Mechanical science and engineering can use machine learning. However, data sets have remained relatively scarce; fortunately, known governing equations can supplement these data. This exemplary study summarizes and generalizes three reduced order methods: self-consistent clustering analysis, virtual clustering analysis, and FEM-clustering analysis. These approaches have two-stage structures: unsupervised learning facilitates model complexity reduction and mechanistic equations provide predictions. These predictions define databases appropriate for training neural networks. The feed forward neural network solves forward problems, e.g., replacing constitutive laws or homogenization routines. The convolutional neural network solves inverse problems or is a classifier, e.g., extracting boundary conditions or determining if damage occurs. In this example, we explain how these networks are applied, then provide a practical exercise: topology optimization of a structure (a) with non-linear elastic material behavior and (b) under a microstructural damage constraint. This results in microstructure-sensitive designs with computational effort only slightly more than for a conventional linear elastic analysis.
Computational methods in materials mechanics have evolved with the development of computation tools. A recent advance in computer science is the development of the so-called “Big Data” era, where a combined explosion in the number of sensors and datapoints alongside computational resources and methods has enabled tracking and using large databases to develop understanding of the world, often replacing smaller, more targeted studies that may produce less generalizable results or lack key insights. Taken in the context of computational mechanics, we can develop data-driven computational tools that rely on vast amounts of background data to facilitate, e.g., real-time multiscale simulations for fast multistage material system design and in-the-loop mechanics for controls (e.g., in manufacturing).
In order to develop such data-driven computational tools, two primary areas of study have emerged: (1) generation of materials system databases for materials mechanics, typically using data compression, to reduce computational complexity; and (2) utilization of the database for real-time response prediction and multi-stage design.
The first area arises because data science relies heavily on the size and reliability of the database available. Unlike traditional applications in data science, such as image detection/recognition or automatic control, extremely large and well-defined datasets are generally unavailable, or merely inaccessible, in computational mechanics. Whether databases are constructed from experiments or computational models, the cost to generate data at a scale used in, e.g., image recognition has thus far been largely insurmountable. One approach to this challenge has been to use multiscale simulations of material systems using fast calculations of the overall stress of a representative volume element (RVE) for arbitrary far-field deformation loading. Many methods have been developed with the goal of finding an appropriate balance between cost and accuracy for such a problem; these are generally referred to as reduced order methods (ROMs). In certain embodiments, the RVE is a specific case of the MVE which will result in a converged solution of the microstructure it represents. The method can be applied to an RVE, but also to an MVE, to cover a wide range of material systems. The second area has often been addressed with methods from machine learning and neural networks to provide real-time prediction and multi-stage design. Data-driven methods have also been used to enhance computational mechanics by, for example, optimizing numerical quadrature and replacing empirical constitutive laws with experiment data. Recently, a deep material network method was proposed which mimics neural networks topologically to link the micro material stiffness to the macro material stiffness. Once trained on a pre-simulated micro-macro stiffness database, it can be used to compute, with significant speed-up, the overall stress of an RVE under arbitrary far-field deformation loading.
In this exemplary study, three techniques in modeling microstructure based on data mining are explored and generalized first. These kinds of fast methods address the first area: they can be used to generate the type of very large databases required for the pure or mechanics-enhanced machine learning. The workings of two different classes of neural networks are then derived. Next, an engineering application for these networks, topology optimization considering microstructure, is explored with detailed examples. Then some possible future directions for data-driven computational approaches are outlined to inspire further research in this emerging field within computational mechanics.
Two-Stage Clustering Analysis Methods

The Self-consistent Clustering Analysis (SCA) and its close relatives Virtual Clustering Analysis (VCA) and FEM Clustering Analysis (FCA) are two-stage reduced order modeling approaches comprising an offline data compression process and an online prediction process. This is concisely illustrated in
To define the boundary value problem, consider a material occupying Ω⊂ℝd. The goal of homogenization is to find the macroscopic constitutive relation between a macroscopic stress
σM=(1/|Ω|)∫Ωσ(X)dX  (1-1)
and a macroscopic strain
εM=(1/|Ω|)∫Ωε(X)dX,  (1-2)
where |Ω| is the total volume of the region. We define mathematically the RVE problem as
∇·σ(ε(X);X)=0 in Ω,  (1-3)
where σ=σ(ε; X) is a general microscale constitutive law. For the homogenization problem, boundary conditions have to be chosen to satisfy the Hill-Mandel condition. In this exemplary study, the periodic boundary conditions are used: ε periodic and σ·n anti-periodic on ∂Ω.
Continuous and Discretized Lippmann-Schwinger Equation
By introducing a reference material with distributed elastic stiffness {tilde over (C)}(X), it has been shown that the RVE problem with periodic boundary conditions is equivalent to the integral equation
ε(X)={tilde over (ε)}(X)−∫{tilde over (Γ)}(X,X′):(σ(X′)−{tilde over (C)}(X′):ε(X′))dX′ (1-4)
where {tilde over (ε)}(X) is the strain in the reference material when applying the same loading and boundary conditions as the original RVE problem; σ(X′)−{tilde over (C)}(X′):ε(X′) is the eigenstress applied to the reference material; {tilde over (Γ)}(X,X′) is the Green's operator associated with the reference material. The physical meaning of −{tilde over (Γ)}ijkl(X,X′) is the strain component εij at material point X if the unit eigenstress skl is applied at material point X′, with the components of skl defined by smnkl=δkmδln, where δkm and δln are Kronecker delta functions.
The integral equation given in Eq. (1-4) is known as the Lippmann-Schwinger equation, commonly seen to describe particle scattering in quantum mechanics. There is typically no explicit form of {tilde over (Γ)}(X,X′), unless the reference material is homogeneous.
The domain can be decomposed into several sub-regions, called clusters, distinguished mathematically by the characteristic function χI as shown in Eq. (1-5):
χI(X)=1 if X∈ΩI, and χI(X)=0 otherwise,  (1-5)
where I=1, 2, . . . , NC denotes each cluster, and ΩI is the subset of the volume within cluster I. These clusters are defined during the offline stage. This allows one to discretize the Lippmann-Schwinger equation, as:
εI={tilde over (ε)}I−ΣJ=1NCDIJ:(σJ−{tilde over (C)}J:εJ)  (1-6)
where (▪)I=(1/|ΩI|)∫ΩIχI(X)▪(X)dX denotes the volume average of an arbitrary variable ▪ in the Ith cluster; cI=|ΩI|/|Ω| is the volume fraction of the Ith cluster, where the volume of the Ith cluster is given by |ΩI|. DIJ is the interaction tensor given by
DIJ=(1/|ΩI|)∫Ω∫ΩχI(X)χJ(X′){tilde over (Γ)}(X,X′)dX′dX.  (1-7)
The physical meaning of −DijklIJ is the average strain component εij in the Ith cluster if the uniform unit eigenstress component skl is applied in the Jth cluster.
Remark 1: If homogeneous reference material is used, as in SCA and VCA, {tilde over (ε)}I=εM.
Remark 2: A counterpart of Eq. (1-6), similar to the idea of FCA, can be expressed as
σI={tilde over (σ)}I−ΣJ=1NCBIJ:(εJ−{tilde over (S)}J:σJ)  (1-8)
where {tilde over (S)} is the compliance matrix of the reference material; {tilde over (σ)} is the stress in the reference material when applying the same loading and boundary conditions as the original RVE problem. εJ−{tilde over (S)}J:σJ is the volume average eigenstrain in the Jth cluster. The physical meaning of −BijklIJ is the average stress component σij in the Ith cluster if the uniform unit eigenstrain component ekl is applied in the Jth cluster. Note that the reference material for FCA is the elastic state of the original RVE, instead of a homogeneous reference material as used by the other two methods here.
Remark 3: The relationship between BIJ and DIJ is given by
{tilde over (S)}I:BIJ=−DIJ:{tilde over (C)}J. (1-9)
This can be proven by noting that the effect of applying any unit eigenstrain kl in some cluster J of the reference material is equivalent to that of applying eigenstress −{tilde over (C)}:ekl in the same cluster.
In the online stage, SCA solves the incremental, discretized Lippmann-Schwinger equation, Eq. (1-6), with arbitrary external loading conditions. These can either be of the fixed strain increment εM type or be of the fixed stress increment σM type.
Offline: Clustering and Interaction Tensor
The offline stage includes three primary steps: (1) data collection, (2) unsupervised learning (e.g., clustering), and (3) pre-computation of the interaction of clusters. Computation of the linear elastic response (data collection) and subsequent clustering based on that response are conducted identically for each of the three two-stage methods presented below: the same voxel mesh and clusters are used in the examples for all three methods. The difference between methods comes in the computation of the interaction tensor.
(1) Data Collection
Data collection provides information used to construct the reduced representation of the system. It typically involves some computation of the response of a fully resolved system, perhaps with a simplified material model, over a limited set of loading cases. One measure of mechanical response that could be collected is the strain concentration tensor A used in micromechanics, defined by ε(X)=A(X):εM, which maps between the far-field or applied strain εM and the strain measured at point X in the domain, ε(X).
The choice of the strain concentration tensor as given above is often suitable, but it depends on the relevant details of the problem. For example, at finite deformations one might consider using the deformation concentration tensor AF, defined by F(X)=AF(X):F0, where F(X) is the deformation gradient at any given point X within the domain and F0 is the macroscopic deformation corresponding to the boundary conditions. Alternatively, if the elastic and plastic material responses differ substantially, e.g., if one is isotropic and the other anisotropic, including information about the plastic part of the deformation might be desirable.
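By way of illustration and not limitation, for the small-strain case the data-collection step can be sketched as assembling A(X) from the local strain fields computed under three orthogonal unit macroscopic strains (2D, Voigt notation); the function name and array layout are illustrative assumptions:

```python
import numpy as np

def strain_concentration(strain_fields):
    # strain_fields: list of 3 arrays, each (n_points, 3): the local strain
    # under unit macroscopic strains e1, e2, e3 (2D, Voigt notation).
    # A(X) maps eps_macro -> eps(X); its k-th column is the response to e_k.
    return np.stack(strain_fields, axis=2)   # shape (n_points, 3, 3)
```

The local strain under any macroscopic strain is then recovered as a per-point matrix-vector product, ε(X)=A(X):εM.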
(2) Clustering
The goal of clustering is to reduce the number of degrees of freedom required to represent the system while minimizing the loss of information about the mechanical response. One way of doing this is by grouping material points within the domain of interest. If one can assume that the material response within each group is identical, the evolution of the domain can be determined by solving for the response of each group rather than each material point.
Using the data generated during the collection phase, one of the many clustering (or unsupervised learning) techniques might be applied to optimize the domain decomposition. In the following examples, Self-Organizing Maps (SOMs) are employed, as illustrated in
The k-means clustering method has also been used. More elaborate clustering schemes might also be considered. Ongoing efforts include “adaptive clustering” schemes that mimic adaptive FE methods in their ability to evolve as deformation progresses, and “enriched” machine learning, whereby a priori information from mechanics about the deformation fields outside the bounds of the data collected in Step (1) are used to guide or bound the unsupervised learning. Another unexplored future direction might use a “feature-based” machine learning that includes microstructure information in addition to mechanics information.
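By way of illustration and not limitation, a minimal k-means (Lloyd's algorithm with farthest-point seeding, chosen here for determinism) over the per-point response data may be sketched as follows; the function name and data layout are illustrative assumptions:

```python
import numpy as np

def kmeans_cluster(A_flat, k, iters=50):
    # A_flat: (n_points, n_features) flattened per-point response data,
    # e.g., the strain concentration tensor reshaped to a feature vector.
    # Farthest-point seeding: start from point 0, then repeatedly add the
    # point farthest from all chosen centers.
    centers = [A_flat[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(A_flat - c, axis=1) for c in centers], axis=0)
        centers.append(A_flat[d.argmax()])
    centers = np.array(centers, dtype=float)
    # Lloyd's iterations: assign each point to its nearest center, then
    # move each center to the mean of its assigned points.
    for _ in range(iters):
        d = np.linalg.norm(A_flat[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = A_flat[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels
```

The returned labels are the cluster IDs used to define the characteristic functions of the reduced order model.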
(3) Interaction Tensor
The interaction tensor describes the impact each cluster has on each of the other clusters. Once the clustering process is completed, the interaction tensor can be explicitly computed. Importantly, the integral part only has to be computed once during the offline stage. Only the results of that calculation are then used for the online stage. Three ways to compute the interaction tensor, one used by each of the methods highlighted here (though these are not exclusive to each), are:
A. SCA: Fourier Transform for DIJ
With periodic boundary conditions and homogeneous reference material, the Green's operator has a simple expression in Fourier space, given by
{circumflex over (Γ)}ijkl0(ξ)=(δkiξlξj+δliξkξj+δkjξlξi+δljξkξi)/(4μ0|ξ|2)−((λ0+μ0)/(μ0(λ0+2μ0)))(ξiξjξkξl/|ξ|4),  (1-10)
where {circumflex over (Γ)}ijkl0=F(Γijkl0) is the Fourier transform of a periodic Green's operator Γ0; λ0 and μ0 are the Lamé constants of the homogeneous stiffness tensor; ξ is the Fourier point. Then the interaction tensor given in Eq. (1-7) can be calculated using the fast Fourier transform (FFT) technique. The computational complexity is O((NC)2(NF)log(NF)), where NF is the number of Fourier points used in the FFT calculation.
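The structure of the FFT route can be sketched on a simplified scalar (conductivity-like) analogue, in which each Γ*χJ convolution costs one FFT pair and cluster averages then give the interactions; the scalar Green's operator and all names below are illustrative assumptions, not the full elastic operator:

```python
import numpy as np

def interaction_tensors(labels, n_clusters, k0=1.0):
    # labels: (ny, nx) integer cluster ID per voxel of a periodic 2D MVE.
    # Returns D[I, J, :]: cluster-I average of a scalar Green's operator
    # (components yy, yx, xy, xx) sourced uniformly in cluster J.
    ny, nx = labels.shape
    xi_y = np.fft.fftfreq(ny) * 2.0 * np.pi
    xi_x = np.fft.fftfreq(nx) * 2.0 * np.pi
    Ky, Kx = np.meshgrid(xi_y, xi_x, indexing="ij")
    xi2 = Ky**2 + Kx**2
    xi2[0, 0] = 1.0                        # avoid division by zero at the zero mode
    G = np.stack([Ky * Ky, Ky * Kx, Kx * Ky, Kx * Kx]) / (k0 * xi2)
    G[:, 0, 0] = 0.0                       # zero-mean convention for the operator
    D = np.zeros((n_clusters, n_clusters, 4))
    for J in range(n_clusters):
        chiJ_hat = np.fft.fft2((labels == J).astype(float))
        conv = np.real(np.fft.ifft2(G * chiJ_hat))   # Gamma * chi_J, all 4 comps
        for I in range(n_clusters):
            D[I, J] = conv[:, labels == I].mean(axis=1)
    return D
```

The integral over the domain becomes a pointwise product in Fourier space, which is the source of the O((NC)2(NF)log(NF)) complexity quoted above.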
B. VCA: Numerical Integration for DIJ
With an infinite homogeneous reference material, the Green's operator can be expressed in real space. Numerical integration is the most straightforward method to compute the integral equation given in Eq. (1-7). The computational complexity is O((NI)2), where NI is the number of integration points used.
C. FCA: Finite Element Method for BIJ
Based on the physical interpretation of the interaction tensor, the finite element method can also be used. By applying uniform unit eigenstrain component kl in the Jth cluster, the average stress can be computed for all clusters, resulting in BijklIJ for all I=1, . . . , NC. Thus, the computational complexity is O(6(NC)(NE)), where NE is the number of finite elements used. The tensor BIJ is similar to the interaction tensor DIJ, although BIJ is determined by applying strains rather than stresses.
Online: Reduced Order Response Prediction
Once the interaction tensor database is prepared, the discretized Lippmann-Schwinger equation defined in Eq. (1-6) is solved in the online stage. An incremental form of Eq. (1-6) is given by
$$\Delta\varepsilon^I = \Delta\tilde{\varepsilon}^I - \sum_{J=1}^{N_C} D^{IJ} : \left(\Delta\sigma^J - C^0 : \Delta\varepsilon^J\right) \qquad (1\text{-}12)$$

where $\Delta\tilde{\varepsilon}^I$ is the applied incremental reference strain. The incremental stress Δσ^I is a function of the incremental strain Δε^I according to the local material constitutive laws. So the unknowns of Eq. (1-12) are the incremental strains in each cluster {Δε} = {Δε^1, . . . , Δε^{N_C}}, which are solved for with Newton's method.
Then the Jacobian matrix for Newton's method is given by

$$J^{IJ} = \delta_{IJ}\, I^4 + D^{IJ} : \left(C^J_{\mathrm{alg}} - C^0\right)$$

where I^4 is the fourth-order identity tensor and C^J_alg is the algorithmic tangent stiffness of the material in the Jth cluster. An alternative way to solve for the local mechanical responses is to minimize the complementary energy of the clustering-based system.
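As a sketch of this online Newton iteration, a one-dimensional (scalar) analogue of the discretized system can be written as follows. The cluster stiffnesses C, reference stiffness C0, and interaction matrix D are hypothetical stand-ins for their tensor counterparts, and a linear constitutive law is assumed so that C is also the algorithmic tangent:

```python
import numpy as np

def solve_clusters(D, C, C0, de_ref, tol=1e-10, max_iter=50):
    """Newton solve of a scalar analogue of Eq. (1-12):
    de[I] - de_ref + sum_J D[I,J] * (dsig[J] - C0 * de[J]) = 0,
    with dsig[J] = C[J] * de[J] (linear elastic stand-in)."""
    n = len(C)
    de = np.full(n, de_ref)              # initial guess: the reference strain
    for _ in range(max_iter):
        dsig = C * de
        r = de - de_ref + D @ (dsig - C0 * de)   # residual of Eq. (1-12)
        # Jacobian: delta_IJ + D[I,J] * (C_alg^J - C0)
        J = np.eye(n) + D * (C - C0)[None, :]
        de = de - np.linalg.solve(J, r)
        if np.linalg.norm(r) < tol:
            break
    return de
```

For a linear constitutive law the residual is affine in the strains, so a single Newton step recovers the exact solution of (I + D diag(C − C0)) Δε = Δε̃.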
Comparison of the Methods
In order to compare the accuracy and efficiency of each method, a 2D plane strain model for a two-phase material is constructed and shown in
The equivalent von Mises stress is

$$\sigma_{vM} = \sqrt{\tfrac{3}{2}\, s : s}$$

where s is the deviatoric part of the stress tensor σ.
A one-time data-compression of the microstructure is performed using the strain concentration tensor. The resulting clustering of the microstructure is shown in
The different methods used to compute the interaction tensor each result in a slightly different form of the tensor. There are strong similarities, since the same microstructure and clustering are used, though the details differ.
Once the microstructural database is created and the interaction tensor has been computed, the online prediction is performed.
Machine Learning on Databases Generated with Predictive ROMs
Neural networks are a specific class of machine learning algorithms, which in the most basic form appear similar to regression analysis. In practice, these methods involve modifying input data through a series of functions to obtain output data. The exact series of functions and their weights and forms depend on the application. These methods can provide an increase in speed over more conventional approaches, once the algorithm has been appropriately trained. In order to have a highly effective modeling approach based on machine learning, rich databases of mechanical response information are required to perform that training. Developing such a database with experiments is intractable, particularly for design of new materials or material systems where no material performance information exists. The fast, predictive models (SCA, VCA, FCA) outlined above are thus desirable for quickly populating relatively large materials databases. This enables the use of machine learning algorithms in multiscale design, where simultaneous application and satisfaction of criteria and constraints governing the material microstructure and component-level macrostructure are required.
Feed forward neural networks (FFNNs) were the first neural networks developed. The FFNN was designed to learn complex input-output relations. As such, FFNNs can be used to replace conventional constitutive laws; this is particularly appealing when descriptions of the homogenized behavior of a material are complex and/or difficult to obtain. The basic structure of an FFNN includes an input layer, hidden layers, and an output layer. Every pair of neurons in neighboring layers has a weighted connection. Each neuron in the hidden and output layers has a bias. In FFNNs, neurons in the same layer are not connected. In the learning procedure, the connection weights are changed following a predefined set of rules, such as with back propagation. Funahashi and Hornik et al. proved that a three-layer FFNN (i.e., a single hidden layer) is sufficient to approximate any non-linear continuous function.
Early efforts to apply machine learning to mechanics used a computation and knowledge representation paradigm, which is actually a type of feed forward neural network, to directly “learn” material behavior by training from analytic and experimental data. One early work applied a back-propagation neural network to model the behavior of concrete in plane stress under monotonic biaxial loading and compressive uniaxial cyclic loading. Once an RVE model is established, for example, an FFNN can be trained on that data, and a concurrent multiscale scheme to directly connect microstructure to the macro-scale material response might be achieved, with the FFNN replacing the RVE or constitutive law to describe the response at each material point.
Neural networks have been studied with the goal of integration with multiscale methods. In such cases, some parts of numerical simulations are replaced with neural networks to better utilize their merits. With the development of numerical methods and the increasing interest in multiscale simulation, integration of multiscale simulation and neural networks is continuously developing.
In design optimization, one might expect there to be many calls to the material subroutine (e.g., one for each element for each load for each design iteration). Thus running ROMs for RVEs might still be time consuming, compared to a model predicting RVE stress responses given a strain state within milliseconds, or a model providing strain state given microstructure stress contours within milliseconds. To tackle this issue, we propose replacing the ROMs by neural networks trained on the RVE responses computed with SCA for micro-stress and macro-stress. These networks preserve microstructure information and are engineered to:
- Predict macro (homogenized) stress or micro (local) stress given a macro-strain using an FFNN for one RVE. In this case, the FFNN plays the role of traditional material constitutive equations and homogenization (to compute the macro-stress). We denote these FFNN and FFNNmicro, respectively.
- Predict macro-strain given any micro-stress distributions using a convolutional neural network (CNN). This is the inverse of the FFNN. The input is a stress distribution within an RVE domain. The output is the macro strain loading applied to this RVE (the boundary conditions). We denote this CNN.
- Compute RVE damage by comparing local von Mises stress to a stress threshold. A CNN is trained to identify the onset of damage within an RVE. In this case, the CNN acts as a classifier, which identifies whether or not the applied macro-strain will cause microstructural damage. We denote this CNNclassify.
Database Generation for Machine Learning Using SCA
The ROMs presented in this example can be used to generate a database for machine learning. In this case, we are using SCA with 8 clusters. In general, such databases contain NT training samples and NV validation samples. For the RVE model, a database with NT=1000 strain-stress pairs is computed for a nonlinear elastic material by randomly sampling 200 terminal states with four sub-loading steps each, as shown in
Feed Forward Neural Networks
In order to illustrate the structure of feed forward neural networks (FFNNs), a simplified one dimensional example is presented. In linear elasticity, stress is related to strain by the material stiffness; this can be generically defined as a mapping. The overall structure of a neural network can also be described as a mapping, i.e.:
$$\sigma = \mathrm{FFNN}(\varepsilon) \qquad (1\text{-}17)$$

where FFNN is the feed forward neural network that uses the strain state ε as input, and generates the stress state σ as the output. The structure of a simple FFNN is shown in panel (a) of
$$a^{l=1}_{i=1} = \varepsilon \quad \text{(input layer)} \qquad (1\text{-}18)$$

The three neurons in the hidden layer take in this value, and each takes on the value given by:
$$a^{l=2}_{j=1,2,3} = \phi\!\left(\sum_{i=1}^{1} W^{l=2}_{ij}\, a^{l=1}_{i} + b^{l=2}_{j}\right) \quad \text{(hidden layer)} \qquad (1\text{-}19)$$

where φ is an activation function. In the training part, this example uses the sigmoid function:

$$\phi(x) = \frac{1}{1 + e^{-x}}$$
and each neuron is computed using a different weight W^{l=2}_{ij} and bias b^{l=2}_{j}, where i is the neuron in the previous layer (in this case, the input layer) and j is the neuron in the current layer (in this case, the hidden layer). Finally, the overall response, the stress, is given by:
$$\sigma_{predicted} = a^{l=3}_{k=1} = \sum_{j=1}^{3} W^{l=3}_{jk}\, a^{l=2}_{j} + b^{l=3}_{k} \quad \text{(output layer)} \qquad (1\text{-}20)$$
The combination of all the W and b terms, as well as the activation function, work as the fitting factors in a regression analysis. The activation function is fixed for all neurons, and is used to condition the weighting factors. For the constitutive model outlined above, the overall results of the weights, bias and activation function would perfectly match the elastic modulus. If we only consider weights and the activation function is just a linear mapping, the function of each neuron in hidden layer is given explicitly in panel (b) of
The physical interpretation of individual neurons is more complicated for non-linear responses, although the overall idea is the same. In this formulation, strain path dependence (i.e., plasticity) is impossible to capture, and the net result is a response map where one might think of the collection of neurons (weights, biases and activation functions) as an “instantaneous elastic modulus,” or the slope of a line that relates strain to stress at the current point in strain.
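The 1-3-1 network of Eqs. (1-18)-(1-20) can be written out directly. The weights below are illustrative placeholders rather than trained values:

```python
import numpy as np

def sigmoid(x):
    """Activation function of Eq. (1-19)."""
    return 1.0 / (1.0 + np.exp(-x))

def ffnn_1d(eps, W2, b2, W3, b3):
    """Forward pass of the one-input, three-hidden-neuron, one-output FFNN
    of Eqs. (1-18)-(1-20): sigmoid hidden layer, linear output layer."""
    a1 = eps                           # input layer, Eq. (1-18)
    a2 = sigmoid(W2 * a1 + b2)         # hidden layer, Eq. (1-19)
    return float(W3 @ a2 + b3)         # output layer, Eq. (1-20)
```

A call with zero strain returns the sum of the output weights scaled by sigmoid(0) = 0.5 plus the output bias, which makes the role of each fitting factor easy to trace by hand.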
In each layer of a general FFNN, each neuron takes the output value from each neuron in the preceding layer as inputs and gives a single output. This is repeated for each layer. Generalizing Eqs. (1-18), (1-19), and (1-20) to an arbitrary number of layers and neurons per layer results in Eq. (1-21), where the value of the jth neuron in layer l for the sth sample (either a training sample or prediction) can be expressed as:
$$a^{l,s}_{j} = \phi\!\left(\sum_{i=1}^{N_N(l-1)} W^{l}_{ij}\, a^{l-1,s}_{i} + b^{l}_{j}\right) \qquad (1\text{-}21)$$

where the final layer gives the estimated stress:

$$\sigma^{M,s}_{predicted,j} = a^{N_L,s}_{j} \qquad (1\text{-}22)$$
The outputs a^{l,s}_{j} of each layer possess a similar physical meaning as explained for the 1D case. For l=1, a^{l,s}_{j} represents the input strain components of the sth sample, and for l=2, . . . , N_L−1, a^{l,s}_{j} represents an estimate of the nonlinear stress responses of the microstructure. The activation functions and weights of layers l=2, . . . , N_L−1 play a roughly similar role to the classic definition of the tangent modulus in solid mechanics. The hidden layers take in strain components and produce an estimate of the non-linear stress responses. During the training process, the non-linear relationship between stress and strain is gradually "learned" by those hidden layers. In the last layer, l=N_L, a^{l,s}_{j} represents the predicted stress components, which are produced through the regression operation in the output layer, as shown in Eq. (1-21). The weights and biases of the output layer correct the prediction generated from the hidden layers, and produce accurate nonlinear stress responses. In order to make the concept clear, one might consider the hidden layers as unitless values operating on intermediate strain values, while the units of W and b in the output layer are those of stress (e.g., MPa in the example problem). The FFNN can learn nonlinear elastic material behaviors due to the following two key factors: 1) the hidden layers approximate the material nonlinear elastic responses as a traditional constitutive model would, as described in Eq. (1-17); 2) the output layer corrects the predicted nonlinear responses for improved prediction accuracy.
Feed Forward Neural Network with Database Generated by SCA
In this case study, an FFNN is trained with data generated using SCA. Using the same microstructure as given in
Training
The training procedure for an FFNN can be reformulated as an optimization problem. We define the loss function (or cost function) as the mean square error (MSE) between the estimated stress and the stress computed by the RVE using SCA. Assuming one hidden layer, the optimization formulation is given by:

$$\min_{W^{l=2}_{ij},\, b^{l=2}_{j},\, W^{l=3}_{jk},\, b^{l=3}_{k}} \ \mathrm{MSE} = \frac{1}{N_T} \sum_{s=1}^{N_T} \left\| \sigma^{M,s}_{predicted} - \sigma^{M,s}(\mathrm{SCA}) \right\|_2^2 \qquad (1\text{-}23)$$

By finding the optimal values for W^{l=2}_{ij}, b^{l=2}_{j}, W^{l=3}_{jk}, and b^{l=3}_{k}, the MSE is reduced. Note that only training data are used in this process, hence s=1, 2, . . . , N_T.
Usually, the MSE gradually decreases with each training step. To ensure the trained neural network is general enough for all possible input states, some data points called verification data are used to monitor trends in the error. The minimization iterations terminate before the error of the verification data starts to increase. This ensures the neural network is able to provide some extrapolation capability for data points that are not within the training set.
The FFNN described above was trained on the database. In this case, an FFNN with one hidden layer and 50 neurons was chosen. In the training procedure, 1,000 samples are used to train the neural network. The Levenberg-Marquardt optimization algorithm is used to reduce the MSE.
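A minimal sketch of this training loop might look as follows. Plain gradient descent stands in for the Levenberg-Marquardt algorithm, and a hypothetical one-dimensional strain-stress law stands in for the SCA database; the network has one hidden layer of 50 neurons, as in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D database: strain -> stress for a softening nonlinear elastic law
eps = rng.uniform(0.0, 0.1, 1000);  sig = 200.0 * eps - 600.0 * eps**2
eps_v = rng.uniform(0.0, 0.1, 200); sig_v = 200.0 * eps_v - 600.0 * eps_v**2

H = 50                                   # hidden neurons
W2, b2 = rng.normal(0, 1, H), np.zeros(H)
W3, b3 = rng.normal(0, 0.1, H), 0.0

def forward(x):
    a2 = np.tanh(np.outer(x, W2) + b2)   # hidden layer activations, (samples, H)
    return a2 @ W3 + b3, a2

best, best_err = None, np.inf
lr = 1e-3
for step in range(2000):
    pred, a2 = forward(eps)
    err = pred - sig
    # Backpropagation of the MSE loss (gradient descent in place of Levenberg-Marquardt)
    gW3 = a2.T @ err / len(eps); gb3 = err.mean()
    d2 = np.outer(err, W3) * (1 - a2**2)
    gW2 = eps @ d2 / len(eps); gb2 = d2.mean(axis=0)
    W3 -= lr * gW3; b3 -= lr * gb3; W2 -= lr * gW2; b2 -= lr * gb2
    # Early stopping in the spirit of the verification-data monitoring above:
    # keep the weights that give the lowest validation MSE
    v_err = np.mean((forward(eps_v)[0] - sig_v) ** 2)
    if v_err < best_err:
        best_err, best = v_err, (W2.copy(), b2.copy(), W3.copy(), b3)
```

The snapshot `best` plays the role of the network returned when verification error starts to rise; in a production setting the optimizer, learning rate, and stopping rule would all differ.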
Prediction
After the training process, a fast evaluation of the stress state during a monotonic loading process was performed. The FFNN used for this predicts the macroscale stress tensor for a given macroscale strain tensor following Eq. (1-24); similarly, Eq. (1-25) is used to predict the local (cluster-wise) stress tensors given a macroscale strain tensor.
$$\sigma^{M,s}(\mathrm{FFNN}) = \sigma^{l=N_L,s}_{k} = \mathrm{FFNN}(\varepsilon^{M,s}) \qquad (1\text{-}24)$$

$$\sigma^{s}(\mathrm{FFNN})(X) = \sigma^{l=N_L,s}_{k}(X) = \mathrm{FFNN}_{micro}(\varepsilon^{M,s}) \qquad (1\text{-}25)$$
To demonstrate validation of the macroscale FFNN, the l2 norm is used to measure the difference between the overall stress predicted by Eq. (1-24) and the homogenized SCA results for each sample in the validation data set, as computed by:
$$\mathrm{Difference}^{s}_{\mathrm{FFNN}} = \left\| \mathrm{FFNN}(\varepsilon^{M,s}) - \sigma^{M,s}(\mathrm{SCA}) \right\|_2, \quad s=1,2,\ldots,N_V \qquad (1\text{-}26)$$
To validate the trained FFNN, another 30 final strain states (unknown during the training) with five load steps each (including the final state) were selected.
This case study illustrates a convenient workflow that used the reduced order modeling approach to generate a rich microstructure response database for training an FFNN, which is then used for generating fast predictions of the RVE responses. Note that although the validation for the FFNN for the homogenized stress-strain relationship is given in detail here, a similar process has been used for the relationship between macroscale loading and microscale (cluster-wise, or local) stresses, as given in Eq. (1-25).
We propose that the FFNNs shown here can be used in a design optimization process, such as topology optimization or microstructural design, where fast and accurate material response prediction is desired. However, note that the trained networks are only valid for non-linear elastic material under monotonic loading. If plasticity and loading/unloading are considered, a different FFNN setup or a different neural network may be required. A speed comparison of running 150 samples with SCA and the FFNN is given in Table 1-5, where the speedup for online prediction of σM is 10,000× for the FFNN over SCA. This idea will be explored further using both the FFNN predictions (Eq. (1-24) and Eq. (1-25)) and convolutional neural networks.
Convolutional networks or convolutional neural networks (CNNs) are widely used in fields such as image recognition and feature identification. The term "convolutional" refers to the linear mathematical operation and indicates that the convolution operation is implemented in at least one layer of the network rather than conventional matrix multiplication. This is a biologically inspired model used to handle data with a known grid-like topology, such as time series (a 1D grid of samples at successive time intervals) or image data (a 2D grid of pixels). Convolutional neural networks have been implemented in materials science and multiscale modeling to analyze microstructure properties where the input data are microstructure images. Extracting material information through microstructure images, a ubiquitous data type in materials science, has proven to be a promising application of CNNs. For example, Lubbers et al. implemented a CNN based on the distribution of texture images for unsupervised detection of low-dimensional structures. Scanning electron microscope (SEM) images are frequently used in materials science to distinguish between categories of materials. Such image datasets can be classified with a single feature or with multiple features using CNNs. Some studies have implemented CNNs to featurize SEM images over a single set of data. However, a scalable and generalizable feature should be used to facilitate widespread applicability of the CNN. Ling et al. analyzed the generalizability and interpretability of CNN-based featurization methods for SEM images, and found that mean texture featurization is generally useful in such cases, although sometimes feature-free CNN procedures are appealing as well.
The application of CNNs is not limited to images. Cang et al. established a CNN approach to predict the physical properties of a heterogeneous material, replacing standard statistical or micromechanical modeling techniques. The generated scheme is applicable to systems with a highly non-linear mapping based on high dimensional microstructure. They have implemented a convolutional network to quantify material morphology followed by another convolution network to predict the material properties given the microstructure. This can also be done in 3D, for complex materials and responses.
A CNN model includes several basic unit operations: padding, convolution, pooling, and a feed forward neural network (FFNN). The structure of an example 1D CNN is shown in
The input is a series of stress values, i.e., a 1D problem. The 1D CNN includes several loops of padding, convolution, and pooling. For a specific loop iteration η, a padding procedure adds zeros around boundaries, to ensure that the post-convolution dimension is the same as the input dimension. After padding, several kernel functions will be used to approximate the discrete convolution operator given by:
$$\tilde{\sigma}^{\kappa,\eta}_{x} = \sum_{\xi=-(L_{conv}-1)/2}^{(L_{conv}-1)/2} w^{\kappa,\eta}_{\xi}\, \sigma^{padded,\eta}_{x+\xi} + b^{\kappa,\eta} \qquad (1\text{-}27)$$
where σ^{padded,η}_{x+ξ} is the input, w^{κ,η}_{ξ} is the κth kernel function, and b^{κ,η} is the bias, for the ηth convolution process with η=1, 2, . . . , N_conv. The size of the kernel function is L_conv. A summary of all of the notation used in this section is given in Table 1-6. The convolution operation can be regarded as a feature identification operation. After convolution, a pooling layer will decrease the dimension of the inputs, and extract the most important features from the post-convolution data. A one dimensional max pooling equation is given by
$$\hat{\sigma}^{max,\kappa,\eta}_{\alpha} = \mathrm{MAX}\!\left(\tilde{\sigma}^{\kappa,\eta}_{\xi},\ \xi \in [(\alpha-1)L_{pooling}+1,\ \alpha L_{pooling}]\right), \quad \alpha = 1,2,\ldots,N^{\eta}_{pooling} \qquad (1\text{-}28)$$
where $\hat{\sigma}^{max,\kappa,\eta}_{\alpha}$ is the output value, $\tilde{\sigma}^{\kappa,\eta}_{\xi}$ is the input value, and L_pooling is the length of the pooling window. Max pooling extracts the maximal value from the window, but other pooling operations might also be used. In this case we surmise that, to predict remote strains, the maximum stress values might be telling. Padding, convolution, and pooling may be repeated N_conv times. The result is then transferred to a fully connected FFNN, such as that illustrated in the previous section.
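One padding-convolution-pooling loop of Eqs. (1-27) and (1-28) can be sketched as follows; the kernel values and signal are hypothetical, and zero padding is used so the convolution preserves the input length:

```python
import numpy as np

def pcp_1d(x, kernels, biases, L_pool):
    """One padding -> convolution -> max-pooling loop (Eqs. (1-27)/(1-28)).
    `x` is a 1D stress signal; `kernels` has shape (n_kernels, L_conv), L_conv odd."""
    n_k, L_conv = kernels.shape
    half = (L_conv - 1) // 2
    xp = np.pad(x, half)                 # zero padding keeps the output length equal to len(x)
    out = []
    for w, b in zip(kernels, biases):
        # Discrete convolution of Eq. (1-27): window centered on each point x
        conv = np.array([xp[i:i + L_conv] @ w + b for i in range(len(x))])
        # Max pooling of Eq. (1-28) over non-overlapping windows of length L_pool
        pooled = conv[: len(conv) // L_pool * L_pool].reshape(-1, L_pool).max(axis=1)
        out.append(pooled)
    return np.array(out)
```

With an identity kernel (w = [0, 1, 0], b = 0) the convolution returns the input unchanged, which makes the pooling step easy to verify by hand.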
Similar to 1D convolution, the convolution operation applies a kernel function over the stress contours and generates a new feature map with the same resolution as the initial stress contours. The extension of the convolution operation to two dimensions for ηth convolution process is shown in Eq. (1-29).
$$\tilde{\sigma}^{\kappa,\eta}_{\alpha,\beta} = \sum_{\zeta=1}^{N_{feature}} \sum_{\xi=-(X_{conv}-1)/2}^{(X_{conv}-1)/2} \sum_{\psi=-(Y_{conv}-1)/2}^{(Y_{conv}-1)/2} w^{\zeta,\kappa,\eta}_{\xi,\psi}\, \sigma^{padded,\zeta,\eta}_{\alpha+\xi,\beta+\psi} + b^{\kappa,\eta} \qquad (1\text{-}29)$$
where κ is the kernel ID of the convolution layer and runs from one to N_kernel. The sizes of the convolution kernel in dimensions 1 and 2 are given by X_conv and Y_conv, and are both odd numbers. The counting indices in the kernel in dimensions 1 and 2 are defined as ξ and ψ, respectively. The number of stress components is N_feature; for this 2D example, N_feature=3. By applying the kernel to each element in the input stress array, a complete feature map will be generated.
To define a nonlinear relationship between the input and output using the CNN, a nonlinear activation function is often used. In some cases, this is a Rectified Linear Unit (ReLU) layer, which is applied to all feature maps generated from the convolution operation. For simplicity of illustration, this step is not shown in the equations and figures.
A pooling layer is applied to all feature maps after the ReLU layer to compress the resolution of the data in the X and Y directions. Different pooling operations might be used; in this example, we selected max pooling. The max pooling operator divides the feature map into many subset regions, and selects the maximum value from each region to use as the value in the compressed feature map; generalizing from Eq. (1-28), for the ηth convolution process, this can be written as:
$$\hat{\sigma}^{max,\kappa,\eta}_{\alpha,\beta} = \mathrm{MAX}\!\left(\tilde{\sigma}^{\kappa,\eta}_{\xi,\psi},\ \xi \in [(\alpha-1)X_{pooling}+1,\ \alpha X_{pooling}],\ \psi \in [(\beta-1)Y_{pooling}+1,\ \beta Y_{pooling}]\right),$$
$$\alpha = 1,2,\ldots,N^{\eta}_{X}, \quad \beta = 1,2,\ldots,N^{\eta}_{Y} \qquad (1\text{-}30)$$
After the final pooling operation, all compressed feature maps are converted into a single vector through a flattening operation. The flattened array is then used as the input of the FFNN for regression to compute the corresponding strain. Further details of the CNN method and implementation can be found in literature cited in the beginning of this section.
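The 2D max pooling of Eq. (1-30) and the flattening operation can be sketched as follows; array shapes and window sizes are illustrative:

```python
import numpy as np

def max_pool_2d(fmap, Xp, Yp):
    """2D max pooling of Eq. (1-30): `fmap` is one (H, W) feature map,
    pooled over non-overlapping Xp x Yp windows (trailing remainders dropped)."""
    H, W = fmap.shape
    trimmed = fmap[: H // Xp * Xp, : W // Yp * Yp]
    return trimmed.reshape(H // Xp, Xp, W // Yp, Yp).max(axis=(1, 3))

def flatten_maps(fmaps):
    """Concatenate all compressed feature maps into the input vector of the FFNN."""
    return np.concatenate([m.ravel() for m in fmaps])
```

The reshape groups each feature map into (block row, within-block row, block column, within-block column), so taking the maximum over axes 1 and 3 yields one value per pooling window.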
Convolutional Neural Network for Boundary Condition Identification with Database Generated by SCA
Using the CNN illustrated in
Training
This CNN was trained on the same database of 1000 SCA results as was the FFNN. For the CNN, the input is the micro-scale stress at each point (voxel) of the RVE, given by the cluster-wise results of SCA, σ^s(α,β). Each point in the RVE contains the three 2D stress components, like the RGB channels used for images. The output of the CNN is the macroscale strain ε^M that was applied as the loading condition and caused the observed stresses. The training can be written as an optimization problem, as given in Eq. (1-31). The equations for training a CNN with padding, convolution, and pooling layers repeated N_conv times are given by:
$$\min_{W,\, b}\ \mathrm{MSE} = \frac{1}{N_T} \sum_{s=1}^{N_T} \left\| \varepsilon^{*M,s} - \mathrm{FFNN}\!\left(\mathrm{flatten}\!\left(pcp^{N_{conv}}\!\left(\cdots pcp^{1}\!\left(\sigma^{s}(\alpha,\beta)\right)\cdots\right)\right)\right) \right\|_2^2 \qquad (1\text{-}31)$$

The term pcp^η(σ^s(α,β)) is used to simplify the notation by combining the nested operations shown in
$$pcp^{\eta}(\sigma^{s}(\alpha,\beta)) = \mathrm{pooling}^{\eta}\!\left(\mathrm{conv}^{\eta}\!\left(\mathrm{padding}^{\eta}\!\left(\sigma^{s}(\alpha,\beta)\right)\right)\right),$$

which includes terms for padding, convolution, and pooling. The remaining terms in the training problem are defined as follows. The weights and biases in the FFNN are W^{l}_{mn} and b^{l}_{m}, and W^{ζ,κ,η}_{ξ,ψ} and b^{κ,η} are the weights and biases in the convolution operations. The ground truth is ε^{*M,s} and the estimate is ε^{M,s}. The number of training samples is defined as N_T, and N_N(l=N_L) is the number of neurons in the output layer. In this case, N_N(l=N_L) is three. The sampling is indexed by s.
The inputs to the CNN are the three stress arrays corresponding to the components of stress given by σs(α,β). The outputs are three strain values εjM,s(j=1,2,3). Just as for the FFNN, the mean squared error (MSE) is reduced gradually step by step using one of several optimization algorithms.
Prediction
Just as with the FFNN, we can define the function CNN(σs(α,β)) that describes the operation performed by the trained CNN, in this case
CNN(σs(α,β))=εM,s (1-32)
In
$$\mathrm{Difference}^{s}_{\mathrm{CNN}} = \left\| \mathrm{CNN}(\sigma^{s}(\alpha,\beta)) - \varepsilon^{M,s}(\mathrm{SCA}) \right\|_2, \quad s=1,2,\ldots,N_V \qquad (1\text{-}33)$$
Convolutional Neural Network for Classification with Database Generated by SCA
The application of a CNN to material microstructure predictions is not limited to the sample problem shown above. Another example use of a CNN is as a microstructure classifier; this is similar to its common application in image classification. By using a microstructure mesh and an applied strain on the microstructure as the input, a CNN can be trained to predict whether the microstructure will become damaged.
Training
The training procedure for the classification CNN is given by
$$\min\ -\frac{1}{N_T} \sum_{s=1}^{N_T} \left[ d^{*}_{s} \log d_{s} + (1-d^{*}_{s}) \log(1-d_{s}) \right] \qquad (1\text{-}34)$$

The output of the classification CNN, d_s, is a binary indicator: 0 for non-damaged, 1 for damaged. Since the output is a binary value, the objective function is now defined in terms of the cross entropy between the truth value d*_s and the predicted value d_s. Cross entropy, or log loss, is widely used to measure the performance of a classification model. The CNN in this section is trained on the data extracted from the database used for the previous NNs. The database for training the classifier includes pairs of micro stress distributions and damage indicators. In this case, a critical-stress-based damage criterion is used to decide whether an RVE is damaged or not: if the von Mises stress σ(X) at any point exceeds a critical von Mises stress σ*, the RVE is considered damaged.
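The labeling rule for the training database can be sketched as follows. The plane-stress form of the von Mises expression is used here purely for simplicity, which is an assumption (a plane-strain case would also involve the out-of-plane stress), and the function name is illustrative:

```python
import numpy as np

def damage_label(s11, s22, s12, sigma_crit):
    """Binary damage indicator used to label training samples:
    1 if the von Mises stress exceeds sigma_crit anywhere in the RVE, else 0.
    Assumes the 2D plane-stress von Mises expression for illustration."""
    vm = np.sqrt(s11**2 - s11 * s22 + s22**2 + 3.0 * s12**2)
    return int(np.any(vm > sigma_crit))
```

Each micro stress distribution in the database would be paired with the indicator this rule returns, giving the (input, label) pairs on which the classifier is trained.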
Prediction
Once trained, the CNN can predict whether the applied loading will result in microstructure damage without carrying out the full RVE simulation:

$$d_{s} = \mathrm{CNN}_{classify}(\sigma^{s}(\alpha,\beta)) \qquad (1\text{-}35)$$
Such a CNN will be used in a microstructure-based topology optimization example to illustrate the effect of a microstructural damage constraint on the optimized structure.
Microstructure-Based, Multiscale Topology Optimization Using Neural Networks
In this section, we will illustrate how FFNNs and CNNs might be used in topology optimization to achieve microstructure-based design. This differentiates the current approach from classical topology optimization, which typically uses simple constitutive relationships. As explained above, we propose to compress the RVE response database into an FFNN for forward prediction of RVE stress responses, where it will act similarly to a traditional homogenized constitutive model. However, because a database of RVE responses is used, no functional form of the homogenization is required. The RVE microstructure damage response is represented with a trained FFNN+CNN; this introduces microstructure damage, linked with the applied strain state of the RVE. By using well-trained FFNNs and CNNs, two different optimization problems are defined as below:
1. Topology optimization with a material constitutive law extracted from an FFNN trained on the stress-strain relationship of a given microstructure. In this case, a non-linear material behavior during topology optimization is used to achieve a design that is durable under extreme loading conditions where the material response enters the non-linear region.
2. Topology optimization with constraints defined by the FFNN+CNN framework to identify microstructure damage and thereby design durable (damage aware) structures, using the CNN. In this case, the microstructure damage acts as an extra constraint to the topology optimization formulation to achieve a design that alleviates or avoids possible local microstructure damage.
In these two example problems, the design zone is described with a 60×30 mesh of rectangular, linear elements. Each element is a 1 cm×1 cm square. The elastic material properties are from the homogenized SCA results: E=200 MPa and ν=0.27. An approximated non-linear elastic material response is extracted from the RVE simulation obtained from Sect. 3, as described in the following sections. Note that while plasticity is used in the RVE simulation, our FFNN is only valid as a non-linear elastic approximation of the material response. A point load of 75 N is applied at the bottom-right corner. The desired volume fraction of the optimized part is set to 0.35 of the original design zone. For the second case, with damage, the critical stress is defined as 0.7 MPa.
Topology Optimization with FFNN
The formulation of microstructure sensitive topology optimization with an FFNN is shown in Eq. (1-36). The objective function is defined as the overall strain energy of the structure, Φ, which is to be minimized. A subtle but important difference in the present example is that the FFNN is used to replace the usual linear elastic material response with a nonlinear one. By using the nonlinear material response database described above, a new avenue for data-driven material and structure design is illustrated. This is depicted graphically in
where f is the body force, t is the applied traction on the boundary of the design zone, and u is the local displacement in the design zone. U* is the admissible displacement field, and S^M defines the boundaries of the macrostructure (the design region). σ(X^M) and ε(X^M) are the macro stress and strain. The density for each macro mesh is ρ(X^M). The desired volume of material remaining in the design zone is V*, defined as 0.35, and V(ρ(X^M)) is the optimized volume in the design zone. The density of location X^M in the design zone is ρ(X^M), and the homogenized stress is σ(X^M), defined as the product of ρ(X^M) and FFNN(ε(X^M)). Here, FFNN represents a trained FFNN that generates stress predictions from the given strain input ε(X^M), as defined in Eq. (1-17).
In this case, the FFNN is used to approximate the RVE responses. Hence, σM is directly approximated by the FFNN. In truth, constraints of the software used for optimization require a functional description of the behavior (this limitation will be addressed in future work), thus the effective von Mises RVE stress versus effective strain curve is approximated using an exponential function:
The optimized beam structures are illustrated in panels (a)-(b) of
Topology Optimization with Constraints Defined by FFNN+CNN
Topology optimization results may have high stress concentration zones. If not treated properly, the concentration zones may cause unexpected damage and affect the function of the structure. A traditional way to address this is to add stress or strain constraints to optimization. Previous authors have focused on optimization with stress concentration and singularities. In these previous studies, most damage criteria are only related to the macroscopic material model and do not consider micro structure mechanical behavior. In this work, an FFNN+CNN framework trained by the database generated with SCA provides a microstructure-based prediction of damage for the damage criterion.
In order to do this, the FFNN defined in Eq. (1-25) that performs the operation σ(X)=FFNNmicro(εM) is used. As mentioned above, this predicts local, rather than homogenized, stresses in the RVE. These local stress distributions serve as input to the CNN, which indicates whether or not damage has occurred. The optimization formulations are thus:
where $\Phi = \sum_{e=1}^{N} (\rho(X^M_e))^p\, u_e^T k_0 u_e$ is the compliance of the overall structure, N is the number of elements, p is the penalization power (typically p=3), u_e is the displacement for each element, and k_0 is the element stiffness. The averaged strain response of the RVE is ε^M. The local stresses within the RVE are defined as σ, and are calculated with a trained FFNN, FFNN_micro. Other variables are the same as defined above for the FFNN-only optimization. For any X in the RVE Ω_m(X), the output value of CNN_classify should be 0, which represents a non-damaged state.
Since the FFNN+CNN database only gives a pass/fail criterion, sensitivity analysis is not preferred in this case. The algorithm here follows a refined optimality criteria (OC) method. The optimization program structure is based on the 99-line topology optimization code (and thus the problem is similar, though the implementation is not identical to the FFNN-based optimization above). During the density update, for each element, three strain components will be passed to the FFNN+CNN. The FFNN+CNN will determine whether the current strain state is acceptable by assessing the local microstructural response. This is shown schematically in
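One density-update step of this kind might be sketched as follows. The bisection on the volume constraint follows the standard OC scheme; the treatment of CNN-flagged elements (pushing their density upward to stiffen the damaged region) is a hypothetical simplification of the procedure described above, and `damaged` stands in for the per-element output of the FFNN+CNN check:

```python
import numpy as np

def oc_update_with_damage(rho, sens, V_star, damaged, move=0.2):
    """One OC-style density update with a damage override.
    `rho`: element densities; `sens`: compliance sensitivities (negative);
    `V_star`: target volume fraction; `damaged`: boolean FFNN+CNN flags."""
    lo, hi = 0.0, 1e9
    while hi - lo > 1e-6:
        lam = 0.5 * (lo + hi)                    # bisection on the volume multiplier
        rho_new = np.clip(rho * np.sqrt(np.maximum(-sens, 1e-12) / lam),
                          np.maximum(rho - move, 1e-3),
                          np.minimum(rho + move, 1.0))
        # Elements whose microstructure is flagged as damaged are pushed toward
        # full density instead of following the usual OC update
        rho_new = np.where(damaged, np.minimum(rho + move, 1.0), rho_new)
        if rho_new.mean() > V_star:
            lo = lam
        else:
            hi = lam
    return rho_new
```

In a full loop this update would alternate with the FE solve and the per-element FFNN+CNN evaluation until the density field converges.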
Similarly to the FFNN example, panel (a) of
A summary of the design variables and important parameters related to the optimization is provided in Table 1-8. The simple examples above show the potential application of an FFNN+CNN database generated by clustering reduced-order methods. However, the optimization is based only on an artificially defined optimality criterion. The algorithm may not be stable for all kinds of problems. In the future, we should study the sensitivity and singularity of constraints based on an FFNN+CNN database.
Two challenges with current approaches to machine learning methods in the mechanical science of materials are: (1) the database generation time and effort are extensive, and (2) the application of machine learning is not well developed or understood by the community. This study covers several different topics related to these challenges:
- We have outlined, related, and compared three different clustering-discretization methods (SCA, VCA, and FCA) that rely on unsupervised learning for order reduction and the solution of mechanistic governing equations for prediction.
- One of these methods, SCA, was used to develop an example material behavior database suitable for training neural networks. This approach to database development substantially reduces the effort required to acquire the information upon which neural networks may be trained.
- The basic operations, and how these combine to make predictions of mechanical responses, in an FFNN were outlined. This includes the role of weights, biases and activation functions as well as the description of the training stage of the neural network as a minimization problem using notation common within the mechanical sciences.
- A similar description of convolutional neural networks was developed for two different possible applications: (1) to solve inverse problems where the boundary conditions need to be identified from a known stress distribution and (2) as a classifier to identify if damage will occur within a microstructure given a known stress distribution.
- Two microstructure-sensitive topology optimizations are demonstrated. In the first case, the material response at the microscale is derived from the FFNN results and used to perform design against a load that causes the material to behave in a non-linear elastic way. In the second case, a material damage constraint is added to the optimization, where the CNN is used to identify if damage has occurred on the microscale and penalize the design accordingly.
In short, we have provided methods to more rapidly produce the data needed to train neural networks, developed further insight into the workings of neural networks from a mechanical sciences perspective, and highlighted the potential for these methods to enhance practical design tasks. The database of responses made with SCA, the codes used for training and prediction with the neural networks, and the topology optimization codes have been developed. This will encourage the use of data science and machine learning as a tool for mechanistic analysis, rather than simply as an unknown black-box operator.
Several areas where further investigation might be useful have already been noted:
- Further development of clustering methods to represent large deformations, better capture anisotropic behavior or behavior that changes due to loading conditions, and even refine clusters during the prediction stage might be promising. The development of contact or self-contact formulations applicable to clustering discretization methods would aid in generality.
- Formulations of clustering discretization solutions applicable to the component scale (rather than only the RVE scale), and the extension of concurrent multiscale solutions that use clustering discretization at multiple scales, are currently under development.
- For neural networks, methods to include history-dependence (e.g., plasticity) are currently an active area. Including physics in the neural network directly is another developing area, e.g., with physics-informed neural networks (PINNs).
- For optimization, sensitivity analysis for topology optimization with FFNN and CNN should be further developed. More flexible software to support this would also be desirable.
- Multiscale topology optimization with various material microstructure databases is still a developing area. The approach outlined here may be a promising method to simultaneously optimize topology and microstructure given sufficient constraints.
- The “data-driven” component of these methods (both clustering-based discretization and neural networks) is not restricted to the use of computational data. Information from other sources, e.g., experimental sensor data and images, could be included if it is available. If mixed data streams are used, extra care in data representation would be required.
- Continuous validation and verification studies will help make these methods robust and reliable.
In this example, a mechanical science of materials, based on data science, is formulated to predict process-structure-property-performance relationships. Sampling techniques are used to build a training database, which is then compressed using unsupervised learning methods, and finally used to generate predictions by means of mechanistic equations. The method presented in this example relies on an a priori deterministic sampling of the solution space, a K-means clustering method, and a mechanistic Lippmann-Schwinger equation solved using a self-consistent scheme. This method is formulated in a finite strain setting in order to model the large plastic strains that develop during metal forming processes. An efficient implementation of an inclusion fragmentation model is introduced in order to model this micromechanism in a clustered discretization. With the addition of a fatigue strength prediction method also based on data science, process-structure-property-performance relationships can be predicted in the case of cold-drawn NiTi tubes.
Increasing research efforts in fine scale experiments and numerical modeling in recent decades have progressively led to a change in modeling approaches in mechanics and materials science. Empirical and phenomenological material laws that were previously used to model the nonlinear mechanical response of structures and materials are being replaced by microstructure-based mechanistic material laws. Under arbitrary loading conditions, the number of microstructure observations and conditions to be modeled makes the effort required for such an endeavor untenable for practical applications. The appeal of data science and in particular machine learning is a drastic reduction in the number of microstructure observations and simulations required to generate predictive material laws. There is hence a great interest in a data science theory for mechanical science of materials that could generate predictive material laws from a predefined database of experimental and numerical results.
Multiple approaches have been proposed in the literature to reach this goal, generally summarized by three steps: (1) collecting data using high-fidelity experiments and simulations to build a training database; (2) compressing the training database using unsupervised learning methods for dimension reduction; and (3) generating predictions using supervised learning methods or mechanistic equations on the compressed training database and optionally cross-validating those predictions using a testing database with new high-fidelity experiments and simulations.
The training database can be generated using, e.g., random sampling, Gaussian processes, or Sobol sequences. Because those sampling methods may require a lot of data points to cover the solution space sufficiently for accurate predictions, deterministic sampling methods have been considered by some authors. For instance, instead of considering a large number of arbitrary, random loading conditions for the training database, only 6 orthogonal loading conditions of small amplitude were proved to be sufficient for small strain elastoplastic analysis.
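As an illustration of such deterministic sampling, the six orthogonal loading conditions can be generated as unit strain tensors of small amplitude. The following sketch assumes a specific form (three uniaxial and three pure-shear modes); it is illustrative and not a reproduction of the cited work:

```python
import numpy as np

def orthogonal_loadings(amplitude=1e-4):
    """Six orthogonal macroscopic strain states for database generation.

    Returns a list of 3x3 symmetric strain tensors: three uniaxial and
    three pure-shear modes, each of small amplitude so the RVE response
    stays in the (nearly) linear regime used for training.
    """
    loadings = []
    # Uniaxial modes e_11, e_22, e_33
    for i in range(3):
        E = np.zeros((3, 3))
        E[i, i] = amplitude
        loadings.append(E)
    # Shear modes e_12, e_13, e_23 (symmetrized)
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        E = np.zeros((3, 3))
        E[i, j] = E[j, i] = amplitude / 2
        loadings.append(E)
    return loadings
```

The six tensors are mutually orthogonal under the tensor inner product, so together they span the space of symmetric strain states probed during training.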
Compression of the training database can be achieved using various unsupervised learning methods for dimension reduction, such as Proper Orthogonal Decomposition (POD), K-means clustering and self-organizing maps. The choice of compression method has a significant importance as it defines the discretization of mechanistic equations that will be solved in the prediction stage. POD leads to shape functions of global support, while clustering methods ensure a cluster-wise discretization.
As a result of data compression, the complexity of high-fidelity experiments and simulations that were used to build the training database is encapsulated in a few degrees of freedom. In order to solve for those degrees of freedom and predict mechanical response at arbitrary loading conditions, mechanistic equations have to be reformulated in terms of the reduced degrees of freedom. This new formulation of mechanistic equations is usually called a reduced order model, although this denomination encompasses approaches such as proper generalized decomposition which do not rely on data science.
Additionally, some approaches couple the data compression and mechanistic prediction steps to improve the reduced order model during the simulation. Some supervised learning methods have been applied directly to the training database with a built-in compression stage. This is the case for instance for artificial neural networks, which have been applied in the literature to predict mechanical properties of materials as a function of their microstructural characteristics.
In this exemplary study, we disclose a data science mechanistic approach for ductile materials by Self-consistent Clustering Analysis (SCA), a data-driven mechanistic material modeling theory developed for small strain elastoplastic materials. SCA relies on data compression through clustering and mechanistic prediction through micromechanics and homogenization theory.
In this study, the mechanistic equations that SCA relies on to make predictions are reformulated for finite strain elastoplastic materials. Numerical convergence of this new method is verified. This new formulation of SCA enables the prediction of the nucleation of voids in ductile materials by debonding and fragmentation of inclusions at the scale of their microstructure, which is shown in
Microstructure-based material modeling requires the definition of an idealistic or statistically representative microstructure realization, called RVE. Homogenized material laws can be computed by analytically or numerically solving a boundary value problem for the response of that RVE. For arbitrary microstructure geometries and complex behavior of microstructure constituents (plasticity, fracture), numerical methods such as the Finite Element (FE) method or Fast Fourier Transform (FFT)-based numerical methods are required.
The microstructures that will be studied in the present paper correspond to ductile materials and feature one or multiple inclusions and voids embedded in a matrix, as shown in
The FE method can be used with any structured or unstructured FE mesh of the undeformed RVE domain Ω0m (the superscript m means microscopic), while FFT-based methods require structured voxel meshes such as that shown in
u^m(X) ≈ Σ_{n=1}^{N_nodes} u^{m,n} N^n(X), X ∈ Ω_0^m, (2-1)
where Nnodes is the number of nodes in the FE mesh, um,n is the displacement vector at node n, and Nn is the FE shape function at node n. In FFT-based numerical methods, discrete equations are written for the deformation gradient tensor field Fm=I+∇Xum, which is approximated voxel-wise as
F^m(X) ≈ Σ_{n=1}^{N_voxels} F^{m,n} χ^n(X), X ∈ Ω_0^m, (2-2)
where Nvoxels is the number of voxels, Fm,n is the deformation gradient tensor in voxel n, and χn(X) is the characteristic function which is equal to 1 if X is inside voxel n and zero otherwise.
For a given microstructure, the displacement field um and the deformation gradient field Fm depend on boundary conditions applied to the RVE. In the present work, um will be decomposed over the RVE domain Ω0m into a linear part and a periodic part. As a result, Fm will be decomposed into a constant part FM (the superscript M means Macroscopic) and a periodic part with zero average over Ω0m. These assumptions correspond to first order homogenization theory.
Data science is used in the mechanical science of materials to predict either um or Fm as a function of FM. As stated in the introduction, the first step is to generate data through simulations. Simulation results in the training database will have large dimensions due to dependence of approximations in Eqs. (2-1) and (2-2) on either the number of nodes or the number of voxels. Data compression is necessary to obtain new approximations with reduced dimensions.
Data Compression
Dimension reduction can be achieved using various methods among which POD and clustering are presented and compared in the following.
The general formulation of data science approaches that is developed herein is only relevant if the complexity of the simulations to be conducted in the prediction stage is at least an order of magnitude greater than the complexity of the simulations required in the training and data compression stages. The relevance of data science approaches also depends on the amount of work that can be transferred out of the prediction stage. This will be evidenced in the following in the case of POD and clustering based data science approaches for mechanical science of materials.
Data compression in the case of POD consists in replacing the large number of local FE shape functions (N^n)_{n=1}^{N_nodes} with a small number K ≪ N_nodes of global shape functions, called modes, leading to the approximation
u_i^m(X) ≈ Σ_{k=1}^{K} u_i^{m,k} W_i^k(X), X ∈ Ω_0^m, (2-3)
where the modes are discretized at mesh nodes as
W_i^k(X) = Σ_{n=1}^{N_nodes} W_i^{k,n} N^n(X), X ∈ Ω_0^m. (2-4)
Simulations in the prediction stage can then be conducted using a standard FE weak form but replacing approximation of Eqs. (2-1) by (2-3). It is interesting to see that once the modes are computed in the data compression stage, Eq. (2-4) can be precomputed at integration points of the FE mesh in the same way that FE shape functions are usually precomputed in FE codes.
However, if the material is heterogeneous, or if it has a nonlinear behavior that leads to heterogeneous deformations, material integration still has to be solved at each integration point. Consequently, in the POD method, the complexity of material integration is not reduced. Additionally, the stiffness matrix associated with the FE weak form is dense because of the form of Eq. (2-4), and hence its solution using direct or iterative solvers has a cubic worst-case complexity instead of quadratic. However, this complexity depends on K instead of N_nodes, with K ≪ N_nodes, and is therefore drastically reduced by POD.
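For concreteness, POD modes are commonly extracted from a snapshot matrix by singular value decomposition. The following minimal sketch assumes displacement snapshots stored column-wise and is not tied to any particular FE code:

```python
import numpy as np

def pod_modes(snapshots, K):
    """Compute K POD modes from a snapshot matrix.

    snapshots : (n_dof, n_snap) array; each column is one displacement
                field u^m sampled at the FE nodes for one loading case.
    Returns (modes, energy): modes is (n_dof, K) with orthonormal
    columns; energy is the fraction of snapshot variance captured.
    """
    U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.sum(S[:K] ** 2) / np.sum(S ** 2)
    return U[:, :K], energy
```

The reduced coordinates u^{m,k} of Eq. (2-3) are then obtained by projecting a displacement field onto the modes, e.g. modes.T @ u.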
Data compression in the case of SCA follows a different approach, where the initial numerical method is FFT-based. The large number of voxels is to be replaced by K<<Nvoxels mutually-exclusive groups of voxels that are called clusters and that span the entire RVE domain. Clusters can be constructed using various clustering techniques such as K-means clustering, or self-organizing maps. Examples of data that can be used for clustering are given below. The resulting approximation replacing Eq. (2-2) is
F^m(X) ≈ Σ_{k=1}^{K} F^{m,k} χ^k(X), X ∈ Ω_0^m, (2-5)
where Fm,k is the cluster-wise constant deformation gradient tensor in cluster k, and χk(X) is the characteristic function which is equal to 1 if X is inside any voxel of cluster k, and zero otherwise. Because in the FFT-based numerical method the degrees of freedom are directly the voxel-wise constant deformation gradients, interpolation and integration are carried out at the same points. Thus, clustering degrees of freedom directly leads to a reduction of the number of degrees of freedom and of material integration complexity. In fact, in SCA, the complexity of all operations conducted in the prediction stage only depends on the number of clusters K, with the most expensive operation being, similarly to POD, the solution of a dense linear system. The latter results from the reformulation and discretization of the Cauchy equation into the discrete Lippmann-Schwinger equation. These steps are described in the following in the finite strain case following recent work on finite strain FFT-based numerical methods and then integrating it into SCA.
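A minimal Lloyd's-iteration K-means over voxel-wise features might look as follows. The choice of strain concentration components as features and the initialization from randomly chosen data points are illustrative assumptions:

```python
import numpy as np

def cluster_voxels(strain_data, K, n_iter=100, seed=0):
    """Group voxels into K clusters by Lloyd's K-means iteration.

    strain_data : (n_voxels, n_features) array; a common choice of
    features is the voxel-wise strain response extracted from the
    elastic training simulations (components of Eq. (2-2)).
    Returns labels (n_voxels,) assigning each voxel to a cluster.
    """
    rng = np.random.default_rng(seed)
    centers = strain_data[rng.choice(len(strain_data), K, replace=False)]
    for _ in range(n_iter):
        # Assign each voxel to the nearest cluster center
        d = np.linalg.norm(strain_data[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties
        new = np.array([strain_data[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(K)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

The resulting labels define the characteristic functions χ^k of Eq. (2-5): voxel n belongs to cluster k exactly when labels[n] == k.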
Continuous Lippmann-Schwinger Equation
As mentioned previously, first order homogenization consists in defining the deformation gradient tensor field in the RVE Fm as the addition of the macroscopic (homogeneous) deformation gradient FM and a microscopic (heterogeneous) fluctuation. Hill's lemma can be used to define the macroscopic first Piola-Kirchhoff stress tensor PM as the average of the microscopic one
Hill's lemma requires (Fm−FM) to verify compatibility, i.e., to derive from a periodic displacement field, and Fm to verify equilibrium, i.e., to be the solution of the Cauchy equation
∇_X · P^m(F^m(X)) = 0, X ∈ Ω_0^m. (2-6)
It can be shown that Eq. (2-6) is equivalent to the Lippmann-Schwinger equation
F^m(X) = −(Γ^0 * (P^m − C^0 : F^m))(X) + F^0, X ∈ Ω_0^m. (2-7)
The fourth-rank tensor C^0 is the stiffness tensor associated with an isotropic linear elastic reference material. The far-field deformation gradient tensor F^0 and the periodic Green's operator Γ^0 are determined below. The latter maps any tensor field τ^m to a compatible one:
∃ u ∈ (H^1(Ω_0^m))^3, u periodic on Ω_0^m, −Γ^0 * τ^m = ∇_X u, (2-8)
where H1(Ω0m) is the Sobolev space of square-integrable functions whose weak derivatives are also square-integrable.
The combination of Eqs. (2-7) and (2-8) yields a microscopic deformation gradient tensor Fm that verifies compatibility and a first Piola-Kirchhoff stress tensor Pm that verifies equilibrium.
Discrete Lippmann-Schwinger Equation
SCA includes solving Eq. (2-7) cluster-wise instead of voxel-wise. This choice is inspired from micromechanics and in particular Transformation Field Analysis.
As a result of the training stage, the RVE domain Ω_0^m is discretized into K subsets (Ω_0^{m,k})_{k=1}^{K}. The degrees of freedom in the FFT-based numerical method are associated with the microscopic deformation gradient F^m. In SCA, F^m is discretized by a cluster-wise constant approximation (F^{m,k})_{k=1}^{K}. As a consequence, the microscopic first Piola-Kirchhoff stress tensor is also approximated cluster-wise (P^{m,k})_{k=1}^{K}, and Eq. (2-7) can be discretized as
F^{m,k} = −Σ_{k′=1}^{K} D^{k,k′} : (P^{m,k′} − C^0 : F^{m,k′}) + F^0, k = 1, …, K, (2-9)
where D^{k,k′} is the interaction tensor defined by
D^{k,k′} = (1 / (c^k |Ω_0^m|)) ∫_{Ω_0^m} ∫_{Ω_0^m} χ^k(X) χ^{k′}(X′) Γ^0(X, X′) dX′ dX, (2-10)
with c^k = |Ω_0^{m,k}| / |Ω_0^m| the volume fraction of cluster k.
The characteristic functions χ^k and χ^{k′} are equal to 1 in, respectively, clusters k and k′, and 0 elsewhere. In the FFT-based numerical method, the periodic Green's operator Γ^0 depends on C^0, and is known in closed form in Fourier space. Because C^0 is related to an isotropic linear elastic reference material, Γ^0 can be expressed in Fourier space as a function of the reference Lamé parameters λ^0 and μ^0. It is then obtained in real space by using the inverse FFT. In particular, Eq. (2-10) can be written in the form
D^{k,k′} = f_1(λ^0, μ^0) D_1^{k,k′} + f_2(λ^0, μ^0) D_2^{k,k′}. (2-11)
The detailed expressions of f_1, f_2, D_1 and D_2 can be found in the literature. Drastic computational cost reduction is enabled by SCA thanks to a reduced number of degrees of freedom by clustering, and by the fact that D_1 and D_2 can be precomputed in the training stage. Therefore, neither FFTs nor inverse FFTs are computed in the prediction stage, even if the reference material is changing.
In the present work, mixed boundary conditions are coupled to Eq. (2-9). Some components F_{ij}^m of the average of the microscopic deformation gradient are set equal to their macroscopic counterparts F_{ij}^M, and the remaining components P_{ij}^m of the average of the microscopic first Piola-Kirchhoff stress tensor are set to zero. This can be done by adding the following conditions to Eq. (2-9):
F̄_{ij}^m = F_{ij}^M for (i, j) ∈ 𝒦, P̄_{ij}^m = 0 for (i, j) ∉ 𝒦, (2-12)
where the overbar denotes the average over Ω_0^m and 𝒦 ⊂ {1, 2, 3}^2 is the set of components for which kinematic conditions are imposed.
As noted, solutions of Eq. (2-9) depend on the choice of reference material. An optimal choice can be computed in the prediction stage by making the reference material consistent with the homogenized material. This means that the far-field deformation gradient tensor F^0 is an additional unknown that must be solved for in SCA, as opposed to the FFT-based numerical method where F^0 ≡ F^M. The self-consistent method consists in using a fixed-point iterative method where, at each step, the reference Lamé parameters λ^0 and μ^0 are changed so that ‖P^M − C^0 : (F^0 − I)‖_2 is minimized. This is presented in algorithm form in the Appendix.
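The per-iteration minimization of ‖P^M − C^0 : (F^0 − I)‖_2 over (λ^0, μ^0) reduces to a two-parameter linear least-squares problem when the isotropic reference stiffness is taken to act as C^0 : E = λ^0 tr(E) I + 2 μ^0 E. A minimal sketch under that assumption:

```python
import numpy as np

def update_reference_lame(PM, F0):
    """Least-squares update of the reference Lame parameters.

    Finds (lam0, mu0) minimizing || PM - C0:(F0 - I) ||_2 where the
    isotropic reference stiffness acts as
        C0:E = lam0 * tr(E) * I + 2 * mu0 * E.
    This is linear in (lam0, mu0), so an ordinary least-squares
    solve over the 9 tensor components suffices.
    """
    E = F0 - np.eye(3)
    a = (np.trace(E) * np.eye(3)).ravel()  # column multiplying lam0
    b = (2.0 * E).ravel()                  # column multiplying mu0
    A = np.stack([a, b], axis=1)
    coef, *_ = np.linalg.lstsq(A, PM.ravel(), rcond=None)
    return coef[0], coef[1]
```

In a full self-consistent loop this update would alternate with the solution of the linearized Eq. (2-9) until the reference parameters stop changing.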
To summarize, SCA is based on a voxel-wise discretization of the RVE domain, which is inherited from FFT-based numerical methods. The originality of SCA comes from the use of a K-means clustering algorithm in the training stage to cluster voxels based on a mechanistic a priori clustering criterion computed using a simple sampling of the loading space. This training stage also includes computing all voxel-wise and computationally expensive operations such as FFTs and inverse FFTs.
In the prediction stage, a self-consistent iterative algorithm is used to search for the optimal choice of reference Lamé parameters. At each iteration of this self-consistent loop, matrix assembly operations are accelerated because all voxel-wise operations have been precomputed in the training stage and already reduced to cluster-wise contributions. A Newton-Raphson iterative algorithm must be embedded within each self-consistent iteration as we are considering nonlinear materials, thus the discrete Lippmann-Schwinger equation is linearized.
The outputs from SCA are the microscopic variables' cluster-wise approximations, including the microscopic first Piola-Kirchhoff stress tensor. The latter can be used to predict void nucleation micromechanisms through stress-based fracture criteria as will be described in Sec. 4.
The main advantage of SCA over POD techniques is that in the prediction stage all operations are conducted cluster-wise in SCA instead of voxel-wise, including material integration and even fragmentation modeling.
Numerical Validation
Before considering a specific application, a general ductile material microstructure is modeled in this section using both a finite strain FFT-based numerical method and finite strain SCA. The goal is to validate the numerical convergence of SCA towards the reference result, and its capability to compute accurate predictions with a reduced complexity, as was shown for the small strain case.
In the present large strain case, hyperelastic behavior is modeled both in the matrix and in the inclusions. A multiplicative von Mises plasticity model with linear isotropic hardening is added to model the nonlinear response of the matrix. Inclusions are assumed to be brittle, which is a common assumption for hard phases in ductile materials. The model ductile material microstructure is shown in panel (a) of
The reference result is computed on the voxel mesh using the finite strain FFT-based numerical method under unidirectional tension up to 25% applied logarithmic strain with strain increments of size 0.001. The training database used for K-means clustering simply includes the voxel-wise deformation gradient tensor extracted from the first increment of this reference simulation, which corresponds to a linear elastic analysis. In other words, the simulation used for training is identical in all aspects (mesh, geometry, material properties) to the one conducted in the prediction stage, except for loading conditions, because only one strain increment is applied during training. This database can hence be constructed with a negligible computational cost. The K-means algorithm requires a predefined number of clusters, which is varied as k_1 = 1, 4, 16, 64, 256 in the matrix phase, and k_2 = 1, 1, 4, 13, 26 in the inclusion phase.
The comparison between the reference macroscopic stress/strain curve and those predicted by finite strain SCA is presented in panels (b)-(c) of
To show the influence of clustering on computational complexity, a comparison of computation times is presented in panel (c) of
This interesting advantage of SCA is demonstrated in a second set of simulations where the inclusions in panel (a) of
The comparison between the reference macroscopic stress/strain curve and those predicted by finite strain SCA is presented in panels (a)-(b) of
The main advantage of SCA in terms of computation time is preserved, as shown in panel (b) of
These first simulations with the proposed finite strain SCA formulation are promising as predictions are close to the reference results for a drastically reduced computational complexity. However, this comparison is purely global: only averaged results are being assessed. At large plastic strain or for more complex loading conditions, very heterogeneous and localized strain fields may develop and would require improvements to the method. For instance, adaptive clustering techniques could be considered to update the clustering when plastic localization phenomena occur within the RVE.
Fatigue Strength Prediction of Cold Drawn NiTi Tubes
The objective of this section is to demonstrate the utility of a mechanical science of materials based on data science to predict process-structure-property-performance relationships. The chosen application is the prediction of the fatigue strength of cold drawn NiTi tubes as a function of drawing ratio and initial inclusion Aspect Ratio (AR).
NiTi tubes are formed using a series of hot and cold metal forming processes coupled to heat treatments. They are used in the making of, e.g., arterial stents and heart valve frames that undergo a large number of cyclic loads due to heart beats. A critical measure of a tube's performance is hence the fatigue strength of its material, which is itself a function of the material's fatigue life under different applied cyclic strain amplitudes. This fatigue life can be predicted using micromechanical simulations, which depend on the microstructural constitution of NiTi tubes. The latter is a consequence of forming processes, and in particular of the cold drawing step, which is the focus of the following study.
We disclose simulating cold drawing at the microscale by applying biaxial compression to an initially debonded inclusion embedded within a NiTi matrix. In order to predict the evolution of this microstructure during cold drawing, the finite strain SCA theory introduced above is completed with a fragmentation model. The microstructure is extracted at different stages of this drawing model and used as input to a data-driven fatigue life prediction model developed in previous studies and described briefly. The transfer of the microstructure morphology from the drawing model to the fatigue life prediction model requires a displacement reconstruction and microstructure interpolation step that is described. Process-structure-property-performance predictions obtained using this data science approach are presented.
Cold Drawing Model
Using the self-consistent scheme presented above, the discrete Lippmann-Schwinger equation (2-9) can be solved with mixed boundary conditions (2-12) and appropriate constitutive models for the microstructure's constituents. Constitutive models and material parameters are kept identical to those used and reported in Table 2-1, but they are completed by an inclusion fragmentation model. In addition, inclusions are assumed to be initially debonded as a result of high shear stresses developing early at the inclusion/matrix interface during cold drawing, similarly to the cold extrusion process.
Inclusion fragmentation is modeled using a Tresca yield criterion averaged inclusion-wise, following a regularization technique used in a previous work and coupled to a size-effect criterion. The Tresca criterion defines the shear stress σ_Tresca as
σ_Tresca = 1/2 max(|σ_1 − σ_2|, |σ_2 − σ_3|, |σ_3 − σ_1|) (2-13)
where σ_1, σ_2, σ_3 are the cluster-wise constant principal stresses computed within each cluster of the inclusion phase. The shear stress σ_Tresca is then averaged inclusion-wise, or inclusion fragment-wise if the inclusion has already broken up, and that averaged value
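Because the principal stresses are real, the maximum in Eq. (2-13) equals the spread between the largest and smallest principal stresses, so the cluster-wise Tresca stress reduces to a one-line computation:

```python
import numpy as np

def tresca_stress(sigma):
    """Tresca shear stress (Eq. 2-13) from a 3x3 symmetric stress tensor.

    sigma_Tresca = 1/2 * max(|s1 - s2|, |s2 - s3|, |s3 - s1|),
    where s1, s2, s3 are the principal stresses (eigenvalues).
    For real eigenvalues this maximum is simply s_max - s_min.
    """
    s = np.linalg.eigvalsh(sigma)  # principal stresses, ascending order
    return 0.5 * (s[2] - s[0])
```

In the fragmentation model this value would then be averaged over the clusters of each inclusion (or fragment) before being compared against the fragmentation criterion.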
Fatigue Life Prediction Model
High-cycle fatigue life can be estimated based on local plastic deformation predicted under cyclic loading, where these deformations are computed using the SCA outlined above. Since cyclic strain amplitudes are low, typically below 1% reversed strain, a small strain formulation can be used. An appropriate micro-scale material law, crystal plasticity (CP), is solved cluster-wise to obtain the cyclic change in plastic shear strain (Δγ^p) and the stress normal (σ_n) to that strain in the matrix material. The peak values of these variables reach cyclic steady-state rapidly, typically within 3 or 4 loading cycles. From these, a scalar value called the Fatigue Indicating Parameter (FIP) can be defined, which quantifies the fatigue driving force at any location. Here, we adopt the Fatemi-Socie FIP, defined by
FIP = (Δγ^p / 2)(1 + κ σ_n / σ_y).
This FIP is a critical plane approach in which the plane maximizing the plastic shear strain range is identified, with the normal stress on that plane, σ_n, normalized by the material yield stress (σ_y) and weighted by a material dependent parameter κ, which controls the influence of normal stress (here κ will be taken as 0.55). When used with SCA and a CP law, the FIP in each cluster is computed cluster-wise from plastic strain and stress across time increments using a simple search to maximize the plastic strain, and thus identify the critical plane. The maximum, saturated FIP (FIP_max) can be correlated to the number of fatigue incubation cycles (N_inc) a microstructure can withstand using a Coffin-Manson parameterization. By computing N_inc for a number of different strain amplitudes to generate a strain-life curve, the fatigue strength (the strain amplitude at which a given number of cycles can be reached) can be computed through interpolation or fitting.
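The FIP evaluation on the identified critical plane is then a direct formula. The sketch below assumes the standard Fatemi-Socie form FIP = (Δγ^p/2)(1 + κ σ_n/σ_y), with the κ = 0.55 default quoted above:

```python
def fatemi_socie_fip(delta_gamma_p, sigma_n, sigma_y, kappa=0.55):
    """Fatemi-Socie fatigue indicating parameter on the critical plane.

    delta_gamma_p : cyclic change in plastic shear strain on the plane
    sigma_n       : stress normal to that plane
    sigma_y       : material yield stress
    kappa         : weight controlling the influence of normal stress
    """
    return 0.5 * delta_gamma_p * (1.0 + kappa * sigma_n / sigma_y)
```

Cluster-wise, this would be evaluated after the plane search that maximizes the plastic shear strain range, and the saturated maximum value correlated to N_inc through the Coffin-Manson parameterization.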
The CP material law, calibrated to capture the hardening response of the B2 phase of NiTi, is used. Post-yield hardening is computed with a backstress term that accounts for direct and dynamic hardening. Consistent with the worst-case or nearly worst-case approximation for the material phase, and the lack of crystallographic information from the cold drawing model, the matrix is assumed to be composed of a single grain oriented such that its Schmid factor is maximized. The hard inclusion phase (representing oxides or carbides) is taken to be linear-elastic with an elastic modulus 10 times that of the matrix material. This process follows the CPSCA formulation, which integrates the CP material law within SCA, and the fatigue prediction method demonstrated for synthetic microstructures; the same model parameters are used for the CP, FIP, and Coffin-Manson models.
Transfer of Microstructure from Cold Drawing Model to Fatigue Life Prediction Model
The fatigue life prediction model relies on CPSCA and hence on an underlying voxel mesh. In order to use this model, the deformed and fragmented microstructures from the drawing simulation results need to be transferred to a new voxel mesh.
The first step is to reconstruct the microscopic displacement vector field, since both FFT-based numerical methods and SCA only solve for the microscopic deformation gradient tensor. This is done using a simple Taylor expansion, or forward finite difference formula, which computes the displacements at all nodes of the voxel mesh from the microscopic deformation gradients inside voxels. This computation starts at some arbitrary corner of the RVE domain where the displacement is fixed to zero, which is in agreement with FE-based linear homogenization implementations.
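A minimal version of this forward finite-difference reconstruction on a voxel grid, assuming the displacement is fixed to zero at the origin corner and a march order of x, then y, then z (the node indexing convention is an illustrative choice):

```python
import numpy as np

def reconstruct_displacements(F, dx=1.0):
    """Displacement at the lower corner node of every voxel.

    F : (nx, ny, nz, 3, 3) voxel-wise deformation gradient field.
    Marches from the origin corner (where u = 0) using the forward
    finite-difference relation u(X + dX) ~= u(X) + (F(X) - I) . dX,
    integrating along x on the (j=0, k=0) row, then along y on the
    k=0 slice, then along z. Returns u with shape (nx, ny, nz, 3).
    """
    G = F - np.eye(3)  # voxel-wise displacement gradient
    nx, ny, nz = F.shape[:3]
    u = np.zeros((nx, ny, nz, 3))
    ex, ey, ez = np.eye(3) * dx  # grid step vectors
    for i in range(1, nx):
        u[i, 0, 0] = u[i - 1, 0, 0] + G[i - 1, 0, 0] @ ex
    for j in range(1, ny):
        u[:, j, 0] = u[:, j - 1, 0] + G[:, j - 1, 0] @ ey
    for k in range(1, nz):
        u[:, :, k] = u[:, :, k - 1] + G[:, :, k - 1] @ ez
    return u
```

For a compatible deformation gradient field the result is path-independent up to the finite-difference truncation error, consistent with the Taylor-expansion reconstruction described above.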
Once the displacement vector field has been reconstructed, the mesh can be deformed by adding these displacements to node coordinates, as shown in panels (a)-(b) of
Fatigue Strength Predictions
Three different initial conditions for the drawing model were considered, representing possible variability in the quality and degree of processing in the feed stock used for cold drawing. Each condition includes a single, ellipsoidal, debonded inclusion centered in the matrix with a different AR in the load direction. The cross-sectional area (i.e., the minor axis of the ellipsoid) of the initial configuration is kept constant, and the length (in the drawing direction) is changed. By doing this, we study the influence of AR (mean curvature) on drawing, fragmentation and subsequent fatigue life. These three cases are deformed up to 60% section height reduction by applying biaxial compression with a stress-free third axis. For each case, at every 5% height reduction the procedure outlined above is performed and the fatigue strength is computed. In order to compute the fatigue strength, fatigue lives at strain amplitudes ranging between 0.36% and 0.54% in increments of 0.06% were computed, and the strain amplitude required to reach 10^7 cycles was estimated using a power-law fit to that data. Triangle waveforms with a loading rate of 0.1/s were used throughout to apply fully reversed tension-compression (strain ratio R = −1) fatigue loading.
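The estimation of the strain amplitude that reaches 10^7 cycles can be sketched as a linear fit in log-log space; the power-law form N = a·ε^b is an assumption consistent with the fitting described above:

```python
import numpy as np

def fatigue_strength(strain_amps, lives, target_cycles=1e7):
    """Strain amplitude reaching a target fatigue life.

    Fits the power law N = a * eps^b as a straight line in log-log
    space through the (strain amplitude, life) pairs, then inverts it
    for the target number of cycles.
    """
    b, log_a = np.polyfit(np.log(strain_amps), np.log(lives), 1)
    return float(np.exp((np.log(target_cycles) - log_a) / b))
```

With the strain amplitudes quoted above (0.36% to 0.54% in 0.06% steps) and their predicted lives, this returns the fatigue strength at 10^7 cycles for each drawing condition.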
The results from this parametric study are shown in
According to the embodiment, a general formulation of data science approaches for mechanical science of materials is presented. This general formulation includes reducing the complexity of process-structure-property-performance prediction methods using unsupervised learning methods on a training database of high-fidelity simulations. This is evidenced in the case of a training database composed of RVE simulation results computed under various loading conditions using the FE method or FFT-based numerical methods. Dimension reduction leads to a compressed RVE model where nodes and voxels are replaced by either modes or clusters depending on the unsupervised learning method used for data compression.
In the prediction stage, supervised learning methods or mechanistic equations are solved using the compressed RVE. For instance, POD can be used to solve the Cauchy equation using a compressed FE discretization where the complexity depends on the number of modes instead of the number of nodes. Similarly, K-means clustering can be used to solve the Lippmann-Schwinger equation with a complexity depending on the number of clusters instead of the number of voxels. The key advantage of this latter approach over the former is that it reduces the complexity of both the mechanistic equation solution and the material integration.
The solution of the Lippmann-Schwinger equation using a clustered discretization requires a self-consistent scheme that has been extended to finite strain elastoplastic materials and coupled to micromechanical void nucleation models in this example. As a result, microstructure evolutions due to large plastic strains have been captured during the cold drawing of NiTi tubes. These microstructure evolutions have been related to the fatigue life and then the fatigue strength of NiTi tubes using a second data science based approach developed in a previous work. Therefore, it has been demonstrated that the proposed data science formulation for mechanical science of materials can predict process-structure-property-performance with a reduced complexity.
Example 3 Predictive Multiscale Modeling for Unidirectional Carbon Fiber Reinforced Polymers
This exemplary study presents a predictive multiscale modeling scheme for Unidirectional (UD) Carbon Fiber Reinforced Polymers (CFRP). A bottom-up modeling procedure is discussed for predicting the performance of UD structures. UD material responses are computed from high fidelity Representative Volume Elements (RVEs). A data-driven Reduced Order Modeling (ROM) approach compresses RVEs into Reduced Order Models so material responses can be computed in a concurrent fashion along with the structural level simulation. The approach presented in this example is validated against experimental data and is believed to provide design guidance for the future fiber reinforced polymer development process.
In modern engineering applications, composite materials are receiving growing attention for their extraordinary light weight and strength. To understand the mechanical performance of various CFRP designs, physical tests are necessary. With computational power continually growing, it is now possible to utilize the Integrated Computational Material Engineering (ICME) approach to virtually evaluate the performance of composite designs and provide design guidance for composite materials. The ICME approach directly integrates microstructure information into property and performance prediction. In the ICME process, the intrinsic relationship between material microstructure and mechanical performance can be captured by a multiscale model which links microstructure to macroscale performance. In this exemplary study, a bottom-up ICME modeling framework for UD CFRP is introduced. The framework incorporates a two-stage ROM technique that enables rapid computation of UD microstructure nonlinear responses during UD CFRP structure simulation. The bottom-up multiscale modeling workflow for UD CFRP is explained in
There has been considerable effort during the past decades in incorporating microstructure information to the macroscale model for performance prediction. For example, one can model all microstructure details into a single model, but this computation is expensive due to the fine mesh required. By deploying multiscale modeling techniques, the macroscale responses are predicted with physical microscale information. A hierarchical multiscale modeling approach as demonstrated by the Multiresolution Continuum Theory is one of them. In the Multiresolution Continuum Theory, the microstructure information is implemented into the macroscale constitutive law to construct a hierarchical multiscale model, which captures microstructural effects, such as the effect of inclusion size. Although this hierarchical multiscale modeling method preserves certain microstructure information, it does not provide explicit microstructure evolution.
To capture microstructure responses, concurrent homogenization can be deployed. One of the concurrent homogenization schemes is the Finite Element square (FE2) approach. In the FE2 approach, the macroscale geometry is discretized with a Finite Element (FE) mesh. Material responses at all integration points are computed by solving RVEs that are discretized with FE meshes. The FE2 approach solves two sets of FE meshes in a concurrent fashion, which results in high computational cost.
To improve computational efficiency, various ROM approaches were developed, such as Transformation Field Analysis, Nonuniform Transformation Field Analysis and Proper Orthogonal Decomposition. A newly proposed data-driven two-stage ROM approach, namely Self-consistent Clustering Analysis (SCA), creates ROMs from high fidelity voxel mesh RVEs and simulates RVEs' elasto-plastic behaviors with an effective RVE damage and failure. In the offline stage, a three-step approach is introduced: 1) data collection, such as collecting the strain concentration tensor for each voxel in the RVE; 2) unsupervised learning, which classifies all voxel elements into different clusters; 3) generation of cluster-wise interaction tensors. In the online prediction stage, cluster-wise strain responses are identified by solving a discretized Lippmann-Schwinger equation. A previous publication shows that the SCA drastically reduces computational expense, and its accuracy is verified against a high fidelity Direct Numerical Simulation (DNS). Therefore, the three-step, two-stage data-driven SCA method is a valuable tool in the ICME process for modeling UD CFRP composites. An SCA flow chart for solving the Lippmann-Schwinger equation in one embodiment is shown in
In this example, the ICME modeling framework for UD CFRP structure performance prediction is presented. Under the framework, the macroscale UD model is discretized with an FE mesh. The UD microstructures are characterized by RVEs. The RVEs are compressed into UD Reduced Order Models (ROMs) and provide mechanical responses for all integration points on-the-fly. The constitutive response of each constituent, e.g., fiber and matrix, is obtained from physical tests. The UD ROMs capture elastic and elasto-plastic responses of the UD CFRP by computing the RVE responses. Moreover, the UD non-linearity due to matrix plasticity differentiates this work from previous efforts in CFRP ICME modeling, in which structural analysis is made by assuming linear elastic material responses. The UD ROMs compute RVE responses efficiently enough to replace the phenomenological anisotropic material model and minimize the material-constant calibration effort.
The methodology developed can be generalized for predicting the performance of fiber reinforced polymers in general. Basic materials information, the experimental procedures of the UD CFRP coupon off-axial tensile test and the UD CFRP 3-pt bending test, the ICME modeling process for the UD CFRP in detail, and results and experimental validation are discussed below.
Materials Information
In this section, the material properties obtained for A42 carbon fiber from DowAksa and thermoset epoxy resin from Dow Chemical are provided. Material properties provided in this section are used throughout the modeling work in this exemplary study.
The fiber elastic constants are given in Table 3-1 below. Fibers are assumed to behave elastically. The fiber direction compressive strength is assumed to be one-fourth of the tensile strength, where the tensile strength (TS) and compressive strength (CS) are assumed to be 4200 MPa and 1050 MPa, respectively. A ductile damage model for carbon fibers is assumed, as shown in Eq. (3-1), where dfiber=0 means no damage and dfiber=1 means the fiber is fully damaged.
The epoxy resin elastic constants and tensile and compressive strengths are given in Table 3-2. The tensile and compressive yielding curves are given in panels (a)-(b) of
Material characterizations provide a good understanding of the material of interest. Various testing methods deliver the CFRP properties and provide validation data for the prediction made by the ICME framework. In this work, two types of experiments are identified for examining the predictivity of the proposed UD CFRP ICME framework: (1) UD coupon 10° off-axial tensile test and (2) UD 3-pt bending test.
UD CFRP Coupon 10° Off-Axial Tensile Test:
The UD CFRP coupon specimen is prepared through the following steps. The edge of the UD plaque (with nominal fiber volume fraction of 50%) served as reference for the determination of the angle for cutting the 10° off-axis orientation. The specimen head areas and the tab (woven fiberglass in an epoxy resin) surfaces (with a length of 50 mm) were prepared with grinding paper before applying a commercially available acrylic adhesive. Metallic wires with a diameter of approximately 220 μm were used as spacers between the specimen and the tab surfaces. The tabbing angle of about 16° was formed by grinding. The specimens were cut with a waterjet system to a nominal width (w) of 12.7 mm and a length of 210 mm, for a resulting aspect ratio of 9 in the gauge section of length l=120 mm, as shown in
The displacement-controlled tensile tests have been conducted on a servo hydraulic testing machine at a quasi-static loading rate of 0.0167 mm/s (1 mm/min). The loading has been induced by the actuator located at the bottom of the test frame. Before each test, a precision steel block had been used for rotational alignment of the actuator to reduce out-of-plane misorientation. The specimens were rigidly hydraulically gripped with anti-rotation collars installed using diamond jaw surfaces and a pressure of 4 MPa. The gripping length on each side ranged between 30 mm and 40 mm. The specimens have been aligned with specimen stops in the grip.
All specimens were prepared for Digital Image Correlation (DIC) measurement with commercially available matte white spray paint, followed by applying matte black spray paint to create a random pattern by the overspray method, as shown in panel (a) of
For stereo-DIC measurement, two 4.1 Mpx (2048 px×2048 px) cameras and 35 mm fixed focal length lenses were used. The image acquisition rate was 2 Hz. The resolution was 60 μm to 70 μm per pixel and the size of the dark speckles was about 232 μm (3.4 px), measured via the line intersection method. The dark/bright ratio of the sample was nearly one (54:46). For data analysis, the chosen subset size was 15 px and the step size was 6 px. The reference image has been taken at a force F=0 kN while the specimen has only been clamped by the top grip. For analysis, engineering strain has been calculated using a commercially available DIC software package.
UD CFRP Hat-Section Dynamic 3-pt Bending Test:
The UD CFRP hat-section studied in the current work was molded with A42 fibers, provided by DowAksa, and thermoset epoxy resin with a fiber volume fraction of 50%. The geometry of the dynamic 3-point bending test sample is shown in panel (a) of
The setup of the dynamic 3-point bending test is shown in panel (c) of
In the ICME framework, the CFRP structure is modeled in a bottom-up fashion, as introduced in
ICME Multiscale Modeling Work Flow
For the illustration of the ICME multiscale model setup, the UD CFRP coupon specimen geometry described above is used. The UD CFRP coupon FE model mirrors the real UD CFRP coupon and is a one-to-one replica of the off-axial tensile experiment performed. The FE model contains all 12 UD CFRP laminae. Each of these laminae is modeled as a single layer of thick shell elements with 2 integration points in the thickness direction (Z direction), as shown in
To illustrate the diversity of the microstructure database, two selected integration points, marked by the red box in
The RVEs illustrated in
UD RVE Modeling
The cured UD CFRP lamina plaque is manufactured by Dow Chemical and the cross-section of the UD CFRP under microscope is shown in
RVEs are used to characterize the UD CFRP microstructure. In general, when the RVE is large enough, typically ten times the fiber diameter, the random distribution of fibers does not affect the RVE responses significantly.
To generate the UD RVE, the fiber cross-section is simplified as a circular shape with a diameter of 7 μm, and the fiber is assumed to be perfectly straight. The cross-section of the UD RVE can therefore be modeled in 2D, where the Monte Carlo method is used to pack circles randomly in a 2D domain until the target fiber volume fraction is met. If part of any fiber lies outside the four edges of the RVE, that part reappears on the opposite side to ensure a periodic distribution of all fibers, so the RVE complies with the periodic boundary condition (PBC) assumption. The generated 2D geometry is then discretized by square pixels. The algorithm flow chart for generating a 2D mesh of the UD RVE is given in the
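The Monte Carlo packing step with periodic wrap-around can be sketched as follows; the normalized box size, fiber diameter, and acceptance test are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pack_fibers(box=1.0, d=0.14, target_vf=0.5, seed=0, max_tries=100000):
    """Monte Carlo packing of circular fiber cross-sections into a square RVE
    with periodic (minimum-image) overlap checks, run until a target fiber
    volume fraction is reached. Illustrative sketch; the box and diameter are
    normalized values, not the physical 7-micron geometry."""
    rng = np.random.default_rng(seed)
    r = d / 2.0
    centers = []
    area, area_target = 0.0, target_vf * box * box
    for _ in range(max_tries):
        if area >= area_target:
            break
        c = rng.uniform(0.0, box, size=2)
        ok = True
        for p in centers:
            delta = np.abs(c - p)
            delta = np.minimum(delta, box - delta)  # periodic wrap-around
            if np.hypot(delta[0], delta[1]) < d:    # closer than one diameter: overlap
                ok = False
                break
        if ok:
            centers.append(c)
            area += np.pi * r * r
    return np.array(centers)
```

Rejected trial positions are simply redrawn, and the minimum-image distance makes a fiber near one edge exclude positions near the opposite edge, mirroring the periodic reappearance described above.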
The generated UD RVE is given in
To utilize the UD RVE to compute stress responses on-the-fly in a macroscale FE model, the ROM technique is used to compress RVEs into a microstructure database. The ROM process is given in the following sections.
A DNS of RVE transverse tensile loading is also performed. The DNS is used to verify the efficacy of the ROM, which should produce accurate results compared with the DNS solution. Fiber and matrix properties follow the data given above.
Reduced Order Modeling of UD RVE
The aforementioned UD RVE contains a significant number of DOFs due to its fine mesh resolution, making even a single RVE run costly. Directly modeling the UD CFRP structure with UD RVEs is therefore computationally unaffordable. Instead, the ROM technique is applied to the UD RVEs to generate their ROMs, which reduces the computational cost significantly compared to the full RVE computation. The SCA approach is used to generate the ROMs for the UD RVEs. All necessary derivations for SCA are provided in Appendix A. In this subsection, we focus on illustrating the 3-step offline stage computation and the online prediction stage.
Offline: The offline stage starts with a high fidelity RVE discretized by a voxel mesh. The strain concentration tensor A(x) links macroscopic strain applied on the RVE to each voxel through the following relationship:
εm(x)=A(x):εM (3-2)
where εm is the microscopic strain at any voxel in the RVE and εM is the applied macroscopic strain of the RVE. A(x) is the well-known strain concentration tensor. Under the Voigt notation, εm and εM are both 6 by 1 vectors, which means A(x) is a 6 by 6 matrix. A(x) can be computed by applying six orthogonal loading conditions in which εM has only one non-zero component at a time. Each loading condition yields one column of A(x), so six loading conditions provide all 36 components of A(x).
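The column-by-column construction of A(x) can be sketched as below, assuming the six microscopic strain fields (one per orthogonal unit load case) have already been computed by an RVE solver:

```python
import numpy as np

def assemble_A(micro_strains):
    """Assemble the 6x6 strain concentration tensor A(x) at every voxel.
    micro_strains[k] is the (n_voxels, 6) Voigt microscopic strain field
    obtained by applying a unit macroscopic strain in component k; each such
    load case supplies the k-th column of A(x). The strain fields themselves
    are assumed to come from FE or FFT RVE solves."""
    n_vox = micro_strains[0].shape[0]
    A = np.zeros((n_vox, 6, 6))
    for k in range(6):
        A[:, :, k] = micro_strains[k]  # k-th column at every voxel
    return A
```

As a sanity check, a homogeneous medium (microscopic strain equal to the applied macroscopic strain) yields the identity tensor at every voxel.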
Once A(x) is computed for each voxel, unsupervised learning can be applied to all the A(x) tensors within the RVE to perform clustering. This process compresses the original RVE, made of many voxel elements, into several clusters. For a UD RVE, the fiber and matrix phases are clustered separately. The numbers of clusters in the fiber and matrix phases are denoted as Kf and Km, respectively, and it is convenient to define K=Kf+Km. The clustering process for UD RVE setup one and two, as depicted in
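A minimal sketch of the phase-wise clustering step, using a small hand-rolled k-means on flattened A(x) tensors (any standard k-means implementation would serve the same purpose):

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal Lloyd's k-means with deterministic farthest-point seeding,
    grouping rows of X (flattened 6x6 A(x) tensors) by similarity."""
    centers = X[:1].astype(float)
    while len(centers) < k:  # greedy farthest-point initialization
        d = np.linalg.norm(X[:, None] - centers[None], axis=2).min(axis=1)
        centers = np.vstack([centers, X[d.argmax()]])
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

def cluster_phases(A, phase_id, k_fiber, k_matrix):
    """Cluster fiber and matrix voxels separately, as in the SCA offline stage.
    A: (n_voxels, 6, 6) strain concentration tensors; phase_id: 0=fiber, 1=matrix.
    Matrix cluster ids are offset by k_fiber so all K = Kf + Km ids are distinct."""
    X = A.reshape(len(A), -1)
    labels = np.empty(len(A), dtype=int)
    fib = phase_id == 0
    labels[fib] = kmeans(X[fib], k_fiber)
    labels[~fib] = k_fiber + kmeans(X[~fib], k_matrix)
    return labels
```

Clustering each phase on its own guarantees that no cluster mixes fiber and matrix voxels, so each cluster keeps a single constitutive law.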
Interaction tensors DIJ must be computed between all cluster pairs. Once interaction tensors are computed, it is possible to solve for cluster-wise strain increments by solving the discretized Lippmann-Schwinger equation.
Online: The online stage involves solving the following residual form in Eq. (3-3). Details of the SCA online stage are given in Appendix A.
rI=−ΔεM+ΔεI+ΣJ=1K[DIJ:(ΔσJ−C0:ΔεJ)], I=1,2,3, . . . ,K (3-3)
where rI is the residual of the strain increment on each cluster. The residual can be minimized by linearizing Eq. (3-3) with respect to ΔεI and solving for the ΔεI that minimizes rI.
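For linear elastic clusters the residual in Eq. (3-3) is linear in the strain increments, so minimizing it reduces to a single linear solve; a sketch under that simplifying assumption (elasto-plastic clusters would require a Newton loop around this solve):

```python
import numpy as np

def solve_cluster_strains(D, C, C0, d_eps_M):
    """Solve the discretized Lippmann-Schwinger equation of Eq. (3-3) for the
    cluster strain increments, assuming LINEAR ELASTIC clusters so that the
    cluster stress increment is C[J] @ d_eps[J]. Assumed shapes (Voigt):
      D: (K, K, 6, 6) interaction tensors, C: (K, 6, 6) cluster stiffnesses,
      C0: (6, 6) reference stiffness, d_eps_M: (6,) macroscopic increment."""
    K = C.shape[0]
    M = np.zeros((6 * K, 6 * K))
    for I in range(K):
        for J in range(K):
            blk = D[I, J] @ (C[J] - C0)   # D^{IJ} : (C^J - C^0)
            if I == J:
                blk = blk + np.eye(6)     # identity from the d_eps^I term
            M[6 * I:6 * I + 6, 6 * J:6 * J + 6] = blk
    rhs = np.tile(d_eps_M, K)             # -d_eps^M moved to the right-hand side
    return np.linalg.solve(M, rhs).reshape(K, 6)
```

With vanishing interaction tensors every cluster simply inherits the macroscopic strain increment, which is a convenient correctness check.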
Note that in the present multiscale modeling scheme, ROMs are deployed on all integration points in the FE mesh of the composite laminate structure. At each integration point, the macroscopic strain increment ΔεM is provided by the FE solver and the cluster-wise stress and strain responses are solved with Eq. (3-3). The homogenized RVE stress increment, denoted as ΔσM, is returned to the FE solver.
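The stress handed back to the FE solver is the volume-fraction-weighted average of the cluster stress increments; a minimal sketch with assumed array shapes:

```python
import numpy as np

def homogenized_stress_increment(d_sigma, weights):
    """Macroscopic stress increment returned to the FE solver: the
    volume-fraction-weighted average of the cluster-wise stress increments.
    d_sigma: (K, 6) cluster stress increments in Voigt notation.
    weights: (K,) cluster volume fractions, summing to one."""
    return np.einsum('k,ki->i', weights, d_sigma)
```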
DNS Vs. Reduced Order Model
Two sets of ROMs of the UD RVE are generated with a different number of clusters: 1) 2 clusters in the fiber phase (Kf=2) and 8 clusters in the matrix phase (Km=8); 2) 16 clusters in the fiber phase (Kf=16) and 16 clusters in the matrix phase (Km=16).
To verify that the ROMs produce satisfactory results, a transverse tensile test with a maximum strain magnitude of 0.02 is performed. The same loading is applied to the two ROMs as well. Stress and strain curves for all three cases are plotted in
As mentioned above, a paraboloid yielding function is implemented to consider different tensile and compressive behavior of the epoxy matrix. The matrix damage is modeled using a paraboloid epoxy damage model.
The UD ROM utilizes the fiber and matrix constitutive responses to compute the UD CFRP's material responses efficiently. In the UD structure modeling, an equivalent damage model is applied to all ROMs to simulate the damage of the RVEs. The damage of the RVE at each integration point reduces the load-carrying capacity of the corresponding element in the macroscale model. When the damage exceeds 0.5, the integration point loses its load-carrying capacity. The macroscale element fails when all integration points in the element have lost their load-carrying capacity.
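The failure bookkeeping described above amounts to a simple threshold rule per integration point; a sketch (the 0.5 threshold is the one stated in the text, everything else is illustrative):

```python
import numpy as np

def element_failure_state(damage, d_crit=0.5):
    """Apply the failure rule stated above: an integration point loses its
    load-carrying capacity once its homogenized RVE damage exceeds d_crit
    (0.5 in the text); the macroscale element fails when every one of its
    integration points has failed. damage: (n_int_points,) array."""
    point_failed = damage > d_crit
    return point_failed, bool(point_failed.all())
```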
UD CFRP Coupon Off-Axial Tensile Simulation Model Setup
With all aforementioned information available, the UD CFRP Coupon specimen multiscale model is illustrated in
UD CFRP Dynamic 3-Point Bending Model Setup
The 3-point bending model is the second test case for validating the efficacy of the proposed framework of UD CFRP. A similar approach to the UD CFRP coupon model is adopted. The model of the UD CFRP hat-section is shown in
Up to this point, all necessary modeling steps for the UD CFRP structure are finished. The concurrent multiscale modeling framework is applied to the test problems mentioned above. All test problems are modeled with realistic geometries; hence, the FE models use the same boundary conditions given in both tests. The experimental test data are used for validating the proposed UD CFRP concurrent multiscale modeling framework, and its prediction capability is examined against carefully conducted experiments. The results of both test problems and the associated discussion are given in the next section.
Results and Discussion
Concurrent simulation results of the off-axial coupon tensile test and the 3-point bending test are discussed in this section. The same UD RVE shown above is used in both cases due to the same fiber volume fraction.
UD CFRP Coupon Off-Axial Tensile Simulation
In this section, the concurrent multiscale 10° UD off-axial coupon specimen tensile simulation results are presented and compared against experimental results. A loading rate of 0.0167 mm/s gradually extends the coupon sample in the downward direction. During the deformation process, a 10° stress band is formed, as one can see in the
The comparisons between the multiscale coupon simulation and the test data are made for: 1) normal stress vs. normal strain; 2) y direction displacement; 3) y direction strain. Based on these comparisons, two purposes are addressed: 1) to validate the multiscale modeling framework for the UD CFRP material; 2) to demonstrate that the present UD CFRP multiscale model has considerable prediction capability.
For the FE model, the normal stress is computed using the reaction force computed at the gauge cross-section area divided by the area of the original coupon cross-section. The reaction force of the cross-section near the top tab region of the coupon was recorded during the simulation. The reaction force is then used to compute the normal stress of the multiscale model. The change of the gauge length was used to compute normal strain. Normal stress versus strain of the multiscale model is plotted as blue dots in
Comparing the stress and strain curves from the multiscale model and the experimental data, a good match is observed. The prediction has the same trend as the experimental data, as shown in Table 3-3. The predicted maximum stress is 404.81 MPa, which is close to the 395.64 MPa reported from the experimental data. In addition, the maximum strain predicted by the model is 0.011, which again is in good agreement with the experimentally measured value of 0.012. Both predicted quantities of interest are within 10% deviation from the experimental measurements, meaning the UD CFRP concurrent multiscale model is validated against the experimental data.
The y direction displacement and strain fields are validated with experimental results, as shown in panels (a)-(b) of
Moreover, the predicted crack formation and the actual coupon crack formation are depicted in panels (a)-(b) of
The ICME multiscale modeling scheme has been validated by the experimental data discussed above. However, the capability of the proposed multiscale modeling scheme goes beyond providing accurate predictions of the macroscale quantities. It also provides detailed microstructure evolution of the UD CFRP structure for studying the root cause of the failure in the UD coupon.
Using detailed microstructure evolution information illustrated in
UD 3-Point Bending Model
To further illustrate the efficacy of the concurrent scheme, the UD 3-point bending model concurrent simulation is also performed. The fractured hat-sections from the simulation and experiment are shown in
After the impact, the hat-section from the numerical prediction and experimental result are plotted in panels (a)-(b) of
In addition to the good match of the UD structure deformation and failure, quantitative comparisons for peak load on the impactor and peak acceleration of the impactor are reported in Table 3-4 in comparison with the experimental data. A reasonable match between prediction and experimental data can be seen. Specifically, the relative differences of peak load and peak impactor acceleration are 8.21% and 2.82%, respectively. Both predictions are within 10% deviation from the experimental data, providing confidence that the macroscopic performance indices can be predicted with the ICME framework.
In panel (a) of
The hat-section deformation and von Mises stress contour under the impactor are shown in the upper half of
Corresponding RVE von Mises stress contours are shown in the lower half of panels (a)-(c) of
For UD CFRP materials, understanding the microstructure failure mechanism provides valuable information in improving CFRP design. Similar to the UD CFRP coupon model, the microstructure evolution of the hat-section is captured and illustrated in the three-snapshot view in
In this subsection, one-to-one replicas of the UD CFRP coupon 10° off-axial tensile test and the UD CFRP hat-section 3-pt bending test are solved using the proposed multiscale modeling framework. Both models are validated against experimental data. Predictions made by the numerical counterparts are all within 10% of the experimental data. The agreement shows the current work can be used for the prediction of other UD CFRP laminate structures. Moreover, the concurrent capture of microstructure evolution provides the microstructure evolution history for any location in the FE model of the UD laminate. This allows researchers to examine the detailed microstructure evolution, which is hard to capture experimentally, to find the cause of failure at the structural level. New microstructures can then be designed to sustain the loading and improve the overall structural performance.
Briefly, this exemplary study introduces a predictive and efficient ICME multiscale modeling framework for UD CFRP materials. The main workflow of the framework is explained in detail, and experimental validations are provided. The predictive framework links UD CFRP microstructures to structural level models for accurate prediction of the structural performance. Two sample case studies, the UD off-axial tensile test and the UD hat-section dynamic three-point bending test, are presented using the proposed ICME modeling framework. The predicted performance indices are validated against experimental data, confirming good agreement. Microstructure evolution in the UD structure is captured by the UD CFRP RVEs, revealing stress contours and matrix damage. The ICME framework is general and can be applied to other Fiber Reinforced Polymer (FRP) systems, such as glass fiber reinforced polymer, for structure performance prediction. Along with the microstructure information, the work presented in this example should provide guidance to existing experiment-based composite design workflows to accelerate the design process.
Future work on the present multiscale modeling framework for UD CFRP includes: 1) incorporation of the interphase in the UD RVE for fiber-matrix debonding; 2) consideration of microstructure uncertainties, such as fiber misalignment and fiber volume fraction, for quantitative measurement of the microstructure effect; and 3) extension to other composite material systems.
Example 4 Data-Driven Self-Consistent Clustering Analysis of Heterogeneous Materials with Crystal Plasticity
To analyze complex, heterogeneous materials, a fast and accurate method is needed. This means going beyond the classical finite element method, in a search for the ability to compute, with modest computational resources, solutions previously infeasible even with large cluster computers. In particular, this advance is motivated by composites design. Here, we apply similar principles to another complex, heterogeneous system: additively manufactured metals.
The complexities and potential benefits of metal additive manufacturing (AM) provide a rich basis for development of mechanistic material models. These models are typically based on finite element modeling of metal plasticity. Powder-bed AM uses a high-power laser or electron beam to melt powder layer-by-layer to produce freeform geometries specified by 3D model files. This approach removes the need for special tooling, allowing for rapid and customized part and product realization. It introduces new possibilities for topological and material optimization, but these tasks require a high degree of knowledge and ability to apply that knowledge, viz. control the build conditions sufficiently. The process involves intense and repeated localized energy input, which results in inhomogeneous, anisotropic, location-dependent material properties with complex microstructures.
The perpetual challenge in multiscale modeling is predicting macroscopic behavior from microstructure conformation and properties in both an efficient and accurate manner. Analytical approaches such as the rule of mixtures and other micromechanics methods are very efficient but lose accuracy, particularly when dealing with irregular morphologies and nonlinear properties. In contrast, direct numerical simulations (DNS) offer high accuracy at the expense of prohibitive computational costs, to the point where they are inapplicable to concurrent simulations for material design. Recently, data mining has been introduced into the mechanics community to address the limitations of DNS and analytical methods.
In general, data mining is a computational process of discovering patterns in large data sets. Once extracted, these patterns can be used to predict future events. Machine learning methods, such as clustering and regression, are the technical basis for data mining. Recently, data mining has also been applied to the modeling of heterogeneous materials. As a start, a raw dataset for learning is usually generated from a priori numerical simulations or informed by experiments. Depending on the type of the raw dataset, current data-driven modeling methods can be mainly divided into macroscopic and microscopic approaches.
In macroscopic approaches, the input data are usually material properties of each constituent, loading conditions and statistical descriptors that represent the geometry of the microstructures, while the output data are macroscopic mechanical properties from direct numerical simulations (DNS). However, the accuracy and smoothness of the prediction of macroscopic approaches is limited by a lack of microscopic information. For example, the localized plastic strain fields, critical for plasticity and damage prediction theories, cannot be well represented by their field averages.
To address this problem, microscopic approaches collect data at each discretization point in the DNS. Two methods for making predictions based on gathered local data are worth highlighting: (1) non-uniform transformation field analysis (NTFA) and (2) variants of the proper orthogonal decomposition (POD) method. For both approaches, the predictions under a loading condition are obtained by linear combinations of a finite number of RVE modes from previously completed simulations under various load conditions. Linear combination of eigenmodes is well established, but extra effort is required for the interpolation for nonlinear materials. For NTFA, specific evolution laws of internal variables have to be assumed for each mode of the inelastic field. For POD-based methods, extensive simulations a priori are needed in order to guarantee the robustness of the interpolation under arbitrary loading conditions. However, this still results in an overall decrease in computational cost.
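The mode-combination idea behind POD-based methods can be illustrated with an SVD of a snapshot matrix; this is a generic sketch of the concept, not the specific NTFA or POD formulation cited above:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Leading POD modes of a snapshot matrix whose columns are response
    fields from a-priori RVE simulations, obtained via the SVD."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def pod_approximate(field, modes):
    """Best approximation of a new response field as a linear combination of
    the retained modes (orthogonal projection onto their span)."""
    return modes @ (modes.T @ field)
```

Any field lying in the span of the training snapshots is reproduced exactly; the practical difficulty noted in the text is that nonlinear responses under unseen loads generally fall outside that span, hence the need for extensive a-priori simulations.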
One of the objectives of this exemplary study is to present a grain-level crystal plasticity model to capture local microstructures such as those that occur in AM, e.g., voids and columnar grains. In one embodiment, a recently introduced material modeling approach, based on the theories of data mining and originally developed for composites, is used to vastly increase the speed of these simulations. This allows for higher detail or larger regions of interest, both of which are desirable for predicting damage and fatigue initiation within a part.
The exemplary study is based on a two-stage approach that uses clustering and subsequent analysis of deformation and can account for heterogeneous material behavior with high accuracy and speed, which is called SCA. It is a data-driven method designed to reduce the computational degrees of freedom (DOFs) required for predicting macroscopic behaviors of heterogeneous materials, while local information is retained based in part on clustering near features that induce large stress gradients. The basic idea is to solve the equilibrium equation, not at every material point, but on clusters of material points with similar mechanical responses by assuming that local variables of interest (e.g., elastic strain, plastic strain and stress) are uniform in each of these clusters. The two stages of the method are: offline (training or cluster) and online (prediction).
During the offline stage, material points were grouped into clusters using data mining techniques (such as k-means clustering) based on mechanical similarity. To conduct the online computation, the equilibrium equation was written in an integral form using the Green's function, known as the Lippmann-Schwinger equation. This equation was solved for each cluster using a self-consistent scheme to ensure accuracy, where the reference stiffness was updated iteratively to be consistent with the macroscopic effective stiffness. The major advantage of SCA is that the DOFs are greatly reduced compared to DNS while retaining both local and global response information.
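The self-consistent scheme can be sketched for linear elastic clusters as a fixed-point iteration on the reference stiffness. For illustration the interaction tensors are held fixed, although in SCA they are recomputed from the Green's function of the updated reference medium:

```python
import numpy as np

def solve_clusters(D, C, C0, d_eps_M):
    """Linear-elastic cluster solve of the discretized Lippmann-Schwinger
    equation. D: (K, K, 6, 6), C: (K, 6, 6), C0: (6, 6), d_eps_M: (6,)."""
    K = C.shape[0]
    M = np.zeros((6 * K, 6 * K))
    for I in range(K):
        for J in range(K):
            blk = D[I, J] @ (C[J] - C0)
            if I == J:
                blk = blk + np.eye(6)
            M[6 * I:6 * I + 6, 6 * J:6 * J + 6] = blk
    return np.linalg.solve(M, np.tile(d_eps_M, K)).reshape(K, 6)

def self_consistent_C0(D, C, w, n_iter=20):
    """Fixed-point iteration for the reference stiffness C0: solve the
    clustered system under six unit macroscopic strains, homogenize the
    cluster stresses into the effective stiffness, and feed that back as the
    new C0 until self-consistency. Illustrative sketch only: D is held fixed
    here, unlike the full SCA scheme. w: (K,) cluster volume fractions."""
    C0 = np.einsum('k,kij->ij', w, C)  # start from the volume average
    for _ in range(n_iter):
        C_eff = np.zeros((6, 6))
        for col in range(6):
            eps = solve_clusters(D, C, C0, np.eye(6)[col])
            sig = np.einsum('kij,kj->ki', C, eps)         # cluster stresses
            C_eff[:, col] = np.einsum('k,ki->i', w, sig)  # homogenized column
        if np.allclose(C_eff, C0, atol=1e-12):
            break
        C0 = C_eff
    return C0
```

With vanishing interactions the effective stiffness is just the volume average of the cluster stiffnesses, so the iteration converges immediately, which serves as a correctness check.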
Here we extend this method to be applicable to crystal plasticity (CP), a class of computational plasticity problems specifically formulated to capture the deformation mechanics of crystalline solids, based on the material microstructure. Anisotropic material models such as CP have been derived for both macroscale problems, such as predicting earing during deep drawing, and microscale problems, such as the deformation of nanowires.
SCA Framework
Offline Stage: Mechanistic Material Characterization
Grouping material points with similar mechanical behavior into a single cluster is performed by domain decomposition of material points using clustering methods. First, the similarity between two material points is measured by the strain concentration tensor A(x), which is defined as
εmicro(x)=A(x):εmacroinΩ, (4-1)
where εmacro is the elastic macroscopic strain corresponding to the boundary conditions of the Representative Volume Element (RVE), and εmicro(x) is the elastic local strain at point x in the microscale RVE domain Ω. For a 2-dimensional (2D) model, A(x) has nine independent components, requiring a set of elastic direct numerical simulations (DNS) under three orthogonal loading conditions to uniquely define it. For a 3-dimensional (3D) model, A(x) has 36 independent components, which are determined by DNS under six orthogonal loading conditions. Once the strain concentration tensor is computed, it is independent of the loading conditions for a linear elastic material, and its Frobenius norm is an invariant under coordinate transformation.
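The assembly of A(x) from the orthogonal DNS load cases can be sketched as follows. This is a minimal Python illustration in Voigt notation; the array shapes and helper name are assumptions for illustration, not part of the original disclosure.

```python
import numpy as np

def strain_concentration_tensors(micro_strains):
    """Assemble the per-point strain concentration tensor A(x) from a set of
    elastic DNS runs, one per orthogonal unit macroscopic strain (Voigt notation).

    micro_strains: array of shape (n_loadcases, n_points, 6) holding the local
    strain at every material point for each unit macroscopic loading. For a 3D
    model n_loadcases = 6, so A(x) has 6 x 6 = 36 components per point.
    """
    n_cases, n_points, n_comp = micro_strains.shape
    assert n_cases == n_comp  # one DNS per orthogonal loading direction
    # Column j of A(x) is the local strain response to unit macro strain e_j.
    return np.transpose(micro_strains, (1, 2, 0))  # (n_points, 6, 6)
```

For a homogeneous linear elastic RVE the local strain equals the macroscopic strain, so A(x) reduces to the identity at every point; the Frobenius norm of A(x) is then the same in any coordinate frame, consistent with the invariance noted above.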
For overall responses of nonlinear plastic materials, we have demonstrated that the elastic strain concentration tensor is a good offline database. However, if the local response is of more interest, the elastic strain concentration tensor often does not provide high enough cluster density near the high strain concentration region. In polycrystalline material with crystal plasticity, all the crystals deform uniformly in the elastic regime, providing no effective data for computing the strain concentration tensor. In these cases, one can choose other types of material responses to construct the offline data, and thus achieve adequate resolution at the region of interest. For example, we choose the plastic strain tensor from DNS calculations for clustering when local plasticity information is required, such as for predicting fatigue initiation. We show how the choice of different material responses affects overall response prediction.
The k-means clustering method is used to group data points based on a grouping metric. For the present, let us consider this to be the strain concentration tensor A(x). Since all the material points in a cluster are assumed to have the same mechanical response, the number of degrees of freedom is reduced from, e.g., the number of elements in a FEM simulation to the number of clusters. Note that clusters are formed based on the strain concentration tensor and thus do not need to be spatially adjacent to each other.
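The offline clustering step can be illustrated with a minimal k-means implementation. This is a sketch: Lloyd's iteration stands in for any library k-means routine, and the assumption is that each material point is described by its flattened strain concentration tensor (or plastic strain tensor, per the discussion above).

```python
import numpy as np

def kmeans_cluster(features, k, n_iter=50, seed=0):
    """Minimal Lloyd's k-means used to group material points by mechanical
    similarity. `features` is (n_points, n_features); each row would be the
    flattened strain concentration tensor A(x) of one material point."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each material point to its nearest cluster center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update centers; note clusters need not be spatially contiguous
        for i in range(k):
            if np.any(labels == i):
                centers[i] = features[labels == i].mean(axis=0)
    return labels, centers
```

After this step the number of unknowns in the online stage is the number of clusters k rather than the number of DNS material points.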
A primary assumption associated with the domain decomposition is that any local variable β(x) is uniform within each cluster. Globally, this is equivalent to having a piece-wise uniform profile of the variable in the RVE:
β(x)=ΣI=1kβIχI(x), (4-2)
where βI is the homogeneous variable in the Ith cluster, and k is the total number of clusters in the RVE. The domain of the Ith cluster ΩI is distinguished by its characteristic function χI(x), which is defined as
χI(x)=1 if x∈ΩI and χI(x)=0 otherwise. (4-3)
This piecewise uniform approximation in Eq. (4-2) enables us to reduce the number of degrees of freedom for the Lippmann-Schwinger equation, which is solved in the following online stage. After the domain decomposition based on a prior DNS, the remaining task in the offline stage is to pre-compute the interaction tensors between all the clusters.
In the discretized/reduced Lippmann-Schwinger equation, we can utilize the piecewise uniform assumption to extract the interaction tensor DIJ, which represents the influence of the stress in the Jth cluster on the strain in the Ith cluster. In an RVE domain Ω with periodic boundary conditions, the interaction tensor can be written as a convolution of the Green's function and the characteristic functions defined in Eq. (4-3):
DIJ=1/(cI|Ω|)∫Ω∫ΩχI(x)χJ(x′)Φ0(x,x′)dx′dx, (4-4)
where cI is the volume fraction of the Ith cluster and |Ω| is the volume of the RVE domain. Φ0(x,x′) is the fourth-order periodic Green's function associated with an isotropic linear elastic reference material and its stiffness tensor is C0. Specifically, this reference material is introduced in the online stage as a homogeneous media to formulate the Lippmann-Schwinger integral equation. With the periodicity of the RVE, Φ0(x,x′) takes the following form in the Fourier space,
{circumflex over (Φ)}ijkl0(ξ)=1/(4μ0){circumflex over (Φ)}ijkl1(ξ)+(λ0+μ0)/(μ0(λ0+2μ0)){circumflex over (Φ)}ijkl2(ξ), (4-5)
with the material-independent parts
{circumflex over (Φ)}ijkl1(ξ)=(δkiξlξj+δliξkξj+δkjξlξi+δljξkξi)/|ξ|2 and {circumflex over (Φ)}ijkl2(ξ)=−ξiξjξkξl/|ξ|4, (4-6)
where ξ is the coordinate in Fourier space corresponding to x in real space, and δij is the Kronecker delta function. λ0 and μ0 are Lamé constants of the reference material. The expression of {circumflex over (Φ)}ijkl0(ξ) is not well defined at frequency point ξ=0. However, by imposing the boundary conditions for deriving the Green's function, a uniformly distributed polarization stress field will not induce any strain field inside the RVE. As a result, we have
{circumflex over (Φ)}ijkl0(ξ=0)=0. (4-8)
Based on Eq. (4-5), the convolution term in the spatial domain in Eq. (4-4) can be translated into a direct multiplication at each point in the frequency domain using a Fourier transformation,
DIJ=1/(cI|Ω|)∫ΩχI(x)ℱ−1[{circumflex over (Φ)}0(ξ){circumflex over (χ)}J(ξ)](x)dx, (4-9)
where {circumflex over (χ)}J(ξ) is the Fourier transform of the characteristic function χJ(x).
As we can see from Eq. (4-6), {circumflex over (Φ)}1(ξ) and {circumflex over (Φ)}2(ξ) are independent of the material properties, so that they can be computed once, in the offline stage. If the reference material is changed in the self-consistent scheme during the online stage, only the coefficients relating to the reference Lamé constants in Eq. (4-5) need to be updated. For RVEs with microstructure size close to the RVE size or even with a connected microstructure network, such as a woven composite, a correction of DIJ is needed to satisfy the boundary conditions on the RVE.
Currently, we have applied the SCA offline stage to 2D and 3D materials with uniform (regular hexahedral or “voxel”) meshes, so that the Fast Fourier transformation (FFT) method can be used for efficiently computing Eq. (4-9). Although the domain decomposition is based on a specific selection of properties for each material phase in the offline stage, the same database can be used for predicting responses for new combinations of material constituents in the online stage.
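The FFT-based evaluation of the cluster interactions on a periodic voxel grid can be sketched as follows. This illustration uses a scalar stand-in for the fourth-order periodic Green's function (with the zero-frequency component set to zero, per Eq. (4-8)); the convolution structure is the same as in the tensorial case, and the function name and grid are assumptions for illustration.

```python
import numpy as np

def interaction_matrix(labels, k):
    """Sketch of the offline interaction-tensor computation on a periodic voxel
    grid: D_IJ ~ (1/(c_I|O|)) * sum_x chi_I(x) * (Phi * chi_J)(x), with the
    convolution evaluated by FFT. `labels` is an integer array of cluster ids."""
    n = labels.shape
    freqs = np.meshgrid(*[np.fft.fftfreq(s) for s in n], indexing="ij")
    xi2 = sum(f ** 2 for f in freqs)
    phi_hat = np.zeros_like(xi2)
    mask = xi2 > 0
    phi_hat[mask] = 1.0 / xi2[mask]  # scalar stand-in; zero frequency stays zero
    vol = labels.size
    chi = [(labels == J).astype(float) for J in range(k)]
    chi_hat = [np.fft.fftn(c) for c in chi]
    D = np.zeros((k, k))
    for I in range(k):
        cI = chi[I].mean()
        for J in range(k):
            # convolution Phi * chi_J done as a pointwise product in Fourier space
            conv = np.fft.ifftn(phi_hat * chi_hat[J]).real
            D[I, J] = (chi[I] * conv).sum() / (cI * vol)
    return D
```

Because the zero-frequency component of the Green operator vanishes, each row of D sums to zero: a uniform polarization stress induces no strain in the RVE, mirroring Eq. (4-8).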
Online Stage: Self-Consistent Lippmann-Schwinger Equation
The equilibrium condition in the RVE can be rewritten as a continuous Lippmann-Schwinger integral equation by introducing a homogeneous reference material,
Δε(x)+∫ΩΦ0(x,x′):[Δσ(x′)−C0:Δε(x′)]dx′−Δε0=0, (4-10)
where Δε0 is the far-field strain increment controlling the evolution of the local strain. It is uniform in the RVE. The reference material is isotropic and linear elastic. Its stiffness tensor C0 can be determined by the two independent Lamé parameters λ0 and μ0,
C0=f(λ0,μ0)=λ0I⊗I+μ0II. (5-11)
where I is the second-rank identity tensor, and II is the symmetric part of the fourth-rank identity tensor. The strain and stress increments are Δε(x) and Δσ(x). By averaging the incremental integral equation, Eq. (4-10), in the RVE domain Ω, we have
1/|Ω|∫ΩΔε(x)dx+1/|Ω|∫Ω∫ΩΦ0(x,x′):[Δσ(x′)−C0:Δε(x′)]dx′dx−Δε0=0. (4-12)
Imposing the boundary conditions for deriving the Green's function, Eq. (4-8) can be equivalently written as
∫ΩΦ0(x,x′)dx=0. (4-13)
Substituting Eq. (4-13) into Eq. (4-12) gives
Δε0=1/|Ω|∫ΩΔε(x)dx, (4-14)
which indicates that the far-field strain increment is always equal to the ensemble averaged strain increment in the RVE. In order to solve Δε(x) in the integral equation, Eq. (4-10), constraints are needed from the macroscopic boundary conditions. The macro-strain constraint can be written as
1/|Ω|∫ΩΔε(x)dx=Δεmacro, (4-15)
where Δεmacro is the prescribed macroscopic strain increment. A macro-stress constraint can be formulated analogously in terms of the prescribed macroscopic stress increment Δσmacro.
For more general cases, mixed constraints (e.g., a strain constraint in the 11-direction and stress constraints for the rest) can also be formulated.
As the full-field calculations (e.g., FFT-based method) of the continuous Lippmann-Schwinger equation may require excessive computational resources, we will perform the discretization of the integral equation based on the domain decomposition in the offline stage. With the piecewise uniform assumption in Eq. (4-2), the number of degrees of freedom and the number of internal variables in the new system can be reduced. After decomposition, the discretized integral equation of the Ith cluster is:
ΔεI+ΣJ=1kDIJ:[ΔσJ−C0:ΔεJ]−Δε0=0, (4-17)
where ΔεJ and ΔσJ are the strain and stress increment in the Jth cluster. The interaction tensor DIJ is defined in Eq. (4-4), which is related to the Green's function of the reference material. After discretization, the far field strain is still equal to the average strain in the RVE,
Δε0=ΣI=1kcIΔεI. (4-18)
The macroscopic boundary conditions also require discretization. For instance, the discrete form of the macro-strain constraint can be written as
ΣI=1kcIΔεI=Δεmacro, (4-19)
where Δεmacro is the prescribed macroscopic strain increment.
In the new reduced system, the unknown variables are the strain increments in each cluster ΔεI. Because there are significantly fewer clusters than FE nodes, the reduced-order model (ROM) is much faster to solve. In a general case such as plasticity, the stress increment ΔσJ is a nonlinear function of the strain increment ΔεJ, and Newton's method is used to solve the nonlinear system iteratively for each increment.
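The Newton solve of the reduced system can be sketched for a one-component toy problem. The interaction matrix, constitutive functions and loading below are illustrative assumptions; the residual and Jacobian mirror Eq. (4-17).

```python
import numpy as np

def solve_reduced_system(D, stress_fn, dstress_fn, C0, deps0, k,
                         tol=1e-10, max_iter=25):
    """Newton solve of the discretized Lippmann-Schwinger equation (4-17) for a
    scalar (one strain component) toy system -- a sketch of the online stage.
    D: (k, k) interaction matrix; stress_fn/dstress_fn give the per-cluster
    constitutive response and its tangent; deps0 is the far-field increment."""
    deps = np.full(k, deps0)  # initial guess: uniform strain in all clusters
    for _ in range(max_iter):
        # residual of (4-17): deps_I + sum_J D_IJ (dsig_J - C0 deps_J) - deps0
        r = deps + D @ (stress_fn(deps) - C0 * deps) - deps0
        if np.linalg.norm(r) < tol:
            break
        # Jacobian: identity plus D times the per-cluster tangent offset
        M = np.eye(k) + D @ np.diag(dstress_fn(deps) - C0)
        deps -= np.linalg.solve(M, r)
    return deps
```

For linear elastic clusters the residual is linear in the strain increments, so the Newton iteration converges in one step, as noted in the algorithm summary below.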
The solution of the continuous Lippmann-Schwinger equation (4-10) is independent of the choice of the reference material C0. This is because the physical problem is fully described by the equilibrium condition and the prescribed macroscopic boundary conditions. However, the equilibrium condition is not strictly satisfied at every point in the RVE for the discretized equations because of the piecewise uniform assumption, and the solution of the reduced system depends on the choices of C0. This discrepancy can be reduced by increasing the number of clusters used, at the cost of increased computational cost due to the increased degrees of freedom.
To achieve both efficiency and accuracy, we propose a self-consistent scheme in the online stage, which retains accuracy with fewer clusters. In the self-consistent scheme, the stiffness tensor of the reference material, C0 is approximately the same as the homogenized stiffness tensor C,
C0→C. (4-20)
Material non-linearity generally makes it impossible to determine an isotropic C0 exactly matching C. Here we propose two types of self-consistent schemes to approximate Eq. (4-20): 1) linear regression of the average strain and stress increments of the RVE, and 2) isotropic projection of the effective stiffness tensor C.
Regression-Based Self-Consistent Scheme
In the regression-based scheme, the self-consistent update is formulated as an optimization problem: the goal is to find an isotropic C0 that minimizes the error between the average stress increment predicted by the reference material and that of the actual RVE. The inputs of the regression algorithm are the average strain increment Δε̄ and the average stress increment Δσ̄ of the RVE, which are related through the reference material by
Δσ̄=C0:Δε̄. (4-21)
The objective of the regression-based scheme is to find the λ0 and μ0 of the reference material by computing
(λ0,μ0)=arg min(λ′,μ′)∥Δσ̄−ƒ(λ′,μ′):Δε̄∥2, (4-22)
where ∥Z∥2=Z:Z for an arbitrary second-order tensor Z. The function ƒ(λ′,μ′) can be expressed as
ƒ(λ′,μ′)=λ′I⊗I+μ′II. (4-23)
where I is the second-rank identity tensor, and II is the symmetric part of the fourth-rank identity tensor. Equivalently, the cost function g(λ′,μ′) of the optimization problem can be written as
g(λ′,μ′)=∥Δσ̄−(λ′I⊗I+μ′II):Δε̄∥2. (4-24)
The optimum point is found via the respective partial derivatives of the cost function,
∂g(λ′,μ′)/∂λ′|(λ0,μ0)=0 and ∂g(λ′,μ′)/∂μ′|(λ0,μ0)=0, (4-25)
which forms a system of two linear equations in terms of the Lamé constants. The system always has a unique solution except under a pure-shear loading condition, where λ0 is under-determined. In this case, the value of λ0 is not updated. Additionally, g(λ0,μ0) vanishes when the effective macroscopic homogeneous material is also isotropic linear elastic.
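The resulting two-equation solve can be sketched as follows. A minimal implementation assuming the standard isotropic relation Δσ̄ = λ tr(Δε̄)I + 2μΔε̄ (the factor of two on μ is a notational assumption); the pure-shear fallback mirrors the under-determined case noted above.

```python
import numpy as np

def regression_reference_moduli(deps_bar, dsig_bar, lam_prev=1.0):
    """Regression-based self-consistent update: find the isotropic Lame pair
    (lam, mu) minimizing ||dsig_bar - (lam*tr(e)*I + 2*mu*e)||_F^2, i.e. the
    cost (4-24). Under pure shear (tr(e) = 0) lam is under-determined and the
    previous value is kept, as described in the text."""
    e, s = deps_bar, dsig_bar
    a = np.trace(e)
    ee = np.tensordot(e, e)  # e : e
    es = np.tensordot(e, s)  # e : s
    if np.isclose(a, 0.0):   # pure shear: keep old lambda, fit mu only
        return lam_prev, es / (2.0 * ee)
    # normal equations from d(cost)/d(lam) = 0 and d(cost)/d(mu) = 0
    A = np.array([[3 * a * a, 2 * a * a],
                  [a * a,     2 * ee]])
    b = np.array([a * np.trace(s), es])
    lam, mu = np.linalg.solve(A, b)
    return lam, mu
```

When the effective macroscopic material is itself isotropic linear elastic, the fit is exact and the cost g vanishes, consistent with the statement above.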
Although this scheme does not require computing C explicitly, it has two main drawbacks. First, the optimization problem is under-determined for hydrostatic and pure shear loading conditions, forcing one of the two independent elastic constants to be assumed. Second and more important, the modulus of the optimum reference material may be negative for complex loading histories within a concurrent simulation, which is deleterious to the convergence of the fixed-point method.
Projection-Based Self-Consistent Scheme
To avoid the difficulties of the regression-based scheme, we present another self-consistent scheme based on isotropic projection of the effective stiffness tensor C. Through the homogenization, the effective stiffness tensor C of the RVE can be expressed as
C=ΣI=1kcICalgI:AI, (4-26)
where CalgI is the algorithmic stiffness tensor of the material in the Ith cluster and is an output of the local constitutive law for the current strain increment in the cluster,
CalgI=∂ΔσI/∂ΔεI. (4-27)
The strain concentration tensor of the Ith cluster AI relates the local strain increment in the Ith cluster ΔεI to the far-field strain increment Δε0,
ΔεI=AI:Δε0, (4-28)
which can be determined by first linearizing the discretized integral equation (4-17) using CalgI and then inverting the Jacobian matrix. Since C is only required for the self-consistent scheme, the calculation of C can be performed once, after the Newton iterations have converged, to save computational cost.
For a 3D problem, the stiffness tensor of the isotropic reference material C0 can be decomposed as
C0=(3λ0+2μ0)J+2μ0K, (4-29)
where the fourth-rank tensors J and K are defined as
J=⅓(I⊗I) and K=II−J. (4-30)
Since the two tensors are mutually orthogonal, we have
J::K=0,J::J=1,K::K=5. (4-31)
Based on Eq. (4-31), the projection from the homogenized stiffness tensor C to C0 can be expressed as
C0=Ciso=(J::C)J+1/5(K::C)K. (4-32)
Meanwhile, the Lamé parameters λ0 and μ0 of the reference material can also be determined from the isotropic projection. Since C0 is approximated based on C directly, it is guaranteed to be positive definite provided that C is positive definite. Overall, the projection-based scheme can be considered a relaxation of the regression-based scheme.
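The projectors J and K of Eq. (4-30) and the isotropic projection of Eq. (4-32) can be written directly with tensor contractions; a minimal numpy sketch:

```python
import numpy as np

# Fourth-rank isotropic projectors of Eq. (4-30): J = 1/3 (I x I), K = II - J,
# where II is the symmetric fourth-rank identity tensor.
I2 = np.eye(3)
II = 0.5 * (np.einsum('ik,jl->ijkl', I2, I2) + np.einsum('il,jk->ijkl', I2, I2))
J = np.einsum('ij,kl->ijkl', I2, I2) / 3.0
K = II - J

def ddot4(A, B):
    """Double-double contraction A::B of two fourth-rank tensors (a scalar)."""
    return np.einsum('ijkl,ijkl->', A, B)

def isotropic_projection(C):
    """Projection-based self-consistent scheme, Eq. (4-32):
    C0 = (J::C) J + 1/5 (K::C) K."""
    return ddot4(J, C) * J + 0.2 * ddot4(K, C) * K
```

By construction the orthogonality relations J::K = 0, J::J = 1 and K::K = 5 of Eq. (4-31) hold, and projecting an already isotropic stiffness returns it unchanged.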
Summary of the Online Stage
In both schemes, the optimum reference material must be determined iteratively since the values of C in Eq. (4-26) are obtained by solving the reduced system with a previous C0. A fixed-point method is employed here. In our experience, convergence of the reference material parameters is reached in only a few iterations (i.e., fewer than five iterations reach a tolerance of 0.001).
The general algorithm for solving the self-consistent Lippmann-Schwinger equation is as follows:
- 1. Initial conditions and initialization: set (λ0,μ0); {ε}0=0; n=0; {Δε}new=0;
- 2. For loading increment n+1, update the coefficients in the interaction tensor DIJ and the stiffness tensor of the reference material C0;
- 3. Start Newton iterations:
- (a) compute the stress increment {Δσ}new based on the local constitutive law;
- (b) use it to compute the residual of the discretized integral equation (4-17): {r}=ƒ({Δε}new,{Δσ}new);
- (c) compute the system Jacobian {M};
- (d) solve the linear equation {dε}=−{M}−1{r};
- (e) update {Δε}new←{Δε}new+{dε}; and
- (f) check the error criterion; if not met, go to 3(a);
- 4. Calculate (λ0,μ0) using the regression-based scheme (4-22) or the projection-based scheme (4-32);
- 5. Check the error criterion; if not met, go to step 2;
- 6. Update the incremental strain and stress: {Δε}n+1={Δε}new, {Δσ}n+1={Δσ}new; update the index of the loading increment n←n+1; and
- 7. If the simulation is not complete, go to step 2.
A relative error criterion based on the L2 norm of the residual is used for the iterations. If all the phases of the material are linear elastic, the Newton iteration will converge in one step. Note that if the self-consistent scheme is not utilized in the calculation, a constant stiffness tensor C0 will be used, which can be chosen to be the same as the matrix material. In this case, C0 is not updated, which implies that the interaction tensors DIJ do not need to be updated, and steps 4-5 in the algorithm can also be skipped. Although the algorithm with a constant C0 can save time in terms of finding the optimum C0, the accuracy in predicting nonlinear material behavior cannot be guaranteed with a small number of clusters.
Crystal Plasticity Model
In this work, we present an elasto-plastic, anisotropic, heterogeneous plasticity model of the mechanical response of crystalline materials, to be solved in the SCA framework described above. The mechanical model is an implementation of so-called crystal plasticity (CP) constitutive laws. The finite element scheme used therein to solve the variational form of the equilibrium equations is replaced with the SCA scheme and its FFT basis. Thus, in some regards this begins to resemble recent CP-FFT formulations, with the addition of the offline/online decomposition outlined above.
Brief Overview
Crystal plasticity in conjunction with the finite element method (termed CPFEM) has been applied to solve both microscopic and macroscopic problems, following from the early combinations of classical plasticity and the finite element method. It has two primary variants: polycrystal and single crystal plasticity. In the polycrystal formulation, each material point is assumed to represent a collection of crystals such that the overall response of the point is homogeneous. In single crystal plasticity, each material point is assumed to represent a single crystal, or a point in a single crystal, the deformation of which is governed by the particularities of single crystal deformation mechanics (e.g., active slip systems and/or dislocation motion). The former approach is more commonly used for macroscopic problems, where a relatively large solution volume is desired. The latter is the focus of this study. There are many versions of crystal plasticity laws in both forms. The basic kinematics and constitutive law are discussed below.
Kinematics
The deformation gradient F can be multiplicatively decomposed as:
F=FeFp (4-33)
where the plastic part Fp maps points in the reference configuration onto an intermediate configuration which is then mapped to a current configuration through the elastic part Fe. Note that physically Fp is associated with the dislocation motion and Fe is a combination of the elastic stretch and rigid body rotation.
The effect of dislocation motion is modeled by relating the plastic velocity gradient {tilde over (L)}p in the intermediate configuration (usually denoted by {tilde over (□)}) to simple shear deformation γ(α):
{tilde over (L)}p=Σα=1Nslip{dot over (γ)}(α){tilde over (s)}(α)⊗ñ(α), (4-34)
where ⊗ is the dyadic product, Nslip is the number of slip systems, {dot over (γ)}(α) is a shear rate, {tilde over (s)}(α) is the slip direction, and ñ(α) is the slip plane normal, all for crystal slip system (α) in the intermediate configuration. The relationship between {tilde over (L)}p and Fp is given by
{tilde over (L)}p={dot over (F)}p·(Fp)−1. (4-35)
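The kinematic relations (4-34) and (4-35) can be sketched as follows. The explicit forward-Euler update of Fp is a simplification for illustration; implicit or exponential-map updates are common in practice.

```python
import numpy as np

def plastic_velocity_gradient(gamma_dot, s_tilde, n_tilde):
    """Plastic velocity gradient of Eq. (4-34):
    Lp = sum_alpha gamma_dot[a] * s[a] (dyad) n[a], in the intermediate
    configuration. gamma_dot: (N,); s_tilde, n_tilde: (N, 3) unit vectors."""
    return np.einsum('a,ai,aj->ij', gamma_dot, s_tilde, n_tilde)

def update_Fp(Fp, Lp, dt):
    """Explicit update of the plastic deformation gradient consistent with
    Lp = Fp_dot * inv(Fp), Eq. (4-35): Fp_new ~= (I + dt * Lp) @ Fp."""
    return (np.eye(3) + dt * Lp) @ Fp
```

Because each slip direction is orthogonal to its slip plane normal, every Schmid dyad is traceless, so slip is isochoric and det(Fp) is preserved.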
Constitutive Law
The final task in constructing the crystal plasticity framework is defining the constitutive laws of elasto-plasticity. We choose a basis of the Green-Lagrange strain Ee and Second Piola-Kirchhoff stress Se, from the many conjugate pairs available, which are related by:
Se={tilde over (C)}·Ee=1/2{tilde over (C)}·[(Fe)TFe−I], (4-36)
where the elastic stiffness tensor {tilde over (C)} is defined in the intermediate configuration.
A phenomenological power law for the plastic shear rate in each slip system given by
{dot over (γ)}(α)={dot over (γ)}0|(τ(α)−a(α))/τ0(α)|1/m sgn(τ(α)−a(α)) (4-37)
is used, where τ(α) is the resolved shear stress, a(α) is a backstress that describes kinematic hardening, {dot over (γ)}0 is a reference shear rate, τ0(α) is a reference shear stress that accounts for isotropic hardening, and m is the material strain rate sensitivity. Shear stress is resolved onto the slip directions with:
τ(α)=σ:(s(α)⊗n(α)), (4-38)
where σ, s(α) and n(α) are the Cauchy stress, slip direction and slip plane normal respectively, all of which are in the current configuration. The Cauchy stress is given by:
σ=(1/Je)Fe·Se·(Fe)T, (4-39)
where Je is the determinant of Fe. The relationship between s(α) and {tilde over (s)}(α) is given by
s(α)=Fe·{tilde over (s)}(α), (4-40)
and the relationship between n(α) and ñ(α) is given by
n(α)=ñ(α)·(Fe)−1 (4-41)
which ensures that the slip plane normal vector remains orthogonal to the slip direction in the current configuration.
The reference shear stress τ0(α) evolves based on the expression:
{dot over (τ)}0(α)=HΣβ=1Nslipqαβ|{dot over (γ)}(β)|−Rτ0(α)Σβ=1Nslip|{dot over (γ)}(β)|, (4-42)
where H is a direct hardening coefficient and R is a dynamic recovery coefficient and qαβ is the latent hardening ratio given by:
qαβ=χ+(1−χ)δαβ (4-43)
where χ is a latent hardening parameter. The backstress a(α) evolves based on the expression:
{dot over (a)}(α)=h{dot over (γ)}(α)−ra|{dot over (γ)}(α)|, (4-44)
where h and r are direct and dynamic hardening factors respectively.
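The flow rule and backstress evolution can be sketched as follows. The exponent 1/m in the power law and the parameter values are assumptions for illustration.

```python
import numpy as np

def slip_rates(tau, a_back, tau0, gamma0=1e-3, m=0.02):
    """Power-law flow rule (the 1/m exponent convention is assumed here):
    gamma_dot = gamma0 * |(tau - a) / tau0|**(1/m) * sign(tau - a),
    with tau the resolved shear stress, a the backstress and tau0 the
    reference (isotropic-hardening) shear stress."""
    x = (tau - a_back) / tau0
    return gamma0 * np.abs(x) ** (1.0 / m) * np.sign(x)

def backstress_rate(gamma_dot, a_back, h=100.0, r=10.0):
    """Kinematic hardening of Eq. (4-44): a_dot = h*gamma_dot - r*a*|gamma_dot|."""
    return h * gamma_dot - r * a_back * np.abs(gamma_dot)
```

With a small rate sensitivity m, the slip rate is negligible below the reference stress and equals gamma0 exactly at it; under monotonic slip, the backstress saturates at a = h/r, where the direct hardening and dynamic recovery terms balance.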
A computational crystal plasticity algorithm needs to solve the set of non-linear equations Eq. (4-33) through Eq. (4-44). Different numerical methods can be used to solve these equations. McGinty and McDowell gave an implicit time integration algorithm for the material law with the finite element method. The SCA method instead uses the Fast Fourier Transformation method; CP algorithms have been shown to be effective in this framework as well. Here we simply implement the same crystal plasticity model in our SCA and FEM calculations, albeit with a slight variation in how the deformation gradient, F, is computed.
EXAMPLES
In this section, three example cases probing the capabilities of SCA are implemented with the CP routine described above. First, the SCA method is validated for a multi-inclusion system with J2 plasticity. Second, a simple case of a spherical inclusion in a single-crystal matrix is shown. Finally, the complexity of the system is increased by simulating a polycrystalline cube with equiaxed, randomly oriented grains.
Multi-Inclusion System with J2 Plasticity
The SCA method is first validated for a multi-inclusion system using a much simpler material law: J2 plasticity. The inclusion phase is elastic with Young's modulus Ei=500 MPa and Poisson's ratio vi=0.19. The matrix phase is elasto-plastic with Em=100 MPa and vm=0.3 in the elastic regime, and it has a von Mises yield surface (J2 plasticity) and a piece-wise hardening law depending on the effective plastic strain εp:
The inclusions are identical to each other and the volume fraction of the inclusion phase is equal to 20%. The mesh size for the high-fidelity finite element model is 80×80×80. The clustering results based on the strain concentration tensor A(x) are shown in
The stress-strain curves predicted by the regression-based and projection-based self-consistent schemes are given in
For this 80×80×80 mesh, the DNS based on FEM typically takes 25 hours on 24 cores (in a state-of-the-art high-performance computing cluster with two 12-core Intel Haswell E5-2680v3 2.5 GHz processors per compute node). With the same number of loading increments, the SCA reduced order method (in MATLAB) only takes 0.1 s, 2 s and 50 s on one Intel i7-3632 processor for km=1, 16 and 256, respectively.
Spherical Inclusion with Crystal Plasticity
The crystal plasticity law is introduced for the simplest geometric case here, approximating the 3D Eshelby problem: a spherical inclusion/void embedded in an infinite (periodic boundary) single-crystal matrix. A schematic of the geometry is shown in
To solve for the overall and local response of this geometry, an appropriate choice of data for the domain decomposition stage must be made. Using the strain concentration tensor and the elastic response provides a reasonable overall match in load history to the DNS solution, but the local solution (near the inclusion) does not match well. As noted above, different variables may be used to conduct the clustering. Panel (a) of
Once the clusters are determined, total and local solutions for stress and strain can be computed with the SCA reduced order method with crystal plasticity, denoted as CPSCA, in the online stage. The overall solution corresponding to the clustering in
For this 40×40×40 mesh, the DNS CPFEM implemented as a user material in Abaqus typically takes 4600 seconds on 24 cores. With the same number of loading increments, CPSCA (in FORTRAN) only takes 5 seconds on one Intel i7-3632 processor using 16 clusters in the matrix.
Polycrystalline RVE
In this embodiment, CPSCA is used to predict the overall response of an RVE including equiaxed, randomly oriented grains, with the fully developed plastic strain tensor calculated a priori as the offline database. An example of such an RVE is shown in
We see that the overall responses for the coarser cases converge to very similar solutions when elements or clusters are added. CPSCA results in a harder response than the CPFEM solutions when very coarse clustering (e.g., 1 cluster/grain) is used. This is not an unexpected result, because SCA uses an FFT solution based on the Lippmann-Schwinger equation.
The full 3D solution state for S33 at 5% averaged strain is shown in the opacity and color contour plots shown in
Again, the DNS CPFEM implemented as a user material in Abaqus typically takes 587 seconds, 5177 seconds, and 31446 seconds for the 20×20×20, 30×30×30 and 40×40×40 mesh respectively, on 24 cores. With the same number of loading increments, CPSCA (in FORTRAN) only takes 18 seconds, 96 seconds and 793 seconds using 1 cluster/grain, 2 clusters/grain and 4 clusters/grain respectively on one Intel i7-3632 processor.
In sum, we have presented herein a method to dramatically decrease the computational cost associated with conducting microscopic crystal plasticity simulations, of the type that can be used to calibrate homogenization models, or to investigate the mechanics of processes pertaining to damage or localization within metals. This was motivated by a desire to predict the mechanical response of material resulting from the additive manufacturing process—a challenge of great interest recently. In these materials, microscale and mesoscale factors (e.g., voids of different sizes, grain sizes dependent on physical location) are of interest, necessitating a fast method to predict micromechanical solutions over a relatively large volume. The method is demonstrated with three examples: J2 plasticity in a multi-inclusion system, a simple void-like inclusion embedded within a single-crystal matrix, and a polycrystalline RVE of equiaxed grains.
Example 5 Inverse Modeling Approach for Predicting Filled Rubber Performance
In this example, a computational procedure combining experimental data and interphase inverse modeling is presented to predict filled rubber compound properties. The FFT based numerical homogenization scheme is applied on the high quality filled rubber 3D Transmission Electron Microscope (TEM) image to compute its complex shear moduli. The 3D TEM filled rubber image is then compressed into a material microstructure database using a novel Reduced Order Modeling (ROM) technique, namely Self-consistent Clustering Analysis (a two-stage approach: offline database creation from training and data compression via unsupervised learning, followed by online prediction), for improved efficiency and accuracy. An inverse modeling approach is formulated for quantitatively computing interphase complex shear moduli in order to understand the interphase behaviors. The two-stage SCA and the inverse modeling approach form a three-step prediction scheme for studying filled rubber, whose loss tangent curve can be computed in agreement with test data.
Composite materials in general exhibit improved mechanical behaviors compared to their basic constituents. Such characteristics provide a window for creating specific materials to satisfy requirements that are hard to meet by materials without specific treatments. It is well-known that the properties of Polymer Nano-Composites (PNCs) differ from pure polymer components partly due to an interphase region between polymer and filler. Thus, it is possible to design specific properties by adding fillers into polymer components (such as rubber compounds) to achieve different viscoelastic behaviors compared to pure polymers without fillers. In this exemplary study, a computational framework for the efficient evaluation of filled rubber properties and interphase property inverse modeling for improving filled rubber properties prediction is disclosed.
In the rubber and tire industry, reduction of loss tangent (or tan(δ)) can reduce rolling resistance whereas an increase of loss tangent provides more friction between the tire and the ground. Experimental studies reveal that fillers in polymer compounds indeed result in different viscoelastic behavior for PNC vs. pure polymer compound. For styrene-butadiene rubber, the addition of carbon black filler reduces tan(δ) in the low-temperature region but increases tan(δ) in the high-temperature region. Moreover, Brinson et al. conducted a study of styrene-butadiene rubber with different fillers and concluded that fillers enhanced the overall rigidity of the PNC but reduced tan(δ). Therefore, various tire properties can be achieved through custom designed polymer nano-composite (PNC), or filled rubber.
The cause of change in viscoelastic behavior between PNC and the pure polymer has been studied extensively. Due to the exponential growth of computational power over the past decades, researchers are able to utilize Molecular Dynamics (MD) simulations to capture the effect of fillers in polymer compounds by observing the interaction between polymer chains and fillers. Polymer chains aggregate around added filler material, creating a denser layer of polymers. Such results are due to van der Waals interactions between polymers and fillers. Results obtained from MD agree well with experimental observations, where the interphase exhibits stiffer responses compared to the polymer matrix. To characterize such a change of polymer structure, the interphase can be used to distinguish different viscoelastic behaviors of PNC and pure polymer compound. In the current work, a filled polymer system is considered using a three-phase model: the phases are the filler, the interphase, and the polymer. The three-phase model is applied to the filled rubber sample studied in the present work; this model should be able to capture the difference in viscoelastic performance between filled and unfilled rubber.
Rubber properties can be experimentally measured by dynamic mechanical analysis (DMA). The experimental procedures provide viscoelastic properties, i.e., storage and loss moduli, of the rubber compound. As a result, DMA provides overall homogenized properties of the rubber. Numerically, macroscopic properties can be determined through various homogenization approaches. The aforementioned 3D TEM process has been applied to a different filled rubber geometry which has already been studied via the Finite Element (FE) method to obtain the local stress response. However, in this past work, the filled rubber geometry provided by 3D TEM as a 3D digital image was only converted to a fine conforming FE mesh which could only be used on a supercomputer. Voxel meshes of large sizes such as 3D digital images provided by 3D TEM are more suitable for computational homogenization using the FFT based numerical method. The digital image can be directly used in the FFT based algorithm to solve for local and overall responses under designated macroscopic boundary conditions. Therefore, the properties of the filled rubber can be directly obtained via the FFT algorithm from using the 3D image of the filled rubber.
The existing experimental techniques allow measurement of tan(δ) curves (also known as master curves) of filled and unfilled rubber through DMA. Therefore, if the filled rubber is assumed to be a two-phase model with rubber and filler phases, it is possible to conduct a numerical simulation of its responses at different frequencies using measured properties of unfilled rubber and filler. In the present work, the properties of the interphase are unknown, which makes it hard to predict the filled rubber's master curve through its basic constituents: unfilled rubber and filler. In this example, we combined interphase modeling via the so-called inverse modeling technique and an efficient FFT-based homogenization scheme to compute the master curve of a filled rubber using a fine mesh reconstructed from 3D TEM. The master curve is validated by experimental results to illustrate the efficacy of our proposed scheme. Comparison of the FFT method and the SCA is performed to examine the improved computational efficiency of this reduced order modeling-based approach.
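The inverse-modeling idea can be sketched with a toy homogenizer. Here a simple volume-fraction average stands in for the FFT/SCA solve on the 3D TEM image, and all moduli, fractions and candidate values are illustrative assumptions.

```python
import numpy as np

def homogenized_tan_delta(G_star_phases, vol_fracs):
    """Toy stand-in for the FFT/SCA homogenizer: a volume-fraction (Voigt)
    average of the complex shear moduli of filler, interphase and rubber.
    The actual pipeline replaces this with the FFT/SCA solve on the TEM mesh."""
    G_eff = np.dot(vol_fracs, G_star_phases)
    return G_eff.imag / G_eff.real  # loss tangent tan(delta) = G'' / G'

def fit_interphase_modulus(target_tan_delta, G_filler, G_rubber, vf, candidates):
    """Inverse modeling by direct search: pick the candidate interphase complex
    modulus whose homogenized tan(delta) best matches the DMA measurement."""
    errs = [abs(homogenized_tan_delta(np.array([G_filler, Gi, G_rubber]), vf)
                - target_tan_delta) for Gi in candidates]
    return candidates[int(np.argmin(errs))]
```

In practice the search would be repeated at each frequency of the master curve, with the SCA-compressed microstructure making each forward evaluation cheap enough for the iteration.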
The following sections focus on the experimental study of filled rubber and 3D TEM reconstruction of a filled rubber sample, details on the FFT algorithm employed, the SCA method, the inverse modeling scheme for interphase properties computation, and results and discussion. We demonstrate an efficient integrated experimental and modeling approach that combines the merits of an FFT homogenization algorithm, data-driven SCA, with an inverse modeling technique. With 3D TEM reconstruction of a filled rubber sample and experimental DMA data, we gain an understanding of interphase properties in filled rubber components.
Experimental Study of the Filled Rubber
Preparation of Unfilled and Filled Rubber Samples
The unfilled rubber sample used in the present study is made of poly-isoprene (IR2200) with chemical agents such as sulfur, stearic acid, microcrystalline wax, and zinc oxide. In the first stage of mixing, poly-isoprene and the agents other than curing agents were mixed. Then curing agents, such as sulfur, were added. The mixture of polymer and agents was cured at a high temperature to obtain a vulcanized unfilled rubber sample. The filled rubber sample is made of poly-isoprene, silica particles (Zeosil 1165MP), and some chemical agents. It was obtained in the same way except that silica particles were added in the first stage of mixing. The formulas of these rubber samples are summarized in Table 5-1.
Experimental Results on Master Curve Measurements of Filled and Unfilled Rubber Samples
Storage and loss moduli of the filled and unfilled rubber samples were measured by dynamic mechanical analysis (DMA) (TA Instruments ARES-G2 rubber rheometer). Cylindrical samples with a diameter of 8 mm and a height of 6 mm were used. The measurements were performed at 0.1% oscillatory shear strain, for which the material response is in the linear viscoelasticity region. In order to obtain the viscoelastic responses over a wide frequency range, a frequency sweep from 0.5 Hz to 50 Hz was performed over a temperature range of −60° C. to 40° C. Master curves were obtained by time-temperature superposition with a reference temperature of 25° C., where only a horizontal shift was performed. It is clearly indicated in
3D-TEM Reconstruction of Filled Rubber Sample
The filled rubber was made into thin sections of about 100 nm thickness using a focused ion beam (JEOL, JEM-9310FIB) at a cryogenic temperature of −150° C. The ultrathin section was transferred onto a Cu mesh grid with a polyvinyl formal supporting membrane. Prior to the electron microscopy experiments, gold particles 5 nm in diameter were deposited on the ultrathin section from a colloidal aqueous solution.
We conducted 3D observations by TEM and 3D-TEM using a JEM-2200FS microscope (JEOL, Ltd., Japan) operated at 200 kV. The electron microscope was equipped with a slow-scan USC 4000 CCD camera (Gatan, Inc., USA). Elastically scattered electrons (electron energy loss: 0±40 eV) were selected by an energy filter installed in the microscope (Ω filter, JEOL Ltd., Japan).
A series of TEM images were acquired at tilt angles in the range of −66° to 73° at an angular interval of 1°. Subsequently, the tilt series of TEM images were aligned by the fiducial marker method, using the gold nanoparticles as the fiducial markers. The tilt series of TEM images after alignment were reconstructed by filtered back-projection (FBP). It took about 2 hours to acquire the 140 TEM tilt images and a few days to align those projections before the FBP reconstruction. In addition, segmenting the fillers from the rubber matrix in each digitally-sliced image was done before stacking the slices to generate a 3D image, which typically takes 1 to 2 weeks. The basic protocol used here is essentially the same as the one in which we demonstrated less than 1 nm resolution, i.e., 0.5 to 0.8 nm. The reconstructed filled rubber is shown in
Formulation of Fast Fourier Transform Scheme
3D TEM generates a high-fidelity 3D reconstruction of a filled rubber sample as a 3D digital image with sub-nanometer resolution, as mentioned in the previous section. Such fine resolution leads to very large structured voxel meshes, making FE computation hardly feasible: an FE mesh generated directly from the high-resolution 3D TEM data is a voxel mesh with a very large total number of degrees of freedom. To make the reconstruction suitable for FE analysis, the 3D digital image would have to be converted into a conforming mesh, a process requiring extra time and resources before the actual simulation. Recent developments in FFT homogenization schemes provide an alternative solution for solving boundary value problems on a structured mesh. The FFT homogenization scheme avoids this conversion process and makes computation straightforward using the 3D digital image from the 3D TEM process. Although the convergence of the FFT scheme for arbitrary phase contrasts and its efficiency can still be improved, it is a powerful homogenization scheme for 3D images. For example, the FFT scheme bypasses the mesh generation required by the FE method and reduces the problem size so that it can be solved on a personal computer. Although a large input image might require more computer memory due to the increased total number of degrees of freedom, the FFT scheme enables a much easier approach when a high-resolution 3D digital image is provided.
In this exemplary embodiment, we first adopt the FFT scheme, assuming small strain, based on the system of equations shown in Eq. (5-1).
where the applied displacement field u* is periodic over the computation domain Ω0. The equilibrium condition for any input mesh is given as below following conservation of linear momentum:
∇·σ(X)=0,∀X∈Ω0 (5-2)
In the FFT scheme, each voxel in the input image represents a material point. The location of the voxel is represented by X. Stress and strain tensors at all material points are computed in the FFT scheme. The local stress σ(X) can be computed using any given constitutive law, but we assume linear elasticity for now.
To solve Eq. (5-2), the FE approach would formulate the Cauchy momentum equation with periodic boundary conditions. Here, the local strain is given as in Eq. (5-1): it is composed of sym(∇u*), the symmetric part of the gradient of the periodic displacement field u*, and εMacro, the prescribed macroscopic strain tensor. Introducing the polarization stress τ and the Green's operator Γ0, it is possible to express the local strain as a Lippmann-Schwinger equation:
ε(X)=−(Γ0*τ)(X)+εMacro (5-3)
The polarization stress τ and the explicit form of the Green's operator Γ0 in Fourier space are defined in Eq. (5-4)
τ(X)=σ(X)−C0:ε(X) (5-4)
where C0 is the standard stiffness tensor of an isotropic reference material, written as Cklmn0=λ0δklδmn+2μ0δkmδln in index notation, with reference Lamé parameters λ0 and μ0.
where the indices of Γ̂klmn0 coincide with those of Cklmn0.
Since the explicit form of the Green's operator is known only in Fourier space, the convolution term in Eq. (5-4) is computed with the help of the inverse Fourier transform as in Eq. (5-6).
(Γ0*τ)(X)=ℱ−1{Γ̂klmn0(ξ)ℱ[τkl(X)]} (5-6)
where ℱ and ℱ−1 denote, respectively, the FFT and the inverse FFT.
To solve the above FFT formulation, various iterative methods can be used, such as fixed point iteration and the conjugate gradient method. These solution techniques are widely available in the literature. For demonstration purposes, the FFT algorithm is presented using fixed point iteration in Appendix A.
The iteration process of the FFT algorithm starts from a given initial local strain and checks for convergence on the local ε(X). During the iteration process, the Green's function enforces the compatibility condition given in Eq. (5-1). To obtain the macroscopic stress and strain tensors from the mesh, volumetric averages following Hill's lemma are computed as in Eq. (5-7):
To obtain effective elastic material properties μMacro and λMacro, σMacro and εMacro are plugged into Hooke's law as in Eq. (5-8):
σijMacro=λMacroεkkMacroδij+2μMacroεijMacro (5-8)
It is convenient to compute μMacro and λMacro from Eq. (5-8): one can re-write Eq. (5-8) in matrix form and solve for the two unknowns μMacro and λMacro.
Application of FFT Scheme for Frequency Domain Computation
For rubber materials, or viscoelastic materials in general, responses differ drastically at various loading frequencies; DMA is a common experimental method to evaluate this variation. DMA provides viscoelastic material properties, such as the complex Young's modulus, E*=E′+iE″, and the complex shear modulus, at different frequency points, denoted as ωk. The ratio between E″, the imaginary part of the complex Young's modulus, and E′, the real part, gives the tan(δ) curve. The complex Young's modulus is interchangeable with the complex shear modulus G*=G′+iG″.
For DMA, a sinusoidal strain with a given frequency ωk is applied to the rubber and the steady state stress is measured to compute viscoelastic material properties. In the 1D case, the stress at a given peak strain ε0Macro can be written as:
where, for any complex number z, ℜ[z] denotes the real part of z.
A steady-state solution for σMacro(t) can be found by simply taking
The steady state stress can then be treated as a complex quantity, written as σMacro,*=ε0Macro(E′(ωk)+iE″(ωk)). σMacro,* is composed of a real part and an imaginary part. For the 1D case, taking the quotient of σMacro,* and ε0Macro yields the complex Young's modulus. Reciprocally, the complex stress can be computed by the FFT scheme by inputting the complex Young's modulus.
The above operation is to be performed at a given frequency point ωk. To compute the rubber's tan(δ) curve, an individual tan(δ) value at each ωk is needed, where tan(δ) is defined as tan(δ)=E″(ωk)/E′(ωk) for tensile DMA and tan(δ)=G″(ωk)/G′(ωk) for shear DMA. Therefore, by computing the rubber's responses at different frequency points, the complex Young's modulus or shear modulus can be obtained and tan(δ) can then be computed. When a sufficient number of ωk is taken, a smooth tan(δ) curve of the filled rubber can be reconstructed.
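A minimal sketch of this bookkeeping follows; the complex shear moduli at three frequency points are illustrative placeholders, not measured values from this study.

```python
import numpy as np

# Hypothetical complex shear moduli G*(omega_k) = G' + iG'' (Pa) at a
# few frequency points; values are illustrative only.
omega = np.array([0.5, 5.0, 50.0])              # frequency points (Hz)
G_star = np.array([1.0e6 + 0.10e6j,
                   1.3e6 + 0.25e6j,
                   2.0e6 + 0.60e6j])

# tan(delta) = G''/G' at each frequency point (shear DMA definition).
tan_delta = G_star.imag / G_star.real
print(tan_delta)
```

Sampling enough ωk points and repeating the homogenization at each one yields the smooth master curve described in the text.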
In this exemplary embodiment, shear DMA of both unfilled and filled rubber is conducted to reconstruct master curves. However, due to limitations of the experimental conditions, only complex shear moduli at different frequencies are available. An assumption is made that, for a complex shear modulus, the conversion to the Lamé constants is still valid. This enables computation of tan(δ) using the FFT scheme at various frequency points for the filled rubber by inputting properties of the basic constituents: unfilled rubber and fillers.
A computation using the unfilled rubber complex shear moduli at different frequency points and the filler properties is performed using the mesh introduced above. To make the computation more feasible, the filled rubber domain is shrunk by ½ in all three directions to 513×513×75 voxels, where the length of each voxel's edge is 1.62 nm. The Poisson's ratio of the unfilled rubber is taken as 0.499, allowing for the limited compressibility of rubber materials. Note that this assumption can be improved by choosing a frequency-dependent Poisson's ratio to account for limited compressibility in the glassy state, given enough experimental data from material characterizations. This will be addressed in future work. The filler material has Young's modulus E=300 MPa and Poisson's ratio ν=0.19. The homogenized complex shear modulus from the FFT scheme is used to compute the tan(δ) curve. The computed tan(δ) curve and the experimental result are shown in
The results in
The predicted G′ and G″ of filled rubber model are shown in panels (a)-(b) of
Such an inconsistency observed in
In terms of the effect of filler moduli, Appendix C gives the predicted G′ and G″ of the same filled rubber structure with filler Young's moduli from 300 MPa (3E+8 Pa) to 3 GPa (3E+9 Pa). The variation of the filled rubber G′ and G″ is very small, meaning the predicted filled rubber response is insensitive to the filler modulus. However, since the FFT algorithm is sensitive to the contrast between filler and matrix materials, a filler Young's modulus of 300 MPa is used in the present work.
Efficient Reduced Order Modeling for the Filled Rubber Composite
The aforementioned FFT formulation can compute the effective property of filled rubber, but it requires the computation of all local responses at individual voxels and thus imposes a high computational cost, though much less than the FEM. The recently proposed SCA method provides an alternative for computing effective properties of an arbitrary microstructure, such as the filled rubber composite, at a reasonable computational cost. In this section, the SCA formulation is discussed, providing insight into this physically-based reduced order model.
SCA is a two-stage reduced-order modeling approach. In the offline stage, two steps are performed: 1) all voxel elements in the mesh are clustered based on some measure of similarity in mechanical response, such as the strain concentration tensor; 2) the interaction tensor for each pair of clusters is then computed. The offline stage generates a material microstructure database that contains all interaction tensors between cluster pairs and the volume fraction of each cluster. After the offline stage, the original high fidelity RVE is compressed into a small number of clusters. In the online stage, the discretized Lippmann-Schwinger equation is solved to compute the strain and complex stress in each cluster, and the RVE-level averaged complex stress and strain, at any given external loading condition. Once the RVE complex stress is identified, tan(δ) is computed accordingly. This process is concisely illustrated in
The Lippmann-Schwinger equation given in Eq. (5-3) can be reformulated in the following form in Eq. (5-10)
εMacro−ε(X)−∫ΩΓ0(X,X′):[σ(X′)−C0:ε(X′)]dX′=0,X∈Ω (5-10)
where X is the voxel element location in the mesh and X′ is the location in the reference domain.
To perform reduced order modeling of the filled rubber composite, one can reduce the overall degrees of freedom in the mesh by grouping voxels with similar mechanical responses together. This process is known as clustering. A convenient criterion chosen here is the well-known strain concentration tensor A(X), which connects the macroscopic prescribed strain to the local strain response at each voxel, as shown in Eq. (5-11) below:
ε(X)=A(X):εMacro,X∈Ω (5-11)
where A is a 6 by 6 matrix in Voigt notation. Six independent loadings on the original RVE are needed in order to determine all 36 entries of A. Clustering algorithms, such as k-means clustering, can be used to cluster all voxels and decompose the original domain of 19,737,675 voxels into K clusters, where K equals 64 here. This is the first step of the offline stage. One might think of this as reducing the total number of integration points to K, a much smaller number compared to the total number of integration points in the original mesh. Note that A is not the only possible basis for the clustering process. For different problems, one might wish to use other meaningful quantities to identify voxels with similar mechanical responses, such as lattice orientation.
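The clustering step can be sketched as follows. The feature matrix here is random stand-in data for the flattened 6×6 strain concentration tensors (one row per voxel), and the plain k-means routine is a simple illustration, not the production implementation used on the 19.7-million-voxel mesh.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the offline data: one flattened 6x6 strain
# concentration tensor A(X) per voxel (here 500 voxels, 36 features).
# In the real workflow these come from six independent loadings on the
# full-resolution mesh.
n_voxels, K = 500, 8
features = rng.normal(size=(n_voxels, 36))

def kmeans(data, k, n_iter=50, seed=0):
    """Plain k-means: assign each voxel to the nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(n_iter):
        # distance of every point to every centroid, then reassign
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return labels

labels = kmeans(features, K)
# volume fraction c_I of each cluster (equal-sized voxels)
c = np.bincount(labels, minlength=K) / n_voxels
print(c)
```

The cluster labels and volume fractions c_I feed directly into the interaction-tensor computation and the discretized Lippmann-Schwinger equation that follow.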
It is convenient to define a characteristic function as in Eq. (5-12) in order to rewrite Eq. (5-10) over the newly decomposed domain.
where I=1, 2, 3, . . . , K. The discretized Lippmann-Schwinger equation is given in Eq. (5-13) for each cluster.
where cI is the volume fraction of cluster I and |Ω| is the total volume of the mesh.
By noticing that σ(X′) and ε(X′) can be written as:
After the first step of the offline stage, which is the clustering process, the interaction tensor DIJ can be computed. Once DIJ is computed, the second step of the offline stage is completed and the original RVE is compressed into a microstructural database made of clusters and interaction tensors. Plugging DIJ into Eq. (5-13) gives the final form of the discretized Lippmann-Schwinger equation in Eq. (5-17):
where the incremental form is given as:
The online stage involves the evaluation process of Eq. (5-18). The solution procedure of Eq. (5-18) is given in Appendix B for readers' reference.
The above SCA formulation is combined with Eq. (5-9) to compute the effective complex moduli of filled rubber at a reduced computational cost due to the reduction of the number of integration points and degrees of freedom. The ROM of the original filled rubber domain with clusters is shown in
To further investigate interphase properties using our reduced order model approach, the above procedure is integrated into the inverse modeling process to better predict filled rubber properties.
With the master curves shown in
Inverse Modeling Formulation
The filled rubber model with the interphase can be created following the aforementioned voxel-wise search process, but the viscoelastic behavior of the interphase is still unknown. The interphase is used to suppress the inconsistency between the master curves from the FFT homogenization scheme and the experimental data; thus its complex Young's or shear moduli at different ωk have to be computed. To predict the unknown interphase property with limited experimental data, a so-called inverse modeling scheme based on optimization techniques is introduced. The objective function of the inverse modeling process can be written as:
The above objective function states that, for each given ωk, the solution for the interphase complex shear modulus G*IP(ωk) is found when the difference between the predicted complex shear modulus G*PMC(G*IP, ωk) and the experimental value G*EXP(ωk) is minimized. When ωk is fixed, it is possible to define a function for the root-finding process as:
f(G*IP)=G*PMC(G*IP)−G*EXP (5-20)
where ωk is omitted, compared to Eq. (5-19), since the solution is found separately for each ωk of interest.
To solve Eq. (5-20), an iterative method is used to find the solution G*IP(ωk). The derivative of Eq. (5-20) can be formulated as in Eq. (5-21) in order to apply Newton's iterative method:
With Eq. (5-20) and Eq. (5-21), it is possible to write the iterative process as Eq. (5-22):
where the superscript n denotes the current iteration number and n+1 denotes the next iteration. The initial guess for the interphase properties is set to be the same as the unfilled rubber.
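The iteration of Eqs. (5-20)-(5-22) can be sketched in a few lines. The forward model G_pmc below is a deliberately simple Voigt-mixture stand-in for the actual FFT/SCA homogenization, the derivative is approximated by a finite difference, and all moduli and volume fractions are hypothetical.

```python
import numpy as np

# Stand-in forward model: predicted composite modulus as a function of
# the interphase modulus G_ip (a simple Voigt mixture; the real forward
# model is the FFT/SCA homogenization of the 3D TEM mesh).
c_rubber, c_filler, c_ip = 0.70, 0.20, 0.10
G_rubber, G_filler = 1.0e6 + 0.1e6j, 1.2e8 + 0j

def G_pmc(G_ip):
    return c_rubber * G_rubber + c_filler * G_filler + c_ip * G_ip

# "Experimental" target fabricated here with a known interphase modulus.
G_ip_true = 5.0e6 + 1.0e6j
G_exp = G_pmc(G_ip_true)

# Newton iteration on f(G_ip) = G_pmc(G_ip) - G_exp, with the derivative
# of Eq. (5-21) approximated by a central finite difference.
G_ip = G_rubber            # initial guess: unfilled rubber property
h = 1.0e3
for n in range(50):
    f = G_pmc(G_ip) - G_exp
    if abs(f) < 1.0e-6 * abs(G_exp):
        break
    df = (G_pmc(G_ip + h) - G_pmc(G_ip - h)) / (2.0 * h)
    G_ip = G_ip - f / df
print(G_ip)  # converges to 5.0e6 + 1.0e6j
```

Because this stand-in forward model is linear in G_ip, a single Newton step recovers the target; the real FFT/SCA forward model is nonlinear and would require a few iterations at each frequency point ωk.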
Inverse Modeling for the Filled Rubber Model with Interphase
The proposed inverse modeling method is applied to the aforementioned filled rubber domain to re-compute the filled rubber master curve. The mesh is the one used above, but with the added interphase. An interphase of thickness βIP=9.74 nm, equivalent to 6 voxels, is added to the domain to create the filled rubber model with the interphase. The updated filled rubber mesh with the interphase is shown on the left of
For the inverse modeling process, 17 frequency points are picked over the span of the entire master curve of the filled rubber. More points could be used for the inverse modeling process but would increase the computational cost. The experimental procedure for measuring the filled and unfilled rubber master curves has been reported above. Material properties for the unfilled rubber and filler materials are also introduced above. The inverse modeling process is combined with the SCA online prediction as the third step of the present scheme. At this point, the three-step prediction scheme for filled rubber is complete. The results of the inverse modeling will be presented and discussed in the next section.
Results and Discussion on Inverse Modeling
Through inverse modeling, a more accurate prediction of the filled rubber master curve is shown in
The comparison between the predicted G′ and G″ of the filled rubber and experimental results are shown in
Through inverse modeling, G′ and G″ of interphase are computed as well, as shown in panels (a)-(b) of
The difference observed between the filled rubber and unfilled rubber tan(δ) shown in
On the other hand,
Also, the peak tan(δ) of the filled rubber is lower than that of the unfilled rubber. In order to increase tire traction at low temperatures, tan(δ) at lower temperatures should be increased. Based on the computed filled rubber properties, this can be achieved by identifying a filler material that can form an interphase with a high loss modulus in the high-frequency range. Such a material combination would provide increased damping so that winter traction can be improved. The present workflow is suitable for inversely modeling the necessary interphase storage and loss moduli, narrowing down the domain for material selection.
In terms of inverse modeling results obtained from SCA,
In addition, a dramatic increase in computational efficiency was observed for the SCA prediction. A comparison of the computation time of inverse modeling at a single frequency point is shown in Table 5-3 below, where a 1778× speed-up is achieved by applying the proposed ROM compared to FFT. A comparison of the computational time in evaluating filled rubber properties by different methods is summarized in Table 5-4. SCA requires an offline stage computation to generate the ROM of the filled rubber composite, which is still computationally expensive. However, once the database is computed, it can be used for all later evaluation procedures. This provides considerable savings in computational time in the online stage prediction of effective properties of the filled rubber, as well as in the inverse modeling process. The SCA method, combined with inverse modeling, opens a new avenue towards material design. Moreover, the same ROM can be reused with various material properties to compute filled rubber mechanical behaviors at a reasonable cost. It is possible to explore the design space and obtain both a reliable trend and a quantitative description of the filled rubber. More importantly, the proposed scheme provides an efficient route towards investigating interphase properties of filled rubber materials for future design needs, even with limited computational resources.
Despite the encouraging results obtained using the inverse modeling scheme, necessary assumptions were made for this process to be possible. For example, the interphase thickness βIP was assumed to be uniform in a circular region around the filler material. However, it is possible for the interphase thickness to be a function of the filler curvature or filler size, since the degree of polymer chain aggregation can be affected by such parameters. The scheme will be extended to include varying interphase thickness around the filler to consider geometrical effects.
In this exemplary study, an inverse modeling scheme is introduced and illustrated as an effort to quantitatively analyze the interphase properties of a filled rubber compound using a high fidelity reconstruction of the filled rubber sample. The FFT scheme enables efficient computation when the fine 3D digital image is used as input. The test data of unfilled and filled rubber provide enough inputs to solve an inverse modeling process for interphase properties at each frequency point. In addition, SCA, a reduced order modeling scheme, is combined with the inverse modeling procedure to compute interphase properties for the first time. Once the offline stage database is constructed, it can be conveniently used at all frequency points to compute the whole filled rubber master curve. This novel reduced order modeling approach provides considerable savings in computational cost. The consolidation of SCA and the inverse modeling scheme is an efficient and valuable filled rubber design tool. The present method is general and can incorporate other details of the microstructure, such as variation of interphase thickness. The obtained interphase property can enable forward computation of a three-phase filled rubber model in time domain analyses, such as tensile testing. It is believed that including the effect of the interphase can yield better predictions of filled rubber responses under various loading conditions, and this shall be addressed in future work.
FFT Scheme Algorithm Flow Chart
The algorithm flow chart for the FFT scheme with fixed point iteration is concisely given below. The convergence test is used to determine whether the local strain εi+1 has reached a stable value. The implementation can easily be done in any programming language, provided that FFT and inverse FFT packages are readily available.
Initialization:
ε0(X)=εMacro,∀X∈V;
σ0(X)=C(X):ε0(X),∀X∈V;
Iterate i+1 with εi and σi known:
- a) σ̂i=ℱ(σi);
- b) Convergence test;
- c) ε̂i+1(ξ)=ε̂i(ξ)−Γ̂0(ξ):σ̂i(ξ), ∀ξ≠0, and ε̂i+1(0)=εMacro;
- d) εi+1=ℱ−1(ε̂i+1);
- e) σi+1(X)=C(X):εi+1(X), ∀X∈V.
The above algorithm flow chart is for a single loading step. Readers can easily modify it for multiple loading steps by defining a sequence of εMacro values, one per loading step.
Self-Consistent Clustering Online Analysis Solution Procedure
SCA requires solving the discretized Lippmann-Schwinger equation based on the external loading condition, either a fixed strain increment ΔεMacro or a fixed stress increment ΔσMacro. The discretized Lippmann-Schwinger equation is given in Eq. (5-23).
The solution to Eq. (5-23) is the strain tensor εI in each cluster. In order to use the Newton-Raphson method to find a solution to Eq. (5-23), the residual form is given in Eq. (5-24) below
For macro strain boundary condition, the residual of macroscopic strain is written as
For macro stress boundary condition, the residual becomes
ΔεI is found by minimizing the residual rI. Linearizing rI with respect to ΔεJ yields
where the Jacobian matrix JIJ is
JIJ=δIJ I+DIJ:(CJ−C0), I, J=1, 2, 3, . . . , k (5-28)
For macroscopic strain boundary condition, one has:
I(k+1)=−,(k+1)I=cI, and (k+1)(k+1)=0,I=1,2,3, . . . ,k (5-29)
For macroscopic stress boundary condition, one has:
I(k+1)=−,(k+1)I=cI, and (k+1)(k+1)=0,I=1,2,3, . . . ,k (5-30)
Solving Eq. (5-27) gives the corrections that update all local strain increments ΔεI. This process is repeated until the residuals in all clusters are minimized.
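A scalar (1D) analogue of this online solve illustrates the structure of Eqs. (5-24)-(5-28): for a linear elastic material one Newton step reduces to a single linear solve. The cluster stiffnesses, volume fractions, and interaction coefficients below are random stand-ins for a real offline database, not values from this work.

```python
import numpy as np

# 1D (scalar) sketch of the SCA online stage for K clusters under a
# prescribed macroscopic strain increment.
K = 4
C = np.array([1.0, 1.0, 10.0, 10.0])        # cluster stiffnesses (2 phases)
c = np.array([0.4, 0.3, 0.2, 0.1])          # cluster volume fractions
C0 = c @ C                                   # reference medium stiffness
rng = np.random.default_rng(1)
D = rng.normal(scale=0.05, size=(K, K))      # stand-in interaction tensors
d_eps_macro = 0.01

# Scalar analogues of the residual and Jacobian:
#   r_I  = de_I + sum_J D_IJ (C_J - C0) de_J - de_macro
#   J_IJ = delta_IJ + D_IJ (C_J - C0)
Jac = np.eye(K) + D * (C - C0)[None, :]
d_eps = np.linalg.solve(Jac, np.full(K, d_eps_macro))

# homogenized stress increment from the cluster solution
d_sig_macro = c @ (C * d_eps)
r = d_eps + (D * (C - C0)[None, :]) @ d_eps - d_eps_macro
print(np.max(np.abs(r)))   # residual ~ 0
```

With nonlinear constitutive laws, C would be the algorithmic tangent of each cluster and the linear solve would be repeated inside the Newton loop until all residuals rI vanish, as described above.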
Example 6 Image-Based Multiscale Modeling System for Mechanical Performance of Metal Additive Manufacturing
The common assumptions and methods used for multiscale or microstructure-sensitive modeling of materials are generally not appropriate for capturing the performance of additively manufactured (AM) metal. Current approaches often rely upon an RVE or some other form of representative structure (e.g., a representative unit cell or simple periodic structure); prediction of response with these structures might also be predicated upon simplifying assumptions, such as idealization of the microstructure (e.g., as an ellipsoid), periodicity, or statistical (spatial) uniformity. Models that predict minimum performance or capture worst-case scenarios often struggle to make predictions that are quantitatively useful in process design, one of the main goals of structure-properties-performance modeling. These difficulties relate to an underlying challenge with AM: the localized process produces a mix of process-dependent and random microstructures, which follow neither roughly deterministic patterns (where worst-case analysis might be useful) nor fully stochastic ones (where statistical uniformity would apply). The material that results from metal AM is heterogeneous, non-uniform, and highly variable.
In this example, a new paradigm is introduced in mechanical modeling of microstructure-dependent failure, wherein a fast reduced order approach is applied directly to experimentally imaged microstructures that populate a macroscale component; the behavior of these microstructures is used to predict the macroscale performance. Knowledge of the process history (either from modeling or experiments) can be used to select which microstructure occurs at each material point in the component-level model. Thus, spatial, build-to-build, and part-to-part variations can be captured. Specifically, this concept is demonstrated with x-ray computed tomography images of voids constituting a database of hundreds of thousands of possible microstructures. At each material point in a component, the specific microstructure is chosen based on the results of a process model for AM. Finally, the load history during the component's expected service life is predicted and used to estimate the fatigue life (or another performance indicator).
By using a database of microstructures and conducting a sampling study (viz., Monte Carlo analysis), the need to define a representative volume element is alleviated, albeit at a computational expense. We can also enable prediction of performance variability and distribution by introducing some randomness into the selection of each microstructure. The choice of microstructure at each material point in the component is based on processing history, making it possible to capture the difference between, e.g., different toolpaths or choices of processing parameters. In effect, this resembles a digital twin, where a given build might be tracked through to a failure profile. One might alternatively think of this as a “virtual experiment,” as each “specimen” tested computationally provides one data point in terms of performance (e.g., fatigue life), much like a physical experiment.
The resulting framework has several unique features: location- and history-dependent property and performance prediction, scalability to components or even systems (depending on the macro-scale solution method employed), and the ability to predict variability, in addition to mean/min/max behavior, throughout the domain.
This comes at a cost, naturally. A sufficiently rich database of images of microstructures must be used; “sufficiently rich” is ambiguous and depends on many factors. Some way to connect the processing history to a suitable measure of microstructure that is location- and processing-dependent, predictable, reliable, and relevant to the mechanical performance is also needed.
Metal AM for use in structural applications where fatigue loading might occur is an excellent challenge with which to demonstrate this method. In AM, relatively unique, dispersed, heterogeneous microstructural features arise that depend upon a host of factors related to the conditions under which a part is fabricated. A truly predictive model can provide confidence in the robustness of designs and a quantifiable safety assessment, while minimizing the number of experiments required. However, this is only possible when using a model with sufficient descriptive capability.
Understanding of the mechanical properties of AM metals has advanced rapidly in the past few years. Many of these advances have been experimental, and predictions of the properties of AM materials such as Ti-6Al-4V, SS316L, and Ni-based superalloys have been reported. One commonality between these materials is that their properties may vary widely point-to-point within a single component, between builds with different parameters, and between different builds on the same machine with the same conditions and parameters; the challenges associated with these various sources of variability have been noted in previous computational studies. Experimental efforts have noted this variability, too; for example, Gong et al. and Sheridan et al. show significant variance between builds with different parameters and within builds with the same parameters, highlighting the importance of processing parameters, but also of process-induced randomness. This kind of variability and randomness, particularly where defects are concerned, results in heterogeneous material for which standard prediction methods may be ill-suited.
Multiscale modeling is one approach that might be able to capture heterogeneous, non-uniform material responses. For example, Horstemeyer provides a case of fatigue modeling in heterogeneous materials using a hierarchical multiscale approach based on a previously developed multi-stage fatigue model. More recent works have focused on applying models throughout the processing and subsequent service life for additive manufacturing to relate processing, structure, and performance. These frameworks are valuable and, although not multiscale, provide a starting place for the current work. Both are relatively deterministic, and critically both use process modeling to directly predict microstructure and defects. This implies complete reliance upon the accuracy and veracity of the process model to capture all important physics; for defects such as pores, an accurate prediction remains a challenge within the AM modeling literature, particularly at the component scale. When the fatigue response depends on the precise shape and location of each pore, reliance on such a model might be unwise.
In the following, we sketch the methods employed to build our multiscale model, including a thermal model of the AM process, an image- and modeling-based statistical description of voids, and the mechanical multiscale model used to predict performance. This is followed by a computational example, in which prediction of the strain-life behavior of a sample of specimens built of Inconel 718 with several different process parameters is demonstrated.
Methodology
Conceptually, this multi-physics, multiscale method is composed of three major parts. The first two are used to generate the required information that describes the heterogeneity within the materials, and the last uses this information to make mechanical performance predictions. As such, the first two parts are what relate the method to AM metals; the third part is the primary contribution and is not necessarily restricted to any particular material system.
The first part involves thermal modeling of the build process, so that the influence of processing parameters is captured. The second part uses synchrotron x-ray computed tomography images of voids in a prior AM build to map, based on the processing information in the first part, possible microstructures to the component. A Monte Carlo type approach is used to generate many possible instantiations to account for random fluctuations within the same processing conditions (measured as scatter in the processing-structure relationship). The third part is a concurrent multiscale stress analysis that uses the instantiations from the second part and computational crystal plasticity to estimate the fatigue potency of voids on the micron-scale throughout a centimeter-scale component. A standard test specimen will be used as a consistent example throughout, although the method is widely applicable.
Model Setup
The system described here is applicable to the same geometries, choices of processing conditions, boundary conditions during mechanical load, and materials that can be represented in standard finite element analysis. Sufficient experimental data, especially 3D images of defects as will be described later, are required for the material of interest.
Macroscale Thermal Modeling for the AM Build Process
We start by conducting a thermal analysis to model building the component of interest using the directed energy deposition (DED) method. This analysis is done using a transient thermal finite element solver. The governing heat transfer energy balance to be solved is:
ρcp(∂T/∂t)=∂/∂xi(k(∂T/∂xi))+Q (6-1)
where ρ is the material density, cp is the specific heat, t is the time, xi are the spatial coordinates, k is the conductivity of the material, T is the temperature, and Q represents the heat source.
This heat source is represented by a moving laser described by the Gaussian distribution:
where P is the power of the laser, η is an absorptivity factor to limit the amount of energy absorbed by the material from the laser, which was taken to be 30%, and Rb is the radius of the laser. The variables x, y, and z are local coordinates of the laser. Heat loss on the dynamic free surfaces of the model is simulated through a combination of convection and radiation. Convective heat loss is defined by
qconv=hc(T−T∞) (6-3)
where hc is a convection coefficient, T is the surface temperature, and T∞ is the far-field (ambient) temperature. Radiation heat loss is defined using the Stefan-Boltzmann law, given by
qrad=σsε(T4−T∞4) (6-4)
where σs is the Stefan-Boltzmann constant and ε is the surface emissivity of the material.
Particular build parameters, including the laser speed, the laser power, and the toolpath, are selected. The material must also be specified. With this information given, the model can predict the time-temperature history of each point within the part. The solidification cooling rate (SCR) is calculated from the temperature history of the thermal model and output at each node. It is approximated, according to Eq. (6-5), from the time it takes for a material point, represented by subscript i, to reach the solidus temperature from the liquidus temperature. In order to capture this solidification behavior, Eq. (6-1) is solved explicitly with an approximate timestep of 9.0×10−4 s. If too large a timestep is chosen, one may skip over the solidification behavior at some material points. Additionally, in the case of re-melting, only the final cooling stage is considered.
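For illustration, the SCR extraction just described can be sketched as follows; the function name, the linear interpolation between output frames, and the handling of the final-cooling crossing are assumptions consistent with the description above, not the solver's actual post-processing code.

```python
import numpy as np

def solidification_cooling_rate(times, temps, t_liquidus, t_solidus):
    """Approximate the SCR at one material point from its temperature
    history: the liquidus-to-solidus temperature drop divided by the
    time taken to traverse it.  Only the final cooling stage is used,
    so re-melted points are handled as described above."""
    times = np.asarray(times, dtype=float)
    temps = np.asarray(temps, dtype=float)

    def last_downward_crossing(level):
        # indices i where the history falls through `level` between i and i+1
        falling = (temps[:-1] >= level) & (temps[1:] < level)
        idx = np.nonzero(falling)[0]
        if idx.size == 0:
            return None
        i = idx[-1]                       # final cooling stage only
        frac = (temps[i] - level) / (temps[i] - temps[i + 1])
        return times[i] + frac * (times[i + 1] - times[i])

    t_liq = last_downward_crossing(t_liquidus)
    t_sol = last_downward_crossing(t_solidus)
    if t_liq is None or t_sol is None or t_sol <= t_liq:
        return None                       # point never fully solidified
    return (t_liquidus - t_solidus) / (t_sol - t_liq)  # K/s
```

For a point cooling at a constant 50 K/s through both transition temperatures, the function recovers 50 K/s regardless of where the liquidus and solidus happen to fall between sampled output frames.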
SCR(XM)=(Tliquidus−Tsolidus)/(tsolidus(XM)−tliquidus(XM)) (6-5)
where SCR(XM) is the solidification cooling rate as a function of the macroscale spatial coordinates XM within the macroscale domain ΩM, Tliquidus and Tsolidus are the liquidus and solidus temperatures, and tliquidus(XM) and tsolidus(XM) are the times at which the material point last passes through those temperatures on cooling.
Relate Thermal Conditions and Defects
To use the thermal prediction, we need to connect thermal model outputs to defect statistics and geometry. We choose to generate this relationship in two stages: first through a relationship between a thermal descriptor (e.g., the SCR, as computed above) and a microstructure descriptor statistic, then through a database of microstructural geometries which correspond to each microstructural descriptor (e.g., void size) statistic. The microstructures from this database are then used to populate each realization of the part. This defect estimation and database building process is outlined in
In order to identify the SCR-to-void relationship and build a database, we used two Inconel DED single-track thin wall parts built with processing parameters corresponding to parameter set 1 in Table 6-2. Both walls used a vertical zig-zag toolpath pattern, but while one wall's toolpath was continuous, the other added a one-minute dwell between each layer.
X-ray tomography imaging experiments were performed at Beamline 2-BM at the Advanced Photon Source, Argonne National Laboratory on specimens extracted from 22 locations on the two thin walls (total 44 data points). Each image was of about 1 mm3 of material with voxel edge length 0.65 μm. Contrast from x-ray absorptivity was used to distinguish between voids and material using a series of processing steps including filtering, thresholding, and artifact removal. Eleven different descriptive statistics were extracted from these images, such as void location, size, shape, orientation, and n-nearest neighbor information. For simplicity, we will focus here on the void size, as represented by the single-point correlation statistic: void volume fraction Vf. The overall Vf was computed from the sum of the void sizes throughout the image for each location.
The build process of these two thin walls is modelled using the thermal analysis method outlined above. The parts' thermal history is summarized as a point-wise SCR throughout the build. The SCR is a useful, physically relevant single-point statistic that summarizes the thermal conditions that result from the choice of building parameters. The average SCR at the location of each of the x-ray images was computed using this processing model. Note that experimental measurements of the cooling rate, e.g., with an infrared (IR) camera, could be used to provide equivalent data. The first part of
Vf=Ae−(B)(SCR) (6-6)
is fit to the data. The fit parameters are A=0.0047 and B=0.0011.
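Eq. (6-6) is a two-parameter exponential, so it can be fit by ordinary least squares on log(Vf); the sketch below, with hypothetical synthetic data, is one simple way to obtain A and B, not necessarily the fitting procedure used in the study.

```python
import numpy as np

def fit_vf_vs_scr(scr, vf):
    """Fit Vf = A * exp(-B * SCR) by linear regression on log(Vf)."""
    scr = np.asarray(scr, dtype=float)
    logv = np.log(np.asarray(vf, dtype=float))
    slope, intercept = np.polyfit(scr, logv, 1)  # log Vf = log A - B*SCR
    return np.exp(intercept), -slope             # A, B

# Noise-free data generated from the reported fit (A=0.0047, B=0.0011)
# should be recovered to floating-point precision.
scr = np.linspace(50.0, 1500.0, 20)
vf = 0.0047 * np.exp(-0.0011 * scr)
A, B = fit_vf_vs_scr(scr, vf)
```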
Build Image Database
Next, this relationship is used to determine a range of possible microstructures that might occur within the build. Subsets of the images used in the first part were extracted, such that the Vf range of the subsets corresponds to the range estimated to occur in a part as given by the relationship shown in
To complete the database, the three training steps outlined in Sect. 2.4.1 were conducted on each entry. Thus, the final database used for the multiscale mechanical response prediction contains proto-data used to generate response predictions for arbitrary loading, which depends upon microstructural information.
Mechanical Response Prediction
In the previous two sections, we have used a database of experimental data to populate the part with a realistic, heterogeneous distribution of voids that correspond to the processing conditions used to build the part. The next step is to use this information to predict the mechanical properties of the part. However, for the same reason that it was computationally infeasible to directly predict the defect structure, it would be infeasible to directly model the mechanical response of the microstructures we have generated.
In order to capture the microstructural information required to predict mechanical performance of AM materials, a multiscale approach is taken. At the macroscale, a simple Johnson-Cook material model is used to provide the strain boundary conditions (more specifically, deformation gradients are used) to the data-driven reduced order model used at the microscale. At the microscale, crystal plasticity is used to predict the material behavior. Lacking more complete data, we simply assume that any microstructure with Vf within ±10% of that specified by Eq. (6-6) has equal probability of occurring. In this way, the part is described by essentially a Monte Carlo process, where the possible states of the random variable are controlled by the process model. A schematic of this process is shown in
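The Monte Carlo instantiation step can be sketched as below; the database layout (a list of (Vf, subset-id) pairs) and the function names are illustrative stand-ins for the image-subset database described above.

```python
import math
import random

A, B = 0.0047, 0.0011   # fit parameters of Eq. (6-6)

def instantiate_part(point_scrs, database, tol=0.10, rng=None):
    """For each macroscale material point, draw (uniformly at random)
    one imaged microstructure whose void fraction lies within +/-tol
    of the value predicted from the point's SCR via Eq. (6-6)."""
    rng = rng or random.Random(0)
    assignment = []
    for scr in point_scrs:
        vf_target = A * math.exp(-B * scr)
        candidates = [sid for vf, sid in database
                      if abs(vf - vf_target) <= tol * vf_target]
        assignment.append(rng.choice(candidates) if candidates else None)
    return assignment
```

Repeating this draw yields the many equally probable instantiations of the part used in the Monte Carlo scheme; a `None` entry flags a point whose predicted void fraction has no matching image subset.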
Microscale Reduced Order Model with Crystal Plasticity
At the microscale, the modeling approach combines data-driven micromechanics with computational crystal plasticity, termed crystal plasticity self-consistent clustering analysis (CPSCA). The method is derived from first order homogenization, the Hill-Mandel condition, and local equilibrium. This problem is defined concisely in a finite deformation setting as:
∇X·P(X)=0 in Ω, F(X)=F0+∂u(X)/∂X (6-7)
where P is the first Piola-Kirchhoff stress, u is the displacement at point X in domain Ω, F is the deformation gradient, and F0 is the remote (applied) deformation gradient.
To satisfy the Hill-Mandel condition we assume a periodic displacement field within the microscale domain and anti-periodic boundary traction. Under this assumption, the boundary value problem given in Eq. (6-7) has been shown to be equivalent to the Lippmann-Schwinger equation, and approximated clusterwise as
FI+ΣJ=1NcDIJ:(PJ−C0:FJ)=F0 (6-8)
where the domain has been discretized into a fixed, finite number of clusters Nc, C0 is a reference stiffness, * denotes convolution, and the tensor DIJ describes the interaction between clusters I and J, given as
DIJ=(1/|ΩI|)∫Ω∫ΩχI(X)χJ(X′)Γ0(X,X′)dX′dX (6-9)
where ΩI is the domain in cluster I, Γ0 is a periodic fourth order Green's operator, and χ is the characteristic function. A detailed derivation is available in the self-consistent clustering analysis literature.
CPSCA solves this equation in two stages. The first “training” stage includes three parts: data collection, data compression (or clustering), and computation of the interaction tensor. The resulting interaction tensor can be stored for future use. The second “prediction” stage makes use of the interaction tensor and solves the integral equation given in Eq. (6-8) subject to an applied strain state and material law.
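The data-compression (clustering) part of the training stage groups material points with similar response features. A minimal, deterministic Lloyd's-algorithm sketch is given below; the feature choice (e.g., a flattened strain-concentration tensor per voxel) and the evenly spaced initialization are assumptions for illustration, not the CPSCA implementation.

```python
import numpy as np

def cluster_voxels(features, n_clusters, n_iter=50):
    """Group voxels by nearest-centroid (k-means/Lloyd) iterations on
    per-voxel feature vectors; returns one cluster label per voxel."""
    features = np.asarray(features, dtype=float)
    # deterministic initialization: evenly spaced samples
    idx = np.linspace(0, len(features) - 1, n_clusters).astype(int)
    centers = features[idx].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # assign each voxel to its nearest cluster center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its members
        new = np.array([features[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(n_clusters)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

Once labels are assigned, the interaction tensor of Eq. (6-9) is evaluated cluster-pairwise and stored, completing the training stage.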
This first stage is conducted for each of the subset volumes, adding the clustering and interaction tensor data to the database of image subsets.
Thus, after a part is instantiated and subsets used to populate the microscale, only the second stage has to be used to compute the mechanical response. The second stage can then be run any number of times, with independent stress-strain histories and boundary conditions. In the second stage, cyclic loading is applied, and the stresses are computed with a crystal plasticity (CP) material law. The applied deformation gradient is decomposed into an elastic and plastic part; the plastic part of the deformation gradient is computed from the plastic velocity gradient, which itself is determined by summing the plastic shear velocity across slip systems in the intermediate configuration, as:
{tilde over (L)}p=Σα=1Nslip{dot over (γ)}(α)s(α)⊗n(α) (6-10)
where ⊗ is the dyadic product, α is a slip system, Nslip is the number of slip systems, {dot over (γ)}(α) is the microscale shear rate, s(α) is the slip direction, and n(α) is the slip plane normal. The phenomenological power-law with backstress shown in Eq. (6-11) is used to update the shear slip rate:
{dot over (γ)}(α)={dot over (γ)}0|(τ(α)−a(α))/τ0(α)|1/m sgn(τ(α)−a(α)) (6-11)
where τ(α) is the resolved shear stress (computed with τ(α)=σ:(s(α)⊗n(α))) on slip system α, {dot over (γ)}0 is a reference shear rate, τ0(α) is a reference shear stress, a(α) is a backstress term, and m is a “rate hardening” coefficient.
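The slip-system kinematics of Eq. (6-10), together with a power-law flow rule of the kind described for Eq. (6-11), can be sketched numerically as follows; the exact power-law expression used here is a common textbook form assembled from the quantities defined above, assumed rather than quoted from the original model.

```python
import numpy as np

def plastic_velocity_gradient(gamma_dots, slip_dirs, slip_normals):
    """Assemble the plastic velocity gradient as the sum over slip
    systems of gamma_dot^(alpha) * s^(alpha) (dyadic) n^(alpha)."""
    Lp = np.zeros((3, 3))
    for g, s, n in zip(gamma_dots, slip_dirs, slip_normals):
        Lp += g * np.outer(s, n)          # dyadic product s (x) n
    return Lp

def power_law_shear_rate(tau, a, tau0, gamma0, m):
    """Assumed common power-law-with-backstress form:
    gamma_dot = gamma0 * |(tau - a)/tau0|**(1/m) * sign(tau - a)."""
    x = (tau - a) / tau0
    return gamma0 * abs(x) ** (1.0 / m) * (1.0 if x >= 0 else -1.0)
```

When the effective resolved shear stress equals the reference stress, the flow rule returns the reference rate; a single active system with shear rate 2 and Schmid vectors e1, e2 gives an Lp whose only nonzero entry is Lp[0,1] = 2.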
Example Performance Measure: Fatigue Life
Failure prediction encompasses a range of damage mechanisms, which depend on predictions of properties. One of the more challenging failures to attempt to predict is that caused by cyclic loading: fatigue. Thus, we demonstrate this method with an application to measuring fatigue performance. Specifically, to predict fatigue crack incubation life, a fatigue indicating parameter (FIP) derived from the critical plane approach is used to estimate the fatigue incubation life of the microstructure, given a plastic strain history. The FIP is defined by
FIP=(Δγmaxp/2)(1+K(σnmax/σy))
where Δγmaxp is the maximum cyclic plastic shear strain, σnmax is the peak stress normal to the plane on which Δγmaxp occurs, σy is the yield stress, and K is a normal stress factor assumed to be 0.55. The FIP is related to the number of incubation cycles using:
N=ƒ(FIPmax)
where FIPmax is the maximum FIP within the microscale domain and ƒ is a calibrated relation whose coefficients are fit to experimental fatigue data, as described below.
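A critical-plane FIP of the kind described can be computed directly from its three inputs; the Fatemi-Socie-style expression below is an assumed common form built from the quantities defined above, with K = 0.55 as stated.

```python
def fatigue_indicating_parameter(d_gamma_max_p, sigma_n_max, sigma_y, K=0.55):
    """FIP = (max cyclic plastic shear strain / 2) * (1 + K * sigma_n_max / sigma_y).
    The form is an assumed critical-plane expression, not quoted from
    the original text; K = 0.55 per the description above."""
    return 0.5 * d_gamma_max_p * (1.0 + K * sigma_n_max / sigma_y)
```

With zero normal stress the FIP reduces to half the cyclic plastic shear strain; a tensile normal stress equal to the yield stress amplifies it by the factor (1 + K).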
Calibration/Validation Case with Standard Fatigue Specimen
To demonstrate the method, a fatigue specimen conforming to the ASTM E606/E606M/E466 standard geometry is numerically tested. The specimen geometry was meshed with two different hexahedral meshes, one for the thermal and one for the stress analysis, as shown in
Macroscale homogeneous material properties corresponding to Inconel 718 (IN718) were applied to both the thermal and macroscale stress analyses. These thermal properties for IN718 are summarized in Table 6-1. The Johnson-Cook parameters were used, although this choice is essentially arbitrary—under the fatigue loading specified, the macroscale response was entirely elastic.
For the thermal analysis, a zig-zag tool path with a 90° layer-by-layer offset was selected, with the four different process parameters specified in Table 6-2. The part was meshed and simulated with the gauge section aligned normal to the build direction. A relatively fine mesh with 539,216 hexahedral elements (panel (b) of
The CP model was used at the microscale. The parameters were calibrated by minimizing the difference between many runs of a cubic domain including 64 cubic grains and a set of baseline tensile and cyclic loading data for AM IN718, with starting conditions taken for m and {dot over (γ)}1 and for elastic moduli. The resulting model parameters are given in Table 6-3. The parameters that relate FIP to fatigue life are fit to experimental high cycle fatigue data of IN718 collected from the literature. We acknowledge that this generally represents relatively defect-free material, and may not be perfect for AM material; however, fatigue data for AM IN718 are relatively scant, and producing such data was outside the scope of this work.
The imaged voids identified by the method above were assumed to be embedded within a single crystal oriented so that the fastest growth direction was aligned with the build direction. This is consistent with experimental experience, which suggests that grains in IN718 are much larger than the voids and preferentially oriented. Future work could use methods, such as the cellular automata approach, to predict grains throughout a build. Another possibility would be to synthetically generate grain structures from given experimental evidence which provides a statistical basis for the synthetic generation. Preliminary, as-yet-unpublished results indicate that grains generated this way could substantially impact predicted fatigue lives when used in concert with image-based void geometries.
A snapshot of the thermal response for the part being built under one of the processing parameter sets is shown as contours of temperature in
We simulated thirty different virtual test specimens, each with a different, random distribution of defects, and thus a different estimated fatigue life. This is shown in
The microstructures used here do not explicitly capture surface effects on the microstructural fatigue behavior. For example, a void on the microscale near the surface of the entire part might have an impact on fatigue performance. While the macroscale effects of the boundaries are naturally included, this small scale interaction is a matter of ongoing work. Some authors suggest that there is limited change in overall fatigue life, up to the high-cycle limit (runout), for as-built versus machined-finish specimens; this may indicate the assumption made here is reasonable.
A simplifying assumption for the grain structure was also used for this demonstration; however, we could add a step that either predicts the grain structure from thermal history, as was demonstrated, or derives a grain structure from experiments should that data be available. Currently, the kind of 3D grains structures needed to include image-based grains are not available to us, although one might consider using statistically-similar, synthetically generated grain structures based on currently available images the next logical step in purely image-based microstructures.
An implicit assumption of the images used here is that the track spacing was appropriate to avoid lack-of-fusion defects between tracks. This is because the images used to make the database came from single-track, thin-wall builds, where no between-track porosity would be possible. This assumption could easily be relaxed by including images of voids in multi-track builds in the database.
Development of a benchmark for fatigue prediction in AM would support these modeling efforts. Currently, conflicting reports of the influence of AM, versus conventional processing, on the fatigue properties of metals exist. This can in part be attributed to the wide range of materials, but also to a range of build processes and choices made by machine operators. Without a better, AM specific, standard test procedure such conflicts will likely persist, to the detriment of the modeling community who lack calibration and validation data.
This example presents a method that exploits computationally efficient micromechanics techniques to perform Monte Carlo-style numerical experiments. The specific microscale solutions are derived from a cluster-based solution of the Lippmann-Schwinger equation, and involve the prediction of fatigue life using crystal plasticity and a fatigue indicating parameter. A database of possible microscale geometries is developed from 3D imaging experiments. These geometries are related to AM processing conditions through the solidification cooling rate, and a process- and microstructure-dependent, stochastic prediction of fatigue life is achieved with reasonable computational expense.
Example 7 Efficient Multiscale Modeling for Woven Composites Based on Self-Consistent Clustering Analysis
In this exemplary embodiment, the curse of computational cost in the woven RVE problem is countered using the SCA, which maintains considerable accuracy compared with the standard FEM. The Hill anisotropic yield surface is predicted efficiently using the woven SCA, which can accelerate the microstructure optimization and design of woven composites. Moreover, a two-scale FEM×SCA modeling framework is proposed for woven composite structures. Based on this framework, the complex behavior of the composite structures at the macroscale can be predicted using microscale properties. Additionally, macroscale and mesoscale physical fields are captured simultaneously, which are hard, if not impossible, to observe using experimental methods. This will expedite the investigation of deformation mechanisms of composites. A numerical study is carried out for a T-shaped hooking structure under cyclic loading to illustrate these advantages.
Woven composites are widely used in industries such as aerospace and automotive because of their excellent mechanical performance as compared to unidirectional laminated composites. However, performing structural analysis of woven composites is challenging due to the mesoscale and microscale heterogeneities (see
Multiscale simulation provides a powerful method for analyzing both the material microstructure and the macrostructure. Using this method allows the macroscale performance of woven composites to be predicted based on the properties of the constituents. Once the microstructure is characterized, macrostructural experiments are not needed every time the microstructure is changed. This allows the multiscale method to accelerate material design of woven composites, reduce the cost, and improve the analysis accuracy of woven composite structures. Moreover, the physical fields at different scales can also be captured, which are hard, or sometimes impossible, to observe using experimental methods. Accomplishing effective multiscale simulations for woven composites still involves some challenges.
The first challenge is to find an efficient woven RVE solution. Effective macroscale properties are homogenized properties of composites, which are always adopted for the material selection and structural design with woven composites. To predict these effective properties, an RVE for the woven composite material must be developed, which will establish the link between the microstructural features and effective macrostructural properties. In the case of a periodic woven architecture, a unit cell is used for the RVE. For the microstructure design, the woven RVE solution can be integrated into an optimization algorithm in which the RVE has to be solved repeatedly to find the optimized solution and satisfy the requirement of objective effective properties. Therefore, solving the woven RVE problem efficiently can accelerate the whole process of optimization. As a result, it will promote the microstructure design of woven composites. Currently, several approaches have been proposed for solving the RVE problem. The analytical approaches, such as rules of mixtures and theoretical micromechanics methods, are efficient, but lose accuracy in the case of complex microstructures and nonlinear, history-dependent material laws. The Direct Numerical Simulation (DNS) methods, such as FEM, are extremely time consuming. The FFT-based method is more efficient than FEM, but encounters convergence problems for high phase contrast in nonlinear problems. The Transformation Field Analysis (TFA), the Nonuniform Transformation Field Analysis (NTFA) and Proper Orthogonal Decomposition (POD) are other solution methods, but they require extensive a priori simulations to obtain deformation modes, especially for nonlinear phase behavior.
The second challenge is concurrent multiscale simulation for woven structures. The behavior of woven composite structures is predicted using the behavior of the RVEs through the concurrent multiscale simulation. Additionally, the physical fields at the different scales involved can be captured simultaneously, which will expedite the investigation of deformation mechanisms of woven composite structures. Concurrent simulation requires numerous RVE solutions, which is computationally expensive using the FE2 framework, as shown in
Solving these challenges requires improving the solution efficiency of the RVE problem. The SCA is an effective and efficient method to solve the RVE problem, which can be used for complex woven architectures undergoing irreversible processes, such as inelastic deformation. This makes it particularly attractive for integration into a multiscale simulation. The SCA method involves a two-stage process, an offline stage and an online stage. In the offline stage, a clustering algorithm is used to reduce the overall degrees of freedom (DOF) of the RVE, resulting in a reduced order RVE. In the online stage, the reduced order RVE is utilized for solving the discrete incremental Lippmann-Schwinger integral equation to obtain the stress and strain fields in the reduced order RVE. This efficient method has been used for simulations of 2-dimensional (2D), two-phase composites and of 3-dimensional (3D) hard-inclusion materials considering nonlinear, elastoplastic damage softening effects, as well as for computations on polycrystalline materials. These simulations have demonstrated good efficiency and accuracy.
In this exemplary example, the reduced order modeling process of woven composites by SCA is discussed and the results are compared with FEM. Moreover, the multiscale framework is presented for a woven composite. Based on this framework, the part scale mechanical response, whether linear or nonlinear, can be predicted efficiently using only the fiber material and matrix material laws.
Methodology and Framework
SCA Method for a Woven RVE at the Mesoscale Level
A woven composite material is constructed by interweaving yarns in two directions and then filling the weave with an epoxy matrix material. The effective elastic properties of an individual yarn are predicted using a UD RVE based on the constituent properties of the fiber and matrix materials (see
FEM×SCA Concurrent Multiscale Framework
Two scales, the macroscale and mesoscale, are utilized in the concurrent multiscale framework in the study (
The load is applied to the structural scale model. At each integration point in the macroscale elements, the strain increment will be passed from the FEM model to the SCA model. This strain increment is applied to the corresponding woven RVE, and the SCA method is used to solve the woven RVE problem and return the stress increment to the FEM solver. The algorithm for the concurrent multiscale simulation of woven composite structures is summarized as follows.
- 1. Mesh the macroscale woven composites part using FEM;
- 2. Begin solution increments;
- 3. Compute integration point field variable from nodal values;
- 4. for i=1, N_IP (Loop over integration points);
- a. The macroscale strain increment ΔE is passed to user-defined subroutine as input data;
- b. Run online part of SCA to solve woven RVE subjected to ΔE using offline woven database;
- c. Compute the mesoscale strain increment Δe in every cluster in woven RVE domain;
- for j=1, N_CLU (Loop over all clusters in this corresponding woven RVE);
- Compute mesoscale stress increment Δσ using corresponding material model;
- end for (Obtain the response at all clusters);
- d. Check convergence of the reduced-order discrete incremental Lippmann-Schwinger integral equation, if not, update Δe using Newton-Raphson method and go to c, if yes, go to e;
- e. Compute macroscale stress increment ΔΣ by averaging Δσ in woven RVE domain and pass the macroscale stress back to the FEM solver;
- 5. end for (Obtain the response at all integration points); and
- 6. Check convergence of the FEM part, if not, update nodal values and go to step 3.
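For illustration, steps a-e above can be exercised in a deliberately minimal scalar analog (one strain component per cluster) of the online stage, in which a fixed-point sweep stands in for the Newton-Raphson update of step d; the function name, the 1-D reduction, and the fixed-point scheme are all illustrative assumptions, not the actual subroutine.

```python
import numpy as np

def sca_online_scalar(dE, C, D, C0, vf, tol=1e-12, max_iter=200):
    """Scalar analog of the reduced-order Lippmann-Schwinger solve:
    find cluster strains eps satisfying
        eps_I + sum_J D_IJ * (C_J*eps_J - C0*eps_J) = dE,
    then volume-average the cluster stresses (step e)."""
    eps = np.full(len(C), dE, dtype=float)       # steps a-b: initial guess
    for _ in range(max_iter):
        sig = C * eps                            # step c: cluster stresses
        new_eps = dE - D @ (sig - C0 * eps)      # step d: residual sweep
        if np.max(np.abs(new_eps - eps)) < tol:
            eps = new_eps
            break
        eps = new_eps
    return float(vf @ (C * eps)), eps            # step e: averaged stress
```

With the interaction terms switched off (D = 0) every cluster sees the macroscale strain and the averaged stress reduces to the rule of mixtures, which gives a convenient sanity check.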
As the flowchart shows, the SCA online algorithm can be implemented as a user-defined subroutine, which can be integrated with most commercial FEM software packages. In this way, the FEM×SCA multiscale framework can be adapted to arbitrary structural geometry and arbitrary woven architecture. Note that the cluster geometry is not required to be regular, which makes the method effective for complex microstructures.
Verification of SCA for Woven RVE
Geometry Model
In the family of woven composites, plain weave composites are widely used for ease of manufacturing. A plain weave composite is selected as an example to demonstrate and verify the SCA method at the RVE level.
The cross sectional shape and longitudinal shape are respectively modeled as
where a and b are respectively the long and short axes of the elliptical cross section, and A is the amplitude of the sine function in Eq. (7-1). The three-dimensional geometric model of the woven RVE is defined by five parameters, which are also shown in
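As a purely geometric illustration, a sinusoidal yarn path and an elliptical cross section can be parameterized as below; the wavelength parameter and the function names are assumptions for this sketch, since Eq. (7-1) itself specifies the exact forms used in the model.

```python
import math

def yarn_centerline_height(x, A, wavelength):
    """Sinusoidal undulation of the yarn path, z = A*sin(2*pi*x/wavelength).
    (Illustrative parameterization; the wavelength is an assumed
    parameter, not one of the document's five RVE parameters.)"""
    return A * math.sin(2.0 * math.pi * x / wavelength)

def cross_section_point(a, b, theta):
    """Point on the elliptical yarn cross section with long axis a
    and short axis b: (a*cos(theta), b*sin(theta))."""
    return (a * math.cos(theta), b * math.sin(theta))
```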
Material Properties and Constitutive Model
A nonlinear epoxy plastic material model is used to model the polymer matrix. The yield function is written as
ƒ(σ,σc,σt)=6J2+2I1(σc−σt)−2σcσt (7-2)
where σ is the Cauchy stress tensor, I1=tr(σ) is the first invariant of the Cauchy stress tensor, J2=(1/2)s:s is the second invariant of the deviatoric stress s=σ−(I1/3)I, and σt and σc are the yield strengths in tension and compression, respectively.
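Eq. (7-2) can be checked numerically: by construction, a uniaxial tensile stress of magnitude σt and a uniaxial compressive stress of magnitude σc both lie exactly on the yield surface (f = 0). The sketch below evaluates f for a 3×3 Cauchy stress tensor; the function name is illustrative.

```python
import numpy as np

def epoxy_yield_function(sigma, sigma_c, sigma_t):
    """f = 6*J2 + 2*I1*(sigma_c - sigma_t) - 2*sigma_c*sigma_t  (Eq. (7-2));
    f = 0 on the yield surface, f < 0 inside it."""
    I1 = np.trace(sigma)
    s = sigma - (I1 / 3.0) * np.eye(3)        # deviatoric stress
    J2 = 0.5 * np.tensordot(s, s)             # second deviatoric invariant
    return 6.0 * J2 + 2.0 * I1 * (sigma_c - sigma_t) - 2.0 * sigma_c * sigma_t
```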
g(σ,σc,σt)=6J2+2αI1(σc−σt)−2σcσt (7-3)
where α=(9/2)(1−2νplas)/(1+νplas) and νplas is known as the plastic Poisson's ratio. Thus, the flow rule is given by
{dot over (ε)}p={dot over (γ)}(∂g/∂σ)
where {dot over (γ)} represents the time derivative of the plastic multiplier. The evolution of the yield strengths in tension and compression are written as
σt=σt(εp) (7-4)
σc=σc(εp) (7-5)
where σt(εp) and σc(εp) are hardening functions of the equivalent plastic strain εp, calibrated from experimental stress-strain data for the epoxy.
A transversely isotropic elastic material model is considered for the fibers. In addition, the elastic properties are listed in Table 7-2. The fiber volume fraction is assumed to be 60%. The subscripts 1, 2 and 3 indicate the local material orientation (see
The effective material properties of the yarn are predicted using a UD RVE (
Clustering Process for the Woven Mesoscale RVE
The matrix material is isotropic, which requires that the clustering only be conducted once, based on the Am tensor. After this procedure, the material points with the most similar Am tensor will be grouped into the same clusters.
The clustering process for the yarn material will be more complex, as illustrated in
Results and Discussion
Uniaxial tension and pure shear responses are calculated using the SCA method.
Macroscale Anisotropic Yield Surface Prediction
A nonlinear epoxy elastic-plastic material law is considered for the matrix, which results in the overall elastic-plastic behavior for woven RVE. The yield stress of the elastic-plastic material is an important property for material selection and design of composite structures. A yield surface is developed to evaluate material yielding under various loading conditions. The anisotropic Hill yield criterion is considered in this example for woven composites. The homogenized material law can be efficiently predicted using SCA based on the epoxy elastic-plastic material law for the matrix and the elastic material law for the yarn. The quadratic Hill yield criterion has the following form:
F(σyy−σzz)2+G(σzz−σxx)2+H(σxx−σyy)2+2Lσyz2+2Mσzx2+2Nσxy2=1 (7-6)
where F, G, H, L, M, N are constants characteristic of the yield surface, which are traditionally determined by burdensome experiments. Additionally, some experiments are difficult to perform, such as the out-of-plane tension test. In this exemplary example, these parameters are predicted using the SCA method, which significantly reduces the computational cost and improves the efficiency. If Yxx, Yyy, Yzz are the tensile yield stresses in the principal anisotropic directions, it can be shown that:
F=(1/2)(1/Yyy2+1/Yzz2−1/Yxx2), G=(1/2)(1/Yzz2+1/Yxx2−1/Yyy2), H=(1/2)(1/Yxx2+1/Yyy2−1/Yzz2) (7-7)
If Yyz, Yzx, Yxy are the yield stresses in shear with respect to the principal axes of anisotropy, then
L=1/(2Yyz2), M=1/(2Yzx2), N=1/(2Yxy2) (7-8)
By taking advantage of the symmetrical features of the woven RVE, only four orthogonal loading conditions are applied to the RVE; the responses are calculated using the SCA method. The tangent stiffness is computed at each point from the stress-strain response, and the yield points are identified by evaluating the change in the tangent stiffness. As a result, the values of yield stress in six directions are obtained. In addition, the Hill constants in Eq. (7-6) are calculated from the values of yield stress using Eq. (7-7) and Eq. (7-8). The six-dimensional yield surface described by Eq. (7-6) can be difficult to visualize, but by selecting three components at a time (and setting the other three components to zero), this six-dimensional yield surface is plotted in three-dimensional space, see
For the nonlinear computation of woven composite structures, this anisotropic Hill yield surface can be used as a criterion to instantaneously identify the onset of the plastic deformation under various loading conditions.
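The Hill-constant identification and the yield check can be sketched directly from the standard Hill '48 relations referenced as Eqs. (7-7) and (7-8); in the isotropic sanity check below, a uniaxial stress equal to the tensile yield stress gives a Hill value of exactly 1.

```python
def hill_constants(Yxx, Yyy, Yzz, Yyz, Yzx, Yxy):
    """Hill constants from the six directional yield stresses
    (the standard Hill '48 identities, Eqs. (7-7) and (7-8))."""
    F = 0.5 * (1 / Yyy**2 + 1 / Yzz**2 - 1 / Yxx**2)
    G = 0.5 * (1 / Yzz**2 + 1 / Yxx**2 - 1 / Yyy**2)
    H = 0.5 * (1 / Yxx**2 + 1 / Yyy**2 - 1 / Yzz**2)
    L = 1.0 / (2.0 * Yyz**2)
    M = 1.0 / (2.0 * Yzx**2)
    N = 1.0 / (2.0 * Yxy**2)
    return F, G, H, L, M, N

def hill_value(stress, F, G, H, L, M, N):
    """Left-hand side of Eq. (7-6); yielding onsets when it reaches 1.
    stress = (sxx, syy, szz, syz, szx, sxy)."""
    sxx, syy, szz, syz, szx, sxy = stress
    return (F * (syy - szz)**2 + G * (szz - sxx)**2 + H * (sxx - syy)**2
            + 2 * L * syz**2 + 2 * M * szx**2 + 2 * N * sxy**2)
```

In the multiscale computation, such a check is what allows the onset of plastic deformation to be identified instantaneously for an arbitrary stress state.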
The present workflow shown in
Multiscale Simulation Convergence Study
An RVE convergence study is first conducted to quantify the effect of RVE size on the stress-strain response (
T-Shaped Hooking Structure Analysis
Woven composites are generally made of multiple layers for industrial application. A T-shaped hooking structure is a common geometry for connecting different composite parts. In this example, multiscale simulation is used to capture the macroscale and mesoscale fields in different layers during cyclic bending of the T-shaped hooking structure. The structure and the loading condition are depicted in
For the woven mesoscale RVE, 64 clusters in the matrix and 32 clusters in the yarns are considered for the SCA calculation, while the eight-node continuum brick element with reduced integration (ABAQUS element C3D8R) is used for the FEM calculation. The macroscale behavior is determined by the microstructural morphologies and the mesoscale constitutive equation of each cluster. The SCA material database is first generated during the offline stage, which makes the multiscale simulation more efficient.
This numerical study is implemented with an ABAQUS VUMAT User Subroutine, and the discrete incremental Lippmann-Schwinger equations are solved using Intel Math Kernel Library (MKL) FORTRAN codes. This numerical example is run on an Intel® Xeon® processor with 48 cores and 128 GB of memory.
The computational results are presented in
The above numerical study demonstrates the advantages of the proposed multiscale simulation framework. The stress and strain fields can be captured at both the macroscale and the mesoscale, including nonlinear effects that are difficult to observe experimentally. As a result, this framework establishes the connection between the microstructure and the macroscale response of the composite structure. When the woven microstructure is modified but the yarn and matrix materials remain unchanged, no additional experiments are needed to calibrate the constitutive equations; only the SCA offline database needs to be updated. This reduces the cost and improves the efficiency of finding the optimal microstructure for a specific structure. Given the efficiency of the SCA online stage, larger composite structures can be analyzed using this framework.
In this exemplary example, a woven composite multiscale modeling framework based on Self-consistent Clustering Analysis (SCA) is established. A two-stage reduced-order model for the woven composite, represented as an RVE, is developed. The offline data compression stage uses a clustering technique to reduce the overall degrees of freedom in the RVE domain: material points with similar mechanical responses are grouped into clusters, and an interaction tensor linking the clusters is then computed, generating a woven RVE microstructure database. In the online stage, Newton-Raphson iteration solves the reduced-order discrete incremental Lippmann-Schwinger integral equation; it exhibits rapid convergence for both linear and nonlinear material laws.
The woven multiscale modeling approach provides two attractive features: 1) Given the woven microstructure, the online stage can use different material laws for the matrix and yarn phases to compute the woven microstructure response. For example, for temperature-dependent material properties, the woven behavior at different operating temperatures can be computed efficiently. 2) Given the same constituent properties, one only needs to update the offline database to incorporate different weave structures, such as plain weave, twill weave, or satin weave.
The woven multiscale modeling framework has various potential applications; two important ones are illustrated in the present study:
Rapid yield surface generation for woven design against yielding. The yield surface generation workflow can be used to investigate whether the woven composite would be free of plastic deformation under possible loading conditions. Note that it can be easily extended to a failure surface for woven design against failure.
FEM×SCA woven laminate modeling framework, which captures macroscale (FEM mesh) and mesoscale mechanical behavior simultaneously during the analysis. The mesoscale field evolution can be tracked as the load increases, building the bridge between microstructure and macro-response. Based on the FEM×SCA framework, damage and failure models can be built at the mesoscale, taking more mechanisms into consideration, and even composite-structure-level failure analysis can be conducted. Compared to traditional phenomenological constitutive relations, these mesoscale-mechanism-based constitutive relations do not require complex mathematical formulas and numerous parameters. In addition, this framework can be extended to larger-scale structural analysis with complex loading conditions. Finally, this work provides an efficient methodology and framework for solving woven composite multiscale problems.
Example 8 Self-Consistent Clustering Analysis for Multiscale Modeling at Finite Strains
Accurate and efficient modeling of microstructural interaction and evolution for the prediction of the macroscopic behavior of materials is important for material design and manufacturing process control. This study approaches this challenge with the reduced-order SCA method, reformulated for general elasto-viscoplastic materials under large deformation. Its accuracy and efficiency in predicting the overall mechanical response of polycrystalline materials are demonstrated by comparison with traditional full-field solution methods such as finite element analysis and the fast Fourier transform. It is shown that the reduced-order method enables fast prediction of microstructure-property relationships with quantified variation. The utility of the method is demonstrated by conducting a concurrent multiscale simulation of a large-deformation manufacturing process with sub-grain spatial resolution while maintaining reasonable computational expense. This method could be used for the design of microstructure-sensitive properties as well as process parameter optimization.
The process-microstructure-property chain relationship plays an important role in the development of new materials, particularly in the practice of computational material design or integrated computational materials engineering (ICME), which relies heavily on microstructure-based models. These models, once calibrated, can be used to explore a larger material design space than traditional trial-and-error methods. Classical micromechanics includes analytical models, for instance, the self-consistent method and the Mori-Tanaka method. These models are efficient but require stringent idealizing assumptions about microstructure morphologies and/or interactions. The last decades have seen a rise in detailed modeling of manufacturing processes, microstructure evolution, and resulting mechanical properties, supported by the development of computational mechanics as well as powerful computers. However, the speed of ICME deployment is still limited by the computational complexity of the models involved. Therefore, there is substantial interest in data-driven reduced-order models, which have the promise of providing high accuracy without the computational expense associated with the detailed models used heretofore.
These reduced-order models have two important features: (1) degrees of freedom (DoFs) are significantly reduced; and (2) some data are precalculated during the offline stage so that they can be repeatedly used for iterations during the online stage. One example is the transformation field analysis (TFA) method. It decomposes local deformation into elastic deformation and transformation deformation (or inelastic deformation). The elastic deformation is determined by a precalculated elastic strain concentration tensor. The transformation deformation is assumed to be uniform in each material phase. In this way, the DoFs are reduced to the order of the number of phases involved in a specific problem. Thus this method is faster than traditional full-field approaches, but fails to accurately capture intraphase heterogeneous mechanical response. Nonuniform transformation field analysis (NTFA) alleviates this problem by interpolating pre-calculated transformation field data (or modes) under some predefined loading paths; this achieves higher accuracy than TFA with DoFs on the order of the number of modes used. Other data-driven methods employ similar ideas to reduce DoFs; for example, proper orthogonal decomposition (POD) interpolates the displacement field and calculates the modes more efficiently with singular value decomposition. However, these methods share the same restriction: many expensive offline calculations are needed to obtain representative modes for highly nonlinear material behavior such as plasticity and finite strain.
Recently, the SCA proposed by Liu et al. has been shown to maintain high accuracy and efficiency even with these more challenging loading conditions. To do this, SCA uses a clustering-based data compression technique for order reduction and a self-consistent iterative scheme to solve the Lippmann-Schwinger equation accurately. Some recent developments of this method include theoretical analysis of convergence of SCA, applications in toughness design of particle reinforced composites, damage process of elasto-plastic strain softening materials, and fatigue life prediction for a NiTi alloy. However, most of these studies are limited to small strain problems and only consider interacting microstructures with a relatively small number of features, such as homogeneous matrix, inclusions and/or voids. The finite strain SCA was used to model inclusion breakage during the drawing process. In practice, the constituents in the matrix phase should also be considered if their characteristic length is comparable to inclusions and voids. For instance, metallic alloys are composed of grains with various morphology, size, and lattice orientation as well as possible defects such as precipitates or voids. Although the self-consistent and eigen-deformation based methods have been used to model interacting polycrystals, their limitations, as outlined above, remain. We recently coupled a small-strain SCA formulation with a crystal plasticity model, but it neglects the rotation components of the deformation and thus is not suited to finite strains.
This exemplary study tackles the above limitations by generalizing SCA to the finite strain case and demonstrates its accuracy and efficiency for predicting macroscopic mechanical response of heterogeneous elasto-viscoplastic materials, e.g., polycrystalline materials. Enabled by this approach, two case studies that would involve impractically vast computational time otherwise are presented for a Titanium alloy: the first one quantifying a microstructure-property relationship with uncertainty, and the second one predicting texture evolution of a thick plate during rolling through concurrent multiscale simulation.
The following sections discuss the data-driven mechanistic SCA method for finite-strain MVE problems, the accuracy and efficiency of SCA under finite strains and compared to reference solutions using the finite element method (FEM) and the FFT method. The algorithm, implementation, and procedures used to generate our results are also given.
Finite-Strain Self-Consistent Clustering Analysis
Microstructural Volume Element Problem
The equilibrium mechanical response of a MVE under far-field macroscopic loading can be described by a set of equations formulated in the undeformed configuration:
In these equations, P is the first Piola-Kirchhoff stress (PK1 stress), F is the deformation gradient, u is the displacement, X is a material point, and Ω is the MVE domain. Pure deformation type far-field loading F0 is assumed here for simplicity.
Cluster-Based Lippmann-Schwinger Equation
Under the assumption of periodic boundary conditions, the MVE problem given by Eq. (8-1) is equivalent to the Lippmann-Schwinger equation
F(X)+Γ0*(P(X)−C0:F(X))−F0=0,∀X∈Ω (8-2)
where Γ0 is the 4th order Green's operator associated with an arbitrary reference stiffness tensor C0 and * denotes convolution operation defined by
Γ0*(P−C0:F)=∫ΩΓ0(X−X′):(P(X′)−C0:F(X′))dΩ(X′). (8-3)
To solve Eq. (8-2) numerically, an MVE domain decomposition is necessary. Unlike traditional numerical methods such as FEM and FFT, which do this by defining a fine (relative to the minimum feature size) mesh, SCA employs a clustering-based domain decomposition method to be introduced. Here it is assumed that the MVE is decomposed into Nc non-overlapping sub-domains, called clusters hereafter. For the Jth cluster ΩJ, J=1, . . . , Nc, the characteristic function is defined as
χJ(X)=1 if X∈ΩJ, and χJ(X)=0 otherwise. (8-4)
Using characteristic functions, we approximate the deformation gradient and stress as
F(X)≈ΣJ=1NcFJχJ(X), P(X)≈ΣJ=1NcPJχJ(X), (8-5)
where FJ is the average deformation gradient and PJ is the average stress in the Jth cluster. Thus Eq. (8-2) can be approximated as
ΣJ=1NcFJχJ(X)+Γ0*(ΣJ=1Nc(PJ−C0:FJ)χJ(X))−F0=0,∀X∈Ω. (8-6)
Multiplying both sides of Eq. (8-6) with χI, I=1, . . . , Nc, and integrating over Ω gives
∫ΩχI(X)(ΣJ=1NcFJχJ(X)+Γ0*(ΣJ=1Nc(PJ−C0:FJ)χJ(X))−F0)dΩ=0. (8-7)
The cluster-based Lippmann-Schwinger equation is obtained by simplifying the above equation:
FI+ΣJ=1NcDIJ:(PJ−C0:FJ)−F0=0, I=1, . . . , Nc, (8-8)
where DIJ is the interaction tensor between the Ith cluster and the Jth cluster, given by
DIJ=(1/|ΩI|)∫Ω∫ΩχI(X)χJ(X′)Γ0(X−X′)dΩ(X′)dΩ(X). (8-9)
Here |ΩI| is the volume of the Ith cluster.
SCA solves the cluster-based Lippmann-Schwinger equation in two stages. In the offline stage, the deformation concentration tensor field (known as the strain concentration tensor under the small strain approximation) is prepared and used to determine the clusters that define the regions ΩJ,J=1, . . . , Nc (domain decomposition), then the interaction tensors among these clusters are calculated. These data will be used in the online stage to solve Eq. (8-8) together with local material laws.
The application of the cluster-wise approximation made in Eq. (8-5) results in the loss of deformation compatibility. This means that although the solution of the continuous Lippmann-Schwinger equation, Eq. (8-2), is independent of the choice of the reference stiffness tensor C0, the cluster-based approximation in Eq. (8-8) is not. To achieve higher accuracy, a self-consistent iterative scheme has been proposed to update C0 at each loading increment so that it approximates the effective tangent stiffness Ceff of the MVE. However, this necessitates updating the interaction tensor during the self-consistent iterative scheme, which would increase computation time in the online stage. Fortunately, most of the interaction tensor calculation can be done in the offline stage by enforcing C0 to be isotropic. The formulation of Ceff and its isotropic approximation are given below.
Offline Stage: Micromechanical Database, Clustering and Interaction Tensor Calculation
SCA reduces the degrees of freedom to be solved by taking advantage of the mechanical response similarity of material points in a MVE. This similarity is found by clustering the field data of some mechanical response. Generally, the deformation concentration tensor A(X) can be used. It is defined by
F(X)=A(X):F0, (8-10)
where F0 is the macroscopic deformation corresponding to the boundary conditions of the MVE, and F(X) is the local deformation at point X in the MVE domain Ω. In two dimensions, A(X) has (2×2)2=16 independent components, requiring direct numerical simulations (DNS) under four orthogonal loading conditions to define it uniquely. In three dimensions, A(X) has (3×3)2=81 independent components, requiring DNS under nine orthogonal loading conditions. However, for specific problems where the loading condition is known a priori, the deformation gradient (or strain) field of the same loading condition can be used for clustering. Once the clustering data are prepared, clustering methods, such as k-means clustering or self-organizing maps, can be used to find a predefined number of clusters.
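A minimal sketch of this offline domain decomposition is given below. It assumes each material point (voxel) is represented by a feature vector of its concentration-tensor (or strain) components collected from DNS; the compact k-means implementation with deterministic farthest-point initialization stands in for the library routines (k-means or self-organizing maps) mentioned above.

```python
import numpy as np


def kmeans_cluster(features, k, iters=50):
    """Group material points (rows of `features`, one row per voxel)
    with similar mechanical responses into k clusters via k-means."""
    features = np.asarray(features, dtype=float)
    # deterministic farthest-point initialization of the k centers
    centers = [features[0]]
    for _ in range(1, k):
        dist = np.min([np.linalg.norm(features - c, axis=1) for c in centers],
                      axis=0)
        centers.append(features[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update each center as the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

The returned labels define the cluster regions ΩJ; each cluster's characteristic function is simply the indicator of its label over the voxel grid.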
The interaction tensor has to be recalculated every time C0 is updated during the self-consistent scheme. However, most of the calculation effort can be done in the offline stage if C0 is isotropic. An isotropic reference stiffness tensor can be expressed as
C0klmn=λ0δklδmn+2μ0δkmδln, (8-11)
where μ0 and λ0 are reference Lamé constants. The corresponding Green's operator Γ0 can be decomposed into two parts:
Γ0=c1(μ0,λ0)Γ1+c2(μ0,λ0)Γ2. (8-12)
Here, c1 and c2 depend on λ0 and μ0:
The terms Γ1 and Γ2 have simple forms in the Fourier frequency space:
where ξ is a Fourier frequency point and |ξ|=√(ξiξi). Thus, the interaction tensor can be expressed as
DIJ=c1D1IJ+c2D2IJ. (8-15)
where
Notice in Eqs. (8-14) and (8-16) that DwIJ, w=1, 2, do not depend on the two parameters λ0 and μ0 and thus need only be calculated once, which is done in the offline stage. If the MVE can be represented by a regular grid (i.e., voxels), the convolution in Eq. (8-16) can be obtained with relatively little computational effort using an FFT algorithm:
Γw*χJ=ℱ−1(Γ̂w ℱ(χJ)), (8-17)
where ℱ is the FFT operation and ℱ−1 is its inverse.
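The FFT evaluation of Eq. (8-17) on a periodic voxel grid can be sketched as follows. A generic frequency-space kernel stands in for Γ̂w, whose full tensor expression from Eq. (8-14) is not reproduced here.

```python
import numpy as np


def convolve_periodic(kernel_hat, chi):
    """Evaluate a periodic convolution (Gamma_w * chi) on a voxel grid
    per Eq. (8-17): multiply in Fourier frequency space, then invert.
    kernel_hat: the kernel already expressed in frequency space, with the
    same grid shape as the characteristic function chi."""
    return np.real(np.fft.ifftn(kernel_hat * np.fft.fftn(chi)))
```

For each cluster pair (I, J), the interaction entries follow by averaging χI·(Γw*χJ) over the grid and normalizing by the cluster volume, as in Eq. (8-16).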
Procedure for Polycrystalline Microstructure-Property Database Generation
In certain embodiments, the implementation procedure for generating a polycrystalline microstructure-property database includes the following steps.
- 1. If not using images, set microstructure descriptors; else
- (a) load 3D images;
- (b) measure microstructure descriptors;
- 2. Run Dream.3D pipelines to generate M MVEs;
- 3. Initialize m=1;
- 4. Set number of clusters per grain k for MVE m;
- 5. For MVE m, run CPSCA offline calculations:
- (a) if k=1, go to (d); else continue;
- (b) elastic strain concentration calculation using FFT;
- (c) domain decomposition using k-means clustering;
- (d) interaction tensor calculation using FFT;
- 6. Set loading conditions;
- 7. Run CPSCA online subroutine;
 - 8. Evaluate and add effective properties to the database, and m←m+1; and
- 9. Repeat steps 4-8 until m=M.
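The procedure above can be organized as a simple driver loop. The sketch below is illustrative only: all function arguments are hypothetical stand-ins for the Dream.3D and CPSCA stages named in the steps.

```python
def build_database(mves, cluster_counts, loadings, run_offline, run_online):
    """Driver for the microstructure-property database procedure.
    mves:           list of generated MVEs (step 2)
    cluster_counts: clusters-per-grain setting for each MVE (step 4)
    loadings:       loading conditions to apply (step 6)
    run_offline:    stand-in for the CPSCA offline stage (step 5)
    run_online:     stand-in for the CPSCA online stage (step 7)"""
    database = []
    for m, mve in enumerate(mves):                      # steps 3 and 9
        k = cluster_counts[m]                           # step 4
        offline_data = run_offline(mve, k)              # step 5
        for load in loadings:                           # step 6
            props = run_online(offline_data, load)      # step 7
            database.append({"mve": m, "load": load,    # step 8
                             "props": props})
    return database
```

Each database row records which MVE and loading condition produced the evaluated effective properties, so the database can later be queried by microstructure descriptor.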
Online Stage: Self-Consistent Scheme
For large deformation, the far field deformation gradient is applied incrementally. The incremental far field deformation gradient ΔF0 of the current loading step is defined by ΔF0=F0current−F0last. Then an incremental form of Eq. (8-8) is given by
ΔFI+ΣJ=1NcDIJ:(ΔPJ−C0:ΔFJ)−ΔF0=0, I=1, . . . , Nc, (8-18)
where ΔFJ and ΔPJ are the local incremental deformation gradient and PK1 stress, respectively. Since ΔPJ can be determined as a function of ΔFJ through a local constitutive law in the Jth cluster, the unknowns for Eq. (8-18) are {ΔF}={ΔF1, ΔF2, . . . , ΔFNc}, which are solved for with Newton's method. The residual for the Ith cluster is
rI=ΔFI+ΣJ=1NcDIJ:(ΔPJ−C0:ΔFJ)−ΔF0, I=1, . . . , Nc. (8-19)
The system Jacobian {M} is defined component-wise as
MIJ=δIJI4+DIJ:(CJ−C0), (8-20)
where CJ=∂ΔPJ/∂ΔFJ is the tangent stiffness tensor of the material in the Jth cluster and is an output of the local constitutive law in the cluster for the current loading increment. I4 is a 4th rank identity tensor defined by I4,klmn=δkmδln, and δIJ is the Kronecker delta in terms of indices I and J. The Newton's method update for the incremental deformation gradient can then be expressed as
{δF}=−{M}−1{r}. (8-21)
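A sketch of the Newton step in Eqs. (8-20) and (8-21) with the cluster blocks flattened to matrices is given below. It assumes 4th-order tensors are stored as d×d matrices (d=9 in 3D for the non-symmetric deformation gradient); the actual implementation described in the text is a FORTRAN subroutine, so this is illustrative only.

```python
import numpy as np


def assemble_jacobian(D, C, C0):
    """Assemble the block Jacobian of Eq. (8-20),
    M_IJ = delta_IJ I4 + D_IJ : (C_J - C0),
    with 4th-order tensors flattened to d x d matrices.
    D:  (Nc, Nc, d, d) interaction tensors
    C:  (Nc, d, d) per-cluster tangent stiffness
    C0: (d, d) reference stiffness"""
    Nc, _, d, _ = D.shape
    I = np.eye(d)
    M = np.zeros((Nc * d, Nc * d))
    for a in range(Nc):
        for b in range(Nc):
            blk = D[a, b] @ (C[b] - C0)
            if a == b:
                blk = blk + I
            M[a * d:(a + 1) * d, b * d:(b + 1) * d] = blk
    return M


def newton_update(M, r):
    """One Newton step, Eq. (8-21): solve {M}{dF} = -{r}.
    r has one d-vector residual per cluster."""
    return np.linalg.solve(M, -r.ravel()).reshape(r.shape)
```

In the degenerate case of vanishing interaction (D=0), the Jacobian reduces to the identity and the update is simply the negative residual, which is a convenient sanity check.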
As mentioned, a self-consistent iterative scheme is also necessary to update the isotropic C0 at each loading increment so that it approximates the effective tangent stiffness Ceff of the MVE at that loading step. The general Ceff can be obtained by noting that
ΔP0=Ceff:ΔF0, Ceff=ΣI=1NcvICI:AI, (8-22)
where ΔP0 is the incremental far field PK1 stress of the current loading step given by ΔP0=ΣI=1NcvIΔPI, and AI=∂ΔFI/∂ΔF0 is the deformation gradient concentration tensor of the Ith cluster for the current loading increment. For each increment, this is computed by noting that the differential form of Eq. (8-18) is d{ΔF0}={M}d{ΔF}, which gives d{ΔF}={M}−1d{ΔF0}, where {ΔF0}={ΔF0; . . . ; ΔF0} has Nc blocks of ΔF0. Denoting BIJ as the IJ component of the inverse of the system Jacobian, {B}={M}−1, then AI=ΣJ=1NcBIJ.
The isotropic C0 could be obtained by minimizing ∥ΔP0−C0:ΔF0∥2, where ∥Z∥2=Z:Z for an arbitrary second-order tensor Z. The drawback of this method is that the optimization problem is under-determined under pure shear or hydrostatic loading conditions. In this exemplary example, the approximation is instead done by projecting the Ceff of the MVE onto a 4th rank isotropic tensor. This is done by first expressing the isotropic C0 as
C0=(3λ0+2μ0)J+2μ0K, (8-23)
where the fourth-rank tensors J and K are defined as
J=⅓I2⊗I2 and K=I4−J. (8-24)
Then by noting that
J::K=0,J::J=1, and K::K=8, (8-25)
the two Lamé constants are obtained as
where :: defines the double contraction operation.
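The isotropic projection of Eqs. (8-23) to (8-26) can be sketched directly from the invariants J::Ceff and K::Ceff. With the non-symmetrized identity I4,klmn=δkmδln used in the text, Eq. (8-25) implies J::C0=3λ0+2μ0 and K::C0=16μ0, which gives the two Lamé constants below.

```python
import numpy as np


def project_isotropic(Ceff):
    """Project a 4th-rank stiffness tensor (3x3x3x3) onto the isotropic
    form of Eq. (8-23) and return the reference Lame constants.
    Uses J::C = 3*lam0 + 2*mu0 and K::C = 16*mu0, which follow from
    Eq. (8-25) with I4_klmn = d_km d_ln."""
    d = np.eye(3)
    I4 = np.einsum('km,ln->klmn', d, d)
    J = np.einsum('kl,mn->klmn', d, d) / 3.0
    K = I4 - J
    JC = np.einsum('klmn,klmn->', J, Ceff)   # = 3*lam0 + 2*mu0
    KC = np.einsum('klmn,klmn->', K, Ceff)   # = 16*mu0
    mu0 = KC / 16.0
    lam0 = (JC - 2.0 * mu0) / 3.0
    return lam0, mu0
```

Applying this projection to a stiffness that is already of the form in Eq. (8-11) recovers its Lamé constants exactly, which serves as a self-consistency check.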
Algorithm for the Online Stage of SCA
In certain embodiments, the algorithm for the online stage of SCA includes the following steps.
- 1. Initial conditions and initialization:
- (a) set n=0, {F}n={I2}, {P}n=0, {ΔF}n=0 and {ΔF}new={ΔF}n;
 - (b) call a UMAT subroutine to get CJ, J=1, . . . , Nc, and set the reference stiffness Cn0=ΣJ=1NcvJCJ;
- 2. For load increment n+1, update the interaction tensor parts {D1} and {D2};
- 3. Newton iterations:
- (a) call the UMAT subroutine to get {ΔP}new and {C};
- (b) compute the residual {r};
- (c) compute the system Jacobian {M}=∂{r}/∂{ΔF};
- (d) solve the linear equation {M}{δF}=−{r} for {δF};
 - (e) {ΔF}new←{ΔF}new+{δF};
 - (f) if maxJ=1, . . . , Nc ∥δFJ∥<tolnewton is not met, go to 3(a);
- 4. Calculate effective tangent stiffness Ceff and project it to a 4th rank isotropic tensor to obtain Cn+10;
- 5. If ∥Cn+10−Cn0∥<tolsc is not met, go to step 2;
 - 6. {F}n+1←{F}n+{ΔF}new, {P}n+1←{P}n+{ΔP}new, n←n+1, and update the state variables for the UMAT subroutine;
- 7. Repeat steps 2-6 until the simulation completes.
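The online algorithm above can be summarized as a skeleton of nested loops, with hypothetical callables standing in for the interaction-tensor update, the Newton solve (steps 3(a)-(f)), and the effective-tangent computation and isotropic projection.

```python
import numpy as np


def online_sca(n_incs, update_interaction, newton_solve,
               effective_tangent, project_iso, C0,
               tol_sc=1e-6, max_sc=50):
    """Skeleton of the online stage (steps 2-7). All callables are
    stand-ins: update_interaction(C0) rebuilds the interaction tensor,
    newton_solve(D, C0) returns the converged increment {dF},
    effective_tangent(dF) returns Ceff, and project_iso(Ceff) returns
    its isotropic approximation (as an array here, for simplicity)."""
    history = []
    for n in range(n_incs):
        for _ in range(max_sc):                      # self-consistent loop
            D = update_interaction(C0)               # step 2
            dF = newton_solve(D, C0)                 # step 3
            C0_new = project_iso(effective_tangent(dF))  # step 4
            converged = np.linalg.norm(C0_new - C0) < tol_sc  # step 5
            C0 = C0_new
            if converged:
                break
        history.append(dF)                           # step 6: accept increment
    return history, C0
```

With a fixed-point effective tangent, the self-consistent loop converges in two iterations, and one increment is accepted per pass of the outer loop.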
By formulating the MVE problem at finite strains, we are now able to correctly consider general elasto-viscoplastic constitutive laws of the type that rely on the multiplicative decomposition of the deformation gradient. One common application of such a law, known as crystal plasticity, which describes the mechanical behavior of a single crystal, is given below. The macroscopic response of a polycrystal aggregate predicted by SCA with such a material law is validated by comparing with two full-field methods. It is shown that SCA achieves comparable results with significantly reduced DoFs and computational expense.
Elasto-Viscoplastic Material Model
In the general elasto-viscoplastic material model, the local deformation gradient F is multiplicatively decomposed into elastic Fe and inelastic Fin contributions:
F=Fe·Fin. (8-27)
Fe is a combination of the elastic stretch and rigid body rotation, while Fin is associated with unrecoverable deformation mechanisms, such as dislocation slip and/or transformation plasticity. The elastic constitutive law is given by
Se=CSE:Ee=½CSE:[(Fe)T·Fe−I2], (8-28)
where Ee is the elastic Green-Lagrange strain, Se is the second Piola-Kirchhoff stress, CSE is the 4th order elastic stiffness tensor, and I2 is the 2nd order identity. Taking the inelastic term as solely the plastic deformation (Fin=Fp), the inelastic deformation gradient Fin can be determined using a plastic constitutive law that relates the plastic velocity gradient Lp=Ḟp·(Fp)−1 to the plastic shear rate γ̇α in slip system α through
Lp=Σα=1Nslipγ̇α(s0α⊗n0α). (8-29)
Here, s0α and n0α are unit vectors which define the slip direction and slip plane normal for slip system α in the undeformed configuration, Nslip is the number of active slip systems, and ⊗ is the dyadic product. In general, the plastic shear rate γ̇α in slip system α is taken to be a function of the resolved shear stress τα, the deformation resistance τ0α, and the back stress aα in that slip system. The resolved shear stress is given by
τα=σ:(sα⊗nα), (8-30)
where σ is the Cauchy stress, sα is the slip direction, and nα is the slip plane normal, all of which are defined in the deformed configuration. They are computed from their counterparts in the undeformed configuration with
It is possible to use any appropriate evolution law for γ̇α. In this work we choose to employ a power law for γ̇α given by
where γ̇0 is a reference shear rate, and m̃ is the exponent related to material strain rate sensitivity. The evolution laws for the deformation resistance τ0α (the isotropic hardening term) and the back stress aα (the kinematic hardening term) are given by
where H and h are direct hardening coefficients, and R and r are dynamic recovery coefficients. Note that in Eq. (8-33) we assume the latent hardening and self-hardening effects are identical. To account for the grain size effect on apparent properties, a Hall-Petch-type equation is introduced that relates the initial slip system deformation resistance in a grain, τ0α,t=0, to the equivalent sphere diameter (ESD) D of that grain with
where τ0,inα,t=0 denotes the intrinsic initial slip resistance, and Kα is the grain size strengthening coefficient. This equation approximates the impeding effect of grain boundaries on dislocation slip.
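The single-crystal relations above can be sketched as below. The exact expressions of Eqs. (8-32) and (8-34) are not reproduced in this text, so a standard power-law flow rule and an additive Hall-Petch form are assumed; all parameter values in the sketch are illustrative.

```python
import numpy as np


def resolved_shear(sigma, s, n):
    """Resolved shear stress, Eq. (8-30): tau = sigma : (s x n)."""
    return np.einsum('ij,i,j->', sigma, s, n)


def slip_rate(tau, a, tau0, gamma0=1e-3, m=0.02):
    """Power-law slip rate (assumed form of Eq. (8-32)):
    gdot = gamma0 * |(tau - a)/tau0|**(1/m) * sign(tau - a),
    with back stress a and deformation resistance tau0."""
    x = (tau - a) / tau0
    return gamma0 * np.abs(x) ** (1.0 / m) * np.sign(x)


def initial_resistance(tau0_in, K, D):
    """Hall-Petch-type initial slip resistance (assumed form of
    Eq. (8-34)): tau0(t=0) = tau0_in + K / sqrt(D), where D is the
    equivalent sphere diameter of the grain."""
    return tau0_in + K / np.sqrt(D)
```

The Hall-Petch term makes smaller grains (smaller D) start with a higher slip resistance, reproducing the grain-boundary strengthening trend discussed above.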
Given a deformation gradient increment ΔF, its corresponding increment of PK1 stress can be calculated following the numerical algorithm given in the literature, which also provides the tangent stiffness used to couple this material law with SCA.
Tensile Behavior of a Grain Aggregate
The material considered in this work is a fully transformed α-phase Titanium alloy, containing 24 active hexagonal close packed (HCP) slip systems: 3 <11
The microstructure considered is an idealized MVE of eight equal-sized 40 μm×40 μm×40 μm cubic grains, as shown in panel (a) of
To show the accuracy of CPSCA, the converged macroscale stress-strain curve from full-field methods is used as the reference solution. The converged solution is obtained by refining the mesh to 60×60×60, which results in 648,000 DoFs for CPFEM and 1,944,000 for CPFFT. Panels (a)-(b) of
CPSCA achieves high efficiency by solving only a few DoFs, without sacrificing much accuracy. To demonstrate this, the 0.2% plastic strain offset (σ0.2, near the yield point) and 0.4% offsets stresses (σ0.4, away from the yield point) are compared to the reference solution for different number of clusters. This difference is plotted in panel (a) of
One application of microstructure modeling is to evaluate the effective properties of a bulk material. For example, under monotonic tensile loading, a virtual tension test predicts the overall stress-strain curve directly through homogenization. From this, common scalar material properties such as effective elastic stiffness and yield strength can be extracted. Under cyclic loading, local stress and strain information can be used to evaluate fatigue life. Due to restrictions on the MVE size because of computational expense and microstructure randomness, uncertainty quantification of the predicted effective properties is needed. To achieve this, multiple realizations of the MVE homogenization are computed using the SCA method introduced above. A flowchart for generating a MVE-property database is shown in
Synthetic Microstructure Volume Elements
Although MVEs can be obtained by extracting volumetric information from 3D experimental images or grain growth simulations, as mentioned in the introduction, such an approach could be very expensive because many MVEs are needed to generate a microstructure-property database. More practically, a statistical distribution of microstructure feature parameters can be measured from routine, 2D experimental characterization and used to synthetically construct MVEs. To construct polycrystalline MVEs, we use the software package DREAM.3D, which includes tools to generate microstructures that adhere as nearly as possible to predetermined statistics of descriptors. This work focuses on varying two microstructure descriptors of polycrystalline materials: grain orientation distribution function and grain size distribution function.
The crystallographic orientation distribution function (ODF), also known as texture, defines the probability density ƒ(Q) of crystallites falling into an infinitesimal neighborhood around the orientation Q, which is often parameterized by Euler angles. In this work, we use the Bunge convention of Euler angles (ϕ1, Φ, ϕ2), which define subsequent rotations about the z-axis, then the new x-axis, and then the new z-axis again. In DREAM.3D, the orientation space is discretized into bins, and an ODF with strong texture can be generated by specifying some orientations, the corresponding weights (defined as multiples of a random distribution, MRD), and the number of bins over which the MRD value decreases quadratically to zero away from the bin of the entered orientation.
The grain size distribution function is assumed to be log-normal with the probability density function given by
where D is the equivalent sphere diameter (ESD) of a grain, σ is a scale parameter, and μ is the shape parameter. Other descriptors, such as the misorientation distribution function, aspect ratio, and number of neighbors, are also necessary to synthesize a complete microstructure realistically.
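A sketch of the grain size distribution handling is given below, assuming Eq. (8-35) is the standard log-normal density with μ and σ taken as the log-space parameters (the text's own naming of scale and shape parameters is retained only loosely, so this parameterization is an assumption).

```python
import numpy as np


def lognormal_pdf(D, mu, sigma):
    """Standard log-normal probability density in the ESD D
    (assumed form of Eq. (8-35)); mu, sigma are log-space parameters."""
    return (np.exp(-(np.log(D) - mu) ** 2 / (2.0 * sigma ** 2))
            / (D * sigma * np.sqrt(2.0 * np.pi)))


def sample_esd(mu, sigma, n, seed=0):
    """Draw n grain equivalent-sphere diameters from the distribution,
    as a synthetic-microstructure generator would."""
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=mu, sigma=sigma, size=n)
```

Under this parameterization the median ESD is exp(μ), so shifting μ shifts the average grain size while σ controls the spread, matching the role of the grain size study described below.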
Effect of Texture on Effective Properties
To show the effect of texture on yield strength, four texture cases were considered: no texture, (0,0,0) preferred, (90,45,0) preferred, and (90,90,0) preferred. Fifty cubic MVEs were synthetically generated using DREAM.3D with grain size distribution parameters μ=19.7 μm and σ=2.7 μm for each of the four texture cases. Each MVE has around 90 equiaxed grains represented in an 81×81×81 voxel mesh with a resolution of 1 μm×1 μm×1 μm per voxel. Example MVEs for these texture cases are given in
Effect of Grain Size on Effective Properties
The effect of grain size on yield strength is studied by setting σ in Eq. (8-35) such that four different average grain sizes are generated between 10 μm and 40 μm. Fifty MVEs of equiaxed grains without texture are generated using DREAM.3D for each σ value while keeping all other parameters the same. Each MVE is represented in an 81×81×81 voxel mesh. Sample MVEs with different grain sizes are given in
Another type of application for high-efficiency, microstructure-based modeling is the simulation of finite-deformation processes, in which microstructure evolves extensively and gives rise to complex macroscopic behaviors. For such applications, it is often hard to find a simple-form phenomenological constitutive model at the macroscale. Moreover, a phenomenological constitutive model has to be recalibrated for new materials in which microstructures are different. In this case study, it will be shown that the speed of the CPSCA-based microstructure modeling makes possible a concurrent multiscale model for applications where finite strains at the microscale are crucial. Our exemplar is the metal rolling process. Texture evolution is important at the microscale, and it depends on the macroscale loading and distribution of stress throughout the part. To capture all this, a multiscale method is required.
A schematic of the rolling process of a thick plate is shown in panel (a) of
For the macroscale problem, the implicit time integration method was employed so that larger time increments could be used. Coulomb friction was assumed between the roller and the plate, with a friction coefficient of 0.3 for the plate being pulled through the roll stand. The thick plate was given an initial velocity of 1.07 m/second to reduce the impact between the plate and the roller, which might otherwise cause numerical difficulty. By taking advantage of symmetry, only 1/4 of the plate and a single roller were modeled. The 1/4 plate was meshed with 2994 uniform hexahedral elements, and the roller was modeled with sufficiently fine rigid elements. Reduced integration was used with stiffness-based hourglass control. For the microscale problem, the MVE was chosen to include 90 randomly oriented equiaxed grains (panel (b) of
The simulation was stopped at rolling time 0.1 seconds. Panels (a)-(b) of
Another advantage of the concurrent multiscale simulation is that microstructure evolution is solved for the whole manufacturing process.
By using the Abaqus MPI parallelization with 72 cores (on three nodes, each with two Intel Haswell E5-2680v3 2.5 GHz 12-core processors), the concurrent simulation takes approximately 112 hours.
In sum, a finite-strain self-consistent clustering analysis is developed and applied to model interacting and evolving microstructures for general elasto-viscoplastic materials. The method is reformulated in the initial (Lagrangian) configuration so that large deformation problems can be solved. The accuracy and efficiency obtained for the prediction of the overall mechanical response of polycrystalline materials are demonstrated by comparing with both finite element analysis and the fast Fourier transform-based method. It is shown that CPSCA achieves high accuracy with significantly reduced degrees of freedom.
In our case studies, grains are resolved explicitly in voxel-based MVEs reconstructed with predefined microstructure descriptors. These MVEs are used to predict with quantitative uncertainty the influence of texture on yield strength, and grain size on yield strength. Finally, a concurrent multiscale simulation of a rolling process shows the heterogeneous microstructure evolution throughout the rolled part. This is made possible by the efficiency of this method. Potential applications include simulation-driven microstructural design and manufacturing process control.
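The clustering step that underlies this reduction can be illustrated with a minimal sketch. The code below groups MVE voxels by the similarity of their offline-computed response data using plain k-means; the data, dimensions, and cluster count are synthetic stand-ins, not the actual CPSCA solver.

```python
import numpy as np

def kmeans(data, k, iters=50):
    """Plain k-means; returns a cluster label for every voxel."""
    # deterministic initialization: k points spread across the data set
    centers = data[np.linspace(0, len(data) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each voxel to its nearest cluster center
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(dists, axis=1)
        # move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels

# Synthetic "offline stage": 6-component strain vectors for 1000 voxels drawn
# from two distinct local responses (e.g. two grain orientations).
rng = np.random.default_rng(1)
voxels = np.vstack([rng.normal(0.0, 0.1, (500, 6)),
                    rng.normal(1.0, 0.1, (500, 6))])
labels = kmeans(voxels, k=2)
# Each resulting cluster is treated as a single degree of freedom in the
# reduced model; cluster-cluster interaction tensors are computed afterwards.
print(np.bincount(labels))
```

In the actual method, each voxel's feature vector comes from elastic precomputations over a set of loading cases, and the number of clusters controls the accuracy/cost trade-off of the reduced model.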
Example 9 Modeling and Characterization of Integrated Computational Materials Engineering (ICME) Composites

1. Atomistically Informed Resin Infusion Model

In this work, predictive atomistic models of epoxy resins are developed, and the thermomechanical properties and their dependence on the molecular chemistry of the resin matrix are characterized, including resin functionality, crosslink degree, and component ratio, demonstrating the viability of using atomistic simulation to predict key material properties and trends. In addition, we present a hierarchical multiscale model in which MD simulation results are homogenized into a thermo-plastic law that describes the constitutive behavior of epoxy resins. This thermo-plastic law has been used in RVE modeling to predict the stiffness and strength of CFRP composites. Furthermore, we characterized the properties of the nanoscale interphase between carbon fibers and the resin matrix, integrated the interphase into the mechanistic continuum models for CFRP, and elucidated the explicit effect of the interphase region on the failure behavior of the composites, which generated insights to guide future design strategies for failure-resistant composites.
The superior thermomechanical properties of epoxy resins have led to a wide range of applications, most notably as matrix materials in fiber-reinforced composites. These excellent thermomechanical properties arise from the highly crosslinked molecular structures that the resins form. Nanoscale simulations of epoxy resins offer a promising way to characterize these properties and their dependence on molecular-level factors, such as resin type and crosslink density. Furthermore, a deep understanding of the dependence of thermomechanical properties on molecular-level structures is of critical importance for guiding future computation-based design of epoxy resins with optimized mechanical properties.
Atomistic molecular dynamics (MD) simulations of epoxy resins have been successfully applied to predict various material properties. Several computational algorithms have been developed to generate reasonable crosslinked structures for investigation of their physical properties. MD simulations have been carried out to predict the glass transition temperature (Tg) and have provided valuable insights into the effects of strain rate, temperature, and crosslink degree on Young's modulus and yielding behavior. Despite significant progress toward understanding epoxy thermomechanical response, multiscale models that can bridge length and time scales, and in particular couple the atomistic and continuum scales, remain a challenge.
To overcome this challenge, we first developed nanoscale models of representative epoxy resins by capturing the specific crosslinked structures. We then characterized elastic, yield, and post-yield behavior from MD simulations. After that, yield surfaces were generated from MD simulation results, which can be well described by a paraboloid yield criterion. Further, by adding plastic potential and hardening law, a thermo-plastic law was proposed to describe the constitutive behavior of epoxy resins. Along the way we also illustrated the dependence of thermomechanical properties of epoxy resins on molecular chemistry, such as epoxy type, component ratio, and crosslink density.
In addition, the interphase region that exists between fibers and resin matrices possesses heterogeneous chemical and physical features and has a thickness at the sub-micron scale. Despite being much smaller than the fiber diameter, the interphase region has been shown to play a critical role in the performance of CFRP composites. Accurate modeling or characterization for the interphase region remains a significant challenge. To overcome the hurdles encountered in nanoscopic experiments, efforts have been reported to characterize the interphase region using analytical models or MD analyses. However, there have been few studies that integrate the nanoscale interphase region with RVE modeling and study the effect of the interphase region on the macroscopic composite response. To address this issue, we first obtained the properties of the interphase region according to MD simulation results and a generic analytical gradient model. Then, the average property of the interphase region was incorporated into a modified RVE model, in which the three phases, fiber, matrix, and the interphase, were included. This modified RVE model was shown to improve significantly in predictions of the modulus and failure strength of the composites.
Generating the Realistic Crosslinked Structure of Epoxy Resins and Yield Surface Calculation
We chose two representative epoxy systems as our model systems: (1) an epoxy resin commercially known as Epon 825, comprising diglycidyl ether of bisphenol A (DGEBA) with the curing agent 3,3′-diaminodiphenyl sulfone (33DDS); and (2) an epoxy commercially denominated 3501-6, composed mainly of tetraglycidyl methylenedianiline (TGMDA) with the curing agent 4,4′-diaminodiphenyl sulfone (44DDS). We integrated the Polymatic algorithm with the LAMMPS package to simulate the crosslinking process. Covalent crosslink bonds were added between eligible atoms based on their pairwise separation distance, and after every few crosslink bonds formed, energy minimization and equilibration simulations were conducted with MD to alleviate the stress generated. This workflow was able to generate atomistic structures of epoxy resins with different crosslink degrees from different initial chemistries and component ratios.
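The distance-based bonding step of such a crosslinking workflow can be sketched as follows. This is a hedged, Polymatic-style illustration: candidate bonds form between reactive sites closer than a cutoff; coordinates, site counts, and the cutoff are all illustrative, and the real workflow interleaves each bonding round with MD minimization/equilibration in LAMMPS.

```python
import numpy as np

def crosslink_step(epoxide_xyz, amine_xyz, cutoff):
    """Greedily pair each epoxide site with the nearest free amine within cutoff."""
    bonds, used = [], set()
    for i, e in enumerate(epoxide_xyz):
        d = np.linalg.norm(amine_xyz - e, axis=1)
        d[list(used)] = np.inf          # amines already reacted are ineligible
        j = int(np.argmin(d))
        if d[j] < cutoff:
            bonds.append((i, j))
            used.add(j)
    return bonds

rng = np.random.default_rng(0)
epoxides = rng.uniform(0.0, 20.0, (30, 3))  # reactive carbons (angstroms, synthetic)
amines   = rng.uniform(0.0, 20.0, (30, 3))  # reactive nitrogens (synthetic)
bonds = crosslink_step(epoxides, amines, cutoff=4.0)
crosslink_degree = len(bonds) / len(epoxides)
print(f"{len(bonds)} new bonds, crosslink degree {crosslink_degree:.2f}")
```

Repeating this step while tracking the fraction of reacted sites yields structures at controlled crosslink degrees, which is how the different chemistries and component ratios above are compared.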
To obtain the yield surface of typical epoxy resins, the stress-strain responses of the Epon 825 model system were first calculated from the MD simulations at different temperatures and at a strain rate of 5×10⁸ s⁻¹. We note that the high strain rate is inherent to MD simulations, given the small time step used. During these simulations, proper thermostatting is applied to maintain the systems at the specified temperatures. The results for the uniaxial tensile and compressive loading cases are plotted in
The subsequently obtained yield surfaces for the model system at different temperatures are shown in
ƒ(σ, σT, σc) = 6J2 + 2I1(σc − σT) − 2σTσc = 0  (9-1)
where J2 is the second invariant of the deviatoric stress tensor, and I1 is the first invariant of the stress tensor. σT and σc denote the tensile and compressive yield stresses, respectively.
Due to the high strain rates at which MD simulations are performed, the yield stresses obtained are higher than the values obtained experimentally. Nevertheless, we further find that the experimental results on yield stresses can be well described by the same criterion as shown in Eq. (9-1).
Similar to other plasticity formulations, the thermo-plastic law disclosed herein is then defined by the yield surface, plastic potential, and hardening law as outlined next.
First, a plastic potential with a non-associative flow rule such that a positive volumetric plastic strain is prevented under hydrostatic pressure is defined as:
g = σvm² + αp²  (9-2)
where σvm=√{square root over (3J2)} is the von Mises equivalent stress, P=1/3 I1 is the hydrostatic pressure, and α is the material parameter to correct the volumetric component of the plastic flow, which equals to
with vp being the plastic Poisson's ratio. This thermo-plastic law of the resin matrix has been integrated into the mechanistic continuum models for CFRP with basic parameters informed by experimental results. By using this law, the characterized yield surfaces for the epoxy resins agree very well with experimental results, with the error less than 5%, thus achieving the goal and objective of the project.
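The potential g and its pressure-sensitivity parameter can be sketched numerically. The closed form α = 9(1 − 2νp)/(2(1 + νp)) used below follows from applying the flow rule of g to uniaxial tension and matching the plastic Poisson's ratio; the value of νp is illustrative.

```python
import numpy as np

def alpha_from_nu_p(nu_p):
    """Pressure-sensitivity parameter from the plastic Poisson's ratio."""
    return 9.0 * (1.0 - 2.0 * nu_p) / (2.0 * (1.0 + nu_p))

def plastic_potential(stress, alpha):
    """g = sigma_vm^2 + alpha * p^2 for a 3x3 Cauchy stress tensor."""
    i1 = np.trace(stress)
    dev = stress - i1 / 3.0 * np.eye(3)
    sigma_vm = np.sqrt(1.5 * np.tensordot(dev, dev))  # von Mises stress
    p = i1 / 3.0                                      # hydrostatic term
    return sigma_vm**2 + alpha * p**2

nu_p = 0.3                        # illustrative plastic Poisson's ratio
alpha = alpha_from_nu_p(nu_p)
print(alpha)                      # ~1.385
# With nu_p = 0.5 (incompressible plastic flow) alpha vanishes and g reduces
# to the von Mises potential:
print(alpha_from_nu_p(0.5))       # 0.0
```

The non-associative choice (flow from g rather than from the yield function) is what prevents unphysical volumetric plastic strain under hydrostatic pressure, as stated above.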
The framework for developing crosslinked epoxy resin structures as well as yield surface characterizations is generally applicable to other epoxy resin systems with different chemistries.
Dependence of Thermomechanical Properties on Molecular Chemistry
We have also studied the large-deformation behavior of epoxy resins and characterized their failure response at the atomistic level. During large deformation, bond-breaking events inevitably occur in the network structures of epoxy resins. To capture realistic bond-breaking phenomena, we adopted a reactive force field, which has been validated to preserve the elastic and plastic responses of the epoxy resins studied here. Stress-strain curves of 3501-6 epoxy systems with different crosslink degrees and component ratios are plotted in
Building upon the stress-strain curves from tensile simulations and the parameters quantifying the structural changes such as chain reorientation and void formation, we have linked this atomistic level failure response of resins to their macroscopic fracture properties on the basis of a continuum fracture mechanics model. This work provided physical insights into the molecular mechanisms that govern the fracture characteristics of epoxy resins and demonstrated the success of utilizing atomistic simulations toward predicting macroscopic fracture energies.
We note that the planned methodology for investigating the large-deformation and failure behavior of epoxy resins was to develop coarse-grained models, which could increase computational efficiency. We departed from this plan by instead using atomistic simulations with specific chemistry details captured. Although more computationally expensive, atomistic simulations gave us direct predictions of the thermomechanical properties while avoiding the need to calibrate force fields for coarse-grained models. Additionally, the reactive force field provides more accurate predictions for the stress-induced bond-breaking events and failure responses of the resins at the nanoscale. More importantly, the multiscale methodology of informing the yield-surface criterion and thermo-plastic constitutive laws is better suited to bridging length and time scales than coarse-grained models.
Interphase Property Characterization
Due to the surface roughness of carbon fibers, the surface treatments applied during the fiber manufacturing process, and matrix-affected regions, there exists a submicron-thick interphase region around carbon fibers. The thickness of the interphase region has been evaluated to be about 200 nm by transmission electron microscopy (TEM) analysis. Here, the interphase region is further simplified as a cylindrical shell adjacent to the fiber, with the inner radius rf being the same as the fiber radius and outer radius ri=rf+200 nm, as shown in
To characterize the average properties of the interphase region, we adopted an analytical gradient model to describe the modulus and strength profiles inside the interphase. We also integrated the MD simulation results on the insufficiently crosslinked resins to capture Ems and σms. The gradient model proposed here includes two parts. In the first part, Young's modulus and strength decrease from the fiber values to their lowest values, i.e., Ems and σms. In the second part, the values gradually increase from the lowest values to those of the bulk matrix. The decreasing trend in the first part is due to the attenuation of the fiber confinement effect, and the increasing trend in the second part is due to intrinsic epoxy resin stiffening through sufficient crosslinking. We used the properties of the fiber and matrix to formulate the boundary conditions of the interphase region. The position of the lowest values (ris) was assumed to be at three quarters (0.75) of the interphase thickness away from the fiber surface. This position was chosen near the matrix side, since the incompatibility between the sizing and the bulk matrix resin mainly induces the insufficient crosslinking. A sensitivity analysis has also been conducted to verify that the assumed position of the insufficiently crosslinked region has a low influence on the average properties of the interphase region.
The variations of the properties of the interphase region were assumed to follow the exponential function as follows:
where K can be either E (Young's modulus) or σ (strength), and the functions R(r) and Q(r) are constructed to match the boundary conditions:
The average Young's modulus and strength of the interphase can be obtained as:
Substituting the parameters of both the Young's modulus and strength values into the above equation, we finally obtained the average Young's modulus and strength of the interphase. Compared with the matrix modulus and tensile strength, the average Young's modulus and strength of the interphase region are increased by around 5 and 9 times, respectively. The interphase region thus shows a markedly stiffened response compared to the bulk matrix, although a portion of the interphase region is weaker due to insufficient crosslinking. The constitutive behavior and damage model of the interphase were assumed to be similar to those of the bulk matrix. By integrating this stiffened interphase region into the RVE model of the UD composites, the accuracy of the RVE predictions was much improved compared to the traditional two-phase model without the interphase region. The importance of this interphase region is thus clearly demonstrated.
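The volume-averaging step over the cylindrical interphase shell can be sketched numerically: for a radial profile K(r) on rf ≤ r ≤ ri, the shell average is K_avg = 2/(ri² − rf²) · ∫ K(r)·r dr. The piecewise-linear dip-and-recovery profile below is an illustrative stand-in for the exponential gradient model of the text, and all parameter values are assumptions for this sketch.

```python
import numpy as np

def shell_average(profile, r_f, r_i, n=20000):
    """Midpoint-rule volume average of profile(r) over the cylindrical shell."""
    dr = (r_i - r_f) / n
    r = r_f + (np.arange(n) + 0.5) * dr
    return 2.0 * np.sum(profile(r) * r) * dr / (r_i**2 - r_f**2)

r_f, r_i = 3500.0, 3700.0          # nm: fiber radius, interphase outer radius
r_min = r_f + 0.75 * (r_i - r_f)   # weakest point 3/4 through the interphase
E_fiber, E_min, E_matrix = 40.0, 1.0, 3.0   # GPa, illustrative moduli

def modulus_profile(r):
    # decrease from the fiber value to the dip, then recover toward the matrix
    return np.where(
        r <= r_min,
        E_fiber + (E_min - E_fiber) * (r - r_f) / (r_min - r_f),
        E_min + (E_matrix - E_min) * (r - r_min) / (r_i - r_min),
    )

E_avg = shell_average(modulus_profile, r_f, r_i)
print(round(E_avg, 2))
```

With these illustrative numbers the shell average comes out roughly five times the matrix modulus, in line with the stiffened interphase reported above, even though part of the profile dips below the matrix value.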
Our work on atomistic modeling of epoxy resins provides guidance for future computation-based design of epoxy resins. As an important component of integrated computational materials engineering (ICME), atomistic molecular dynamics simulation can further empower the material-by-design process once the technology is implemented commercially.
First, nanoscale simulations of highly crosslinked epoxy resins offer a promising way for the development of new continuum theories and models. Fully atomistic models are especially appealing because they are based on fundamental input information—force fields and chemical structure—avoiding the need to calibrate phenomenological laws.
Second, hierarchical multiscale methods which are based on sequential homogenization of smaller scales to larger scales can effectively transfer the information from atomistic or nanoscale to the macroscopic continuum level. In this work, yield surface criterion has been informed from atomistic simulations and integrated into macroscopic models.
Third, the dependence of thermomechanical properties on molecular chemistries revealed here shows promise in accelerating the material-by-design process for thermosets by incorporating data from molecular models. Potential next steps could be leveraging molecular simulations to guide the design of the epoxy chemistry or component ratios in order to optimize the strength and toughness of the thermoset resins.
Last, we have demonstrated that by utilizing molecular simulations and analytical models, we are able to represent the distinct interphase region properties between the fiber and matrix. Subsequently, we have elucidated the explicit effect of this interphase region on the failure behavior of composites. Building upon this, potential future work could involve computational-based design strategies for failure-resistant composites, such as specific nano-engineered architectures and chemistries inside the interphase region.
2. Preform Molding

The preform research uses experimental and computational methods to help understand the mechanical behavior of the material during the preforming process. Based on the observed material behavior, high-fidelity preforming simulation models are developed. These models start from the macroscopic part level, progress to the mesoscopic composite level, and finally form a multiscale simulation strategy. The multiscale strategy enables users to fully understand both process-parameter optimization and lower-level composite material design. To validate the simulation models, preforming benchmark tests with shear-angle and forming-force measurement techniques are developed and performed. These benchmark tests, covering various combinations of process parameters, provide insightful guidance for preforming process design.
The traditional trial-and-error method of developing a manufacturing process for carbon fiber composites, which relies heavily on experiments, consumes large amounts of raw material and requires a long development period. To solve this cost issue, the developed experimental and computational methods for the preforming process form a complete system that uses computational power and virtual manufacturing tools to aid the design and optimization of the carbon fiber composite preforming process.
The experimental research reveals the behaviors of composite prepregs in the manufacturing process, especially the differences from conventional metal forming, such as temperature control, fiber reorientation, and surface interaction. These behaviors illustrate the importance of adapting the manufacturing technique to the needs of advanced composites. The computational modeling research, on the other hand, completes a software package that enables researchers from either academia or industry to virtually design and optimize carbon fiber prepreg preforming, which helps to lower the cost of carbon fiber composite manufacturing development and broadens the application of this advanced composite material.
To automatically manufacture CFRP parts in large quantities for transportation equipment, thermoforming is a suitable choice, as it can provide a high production rate with relatively complicated surface geometries, good product quality, and low facility cost. In the thermoforming process, the first step is to stack layers of carbon fabric impregnated with uncured thermoset resin (prepreg) in an optimized fiber orientation combination. These laminates are then heated to soften the resin and subsequently formed into the desired 3D shapes on a press machine during the preforming step. Finally, the parts are cured to achieve the designed part shapes. In thermoforming, most of the fiber reorientation is introduced in the preforming step, which replaces the conventional high-cost, low-rate hand layup work. Since the mechanical stiffness and strength of the composites are affected mostly by the fiber direction, the selection of preforming parameters such as process temperature and initial fiber orientation is important to the final part performance, including the development of shear and kink bands in the weave under various loading conditions.
To optimize the preforming parameters and produce defect-free parts, numerous tests with different parameter combinations are commonly conducted. However, the consumption of raw material and the long development period increase the cost and time of production, hampering the practicality of thermoforming. To address this issue, several computational models have been developed to simulate the preforming process and predict the fiber orientation, geometry, wrinkling behavior, and forming force. The first widely used computational method for predicting the behavior of woven CFRP during the preforming process is the purely kinematic pin-joint net (PJN) assumption. However, because this approach ignores the mechanical properties of the fabric and the resin, it yields inaccurate predictions, especially for wrinkling. As an alternative, the finite element method (FEM) draws increasing attention. Simulations of fiber orientation, draw-in amount, and wrinkling behavior during the preforming process have been documented in the literature. Jauffre et al. combined 1D beam elements and 2D shell elements to simulate the tensile and shear behaviors of the material separately; the meshing process for this hybrid element, however, was tricky and time-consuming. Hamila et al. developed a semi-discrete triangular shell element and handled this problem based on internal virtual work. The drawback is that this element was implemented only in in-house FEM software, limiting its usage in industry. In the LS-DYNA® software, there are built-in woven fabric material models, such as MAT_234 and MAT_235. Both models, however, are based on mesoscale mechanics and require the input of mesoscale material parameters such as the yarn moduli and the yarn-yarn interaction coefficient. It was found in practice that direct experimental characterization of these parameters is difficult and reverse calculation is time-consuming.
For the potential of commercialization and user-friendly operation, a non-orthogonal material model for CFRP preforming simulation was developed by Northwestern University and implemented into the commercial FEM code ABAQUS® as a user-defined material subroutine. Although coupling the tensile and shear behavior in the new constitutive law was laudable for having the most general form, the model became inaccurate when woven CFRP was subjected to large shear deformation. As an advancement, an improved non-orthogonal model for the woven CFRP preforming process is introduced in the invention. It has been validated by benchmark tests and has been incorporated into the LS-DYNA® software as MAT_COMPRF (MAT_293) through the joint effort of this academic and industry team.
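The kinematic idea behind a non-orthogonal woven-fabric model can be sketched briefly: the two yarn directions are convected by the deformation gradient F and cease to be orthogonal, and the shear angle is the change of the angle between them. The numbers below are illustrative; the actual MAT_COMPRF constitutive update is considerably more involved.

```python
import numpy as np

def yarn_shear_angle(F, g1=np.array([1.0, 0.0]), g2=np.array([0.0, 1.0])):
    """Shear angle (radians) between the two convected yarn directions."""
    f1, f2 = F @ g1, F @ g2                   # convected yarn vectors
    cos_theta = f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2))
    return np.pi / 2.0 - np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Simple in-plane shear of the fabric: x-displacement proportional to y.
gamma = 0.2
F = np.array([[1.0, gamma],
              [0.0, 1.0]])
print(np.degrees(yarn_shear_angle(F)))   # ~11.3 deg for gamma = 0.2
```

Tracking the yarn frame in this way is what lets the model store tensile stiffness along the (rotating) yarn directions while the much softer shear response governs the angle change.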
For reliable predictions, there is a need for characterizing and employing realistic and accurate material properties in the computational model. During preforming, prepregs will undergo tension, shear, and bending deformation, as demonstrated in
The most widely adopted methods for characterizing the properties within one prepreg layer are: 1) the uniaxial tension test, to determine the tensile modulus of the composites; and 2) the bias-extension test, to measure the shear modulus of the woven composites. The reliability and repeatability of these two tests have been validated by using different apparatuses to study a set of similar materials. The bending stiffness of the material is also needed for proper simulation of the material behavior during the forming process. However, the softness of the prepreg at the preforming temperature makes it difficult to measure the bending stiffness via the standard 3-point bending test. An alternative cantilever beam method, which utilizes self-gravity to bend specimens, has been developed and applied to measure the bending stiffness of prepregs. For the prepreg surface interaction characterization, however, a systematic study was still lacking before this integrated computational materials engineering (ICME) project, although this interaction surely affects the fiber orientation after preforming and plays a significant role in the final product performance. To fill this gap, we designed and built an innovative test apparatus in the invention. Based on our observation of this surface interaction mechanism, a hydro-lubricant interaction model is also constructed to analyze and predict the interaction between prepreg layers. This numerical model, together with experiments, reveals the details of the fiber interaction, such as its strength and periodic pattern.
Experimental prepreg characterization can provide reliable results, but it also has some drawbacks. The major disadvantage is that it can only achieve limited loading states. For example, the uniaxial tension test can only introduce pure tension deformation, while the bias-extension test can only introduce pure shear deformation. Hence, the coupling between tension and shear cannot be physically characterized and subsequently implemented into the numerical model. Although in most cases this omission does not significantly affect the prediction of geometry and fiber orientation, because the shear modulus of the uncured prepreg is always several orders of magnitude smaller than its tensile modulus, it introduces errors in the prediction of preforming stress and punch force and hence reduces the analysis accuracy for defects such as breakage, pull-out, and separation of the fiber yarns.
Several new test devices such as the biaxial tension apparatus and the picture frame apparatus with tension adjustment have been designed to address the above issue. In practice, however, even these complex devices cannot cover all of the strain states that the prepreg will undergo during preforming due to the complexity of three-dimensional geometry and the resulting nonlinear loading paths. Additionally, these experimental characterization methods are at the macroscale and hence do not provide insightful information on how the mesoscale composite structure and constituents affect mechanical properties of the materials. The cost of raw materials and test time also need to be considered in planning experiments. To address this issue, in the invention, we developed a new multiscale preforming simulation method based on the prepreg characterization by the mesoscopic representative volume element (RVE) to account for the tension-shear coupling and applied it to the preforming simulation of a 2×2 twill thermoset prepreg.
Challenges in building a closely packed woven RVE model include structure generation and mesoscopic yarn property characterization. To construct RVEs with accurate woven patterns and yarn geometrical features, several different approaches have been developed. One approach is to directly use CAD software to design and output the RVE structure. This approach, while straightforward and suitable for a specific composite structure, is time-consuming because the structure must be drawn individually for each specific composite, and when the yarn surface geometries become complex, the yarn cross-section shape must be identified manually. To generalize the design process and accommodate more composite structures, geometrical modeling software packages such as TexGen and WiseTex have been developed. These packages store large libraries of different composite patterns and can generate the corresponding RVE structure given geometrical features such as RVE length, yarn width, yarn height, and so on. However, the automatically generated structures usually have a fixed shape for the yarn cross-sections and yarn centerlines. These simplifications are suitable for loosely woven materials but result in yarn-to-yarn penetration in closely packed composites. In this case, fine-tuning the geometry by modifying the position or dimensions, or by using non-symmetrical shapes for the local yarn cross-sections, is essential. These procedures, however, involve complicated geometry manipulation and are time-consuming. To capture more accurate and detailed structures in the RVE, researchers have recently employed X-ray micro-tomography to directly obtain the geometry of the composites. This is a promising technique but is quite expensive and requires careful image processing.
As an alternative, to achieve a fine balance between speed and accuracy in generating the RVE structure, we developed a novel 2-step geometrical modeling method in the invention with a one-time post-processing step to modify the local yarn geometry generated by TexGen.
To address the challenge of obtaining unknown yarn properties at the mesoscale, a Bayesian model calibration and validation approach is developed for integrating the calibrated mesoscale stress emulator with macroscale part-performance simulations. This is the first case in which Bayesian calibration and hierarchical multiscale techniques are utilized for the simulation of uncured prepreg preforming.
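The essence of such a calibration can be shown with a minimal grid-based posterior over a single uncertain mesoscale parameter. Everything here is an illustrative stand-in: a one-parameter linear forward model replaces the RVE stress emulator, and the data are synthetic.

```python
import numpy as np

def grid_posterior(k_grid, forward, loads, observed, noise_std, prior):
    """Unnormalized posterior on a grid: Gaussian likelihood x prior."""
    post = prior.copy()
    for k_idx, k in enumerate(k_grid):
        resid = observed - forward(k, loads)
        post[k_idx] *= np.exp(-0.5 * np.sum((resid / noise_std) ** 2))
    return post / post.sum()

forward = lambda k, x: k * x                 # toy forward model: stress = k*strain
k_true, noise_std = 2.5, 0.02
rng = np.random.default_rng(0)
loads = np.linspace(0.01, 0.1, 20)           # applied macroscale strains
observed = forward(k_true, loads) + rng.normal(0.0, noise_std, loads.size)

k_grid = np.linspace(1.0, 4.0, 301)
prior = np.ones_like(k_grid)                 # flat prior over the plausible range
posterior = grid_posterior(k_grid, forward, loads, observed, noise_std, prior)
k_map = k_grid[np.argmax(posterior)]
print(round(k_map, 2))                       # close to the true value 2.5
```

In the actual workflow the forward model is the mesoscale RVE emulator, several yarn parameters are calibrated jointly, and the resulting posterior carries the uncertainty forward into the macroscale preforming simulation.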
The major technical target for the preforming modeling part of the invention is to develop a computational simulation method that can capture the deformation of carbon fiber composite prepregs, including part geometry, fiber orientation, and forming force during preforming, with high accuracy and less than 5% error. With this simulation method, the material cost and development period for the design and optimization of carbon fiber composite prepreg preforming can be reduced significantly compared to conventional manufacturing process design methods, which rely heavily on trial-and-error experiments. This cost and time reduction for the development of a carbon fiber composite manufacturing process will broaden the application of these advanced lightweight composites in the transportation industry, contributing to the reduction of fossil fuel consumption and emissions.
For successful development, experimental material characterization techniques must first be designed and performed systematically to provide correct inputs to the simulation models. Then, models at both the mesoscale and the macroscale are to be established and validated. The goal of mesoscopic modeling is to perform virtual material characterization to replace unsatisfactory direct experimental characterization. The goal of macroscopic modeling, on the other hand, is to form a platform for part-scale preforming simulation when measured material properties are input. Finally, these modeling tools are to be combined with proper calibration techniques to form a high-accuracy, high-fidelity hierarchical multiscale modeling method for the prepreg preforming process.
Uncured Prepreg Characterization Experimental Methods
To obtain the well-defined constitutive model and the simulation schemes for the numerical calculation, characterization of the uncured prepreg is necessary. Several experimental methods to characterize the material have been developed in the invention: the uniaxial tension tests are used to characterize the tensile modulus along the fiber yarns; the bias-extension tests are used to characterize the shear modulus of the prepreg; the bending tests are employed to characterize the bending stiffness along the yarns; and the surface interaction tests are used to characterize the prepreg-prepreg and prepreg-tool interaction during the preforming process.
Uniaxial tension tests: Uniaxial tension tests are used to obtain the tensile modulus along the fiber yarns during the preforming process. The experiment setting is shown in
In the uniaxial tension tests of the prepreg, the tensile modulus was determined from the fabric tensile test, not a yarn test, so that the tensile specimen could include as many unit cells as possible. Tests at the various temperatures that will be encountered during preforming were conducted. To avoid slippage between the specimen and the clamps caused by the viscous epoxy, the two ends of the specimen were cured before the tests to harden the material and ensure the clamping force during the tests.
Engineering stress and strain are used to normalize the load and displacement data. As an example, the curves at various temperatures are demonstrated in
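The normalization above can be sketched in a few lines: load and crosshead displacement are converted to engineering stress and strain using the initial specimen dimensions. All dimensions and load values below are illustrative.

```python
def engineering_stress_strain(load_N, disp_mm, width_mm, thickness_mm, gauge_mm):
    """Normalize load-displacement data by the initial specimen dimensions."""
    stress_MPa = [f / (width_mm * thickness_mm) for f in load_N]  # N/mm^2 = MPa
    strain = [d / gauge_mm for d in disp_mm]
    return stress_MPa, strain

load = [0.0, 250.0, 500.0]        # N, illustrative readings
disp = [0.0, 0.5, 1.0]            # mm, illustrative crosshead displacements
stress, strain = engineering_stress_strain(load, disp,
                                           width_mm=50.0, thickness_mm=0.5,
                                           gauge_mm=150.0)
print(stress[-1], strain[-1])     # 20.0 MPa at ~0.67% strain
```

Normalizing this way makes curves from specimens of different sizes directly comparable, which is the same motivation given below for the bias-extension normalization.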
Bias-extension tests: The bias-extension test determines the in-plane shear stiffness properties of the woven carbon fiber prepreg. The experiment setting is the same as that for the uniaxial tension test as shown in
The temperature of the experiment was set at the various temperatures that will be encountered in preforming, which covers the range of the general preforming process temperature measured by the IR camera during the single dome preforming process performed in our lab. Various temperatures and tensile rates are included in the tests to investigate temperature and shear rate effects on the shear stress.
Normalization methods are necessary for convenient utilization of the load-displacement data from the bias-extension tests and to compensate for differences in specimen size. The engineering stress was used to normalize the load. As for the displacement, because the deformation is not uniform throughout the specimen, the normalization method was derived based on the assumption that the yarns are inextensible and that no slip occurs in the sample during the bias-extension tests. To validate this displacement normalization method, two bias-extension tests with different specimen sizes were performed. The parameters for the tests are listed in Table 9-1. The tensile rates were selected so that the normalized tensile rates are the same for both tests. From the results in
As an example, the normalized load-displacement curves under various temperatures and shear rates are plotted in
In order to further validate the kinematic assumption that the yarns are inextensible and no slip occurs in the sample during the bias-extension tests, the digital image correlation (DIC) technique was applied to examine the Green strain field in the real specimen, and the result is compared with the theoretical field derived from the same assumption, as shown in
Bending tests: The bending stiffness of the material is also needed for proper simulation of the material behavior during the forming process. However, the softness of the prepreg under the preforming temperature makes it difficult to measure the bending stiffness via the standard 3-point bending test. As an alternative, bending tests in the invention utilize self-gravity of the specimen. In the test, one end of the rectangular prepreg sample was clamped horizontally on a support as a cantilever beam, as shown in panel (a) of
In this prepreg bending test, due to the strong geometric nonlinearity, the bending stiffness could not be directly calculated from classical beam theory. As a solution, a simulation model was utilized to calculate the bending stiffness inversely. This simulation model utilized homogeneous material properties. The compressive modulus of the material was adjusted until the simulated tip displacement matched the experimental result. The effective compression stiffness obtained in this way properly describes the bending behavior of the prepreg.
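The inverse identification loop can be sketched as a root search around a black-box simulator: adjust the modulus until the predicted tip deflection matches the measured one. The linear cantilever-under-self-weight formula below is only a stand-in to make the sketch runnable (the document's FE model is geometrically nonlinear); any simulator whose deflection decreases monotonically with modulus works, and all numbers are assumed, not the invention's data.

```python
def tip_deflection(E, I=1.0e-12, w=0.5, L=0.1):
    # Stand-in for the FE simulation: linear cantilever under distributed
    # self-weight w [N/m], delta = w*L^4 / (8*E*I). Illustrative only.
    return w * L ** 4 / (8.0 * E * I)

def identify_modulus(target_delta, simulate, E_lo, E_hi, tol=1e-9):
    """Bisection on the modulus: deflection decreases as E increases,
    so too-large deflection means the trial modulus is too low."""
    for _ in range(200):
        E_mid = 0.5 * (E_lo + E_hi)
        if simulate(E_mid) > target_delta:
            E_lo = E_mid                     # too compliant: stiffen
        else:
            E_hi = E_mid
        if E_hi - E_lo < tol * E_hi:
            break
    return 0.5 * (E_lo + E_hi)
```

With a measured tip deflection as the target, the loop returns the effective compression stiffness that reproduces it.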
Surface interaction tests: Before the invention, there was a lack of a systematic study for the characterization of the interaction between two different composite prepreg surfaces, even though this interaction surely affects the prepreg deformation and fiber orientation after preforming. To address this issue, a new experimental method was developed to characterize the interaction between uncured prepreg layers during the preforming process. The focus of this method is to characterize the influence of temperature, sliding speed, and fiber orientation on the tangential interaction. The apparatus, test procedure, and result of this new friction test method are to be illustrated specifically in the following part of this section.
The test apparatus developed in the invention is demonstrated in
An infrared (IR) camera was also included in the system to measure the surface temperature distribution and provide the average value over a selected area. During each test, the temperature was adjusted until it could be maintained within ±1° C. of the desired value by carefully changing the power of the heating stage.
For this surface interaction test, the most important parameter is the surface temperature because it affects the viscosity of the resin in the composite. During the actual preforming process, the prepreg material was heated to about 80° C. in a heating chamber and then placed in a press that is at room temperature. Thus, the temperature selected for the test ranges from room temperature (24° C.) to 80° C. The second considered parameter is the relative motion speed, because during preforming, 2D sheets are deformed into 3D parts. The relative motion speed between the material layers therefore varies at different locations, resulting in a different interaction strength between the prepregs due to the shear rate effect caused by the resin viscosity at the interfaces. Finally, the fiber orientation effect needs to be investigated. In industrial applications, prepreg layers with different fiber orientations are stacked together to optimize product performance in all directions. Since the surface texture of the composite is anisotropic, which affects the hydrodynamic interaction between the fabric and the resin, it is also important to test the material interaction subjected to different fiber orientation combinations.
The friction test results were analyzed when the tests with all parameter combinations were complete. The uncured prepregs are quite tacky in preforming because the resin is of high viscosity at the preforming temperature. As a result, the Coulomb friction coefficient between two uncured prepreg layers can be higher than 1. To avoid confusion, an interaction factor, analogous to the Coulomb friction coefficient, was defined to indicate the intensity of the yarn-resin-yarn interaction between the two prepreg layers. The results in
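Reducing the load-cell traces to an interaction factor is straightforward; the sketch below computes the Coulomb-like ratio over a user-selected steady-state window. The function and argument names are illustrative, not the invention's data-reduction code.

```python
import numpy as np

def interaction_factor(tangential_force, normal_force, steady_slice=None):
    """Interaction factor = tangential / normal force, reported as
    (mean, max, min) over the steady-state window. Like a Coulomb
    friction coefficient, but it may exceed 1 for tacky uncured prepreg."""
    F_t = np.asarray(tangential_force, dtype=float)
    F_n = np.asarray(normal_force, dtype=float)
    ratio = F_t / F_n
    if steady_slice is not None:
        ratio = ratio[steady_slice]          # e.g. discard the run-in stage
    return ratio.mean(), ratio.max(), ratio.min()
```

The max/min spread over the steady-state window is what exposes the periodic stick-slip behavior discussed below.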
Periodical changes of the interaction factor are observed from the resulting plots, especially when the resin has very high viscosity. Take the interaction factor at the steady-state stage shown in
The final results with respect to average stable stage temperature, relative motion speed, and fiber orientation combination are plotted in
The temperature effect on prepreg surface interaction is that when the temperature is higher than 50° C., the resin fully transforms to a viscous liquid state and acts like a “lubricant” between the two prepreg layers. At this stage, further temperature increases reduce the resin viscosity and enhance its lubricity during sliding, resulting in a lower interaction factor and less frequent stick-slip with a smaller amplitude. As for the relative motion speed effect, a weak influence at temperatures below 50° C. or above 60° C. was found, but the interaction factor has a positive relation to the motion speed at about 50° C., when the viscosity of the resin reaches its peak value. In regard to the fiber orientation effect, when the temperature is higher than 50° C., the interaction factor becomes larger with the 0/90/0/90 fiber orientation than with the 0/90/−45/+45 orientation, because the transverse fiber yarns in different layers are more likely to interlock with each other in the 0/90/0/90 orientation, making it more difficult for the layers to slide. This also explains the fact that the stick-slip strength is generally larger if the fibers in both the top and bottom layers are aligned with each other, especially at 50° C. when the resin viscosity reaches its peak value. However, when the temperature is below 50° C. and the resin is in the solid state, the difference between the two orientation combinations becomes less significant. This is mainly because the sheets are still in the solid state and the resin fills the surface “valleys” generated by the fiber yarns to flatten the prepreg. As a result, the orientation combination does not have a large effect compared to those at higher temperatures.
Hydro-Lubricant Uncured Prepreg Surface Interaction Model
The woven fibers in the invention form a certain texture of surface topography. It has a 2×2 twill structure, as shown both by the real material photo and the TexGen software model in
Textures affect the interaction of textured surfaces. A hydrodynamic model was developed and applied to simulate and study prepreg surface interaction in the invention. In this model, the top and bottom woven fabrics were aligned to the same direction with 0/90/0/90 fiber orientation for 2D simplification. These fabrics were treated as rigid because 1) they were firmly stretched in the fiber matrix, so that the vertical deformation was minimal; and 2) the normal load was low. Relative movement of the interface can be considered by the general lubrication system illustrated in
With this hydro-lubricant model, surface interaction at various temperatures is simulated at a relative motion speed set of 10 mm/s. The comparison of numerical and experimental results is shown in
For the interaction at 60° C., the numerical calculations with various relative motion speeds were then performed. The experimental and numerical results for the average, maximum, and minimum interaction factors are plotted in
Finally, with this hydro-lubricant model, the periodic interaction factor variation demonstrated in
Experimental validation demonstrates that under certain preforming conditions, i.e., 60° C. temperature and 5-15 mm/s sliding speed for the supplied woven prepreg, the interaction between two prepreg surfaces can be explained by the hydro-lubricant mechanism and predicted via the numerical method developed in the invention. The elastic deformation of the fabric and the resin mixing with inter-diffusion at various deformation rates and temperatures should be considered in future work in order to model the prepreg-prepreg interaction more accurately and predict the interaction behavior under wider conditions.
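The hydro-lubricant mechanism can be sketched numerically with a minimal 1-D Reynolds-equation solver: a flat surface sliding at speed U over a textured (here sinusoidal) resin film, ambient pressure at both ends, and a half-Sommerfeld treatment (sub-ambient pressure discarded) when integrating the normal load. The film profile, viscosity, and speed in the usage are illustrative assumptions, not the invention's calibrated 2D model of the woven texture.

```python
import numpy as np

def reynolds_1d(h, mu, U, dx):
    """Steady 1-D Reynolds equation d/dx(h^3 dp/dx) = 6*mu*U*dh/dx,
    with p = 0 (ambient) at both ends. h: film thickness at n nodes."""
    n = len(h)
    h0 = h.max()                              # scale h to keep the matrix well-conditioned
    H = h / h0
    H3 = H ** 3
    H3f = 0.5 * (H3[1:] + H3[:-1])            # face values of H^3
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                 # ambient-pressure boundary conditions
    for i in range(1, n - 1):
        A[i, i - 1] = H3f[i - 1]
        A[i, i] = -(H3f[i - 1] + H3f[i])
        A[i, i + 1] = H3f[i]
        b[i] = (6.0 * mu * U / h0 ** 2) * (H[i + 1] - H[i - 1]) * dx / 2.0
    return np.linalg.solve(A, b)

def lubrication_interaction_factor(h, mu, U, dx):
    """Tangential/normal force ratio for the lubricated interface;
    sub-ambient (cavitated) pressure is discarded (half-Sommerfeld)."""
    p = reynolds_1d(h, mu, U, dx)
    tau = mu * U / h + 0.5 * h * np.gradient(p, dx)   # shear on the moving surface
    p_pos = np.maximum(p, 0.0)
    F_n = dx * 0.5 * (p_pos[:-1] + p_pos[1:]).sum()   # trapezoidal integrals
    F_t = dx * 0.5 * (tau[:-1] + tau[1:]).sum()
    return F_t / F_n
```

Because the Couette shear term scales with mu*U/h, the model reproduces the qualitative trend that lower viscosity (higher temperature) lowers the interaction factor.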
Macroscopic Non-Orthogonal Material Model for Uncured Prepreg
In the invention a non-orthogonal material model for the CFRP preforming simulation was developed, aiming to accurately predict the deformation of the uncured prepreg during preforming, especially under large shear. This material model was developed in the form of an Abaqus® explicit user-defined material subroutine (Abaqus VUMAT) and an LS-DYNA® user-defined material subroutine (LS-DYNA UMAT). Because of its ease of use and high prediction accuracy for part shape and fiber orientation, this model was incorporated into LS-DYNA® as MAT_293 (MAT_COMPRF). The fundamentals of this model can be found following this section.
Woven CFRPs have highly anisotropic mechanical properties, with large tensile modulus (10 GPa level) along warp and weft yarns because of the stiff carbon fibers reinforcement, but small intra-ply shear modulus (0.1 MPa level). During preforming, the most dominant deformation mode is the intra-ply shear. To capture this fiber-orientation-dominant anisotropy, the material model needs to simulate tension along the yarns and shear separately, even under large shear.
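Tracking the yarn directions separately from any orthogonal frame is the core of a non-orthogonal treatment: the warp and weft vectors are convected by the deformation gradient and the angle between them gives the intra-ply shear. A minimal sketch (2D, assumed initial warp/weft directions) is:

```python
import numpy as np

def yarn_shear_angle(F, warp0=(1.0, 0.0), weft0=(0.0, 1.0)):
    """Shear angle (radians) between warp and weft yarn directions
    convected by the in-plane deformation gradient F, i.e. how far the
    yarns have rotated away from their initial 90 degree configuration."""
    F = np.asarray(F, dtype=float)
    g1 = F @ np.asarray(warp0, dtype=float)   # current warp direction
    g2 = F @ np.asarray(weft0, dtype=float)   # current weft direction
    c = (g1 @ g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.pi / 2.0 - np.arccos(np.clip(c, -1.0, 1.0))
```

Tension can then be evaluated along g1 and g2 while shear is handled from this angle, which is how the two deformation modes are kept separate even at large shear.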
Stress analysis for the woven uncured prepreg with the non-orthogonal model developed in the invention is shown in
This non-orthogonal material model was implemented into both Abaqus® and LS-DYNA®. This model enables users to directly input experimental data to define the stress-strain curves, as well as the shear locking angle, which indicates whether the shear deformation has reached the extent at which the rotation resistance between warp and weft yarns is no longer small compared to the tensile modulus of the material.
Material characterization is essential for FEM models to accurately predict the behavior of woven CFRPs during the preforming process. It can be seen from
When the material model and the experimental input are prepared, double-dome benchmark tests are conducted and simulated to validate the capability of the material model for 3D shape forming, considering different yarn orientations and stacking sequences. These validation results indicate that this non-orthogonal model can partly reach the 5% error target for fiber orientation prediction.
Mesoscopic RVE Model for Uncured Prepreg
Mesoscopic RVE modeling and virtual material characterization with RVE requires building of an RVE finite element model, calibration of mesoscopic yarn properties, and generating a prepreg constitutive law as a function of strain. To build the mesh of a prepreg RVE with a fine balance between speed and accuracy, a novel 2-step geometrical modeling method was developed in the invention. In this method, the rough composite structure without yarn-to-yarn penetration is first generated by TexGen in Step 1 with the specified woven pattern and key characteristic sizes, such as weaving pattern, yarn width, yarn gap, and yarn thickness. Then, the mesh and the local yarn orientation corresponding to the structure is imported to a commercial finite element software in Step 2 to compress the structure in the thickness direction and satisfy the prepreg thickness requirements while maintaining the already assigned features. Finally, the deformed mesh and the local material orientation are exported to build the RVE for virtual material characterization.
This method was utilized to build the supplied 2×2 twill prepreg. Texture structure of the real 2×2 twill prepreg is shown in
The drawback of this thickness enlargement, however, is that many gaps, as demonstrated in panel (b) of
In addition to the RVE structure, the yarn material model should also be correctly established. Because preforming is a one-step loading process in which material recovery after the deformation need not be considered, yarns within RVE models are assumed purely elastic. Prepreg yarns that include quasi-unidirectional fibers and uncured resin exhibit transverse isotropy. Direct implementation of such material behavior, however, leads to numerical errors. One kind of error happens when a compression load is applied along the width direction to a single yarn. This loading condition is common for prepregs in shear deformation where, as illustrated in panel (a) of
Based on the decoupling approach, the yarn is modeled using an anisotropic elastic constitutive law with distinct Young's and shear moduli in different directions. This constitutive law is defined in the co-rotational frame which is updated with the deformation gradient tensor to accurately trace the local fiber orientation upon large yarn deformation and rotation under the RVE deformation. In the prepreg yarns, the very stiff carbon fibers are aligned in the longitudinal direction along which the applied load is predominantly present. Meanwhile, the soft uncured resin governs the transverse deformation. Therefore, it is straightforward to decouple the yarn deformation in the longitudinal and transverse directions.
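The co-rotational update and the decoupled law can be sketched as follows. The rotation is taken from the polar decomposition F = RU (computed via SVD), and the stress update uses a fully decoupled diagonal stiffness: distinct longitudinal and transverse moduli with the Poisson coupling dropped. That last simplification, and all moduli in the usage, are assumptions for the sketch, not the invention's exact constitutive law.

```python
import numpy as np

def rotation_from_F(F):
    """Rotation R from the polar decomposition F = R U (via SVD);
    R carries the co-rotational material frame under large rotation."""
    W, _, Vt = np.linalg.svd(np.asarray(F, dtype=float))
    R = W @ Vt
    if np.linalg.det(R) < 0.0:               # guard against an improper rotation
        W[:, -1] *= -1.0
        R = W @ Vt
    return R

def decoupled_yarn_stress(strain_voigt, E_long, E_trans, G):
    """Decoupled elastic update in the co-rotational yarn frame: stiff
    fiber response along direction 1, soft resin-governed response
    transversely. Voigt ordering [e11, e22, e33, g23, g13, g12];
    Poisson coupling is dropped here as a simplification."""
    C = np.diag([E_long, E_trans, E_trans, G, G, G])
    return C @ np.asarray(strain_voigt, dtype=float)
```

Rotating the frame by R (rather than convecting it with the full F) keeps the frame orthonormal while still following the yarn rotation.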
Once the structure and the material model of the RVE are generated, they are input into the finite element simulation given the normal true strain along yarns, the shear angle, and the yarn properties. After simulation, the stress of each element is extracted and averaged to obtain the stress response of the RVE. Mechanical properties of mesoscopic yarns, including elastic moduli, Poisson's ratios, and the friction coefficient, are difficult to characterize directly because of the small yarn size, the difficulty of single-yarn specimen preparation, and the soft resin. As a result, the unknown material properties are manually adjusted at this stage and the stress prediction from the RVE is compared to the experimental data. One of the best example comparisons is illustrated in
In the invention, Bayesian calibration is applied to obtain prepreg yarn properties for the first time. Uniaxial tension and bias-extension data is employed to: (1) estimate calibration parameters of the RVE model; (2) determine whether the RVE simulator is biased; and (3) build a cheap-to-evaluate emulator to replace the expensive RVE simulation in macroscale analyses. To this end, a modularized version of the Bayesian calibration framework of Kennedy and O'Hagan (KOH) is adopted. The goal of Bayesian calibration is to combine three data sources (experiments, simulations, and prior knowledge from experience in the field) to estimate the unknowns. As illustrated in
In practice, for supplied prepregs in the invention, uniform prior distributions (based on our experience) are chosen for θ which cover the entire range where η(x, θ) is fitted. Uniform prior distributions are preferred (over, e.g., normal distribution) since the range of the values that θ can take (and not, e.g., their most likely values) are known information. These ranges are chosen widely enough to ensure that the true (but unknown) calibration parameters are covered. Additionally, this choice guarantees that large variances are used to avoid diminishing the effect of the experimental data on the joint posterior distribution of θ.
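A heavily stripped-down sketch of this calibration step follows: a uniform prior over the chosen bounds, a Gaussian likelihood comparing the cheap emulator to experimental data, and a random-walk Metropolis sampler. The full KOH framework additionally carries a model-discrepancy term and a Gaussian-process emulator, both omitted here; all names, data, and settings in the usage are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, emulator, x_exp, y_exp, bounds, sigma_e=1.0):
    """Unnormalized log-posterior: uniform prior on `bounds` (lo, hi
    arrays) times a Gaussian likelihood of the experimental data given
    the emulator prediction at calibration parameters theta."""
    lo, hi = bounds
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                        # outside the uniform prior
    resid = y_exp - emulator(x_exp, theta)
    return -0.5 * np.sum(resid ** 2) / sigma_e ** 2

def metropolis(logp, theta0, n_steps=5000, step=0.1):
    """Random-walk Metropolis sampler over the calibration parameters."""
    theta = np.array(theta0, dtype=float)
    lp = logp(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```

Because the prior is flat inside wide bounds, the posterior is shaped almost entirely by the experimental data, which is exactly the behavior the uniform-prior choice above is meant to guarantee.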
Calibrated yarn property results are shown in
Multiscale Uncured Prepreg Preforming Model with Bayesian Calibration
As the constitutive law of the 2×2 twill prepreg with uncured thermoset resin, the mesoscopic stress emulator obtained from the virtual material characterization is implemented into the non-orthogonal material model to form a multiscale simulation approach. The flowchart of the developed approach is illustrated in
When the multiscale preforming simulation method is established, double-dome benchmark tests are conducted and modeled to validate the capability of the multiscale method for 3D shape forming considering different yarn orientations and stacking sequences. This validation reveals that the multiscale method leads to a slight improvement in the prediction of part geometry and fiber angle distribution, with an average of 4.0% error for fiber orientation prediction, which achieves the proposed target. Moreover, the forming force prediction accuracy of this multiscale method sees a significant increase of over 26% compared to the experiment-based non-orthogonal model, and it agrees very well with the experimental results.
The experimental methods to characterize mechanical properties of carbon fiber composite prepregs, the non-orthogonal material model for preforming simulation, the mesoscopic prepreg RVE finite element model with Bayesian calibration for virtual material characterization, and the multiscale simulation method for preforming developed in the invention lead to an accurate computational design and optimization approach for the development of a CFRP part preforming process. The high-fidelity RVE simulation at the material structure level also provides insightful guidance for woven pattern and constituent design in composites. This approach is able to reduce the development time and material cost of preforming compared to conventional trial-and-error methods, which rely heavily on real experiments. As a result, this approach enables researchers and engineers in both academic and industrial fields to invent and produce CFRP parts and corresponding manufacturing processes at a faster pace, at a lower price, and in larger volumes, broadening the application of these advanced composites and benefiting emission control and fossil fuel conservation.
The 2-step mesoscopic RVE modeling technique and multiscale simulation tool can also be commercialized and implemented into mature finite element software as a plug-in feature. For the RVE technique commercialization, the next necessary step is to integrate the open-source TexGen software for the generation of fabric geometry/mesh with finite element software to form a complete package. As for commercialization of the multiscale tool, future steps include: (1) transferring the Bayesian calibration algorithm from the current MATLAB code to Python or Fortran so that it can be utilized directly by finite element software; and (2) integrating the RVE virtual testing model, Bayesian calibration, and part-scale preforming simulation model into a single package.
In the preforming part of the invention, we invented a set of experimental and simulation methods that can speed up development of CFRP parts manufacturing at a low cost. This enables not only big companies, but also smaller research teams to design their own composites and corresponding manufacturing processes, which potentially increases applications and market needs for these advanced light-weight materials, reduces emission pollution from transportation industries, and motivates the carbon fiber composites production industry.
Carbon fiber composite prepregs are expensive and require special treatment both for storage (freezing) and for manufacturing (high temperature to melt the resin without curing it). Two lessons follow: (1) systematically arranging physical experiments with proper deformation and temperature checks is essential to avoid wasting raw material; and (2) computational models, when hundreds of them need to be run for calibration, virtual material characterization, and design optimization, can be time consuming. Validation of single simulation cases with respect to mesh density, local material coordinates, material model, etc., is essential to ensure high efficiency for large-scale computational simulations. In summary, to save time, cut cost, and achieve satisfactory results, both experimental and simulation research work require sufficient preparation and planning rather than reliance on trial and error.
3. Mechanistic Continuum Models for CFRP
Carbon Fiber Reinforced Polymers (CFRP), including Unidirectional CFRP (UD CFRP) and twill woven CFRP (woven CFRP), have orientation-dependent material properties. Simply put, the tension responses of UD and woven CFRP will differ depending on the loading condition. This is due to the anisotropic nature of carbon fiber, which has different elastic moduli in the fiber direction and the in-plane directions. Hence, to study the mechanical behavior of CFRP material, it is necessary to model CFRP's actual microstructure when analyzing the performance of cured CFRP, including UD and woven CFRP.
The multiscale method established in the invention provides tools to model CFRP microstructure by the RVE method. Numerical testing of RVEs predicts effective elastic material properties of UD and woven CFRPs. Based on a reduced order modeling approach on RVEs, a concurrent multiscale modeling method has been established to perform efficient part-level performance prediction.
Major achievements in the UD and woven modeling are summarized as below:
- User-friendly UD RVE modeling package.
- User-friendly woven CFRP modeling package.
- Prediction of UD elastic constants with less than 10% difference to test data.
- Prediction of woven elastic constants with less than 10% difference to test data.
- Uncertainty quantification (UQ) for Woven RVE.
- Novel concurrent multiscale modeling for UD CFRP.
- UD part performance predictions with less than 10% difference to test data.
- Woven concurrent modeling.
The properties of CFRP composites are anisotropic and microstructure dependent. Therefore, accurate capture of CFRP's mechanical properties, such as the stiffness tensor, requires 1) reconstruction of the microstructure; and 2) accurate numerical modeling. For UD CFRP, the stiffness tensor can be simplified as the volumetric average by the rule of mixtures. It is possible to estimate the UD stiffness tensor by using either the Voigt average or the Reuss average, but the accuracy is questionable since the Voigt average gives the upper limit while the Reuss average gives the lower limit. In addition, modeling UD CFRP by assuming a well-structured and periodic packing pattern of fibers, such as hexagonal packing, and using Representative Unit Cells (RUC) has also been proposed. The RUC provides easy modeling of UD CFRP since it only models the minimum repeating unit in UD CFRP, and it allows different packing patterns and fiber volume fractions. An RUC can be easily modeled with a finite element mesh and allows one to compute effective UD elastic properties without dealing with algebra, compared to the analytical homogenization approach. Unfortunately, in real UD CFRP products, carbon fibers have a random distribution. Therefore, the RUC model might not provide accurate information about the UD's properties. For woven CFRP, the RUC can be used for modeling mechanical properties of woven composites. However, due to the aforementioned assumption, the RUC is not the ideal way of modeling woven CFRP when one wishes to capture certain microstructure variations. In order to include a realistic microstructure, a better modeling technique needs to be identified and implemented so the properties of UD and woven CFRP can be predicted.
In the invention, the Representative Volume Element (RVE) approach is adopted to faithfully model the UD microstructure. The RVE model of UD captures the random distribution of fibers in the matrix material and can be used for finite element analysis. Building a UD RVE allows one to consider arbitrary fiber distributions and fiber shapes. It is expected that the UD RVE can give a good prediction of UD mechanical properties compared to test data. For woven composites, the RVE model allows one to capture a larger region with multiple fiber tows in the warp and weft directions, and an accurate prediction of woven mechanical performance can be made. In addition, the woven RVE enables uncertainty quantification of woven CFRP, where the effects of fiber volume fraction and fiber misalignment in the fiber tow can be quantitatively analyzed. Hence, the RVE approach provides a clear structure-to-property map between the microstructures of UD and woven composites and their mechanical properties. The current scope is to use RVEs for accurate prediction of elastic stiffness tensors for both UD and woven composites.
Moreover, the UD RVE can be used with a recently proposed Reduced Order Modeling (ROM) method, namely Self-consistent Clustering Analysis (SCA). SCA compresses the UD RVE from many voxel elements into a ROM database made of several clusters. The ROM of the UD can be solved using the SCA method, hereafter called “UDSCA”, to compute elasto-plastic responses of the UD in an efficient manner. UDSCA not only provides an efficient way to compute mechanical responses (elastic and plastic) of the UD, but also links the UD microstructure to UD part performance. A concurrent multiscale modeling framework is established for UD material for the first time, and it can be used for structural property prediction.
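The offline stage of SCA groups voxels with similar mechanical response into clusters; a common choice is k-means on per-voxel elastic strain-concentration data. Below is a minimal NumPy k-means sketch (Lloyd's algorithm with deterministic farthest-point seeding); it illustrates the clustering step only, not the full SCA method with its interaction tensors and self-consistent solve.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's k-means with deterministic farthest-point seeding.
    In the SCA offline stage, each row of X would hold a voxel's elastic
    strain-concentration data, so voxels with similar response end up
    in the same cluster."""
    centers = [X[0]]
    for _ in range(k - 1):                    # farthest-point initialization
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # leave empty clusters unchanged
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Replacing millions of voxels by a few clusters is what makes the on-the-fly RVE evaluation in the concurrent framework affordable.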
The present goal of UD and woven modeling is to develop a complete modeling workflow that allows the user to generate UD and woven microstructures and extract elastic material properties, especially the stiffness tensors. It allows one to build a UD or woven microstructure as an RVE in a finite element mesh. Microstructure information, such as the UD fiber volume fraction and the yarn orientation in woven composites, can be assigned by the user. The finite element mesh can be used to perform traction-free loadings in three normal directions and three shear directions. RVE effective stress and strain results from all six loadings are used to compute the stiffness tensor as a 6 by 6 matrix. The present process provides direct numerical homogenization of CFRP's material properties. Young's moduli and shear moduli can be computed and compared against experimental results. The expected difference between the UD and woven elastic constants from RVE models and test data is less than 10%.
Moreover, an uncertainty quantification workflow is established, for the first time, for woven CFRP. In this workflow, variations of the woven microstructure can be modeled in woven CFRP. Several microstructure variations, such as yarn angle, fiber misalignment, and fiber volume fraction are considered. The effect of those variations on effective woven elastic material properties can be measured quantitatively.
For UD and woven CFRP, an efficient reduced order modeling approach, namely self-consistent clustering analysis (SCA), is applied to reduce the computational cost of RVE computation. This allows one to compute the responses of RVEs on the fly and enables a concurrent multiscale modeling framework in which prediction of macroscale structural performance is made possible. Test cases of a UD CFRP structure will be presented with experimental validation. The framework has broad potential in the evaluation of CFRP structure performance through numerical models and can be used for future CFRP structure design.
UD RVE Modeling
In the ICME process, a bottom-up multiscale modeling approach is adopted for CFRP. As shown in
An image of cured UD microstructure used in the invention is given in
- 1) 51% overall fiber volume fraction;
- 2) Fiber has a circular shape with a diameter of 7 μm;
- 3) Fibers are perfectly straight;
- 4) RVE has a square cross-section, with a length larger than 70 μm.
The process of building a UD RVE is similar to packing multiple fibers into a square domain. Since all fibers are assumed to be perfectly straight, it is convenient to reduce the 3D domain into a 2D domain, where multiple circles are packed into a square. This 2D domain can be extruded in the fiber direction to form the final 3D UD RVE. Here, the UD RVE geometry is 84 μm by 84 μm by 2.8 μm, as shown in
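The 2D circle-packing step can be sketched with random sequential addition: propose random centers and keep only those that do not overlap previously accepted fibers. For a fast, self-contained sketch the target area fraction below is set to 40% rather than the 51% used in the invention (plain sequential addition slows markedly as the fraction rises, consistent with the roughly 60% ceiling noted later), and the packing is non-periodic; the default dimensions mirror the 84 μm domain and 7 μm fiber diameter stated above.

```python
import numpy as np

def pack_fibers(domain=84.0, diameter=7.0, target_vf=0.40, seed=0,
                max_tries=200000):
    """Random sequential addition of non-overlapping circles (fiber
    cross-sections) into a square domain until a target area fraction
    is reached. Non-periodic sketch: circles are kept fully inside."""
    rng = np.random.default_rng(seed)
    r = diameter / 2.0
    n_target = int(target_vf * domain ** 2 / (np.pi * r ** 2))
    centers = []
    for _ in range(max_tries):
        if len(centers) >= n_target:
            break
        c = rng.uniform(r, domain - r, size=2)
        if all((c[0] - x) ** 2 + (c[1] - y) ** 2 >= diameter ** 2
               for x, y in centers):
            centers.append((c[0], c[1]))
    return np.array(centers)
```

Extruding the accepted 2D centers along the fiber direction then yields the 3D voxelized RVE, as described above.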
The UD RVE can be used to predict elastic stiffness tensor of UD. The material properties of fiber and matrix are given in Table 9-6. Since the primary focus is computing elastic material properties of the UD, only elastic material properties of fiber and matrix are needed.
To compute the UD stiffness matrix, it is necessary to review the definition of a general monoclinic material. The general strain and stress relationship of a monoclinic material is given below, where Voigt notation is used. For the UD RVE, due to its transversely isotropic nature, we have: S12=S21, S13=S31, S23=S32, S22=S33, S44=S55, S14=S41=0, S24=S42=0, S34=S43=0, and S56=S65=0. To compute individual entries of the compliance matrix S, one needs to perform loading on the UD RVE such that only the stress in the loading direction is non-zero. This so-called orthogonal loading condition allows one to compute S column by column. Therefore, loadings in the 11, 22, 33, 12, 13, and 23 directions need to be performed. Once the full compliance matrix is constructed, the stiffness tensor is merely the inverse of the compliance matrix.
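The column-by-column construction can be sketched directly: each orthogonal load case supplies an averaged Voigt stress with a single non-zero component and the corresponding averaged strain, and the strain divided by that stress component gives one column of S. The isotropic "RVE" used in the test is a stand-in for the actual homogenized response.

```python
import numpy as np

def compliance_from_loadings(load_cases):
    """Assemble the 6x6 compliance matrix column by column. Each case j
    is a pair (stress_bar, strain_bar) of volume-averaged Voigt vectors
    where stress_bar is non-zero only in component j (the orthogonal
    loading condition)."""
    S = np.zeros((6, 6))
    for j, (stress_bar, strain_bar) in enumerate(load_cases):
        S[:, j] = np.asarray(strain_bar, dtype=float) / stress_bar[j]
    return S
```

The effective stiffness then follows as `np.linalg.inv(S)`, exactly the inversion step described above.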
Once the UD RVE is generated, the mesh can be used in FE software, such as ABAQUS, to perform loadings in six loading directions mentioned above. Since both fiber and matrix are assumed to be elastic materials, the UD RVE responses will be strictly elastic. The UD effective stress and strain are computed using Eq. (9-9) shown below, where σmicro and εmicro are stress and strain tensors of each voxel element in the UD RVE.
Once
The quantities of interest of the UD RVE are the Young's and shear moduli, as well as Poisson's ratios in different directions. The conversion between UD compliance matrix and elastic moduli can be found using the following equivalence:
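The equivalence referred to above follows the standard textbook mapping between compliance entries and engineering constants; a sketch assuming the component ordering used in this section (11, 22, 33, 12, 13, 23) is:

```python
import numpy as np

def engineering_constants(S):
    """Young's moduli, shear moduli, and Poisson's ratios from a Voigt
    compliance matrix with component ordering (11, 22, 33, 12, 13, 23)."""
    S = np.asarray(S, dtype=float)
    return {
        "E1": 1.0 / S[0, 0], "E2": 1.0 / S[1, 1], "E3": 1.0 / S[2, 2],
        "G12": 1.0 / S[3, 3], "G13": 1.0 / S[4, 4], "G23": 1.0 / S[5, 5],
        "nu12": -S[1, 0] / S[0, 0],
        "nu13": -S[2, 0] / S[0, 0],
        "nu23": -S[2, 1] / S[1, 1],
    }
```

With a different Voigt shear ordering, the G entries simply permute accordingly.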
The elastic material constants of the UD RVE are summarized in Table 9-7, along with the experimental results. Most of the differences are within 5% of the experimental data, outperforming the originally proposed target of this work. Note that there is a relatively large difference between the experiment and the prediction for the shear moduli G12 and G13, but it is still within the target of 10% difference.
The current package assumes a circular fiber geometry, but fibers can be of an arbitrary shape. Also, the present circle-packing method considers a completely random fiber distribution, which limits the maximum fiber volume fraction to about 60%. To achieve a higher fiber volume fraction, the algorithm needs an extra function that can rearrange fiber locations in order to exceed the 60% limit.
The generated UD RVE mesh usually has a considerable number of voxel elements, more than 10^6. Hence, UD RVE computation is most suitable for a High Performance Computing cluster when the finite element method is used. Since the UD RVE is discretized by voxel elements, the voxel mesh is essentially a 3D image. A Fast Fourier Transform (FFT) based homogenization scheme is a favored algorithm that takes the voxel mesh as input and computes the overall stress and strain responses. If the FFT-based homogenization method is adopted, one can compute RVE elastic responses using a single workstation.
In short, the UD RVE model establishes a convenient workflow that allows one to build a UD RVE with the desired fiber volume fraction. All predicted UD elastic properties either meet or exceed the project requirement of 10% difference.
Woven RVE
The woven RVE generation utilized TexGen, an open-source software that allows one to build a textile structure with any given pattern and fiber tow (or yarn) geometry. In the invention, the woven CFRP has a twill weave. The minimum repeating unit of the twill weave includes four warp and four weft fiber yarns. The woven RVE generated is shown in
Once the mesh of the woven RVE is generated, it can be used in FE software to perform numerical homogenization to obtain its elastic material constants. Here, the matrix material has the same material properties as in the UD RVE. The fiber yarn property is assumed to be the same as UD CFRP with 65% fiber volume fraction. Due to the aforementioned limitation, the fiber yarn properties are computed using the analytical approach.
To analyze the elastic responses of the woven RVE, the stiffness matrix (the matrix form of the stiffness tensor under Voigt notation) needs to be computed using six orthogonal loading conditions, the same as for the UD RVE. The stiffness matrix can be computed conveniently once the compliance matrix is obtained. The only difference is that for the woven RVE all components of the compliance matrix listed in Eq. (9-8) need to be computed.
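The homogenization step just described can be sketched as follows: the six load cases supply the columns of the Voigt stiffness matrix, and the engineering constants are read off the inverted compliance matrix. This is a minimal sketch verified on an isotropic material, not the package's code; the helper names are illustrative.

```python
import numpy as np

def stiffness_from_load_cases(avg_stresses):
    """Assemble the Voigt stiffness matrix from six homogenization runs:
    column j holds the volume-averaged stress vector obtained for a unit
    applied strain in component j (all other strain components zero)."""
    return np.column_stack(avg_stresses)

def engineering_constants(C):
    """Invert the stiffness matrix and read engineering constants off the
    compliance matrix S (Voigt notation with engineering shear strains)."""
    S = np.linalg.inv(C)
    E1, E2, E3 = 1 / S[0, 0], 1 / S[1, 1], 1 / S[2, 2]
    G23, G13, G12 = 1 / S[3, 3], 1 / S[4, 4], 1 / S[5, 5]
    nu12 = -S[1, 0] * E1
    return E1, E2, E3, G23, G13, G12, nu12
```

As a sanity check, feeding in the stiffness matrix of an isotropic solid recovers its Young's modulus, shear modulus, and Poisson ratio exactly.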
The computed effective elastic properties of the woven RVE are listed in Table 9-8 below. All predictions met the proposed 10% difference target compared with experimental data.
The advantage of using a woven RVE numerical model is that various microstructure uncertainties can be addressed in the RVE model and quantitative analyses can be done to understand the effect of those uncertainties. In this work, uncertainty quantification for woven CFRP is introduced for the first time. The uncertainty quantification allows one to address uncertainties resulting from various manufacturing processes, such as pre-forming and curing. By giving a quantitative measurement of the uncertainty effect, it is possible to link the manufacturing process to the final CFRP performance, which is an important part of the ICME process. Here, the woven RVE is also used to examine the effect of three woven microstructure uncertainties as shown in
The effect of yarn angle is studied by constructing woven RVEs with various yarn angle α shown in
From
Besides yarn angle, the effects of yarn fiber volume fraction, denoted as Vf, and yarn fiber misalignment are also investigated. The results in Table 9-8 consider neither uncertainty, meaning the yarn material is homogeneous, which is usually not the case for real manufactured material due to manufacturing process variations. As shown in
Moreover, each voxel element contains a local material orientation that aligns with the yarn center line for homogeneous material. Fiber misalignment is considered as the deviation from perfect alignment direction. Shown in
Angles θ (0°≤θ≤90°) and Φ (−180°≤Φ≤180°) are used to establish the misaligned fiber direction {right arrow over (f)}1; {right arrow over (f)}1, {right arrow over (f)}2, and {right arrow over (f)}3 represent the transversely isotropic material frame accounting for fiber misalignment. The equations for calculating {right arrow over (f)}1, {right arrow over (f)}2, and {right arrow over (f)}3 are given below:
For fiber misalignment, θ and φ follow a Gaussian distribution by letting the mean
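Since the patent defines its own equations for the triad, the following is only a plausible sketch under an assumed spherical-coordinate convention: sampled angles θ and φ define the misaligned fiber direction f1, and f2, f3 complete an orthonormal frame by projection and a cross product.

```python
import math

def misaligned_triad(theta_deg, phi_deg):
    """Build an orthonormal material triad (f1, f2, f3) from the zenith
    angle theta and azimuth phi of the misaligned fiber direction.
    NOTE: the spherical-coordinate convention below is an assumption for
    illustration; the patent gives its own equations for f1, f2, f3."""
    t, p = math.radians(theta_deg), math.radians(phi_deg)
    f1 = (math.cos(t), math.sin(t) * math.cos(p), math.sin(t) * math.sin(p))
    # complete the frame: project the nominal transverse axis e2 off f1
    e2 = (0.0, 1.0, 0.0)
    d = sum(a * b for a, b in zip(e2, f1))
    f2 = tuple(a - d * b for a, b in zip(e2, f1))
    n = math.sqrt(sum(a * a for a in f2))
    f2 = tuple(a / n for a in f2)
    # f3 = f1 x f2 closes the right-handed triad
    f3 = (f1[1] * f2[2] - f1[2] * f2[1],
          f1[2] * f2[0] - f1[0] * f2[2],
          f1[0] * f2[1] - f1[1] * f2[0])
    return f1, f2, f3
```

In a UQ loop, θ and φ would first be drawn from the Gaussian distributions described above before calling this function for each integration point.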
In summary, woven RVE modeling provides a straightforward numerical analysis tool for studying woven CFRP mechanical properties, with all predictions within a 10% difference compared to test data. Moreover, the test cases on woven uncertainty parameters illustrate the microstructural effects in woven CFRP. Woven UQ provides a convenient numerical solution for evaluating possible uncertainties caused by different manufacturing processes.
Reduced Order Modeling of UD CFRP
The aforementioned UD package is able to generate a UD RVE in a voxel mesh and allows the user to analyze the mechanical responses of UD CFRP. However, the high computational cost associated with the fine voxel mesh requires reduced order model (ROM) techniques to achieve 1) faster computation of RVE responses; and 2) linking the UD RVE to a large-scale part-level model for part performance prediction with experimental validation (within 10% difference).
A recently proposed reduced order modeling method, namely self-consistent clustering analysis (SCA), is a promising method for building a ROM for an arbitrary voxel mesh, including the UD RVE. It is based on the FFT homogenization scheme, in which the strain tensor at each voxel is treated as a combination of the overall imposed strain εMacro and a polarization strain {tilde over (ε)}, as shown in Eq. (9-14) below
ε(X)={tilde over (ε)}+εMacro (9-14)
The above equation, also known as the Lippmann-Schwinger equation, allows one to solve for the local strain response ε(X) when εMacro is fixed. This is the basis of the FFT homogenization method, which is time consuming since the evaluation happens for all voxel elements. Eq. (9-14) can also be written as Eq. (9-15) below
εMacro−ε(X)−∫ΩΓ0(X,X′):[σ(X′)−C0:ε(X′)]dX′=0,X∈Ω (9-15)
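A minimal 1D scalar analogue of this fixed-point solution (the "basic scheme" of FFT homogenization) can be written in a few lines. The two-phase bar, the midpoint reference medium C0, and the convergence tolerance are illustrative assumptions; the full method operates on strain tensors over a 3D voxel grid.

```python
import numpy as np

def fft_homogenize_1d(C, eps_macro=1.0, tol=1e-10, max_iter=500):
    """Basic fixed-point scheme for the 1D scalar Lippmann-Schwinger
    equation: eps = eps_macro - Gamma0 * [(C - C0) eps], evaluated in
    Fourier space where Gamma0 is diagonal (1/C0 at nonzero frequency)."""
    C = np.asarray(C, dtype=float)
    C0 = 0.5 * (C.min() + C.max())       # reference medium
    eps = np.full_like(C, eps_macro)
    for _ in range(max_iter):
        tau_hat = np.fft.fft((C - C0) * eps)   # polarization stress
        eps_hat = -tau_hat / C0
        eps_hat[0] = eps_macro * C.size        # enforce the mean strain
        eps_new = np.real(np.fft.ifft(eps_hat))
        if np.max(np.abs(eps_new - eps)) < tol:
            eps = eps_new
            break
        eps = eps_new
    return np.mean(C * eps) / eps_macro        # effective modulus
```

For a 50/50 two-phase bar the scheme recovers the exact series (harmonic-mean) modulus; for example, phase moduli 1 and 3 give an effective modulus of 1.5.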
Liu et al. proposed a reduced order modeling approach by re-discretizing the voxel mesh into several clusters. Assuming the original voxel mesh contains N elements, the mesh can be compressed into K clusters, where N>>K. Eq. (9-15) is reformulated as Eq. (9-16) as shown below.
Eq. (9-16) can be easily solved using Newton's method. Since N>>K, Eq. (9-16) is a much smaller system to solve than Eq. (9-15).
To apply SCA to UD RVE, the first step is to build the UD RVE database. This involves two steps:
-
- 1. Compress the original RVE from a voxel mesh into clusters.
- 2. Compute the interaction tensor DIJ between all cluster pairs.
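Step 1 above is, at its core, an unsupervised clustering of per-voxel response data. A plain Lloyd's k-means sketch follows; in the actual SCA offline stage the features would be the elastic strain concentration data gathered from DNS/FFT runs, and the deterministic seeding used here is an illustrative choice.

```python
import numpy as np

def kmeans_compress(features, K, n_iter=100):
    """Plain Lloyd's k-means: compress N per-voxel feature vectors into
    K clusters and return a cluster label for every voxel (SCA database
    step 1 applies this to elastic strain concentration data)."""
    features = np.asarray(features, dtype=float)
    N = features.shape[0]
    # simple deterministic seeding: K points spread over the data set
    centers = features[np.linspace(0, N - 1, K).astype(int)].copy()
    labels = np.full(N, -1)
    for _ in range(n_iter):
        # assign each voxel to its nearest cluster center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # update each center to the mean of its members
        for k in range(K):
            members = features[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels, centers
```

Each voxel ends up labeled with one of the K clusters, which is exactly the compressed representation the interaction tensors DIJ are then computed over.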
Once the RVE is compressed, each voxel will be labeled with a cluster. This is illustrated in
Once the UD database is built, Eq. (9-16) is solved to compute the stress and strain responses in each cluster when an external loading is given. This ROM, hereinafter referred to as UDSCA, can be used to compute the elasto-plastic responses of the UD RVE in a timely fashion. A numerical verification of UDSCA is performed as shown in
For UD 2-scale concurrent modeling, it follows the schematic shown in
UD Off-Axial Coupon Tensile Concurrent Modeling
Next, UDSCA is applied to a coupon off-axial test model to perform concurrent multiscale modeling. For a realistic representation of the epoxy matrix, a paraboloid yielding surface is applied, where the tension and compression curves are extracted from
Through the coupon test cases, two important problems are addressed:
-
- (1) Computing material microstructure evolution on-the-fly by realistic RVE.
- (2) Prediction of CFRP part performance using the multiscale method.
For the coupon model, exact geometry from NIST is used, as shown in
The simulation took 2560 CPU hours to complete. The stress vs. strain curve in the y-direction is computed and compared with the experimental result, as shown in
In addition,
UD Crash Test Concurrent Modeling
The UD crash test setup is shown in panel (a) of
The impactor force vs. time is shown in panel (b) of
UD Dynamic 3-Pt Bending Concurrent Modeling
The UDSCA is also applied to the UD hat-section 3-pt bending model. The model is shown in
After a total displacement of s, failure in the UD laminae is observed. The peak load and peak impactor acceleration are reported in Table 9-13, where the difference is within 10% compared to test data.
Reduced Order Modeling of Woven CFRP
A 3-scale concurrent modeling scheme for the woven RVE has been established. The scheme is illustrated in
The geometry details of the woven RVE are shown in
A comparison between 2-scale and 3-scale single element simple shear tests is performed. Stress and strain results are shown in
If the yarn nonlinearity (such as plastic behavior) is not of interest, it is also possible to replace the UD RVE with a set of elastic constants of UD material with a fiber volume fraction of 60%. This reduces the 3-scale concurrent model to a 2-scale concurrent model for the woven composite, where only the matrix is modeled as an elasto-plastic material. An orthogonal woven biaxial tension test shows almost linear stress and strain curves. Hence, a 2-scale concurrent modeling of the woven bias tension simulation is performed. The test setup is shown in
Benefits Assessment
The UD and woven CFRP RVE modeling packages provide alternative solutions for investigating CFRP mechanical properties. They can be applied to different constituents to predict the elastic stiffness tensors of UD and woven composites. The UQ function of the woven RVE allows one to link manufacturing process parameters to final product performance.
Moreover, the UD and woven concurrent multiscale modeling provides an accurate and efficient prediction of part-level product performance. It can be applied to a virtual verification platform where the concept design is evaluated. It allows optimization of the new design and can significantly reduce the number of costly experiments. The potential clients of this technology can be broad, as long as there is a need for developing new composite applications.
In the multiscale modeling work, it is understood that microstructure plays an important role in modeling CFRP materials. An efficient reduced order modeling method, SCA, is introduced to integrate the CFRP microstructure (UD and woven) into the part-level model to predict structural performance.
The next step will involve investigation of the modeling of UD dynamic problems, such as the UD hat-section crash and dynamic 3-pt bending. Current models suffer from numerical instability due to the high loading rate. It is expected that different stabilization methods will be used to improve the concurrent scheme for better accuracy.
4. Stochastic Multi-Scale Characterization
Our research enables investigating the variability of part properties and behavior as a function of uncertainty sources at multiple length-scales and, subsequently, identifying the most important uncertainty sources that should be monitored during manufacturing. The developed methods and tools enable modeling spatiotemporally varying uncertainty sources and, additionally, coupling structural and material-related uncertainties across different length-scales. We achieve these by introducing the Top-down sampling method that builds nested multi-response Gaussian processes to parsimoniously quantify the random fields (RFs) and, hence, the underlying physical uncertainty sources. Our approaches can also be easily used to conduct sensitivity analyses for dimensionality reduction, i.e., identifying the most important uncertainty sources.
Compared to prior model-based UQ research, the UQ study of UD composites in the invention is image-based and microstructure-oriented. Two sources of uncertainty are considered: fiber waviness and fiber spatial distribution, both of which can be characterized from microscopic images provided by Ford. Machine learning and applied statistics methods are utilized to develop image analysis tools that extract information about the variations of the uncertainty sources, and generative statistical models are constructed for generating realistic random samples with variations mimicking our observations from the image data. For fiber spatial distribution, tree regression is used for image characterization and a hierarchical nonparametric sampling method is developed to sample realizations from a nonstationary and nonhomogeneous RF. The local fiber waviness is obtained from images through a specially designed segmented regression algorithm, and new random samples are generated via a frequency-domain time series analysis approach. Finally, the joint sampling of the two quantities, in which spatial constraints exist, is discussed and the corresponding codes are implemented.
The developed computational method and tools are applicable to many material systems and the corresponding multiscale material simulators. We have demonstrated their effectiveness in modeling uncertainty sources in unidirectional and cured woven composites. Our computational methods and tools can be validated against experimental results once they are available. In the case with cured woven composites, we demonstrate how various uncertainty sources such as yarn angle and fiber misalignment, which are introduced at, respectively, mesoscale and microscale, can affect the part performance during operational conditions. Our results indicate that, even in linear analyses, such uncertainty sources could have significant impacts on the results. With the UD UQ tools, random samples that represent the variations in the real UD material can be generated for further computational study, including their impact on material properties and part performance.
Our contributions are the first to investigate uncertainty in multiscale material simulations, which allows the systematic study of the effect of uncertainty in order to (i) engineer more reliable materials, and (ii) reduce manufacturing costs by monitoring only the main uncertainty sources. Composite vehicle components can be optimized considering the impact of uncertainty, yielding a safer yet lighter design.
The Gaussian Process modeling tool developed during the project has been generalized into a user-friendly graphical user interface. The advantage of this tool is twofold: (i) the simple yet complete user interface allows engineering teams company-wide to benefit from this powerful tool, and (ii) Gaussian process models can reduce the effective simulation turn-around time from days to seconds, enabling the use of these models for uncertainty quantification and propagation purposes as well as design optimization.
Uncertainty sources are generally categorized as aleatory and epistemic. While the former uncertainty source is inherent to the system (and hence irreducible), the latter is generally due to lack of knowledge or data, and may be reduced by conducting more simulations, experiments, or in-depth studies. In the case of materials, both sources are present and may be introduced in the design and constituent selection stages, manufacturing processes, or operation. Such uncertainties manifest as, e.g., mechanical (e.g., Young's modulus, Poisson ratio, yield stress, damage evolution parameters, etc.) or geometrical (e.g., reinforcement distribution, fiber misalignment in composites) variations. To elaborate more on material uncertainty, we take woven fiber composites as an example. These materials have been increasingly used in aerospace, construction, and transportation industries because of their superior properties such as high strength-to-weight ratio, non-corrosive behavior, enhanced dimensional stability, and high impact resistance. Woven fiber composites possess, as illustrated in
Fibrous composites have been previously investigated to determine how sensitive their properties and performance are to uncertainty. The focus of these works, however, has not been placed on rigorously modeling the uncertainty sources and statistically propagating their effects across multiple scales. For instance, modeling spatial variations via RFs, connecting them across different spatial scales, and investigating stochastic simulations are often neglected. Savvas et al. studied the necessary RVE size as a function of spatial variations of fiber volume fraction and yarn architecture. Their research showed that the RVE size should increase at higher fiber volume fractions. They also concluded that the mesoscale RVE size is more affected by fiber orientation than waviness. Their further studies illustrated that geometrical characteristics (i.e., the shape and arrangement of the fibers) and the material properties (Young's moduli of the constituents) affect the homogenized response in UD composites quite significantly (with the former being more important). The variations were shown to decrease as the number of fibers and RVE size increased. Average axial and shear stiffness constituted the response in these studies. Vanaerschot et al. studied the variability in composite materials' properties and concluded that the stiffness in the mesoscale RVE is affected by the load orientation and, additionally, it significantly decreases as the fiber misalignment increases. Hsiao and Daniel experimentally and theoretically investigated the effect of fiber waviness in UD composites and demonstrated that it decreases the composite's stiffness and strength under uniaxial compression loading. Komeili and Milani devised a two-level factorial design at the mesoscale to study the sensitivity of orthogonal woven fabrics to the material properties and yarn geometry.
They illustrated that, based on the applied load, these parameters could have a significant effect on the global response (i.e., reaction force). A similar sensitivity study based on Sobol's indices demonstrated that the friction coefficient and yarn height significantly affect the macroscale mechanical response of interest in dry woven fabrics. In these prior studies, yarn properties are assumed to be spatially homogeneous and fiber misalignment is neglected.
To address the shortcomings of the prior works on UQ in woven composites, we employ RFs which are collections of random variables indexed in either time or space. We introduce the Top-down sampling method that builds nested RFs by treating the hyperparameters of one RF as the responses of another RF. This nested structure allows us to model non-stationary and C0 (i.e., continuous but not differentiable) RFs at fine length-scales (i.e., mesoscale and microscale) with a stationary and differentiable RF at the macroscale. We motivate the use of multi-response Gaussian processes (MRGPs) to parsimoniously quantify the RFs and conduct sensitivity analyses for dimensionality reduction. The resulting approach is non-intrusive (in that the computational models need not be adapted to account for the uncertainties) and can leverage statistical techniques (such as metamodeling and dimensionality reduction) to address the considerable computational costs of multiscale simulations.
For UQ of UD composites, we focus on two microstructural uncertainty sources that can be captured by imaging techniques: fiber distribution and fiber waviness. Microstructure images of UD plates are taken at Ford, from which we measure these two quantities of interest (QoIs) and then model their variations with statistical methods. We address the challenge of simulating a nonstationary and nonhomogeneous RF for fiber distribution modeling by introducing a hierarchical nonparametric statistical model. For fiber waviness, for which the image data are extremely limited, assumptions such as stationarity are made and a time series approach is applied to generate realistic samples from the underlying distribution. The two methods are integrated into a data-driven sampling algorithm that can simultaneously simulate the spatial distributions of the two QoIs for further computational mechanics analysis.
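The frequency-domain time series idea for waviness can be sketched as follows: random phases are drawn against a prescribed power spectrum and inverse-transformed to produce a stationary, zero-mean deviation profile along the fiber. The spectrum shape and function name are illustrative assumptions, not the project's calibrated model.

```python
import numpy as np

def sample_waviness(n, power, seed=0):
    """Generate a stationary random series (e.g. fiber centerline deviation
    sampled along the fiber length) by pairing random phases with a
    prescribed one-sided power spectrum `power` (length n//2 + 1) and
    inverse-transforming back to the spatial domain."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(np.asarray(power, dtype=float))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(amp))
    spec = amp * np.exp(1j * phases)
    spec[0] = 0.0                  # zero-mean deviation from the nominal path
    return np.fft.irfft(spec, n=n)
```

In practice the spectrum would be estimated from the segmented-regression waviness measurements, so that generated profiles mimic the variation observed in the micrographs.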
The developed methods and tools are applied to fiber composites which have been increasingly used in aerospace, construction, and transportation industries due to their superior performance. Our contributions, hence, have far reaching impacts on various sectors of the economy.
Multiscale UQ and UP with Application to Cured Woven Composites
Our approach for multiscale UQ and UP has two main stages: Intra-scale UQ and inter-scale UP. We start by identifying the uncertainty sources at each scale and modeling them via RFs where one RF is associated with each structure realization. We employ RFs with sensible (i.e., physically interpretable) parameters for three main reasons: (i) To couple uncertainty sources across length-scales and enable their propagation from lower to higher scales, (ii) To connect the most important parameters of the RFs to the features of the material system and hence identify the dominant uncertainty sources in a physically meaningful way, and (iii) To allow for a non-intrusive UQ procedure by directly using the RFs' outputs in the multiscale FE simulations (instead of adapting the FE formulations for UQ and UP). Due to these reasons, we use the best linear unbiased predictor (BLUP) representation of multi-response Gaussian processes. MRGPs enable sensible characterization of uncertainty sources, are flexible and computationally efficient, and can be easily trained via available data.
At this point, the dimensionality in the UQ and UP tasks has been reduced from the number of degrees of freedom in the multiscale simulation to the few hyperparameters of the MRGP at the coarsest scale. However, depending on the material system and quantities of interest, generally not all the hyperparameters need to be considered in the UP process. Hence, further dimensionality reduction can be achieved by identifying the dominant uncertainty sources and, equivalently, the corresponding RF parameters through, e.g., sensitivity analysis.
The second stage of our approach starts by replacing the nested simulations at fine scales via inexpensive but accurate metamodels (aka surrogates or emulators) to decrease the computational costs of a single multiscale simulation from hours (or even days) to minutes. The choice of the metamodel, its inputs, and its outputs depend on the nature of the FE simulation. Finally, the uncertainty at the highest scale is quantified by propagating the uncertainty from all the finer scales in the UP process. During UP, various multiscale simulations are conducted where for each simulation one realization of the spatially varying quantities is used in the multiscale material model. To generate each of these realizations, we introduce the Top-down sampling approach where realizations are assigned to the spatially varying parameters from the coarsest scale to the finest scale in the material system. This sampling strategy enables modeling (i) non-stationary and C0 (i.e., continuous but not differentiable) spatial variations at the fine scales, and (ii) partial correlations between the various uncertainty sources within and across scales. Although the top-down sampling method can be integrated with any analytical RF, we have chosen MRGPs since they are sufficiently flexible and possess a few hyperparameters which are all physically interpretable. Additionally, other RFs can sometimes be converted into GPs upon appropriate transformations. Our approach is demonstrated for a composite with two length-scales in
Multi-Response Gaussian Processes for Uncertainty Quantification
MRGPs are widely popular in RF and surrogate modeling and have been used in a wide range of applications including UQ, machine learning, sensitivity analyses of complex computer models, Bayesian optimization, and tractable Bayesian calibration. For an RF with q outputs y=[y1, . . . , yq]T and the field (e.g., spatial or temporal) inputs x=[x1, . . . , xd]T, the BLUP representation of an MRGP with constant prior means reads as:
y˜GPq(β,c(x,x′)), (9-17)
where GPq represents a q-dimensional Gaussian process, β=[β1, . . . , βq]T is the vector of the responses' means, and c(x,x′) is a parametric function that measures the covariance between the responses at x and x′. One common choice for c(x,x′) is:
c(x,x′)=Σ⊗ exp{−Σi=1d 10ωi(xi−xi′)2}
where Σ is a q×q symmetric positive definite matrix that captures the marginal variances and the covariances between the outputs, d is the dimensionality of the field, ω=[ω1, . . . , ωd]T are the so-called roughness or scale parameters that control the smoothness of the RF, and ⊗ is the Kronecker product. Note that the dimensions of β and Σ depend on q, while that of ω depends on d. The parameters β, Σ, and ω are called the hyperparameters of an MRGP model and, collectively, enable it to model a wide range of random processes:
-
- The mean values of the responses over the entire input space are governed by β.
- The general correlation between the responses (i.e., yi and yj, i≠j) over the input space is captured by the off-diagonal elements of Σ.
- The variations around the mean for each of the responses are controlled by the diagonal elements of Σ.
- The smooth/rapid changes of the responses across the input space are controlled by ω.
In case some experimental data are available, all the hyperparameters of an MRGP model can be estimated via, e.g., the maximum likelihood method. Otherwise, as in this work, expert or prior knowledge can be used to adjust these parameters and model a spatially varying quantity. Once these hyperparameters are determined, generating realizations from an MRGP model is achieved through a closed-form formula.
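Generating one such realization can be sketched with the separable covariance Σ⊗R and a Cholesky factorization; the kernel follows the covariance function above, while the jitter term, interface, and parameter values are illustrative assumptions.

```python
import numpy as np

def sample_mrgp(X, beta, Sigma, omega, seed=0):
    """Draw one realization of a q-output MRGP on field points X (n x d)
    with the separable covariance Sigma (x) R, where
    R_ij = exp(-sum_k 10^omega_k (x_ik - x_jk)^2).
    Sampling uses the standard closed form y = mean + L z, with L the
    Cholesky factor of the full covariance matrix."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    n, d = X.shape
    q = len(beta)
    diff2 = (X[:, None, :] - X[None, :, :]) ** 2
    R = np.exp(-(diff2 * (10.0 ** np.asarray(omega))).sum(axis=2))
    K = np.kron(np.asarray(Sigma, dtype=float), R) + 1e-10 * np.eye(n * q)
    L = np.linalg.cholesky(K)
    z = np.random.default_rng(seed).standard_normal(n * q)
    y = np.repeat(np.asarray(beta, dtype=float), n) + L @ z
    return y.reshape(q, n)      # one row per response
```

For example, a two-response field (say, fiber volume fraction and fiber modulus along a yarn path) would use a 2x2 Σ whose off-diagonal entry encodes their cross-correlation.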
Top-Down Sampling for Uncertainty Propagation
To carry out one multiscale simulation, material properties must be assigned to all the IPs at all scales where the IP properties at any scale depend on an RVE at the next finer scale (this RVE itself has many IPs). Since these properties depend on the uncertainty sources (or, equivalently, on the RFs), the latter must be coupled across the scales. Recall that, due to the multiscale nature of the structure, the number of RFs significantly increases at the fine scales because we associate an RF to each structure realization.
Having used RFs whose parameters are physically sensible and can be directly linked to the uncertainty sources, this cross-scale coupling is straightforward and can be achieved with top-down sampling where the outputs of the MRGP at each IP of a particular scale serve as the hyperparameters of the MRGP of the RVE associated with that IP. This process constitutes nested RFs. To assign values to the IP parameters in the entire multiscale structure, this approach starts from the coarsest or top scale and hence the name top-down sampling.
While the Top-down sampling method works with any parametric RF representation (e.g., PCE or KL expansion), we highly recommend employing compact representations that include a few hyperparameters. This is mainly because the number of hyperparameters at the coarse scales increases rapidly as the number of spatially varying quantities increases at the fine scales. For instance, assuming three (two) quantities change spatially in a 3D microstructure, an MRGP with 12 (8) hyperparameters is required. To model the spatial variations of these 12 (8) hyperparameters at the mesoscale, an MRGP with 93 (47) hyperparameters is required.
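The counting rule behind these numbers is simply q means, q(q+1)/2 distinct entries of Σ, and d roughness parameters:

```python
def mrgp_hyperparameter_count(q, d):
    """Number of hyperparameters of an MRGP with q responses over a
    d-dimensional field: q entries of beta, q(q+1)/2 distinct entries of
    the symmetric matrix Sigma, and d roughness parameters omega."""
    return q + q * (q + 1) // 2 + d
```

For a 3D field (d = 3) this reproduces the counts quoted above: 3 spatial quantities need 12 hyperparameters, 2 need 8, and modeling those 12 (8) hyperparameters as mesoscale responses needs 93 (47).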
Case Study on Cured Woven Fiber Composites
We now follow the steps of our approach to quantify the macroscale uncertainty in the elastic response of a cured woven composite as a function of spatial variations in seven uncertainty sources: fiber volume fraction and misalignment, matrix and fiber modulus, and yarns' geometry parameters (i.e., yarn angle, height, and spacing). As illustrated in panel (a) of
The geometry of these woven plies is obtained via the bias-extension simulation of woven prepregs using the non-orthogonal constitutive preforming model. While the bottom of the sample is clamped, the other end is pulled by 1 mm to generate the bias tension deformation. In the macroscale simulations, 3D solid continuum elements are employed to discretize the structure. As our focus is on UQ and UP, at this point we have assumed that only elastic deformation occurs.
Uncertainty Sources
Longitudinal fiber and matrix moduli, Ef and Em, are the first two uncertainty sources. Given the moduli, the yarn material properties primarily depend on two parameters: fiber volume fraction (in yarn), v, and fiber misalignment. While in most previous works v is postulated to be spatially constant, in practice, it varies along the yarn path particularly where yarns have compact contact. Consequently, our next uncertainty source arises from the spatial variations of v which starts from the microscale and propagates to mesoscale and macroscale. In this work, we have assumed that 45%≤v≤65% based on our material system.
During the manufacturing process, the fibers in the yarn deviate from the ideal orientation and render the cross-section of the yarn heterogeneous and anisotropic. These deviations result in fiber misalignment which is different from the concept of fiber waviness in that a fiber can be perfectly waved without misalignment. As illustrated in
In modeling mesoscale woven composites, the yarn architecture is often presumed to be perfect, where the yarn angle, α, is set to 90° and the yarn height, h, and spacing, s, are fixed to their nominal values. These assumptions do not hold in practice due to the large in-plane shear deformation during the preforming process and manufacturing imperfections. Hence, we also investigate the effect of the spatial variations of the woven RVE architecture (α, h, and s) on the macroscopic properties.
Lastly, we note that in our example the deterministic spatial variation of α in a perfectly manufactured composite is, as opposed to the other parameters (i.e., [v, φ, θ, h, s]), available from the preforming process simulation. This deterministic variation is used as the prior mean (β in Eq. (9-17)) of the spatial distribution of α, while for the other six parameters the nominal values are employed as the (spatially constant) prior mean. For all seven parameters, the posterior spatial variations are stochastic.
We employ the computational homogenization technique for modeling the multiscale woven sample, where the material property at any length-scale is calculated through the homogenization of an RVE at the next lower scale. At the microscale, the RVEs include 300×300×60 voxels (42 μm×42 μm×8.4 μm) and the fibers have a diameter of 7 μm. The simulations are elastic and periodic boundary conditions (PBCs) are employed. It is assumed that the fibers and the matrix are well bonded and there are no voids. To obtain the stiffness matrix, C, of the UD RVE, six orthogonal loading states are applied (i.e., only one of the εxx, εyy, εzz, εxy, εxz, and εyz strain components is applied in each case). Since the simulations are elastic, C mainly depends on the volume fraction, v.
At the mesoscale, the open source software TexGen is used to create the geometry and mesh for the 2×2 twill woven RVE with 8 yarns. The space between the yarns is filled with matrix and voxel meshes are used to discretize the RVE where each voxel is designated to either a yarn or the matrix. To balance cost and accuracy, we have used a voxel mesh with 625000 elements. To reduce the computation costs, PBCs are employed throughout.
The nominal properties of carbon fibers and epoxy resin were taken from manufacturer's data (see Table 9-14). The resin is isotropic, and its material properties are taken from pure epoxy. Yarns with well-aligned fibers are treated as transversely isotropic. With fiber misalignment, however, yarns are not transversely isotropic since the material frame across the IPs on their cross-section is non-uniformly distributed. In this case, the micro-plane triad model is employed to account for fiber misalignment by defining an orthotropic micro-triad, {right arrow over (ƒ)}k, for each IP of the yarn. This triad is related to the local frame, {right arrow over (g)}k (see
where |⋅| and ∧ denote, respectively, the norm of a vector and the cross product. As for the local frame {right arrow over (g)}k, it is automatically generated by TexGen for each IP (each voxel at the mesoscale) once the woven RVE is discretized. We note that the stiffness matrix at each yarn material point is obtained via the UD-RVE homogenization.
To link the mesoscale and macroscale, the stress-strain relations for effective elastic material properties of woven RVE are required. This relation can be written in terms of the symmetric mesoscale stiffness matrix as:
Top-Down Sampling, Coupling, and Random Field Modeling of Uncertainty Sources
To help clarify the descriptions, we first introduce some notation. We denote the three scales with numbers: 1→Macro, 2→Meso, 3→Micro. Superscripts and subscripts denote, respectively, scales and IPs. Variables with a bar represent quantities averaged over all the IPs at a particular scale. For instance, vi1 denotes the fiber volume fraction assigned to the ith IP at the macroscale.
represents the average misalignment (zenith) angle at the mesoscale for a woven RVE.
The uncertainty sources in our composite are summarized in Table 9-15. While some sources only vary across different structures (i.e., across realizations), others possess an extra degree of variation in that they also change within structures.
Assuming the eight tows in a woven RVE are statistically independent and the spatial variations within them originate from the same underlying random process, a total of 12 hyperparameters are required to completely characterize the spatial variations of [θi2, φi2, vi2] by an MRGP (see Eq. (9-17)): three mean values (β=[βv, βφ, βθ]T), six variance/covariance values for Σ ([σvv2, σφφ2, σθθ2, σvφ2, σvθ2, σφθ2]), and three roughness parameters (ω=[ωx, ωy, ωz]T, where xyz denotes the Cartesian coordinate system at the mesoscale). Once these parameters are specified, the spatial coordinates of the IPs in a woven RVE can be used to assign realizations of v, φ, and θ to them. For each IP at the macroscale, however, these 12 hyperparameters serve as some of the responses of the macroscale MRGP whose other responses correspond to [Ef
Dimension Reduction at the Mesoscale Via Sensitivity Analysis
By modeling the spatial variations via RFs, the dimensionality of the UQ and UP problem has decreased to the number of RF hyperparameters. Although this is a significant reduction, the considerable cost of multiscale simulations (even in the linear analysis) renders the UQ and UP process computationally demanding. To address this issue, we note that, depending on the property of interest, a subset of the uncertainty sources is generally dominant in physical systems. Since our composite undergoes an elastic deformation, we expect a small subset of the uncertainty sources (i.e., RF hyperparameters) to be important.
We conducted multiscale sensitivity analyses to determine which of the 12 hyperparameters of an MRGP model are the most important ones (and must be considered in UP) based on their impact on mesoscale material response. Our studies included changing one of the hyperparameters (while keeping the rest of them fixed) and conducting 20 simulations to account for the randomness. Then, if the variations in the homogenized response (effective moduli) were negligible, the hyperparameter was deemed insignificant and set to a constant thereafter. All the simulations in this section were conducted on a woven RVE with α=90°.
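The one-at-a-time screening just described can be sketched as follows. The function and parameter names, the 2% decision tolerance, and the toy `simulate` callable are illustrative assumptions; in the actual study each call is a multiscale FE simulation:

```python
import numpy as np

def oat_screening(simulate, nominal, perturbed, n_rep=20, tol=0.02, seed=0):
    """One-at-a-time screening of RF hyperparameters.

    simulate  : callable(hyperparams, rng) -> homogenized modulus (scalar)
    nominal   : dict of nominal hyperparameter values
    perturbed : dict mapping each name to the perturbed value to test

    A hyperparameter is flagged important if perturbing it alone shifts
    the mean homogenized response (over n_rep random replicates) by more
    than `tol` relative to the nominal mean.
    """
    rng = np.random.default_rng(seed)
    base = np.array([simulate(nominal, rng) for _ in range(n_rep)])
    important = {}
    for name, value in perturbed.items():
        pert = dict(nominal, **{name: value})  # change one, fix the rest
        resp = np.array([simulate(pert, rng) for _ in range(n_rep)])
        shift = abs(resp.mean() - base.mean()) / abs(base.mean())
        important[name] = bool(shift > tol)
    return important
```

Hyperparameters flagged `False` are the candidates to be held constant in the subsequent UP, which is how the 12-dimensional problem is reduced further.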
We found that the homogenized moduli are affected by neither the six covariance/variance values (i.e., [σvv2, σφφ2, σθθ2, σvφ2, σvθ2, σφθ2]) nor the three roughness parameters ω=[ωx, ωy, ωz]T. In the case of the average values (i.e., β), the average fiber volume fraction (
It is noted that since we are interested in the elastic response of the multiscale composite in
Replacing Meso and Microscale Simulations Via Metamodels
To further reduce the multiscale UQ and UP costs, we employ metamodels to replace the micro and mesoscale FE simulations corresponding to each macroscale IP. In particular, the metamodel captures the macroscale spatial variations of the stiffness matrix of the woven RVEs associated with the macroscale IPs as a function of yarn angle (α2), average volume fraction (
where ŷ=[ŷ1, . . . , ŷ30] and y=[y1, . . . , y30] are obtained from, respectively, the metamodel and FE simulations. The prediction error of each model is illustrated in
Graphical User Interface (GUI) for Optimal Sampling and Metamodeling
As metamodeling is a broadly applicable tool (also outside the field of stochastic multiscale modeling), two user-friendly graphical user interfaces have been developed: Optimal Latin Hypercube Sampling (OLHS) and Gaussian Process modeling. The design philosophy behind these tools is to be functional and complete, while being intuitive enough for novice users. Furthermore, the graphical interfaces have been developed using MATLAB GUIDE and can be run on any 64-bit computer under the Windows operating system.
As shown in panel (a) of
Results on Macroscale Uncertainty
Multiple macro simulations are conducted where θi1, vi1, Ef
Since in reality the uncertainty sources coexist, in case 8 we consider the effect of all the uncertainty sources and their correlation. Here, the individual variances (diagonals of macroscale MRGP, Σ) are the same as in cases 1 through 7 while the covariances are chosen to reflect the negative correlation between the fiber volume fraction and both yarn and fiber misalignment. To this end, we choose [σαv, σαθ, σvθ]=[−9, 3, −3] and set the rest of the off-diagonals of Σ to zero. We note that σαv is negative to model the increase in fiber volume fraction as the yarns get closer after preforming. σvθ is also negative to consider the decrease in misalignment angle in rich fiber regions.
Panel (a) of
To illustrate the effect of spatial variations on local behavior, we compare the average and standard deviation of the von-Mises stress field over the mid-section of the structure in panels (b)-(c) of
The highest variations among the simulations of a specific case are observed in case 8, where all the parameters change spatially and relatively, see panel (c) of
Image-Based Microstructural UQ of UD Composites
The UQ work for UD composites aims at statistical modeling and reconstruction of the material microstructural features, including (1) non-uniform fiber spatial distribution and (2) fiber waviness. We build our models based on microstructure images of UD plates taken at Ford Motor Company. Several machine-learning- and applied-statistics-based approaches are developed for image characterization, information retrieval, and generative model building. For the spatial fiber distribution, which exhibits both non-stationarity and non-homogeneity, we first model the data (image) via a tree-regression algorithm, and then a hierarchical nonparametric sampling approach is developed. The approach is completely data-driven, in the sense that no probability models are assumed and a part of a new sample is generated by resampling from the data.
Fiber waviness is the local orientation of the fiber bundles relative to the global direction of the fibers. Perfectly straight fibers have zero waviness everywhere; however, the transverse images taken from unidirectional fiber composite samples (
Fiber Distribution Modeling
By visually inspecting the distribution of fibers, which is represented by distribution of local volume fraction of fibers (panel (a) of
The first step to find such a statistic is to reduce the dimension of each sample image from approximately 450,000 pixels to a manageable size. To compress the data in a sample, we used a regression tree algorithm, i.e., a sample is represented by a tree-structured field. The locations of the splitting lines (nodes of the tree) are found by minimizing the integrated relative error (IRE), and between the separation lines the data are interpolated linearly. By setting the regression goal to IRE <5%, we normally obtain 200-300 nodes. Since the splitting lines separate the most distinct areas, their locations contain the information of non-stationarity: the denser the lines are, the more local curvature the area includes (panel (b) of
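A one-dimensional analogue of this tree-structured compression can be sketched as follows: nodes (splitting locations) are inserted greedily until the integrated relative error of the piecewise-linear interpolant drops below the 5% goal. The greedy split-at-worst-point rule and function name are simplifying assumptions for illustration; the patent's algorithm operates on 2D images:

```python
import numpy as np

def tree_compress_1d(y, ire_tol=0.05):
    """Recursively insert splitting nodes into a 1-D field until the
    integrated relative error (IRE) of piecewise-linear interpolation
    falls below ire_tol. Returns the sorted node indices."""
    n = len(y)
    nodes = {0, n - 1}

    def segment_error(i, j):
        # error of linear interpolation between the segment endpoints
        x = np.arange(i, j + 1)
        yhat = np.interp(x, [i, j], [y[i], y[j]])
        return np.abs(y[i:j + 1] - yhat).sum()

    total = np.abs(y).sum() + 1e-12
    while True:
        snodes = sorted(nodes)
        ire = sum(segment_error(a, b) for a, b in zip(snodes, snodes[1:])) / total
        if ire < ire_tol:
            return snodes
        # refine the worst segment at its point of largest deviation
        a, b = max(zip(snodes, snodes[1:]), key=lambda s: segment_error(*s))
        x = np.arange(a, b + 1)
        yhat = np.interp(x, [a, b], [y[a], y[b]])
        k = a + int(np.argmax(np.abs(y[a:b + 1] - yhat)))
        if k in nodes:  # cannot refine further
            return snodes
        nodes.add(k)
```

Applied to a V-shaped field, for example, the algorithm places a single interior node at the kink and terminates, since piecewise-linear interpolation is then exact: regions with more local curvature accumulate denser node sets, which is exactly the non-stationarity signature described above.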
The distribution, in terms of probability density estimates, of the inter-line distances is generated with all the samples. The first three plots of
This observation provides a way of generating random samples with similar levels of non-stationarity, which is sampling the locations of the splitting lines from the common distribution in panel (b) of
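The sample-generation step described above amounts to a nonparametric (bootstrap-style) resampling of splitting-line locations: new inter-line distances are drawn from the pooled empirical distribution until the domain is filled. The function name and the direct `choice`-based resampling are illustrative assumptions:

```python
import numpy as np

def sample_split_lines(pooled_distances, domain_length, rng):
    """Place new splitting lines by resampling inter-line distances
    from the pooled empirical distribution (nonparametric bootstrap).

    pooled_distances : 1-D array of observed inter-line distances
    domain_length    : extent of the new sample along one axis
    """
    lines, pos = [0.0], 0.0
    while True:
        pos += rng.choice(pooled_distances)  # draw from the common distribution
        if pos >= domain_length:
            break
        lines.append(pos)
    lines.append(domain_length)  # close the domain
    return np.array(lines)
```

Because the distances are drawn from the common distribution shared by all samples, the generated line pattern reproduces the same level of non-stationarity as the observed images without assuming any parametric model.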
Fiber Waviness Characterization
Characterization of fiber waviness from the transverse images is not as straightforward as the same task with fiber distribution, as the local curvature in the image, in terms of local slopes or angles, cannot be calculated directly from the pixel information (e.g., binary or grayscale values). Normally the characterization process would involve detecting each of the fibers in the image and calculating the angles accordingly. However, this approach fails in the invention because the fibers in the images are not fully shown, i.e., only partial fibers are observed, and some of the parts appear as just dots or small pieces, whose orientations are not individually measurable (see, for example, the binary image segment in
The idea behind this method is as follows: the relationship between the local angle and local slope is given by β=tan(α), where β is the slope and α is the angle, and the local slope is estimated by the slope of the regression line fitted to the points of a locally binarized segment of the original image. The challenging part is to build a valid linear model for the regression: the classic simple linear model assumes the error is normally distributed, and the well-known least-squares method is derived based on this assumption, which is not valid in our problem. We proposed our own regression algorithm customized for this case: under the assumption that the fiber pixel points and fiber locations are uniformly distributed, assume the origin of the coordinates is placed at the center of the image segment, and a regression-through-the-origin mean prediction is given by the function y=βx; the estimate of the slope β is given by:
where e is the set of residuals with respect to the regression line y=βx, and ƒ (assume there are n points in total) is the mapping between the residuals and the number of modes in the probability density estimate of the residuals. With this estimation, the trend of the fibers in the transverse direction is correctly captured (see
Fiber Waviness Modeling
The challenge associated with this task is the very limited number of images: effectively only one in total (
The resulting signal has a wave shape with varying amplitudes and frequencies. A natural characterization of such a signal is the periodogram. A periodogram is the estimate of the spectral density of a signal, which can be obtained by the discrete Fourier transform of a time or spatial series. Under assumption (2), it can be shown that the periodogram of a series converges in distribution to a sequence of independently and exponentially distributed random variables as the length of the series increases. The sampling algorithm is then constructed by (a) obtaining the periodogram of the signal, (b) generating a random periodogram by sampling from independent exponential distributions with parameters given by (a), and (c) using the inverse Fourier transform to obtain the new waviness sample from (b).
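Steps (a)-(c) above can be sketched directly with FFT routines. The function name, the random-phase construction in step (c), and the zero-mean convention are illustrative assumptions:

```python
import numpy as np

def resample_waviness(signal, rng):
    """Generate a new waviness realization with the same spectral
    content as `signal`, following steps (a)-(c) of the text."""
    n = len(signal)
    x = signal - signal.mean()
    # (a) periodogram: one realization of the spectral density estimate
    I = np.abs(np.fft.rfft(x)) ** 2 / n
    # (b) resample each ordinate from an exponential distribution whose
    #     mean equals the observed periodogram ordinate
    I_new = rng.exponential(np.maximum(I, 1e-15))
    # (c) attach random phases and invert back to the spatial domain
    phase = rng.uniform(0.0, 2.0 * np.pi, len(I_new))
    spec = np.sqrt(I_new * n) * np.exp(1j * phase)
    spec[0] = 0.0  # keep the new sample zero-mean
    return np.fft.irfft(spec, n)
```

Each call returns a statistically equivalent but distinct waviness profile, so a single measured signal can seed many random realizations for the downstream UQ study.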
Joint Sampling of Two Uncertainty Sources
The two algorithms above, one for fiber waviness and one for fiber distribution, are capable of modeling and sampling the respective QoI individually. However, when they are integrated into a joint sampling program, spatial constraints must be taken into account. For example, if we want to generate the spatial distributions of the two QoIs for a coupon simulation model simultaneously, the pattern of the fiber distribution at different cross-sections (i.e., cross-sections taken at different locations in the fiber longitudinal direction) should be similar but different, because the fibers are curved along the longitudinal direction. Therefore, combining independent realizations of the two sampling codes will not represent the real situation. In light of this observation, we developed a joint sampling algorithm for coupon models, from which realistic realizations can be generated.
Contrary to prior work that relies on random variables, we employ random fields to model the spatially varying uncertainty sources in multiscale materials such as cured woven composites. We introduce the top-down sampling method that builds nested random fields and, in turn, allows us to model non-stationary variations at fine length-scales (i.e., mesoscale and microscale). We motivate the use of multi-response Gaussian processes to parsimoniously quantify the random fields and conduct sensitivity analyses for dimensionality reduction. The resulting approach is non-intrusive and can leverage statistical techniques to address the considerable computational costs of multiscale simulations.
The computational demand of multiscale material simulations has been circumvented by the use of Gaussian random processes trained on a space-filling design. As computationally demanding simulations are ubiquitous among engineering problems, the developed user-friendly GUIs enable more engineers working on a wide variety of challenges to benefit from these powerful tools. The GUIs and the tools embedded in their source code are able to reduce effective simulation turn-around time from days to seconds, including high-dimensional problems of up to 50 input variables.
Image characterization techniques are developed to quantify the variations in UD uncertainty sources (fiber waviness and spatial distribution) and the corresponding statistical models are introduced to study the variability and sample realizations of random fields or random processes from the underlying distribution. Compared to existing work that often pre-assumes some parametric model to represent the uncertainty, our methodology fully utilizes the available microstructural image data and enables the systematic study of the uncertainties of UD composites in a real-world setting.
Uncertainty quantification involves the use of a wide range of computational and statistical tools, and the choice of the appropriate tool depends on the available information and resources. For example, for UQ of woven composites, in the absence of data, parametric RF models like Gaussian RFs are chosen and the associated top-down sampling approach for multiscale analysis is developed to study the impact of uncertainty; while for UQ of UD composites, images are given, hence the sampling algorithm is designed to mimic the information contained in the image data so that the random reconstructed samples are realistic.
The domain size in UQ is also very important. For instance, the chosen microscale and mesoscale RVEs in UQ of woven composites were sufficiently large, and so more uncertainty would have been observed in the response had we chosen smaller RVEs. Finally, we note that the underlying assumptions of any method (including ours) can be validated via experimental data. For instance, our choice of MRGPs for UQ in woven composites implies a normality assumption, which might not hold in other applications where the distributions are heavily skewed or appropriate transformations are not readily available. In such cases, other random field representations can be integrated with the top-down sampling approach at the expense of more computations.
In the future, we plan to gather experimental images from various structures at different spatial scales to further demonstrate the applications of our approach. Experimental data would additionally allow us to validate the normality assumption made on the marginal variations. Finally, since uncertainty is more important in nonlinear analyses of materials and structures, we plan to apply our approach to nonlinear problems such as plasticity.
For UQ of UD composites, after the variations in the uncertainty sources are quantified and corresponding sampling algorithms are developed, the next step is to conduct FEA simulations based on the generated samples and study the effect of the uncertainty sources. If they are proved to be influential to the properties of the UD composites, other multiscale models that utilize the UD properties should be modified to reflect the presence of uncertainties at the scale of UD. It will also be beneficial if some non-image-based, parametric models are developed for UQ of UD so that the level of uncertainty can be controlled by some parameters to make possible studies like sensitivity analysis and metamodeling.
5. Part-Level Molding and Model Validation
Part-scale preforming experimental validation in this section measures the prediction accuracy of the non-orthogonal prepreg material model and the multiscale preforming simulation method. It also provides guidance about the limitations of current simulation methods and possible solutions in the future. Finally, selection of various process parameters in preforming validation experiments leads to different part qualities. Observation and summary of the relation between process parameters and parts' quality can serve as an empirical rule to produce high-quality parts either for research or for real production application.
With the introduction of double curvature features, the double-dome validation experiment applied in this CIME project gives a trustworthy approach to quantitatively measure prediction accuracy of preforming models for 3D geometry parts.
This double-dome preforming experimental validation can be used for not only the models developed in the invention, but also future models for carbon fiber prepreg preforming simulation to determine their accuracy and application potential.
Double-dome benchmark geometry is used to validate the non-orthogonal material model and the multiscale preforming simulation method developed in the invention at part-scale. The double-dome geometry applied in the invention has a 3D shape and complex double curvature features at the size of common automobile parts. It is an ideal benchmark to quantitatively measure prediction accuracy of the preforming models for real part production. In validation, not only the final part shape, but also the yarn angle at different locations and the forming force, are compared. This is because yarn angle, which is an indicator of fiber orientation, significantly affects mechanical stiffness and strength of CFRP parts, while forming force is an indication of membrane stress that controls tow separation and breakage. Different process parameter combinations are tested in this benchmark validation in order to ensure that the models can work at various production conditions. Comparison between the non-orthogonal model and conventional orthotropic material models is also performed to show the improvement we achieved for preforming simulation in the invention.
The major technical target for this part-scale double-dome preforming experiment is to validate the prediction capability of the non-orthogonal material model and the multiscale preforming simulation method we developed in the invention, and to check whether the fiber orientation prediction from these two simulation approaches achieves the proposed 5% error. With successful establishment of this benchmark test and corresponding quantitative measure and validation criteria, we also aim to provide a widely accepted preforming simulation validation method for both academic and industrial researchers.
Preforming is a temperature varying process because of the utilization of hot prepreg sheets and cold/warm tools in the process. In the double-dome benchmark preforming validation experiments performed for the invention, supplied prepregs were first heated in an oven to around 70° C. and then placed in a press for preforming. The geometry of the double-dome punch and the binder are demonstrated in
Preforming simulation models, utilizing either experiment-based non-orthogonal material model or multiscale method, were established in LS-DYNA® using the dynamic explicit integration method. The simulation setup is illustrated
For the first set of validation, only one layer of prepreg with ±45° initial fiber orientation was preformed. The results from the experiment, simulation with experiment-based non-orthogonal material model, and simulation with conventional orthotropic material model, are compared. In simulation models, material properties were calibrated using uniaxial tension, bias extension and bending tests at 23° C., while the initial angle between the yarn direction and the global coordinates was defined as a material input property to identify the fiber layup. The simulation results in the upper-right quarter of
In the non-orthogonal model, yarn angle is defined as an output variable, while MAT_002 does not have the capability for direct visualization. For clarity, Table 9-18 compares the resulting shear angles at various locations obtained from the experiment and simulations. Again, it shows that the current model has improved the prediction accuracy. However, the fiber orientation (yarn angle) prediction error meets the proposed 5% target at only half of the locations, which is unsatisfactory.
The reason for the unsatisfactory fiber orientation prediction accuracy is that this non-orthogonal material model only utilizes experimental data from uniaxial tension (pure tension) and bias-extension (pure shear) tests. The coupling between tension and shear is neglected. For real prepregs, an increase of tension along the yarns will increase the contact force and friction force between warp and weft yarns, resulting in a shear resistance increase. This kind of mechanism is not simulated by the experiment-based non-orthogonal model. It is, however, captured by the multiscale preforming simulation method, where virtual material characterization is performed by experimentally calibrated mesoscopic RVE models that can be deformed to arbitrary strain. To demonstrate the improvement from multiscale modeling, its simulation result is compared against the one obtained from the previous non-orthogonal material model. Final prepreg geometry and yarn angle distribution results are demonstrated in panel (a) of
The punch force-displacement curves from the two simulation cases and the experiment are compared in panel (b) of
After single layer preforming validation with a ±45° initial fiber orientation and 6 yarn angle measuring locations, further validation with different measuring locations, different initial fiber orientations, and different numbers of prepreg layers (0°/90° one layer, ±45° one layer, and 0°/90°/±45° two layers) were performed. Because of project time limitations, only the simulation results from the experiment-based non-orthogonal model are compared with the experimental ones. The multiscale simulation method was not applied to these configurations.
Simulation and experiment results with different initial fiber orientations are depicted in
For warp and weft yarn angle distribution, results are shown in
The double-dome preforming benchmark test established can introduce complex double curvature features at the size of common automobile parts. Combined with the corresponding quantitative measurement of local prepreg temperature history, draw-in distance, local yarn angle, and forming force also developed in the invention, it serves as an effective experimental approach to validate the preforming simulation methods developed in the invention. Validation results indicate that the developed models can reach the proposed fiber orientation prediction error of less than 5% most of the time, guaranteeing the models' application potential.
Besides calibrating the models in the invention, this double-dome benchmark test can also serve as validation for preforming simulation models developed in the future or by other researchers, because it considers most of the process parameters and provides the most important criteria for the final parts' performance. As a result, this approach enables researchers in both academic and industrial fields to test their preforming models in a reliable way, so it motivates the development of accurate preforming simulation models that can help increase production and broaden application of advanced CFRPs, while benefiting environmental emission and fossil fuel control.
Moreover, the double-dome preforming tests performed in the invention provide important information about the production of high-quality parts. Temperature control of not only the heating oven but also the forming tools is essential for the resin to fully melt, yielding low prepreg deformation resistance and low prepreg surface interaction, which are the keys to smooth, defect-free final parts.
Despite the fact that this exact setup of the double-dome preforming validation experiment is difficult to commercialize directly, it facilitates commercialization of the preforming models developed in the invention by validating these models' prediction accuracy. The quantitative measurement approaches for local temperature, draw-in distance, local yarn angle, and forming force can be transferred to other research teams in an open-source form to establish a widely accepted preforming simulation validation standard, which can accelerate development of other high-accuracy preforming models. Temperature control and monitoring experience gained from this preforming experiment can be implemented into preforming presses for real production in the form of heated coolant and embedded thermocouples, to produce smooth and defect-free CFRP parts. In the part-level preforming model validation according to the invention, we established a double-dome benchmark test and corresponding quantitative measurement approaches and criteria. The benchmark tests validate the prediction capabilities of the preforming models we developed. Moreover, it provides a trustworthy approach to test models from other researchers, and gives insightful guidance for the design of preforming facilities.
For experiments with multiple parameters that need to be considered, like this preforming benchmark test, it is essential to properly design the experiment to clearly study the effects of the parameters, while keeping material and time consumption low. For preforming using prepregs, it is important to control not only the initial prepreg temperature, but also the tool temperature, to ensure the resin remains molten during the whole preforming process for smooth and defect-free parts production.
The foregoing description of the exemplary embodiments of the present invention has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.
The embodiments were chosen and described in order to explain the principles of the invention and their practical application so as to enable others skilled in the art to utilize the invention and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present invention pertains without departing from its spirit and scope. Accordingly, the scope of the present invention is defined by the appended claims rather than the foregoing description and the exemplary embodiments described therein.
Claims
1. A method for design optimization and/or performance prediction of a material system, comprising:
- generating a representation of the material system at a number of scales, wherein the representation at a scale comprises microstructure volume elements (MVE) that are of building blocks of the material system at said scale;
- collecting data of response fields of the MVE computed from a material model of the material system over predefined sets of material properties/constituents and boundary conditions;
- applying machine learning to the collected data of response fields to generate clusters that minimize a distance between points in a nominal response space within each cluster;
- computing an interaction tensor of interactions of each cluster with each of the other clusters;
- manipulating the governing partial differential equation (PDE) using Green's function to form a generalized Lippmann-Schwinger integral equation; and
- solving the integral equation using the generated clusters and the computed interactions to result in a response prediction that is usable for the design optimization and/or performance prediction of the material system.
2. The method of claim 1, further comprising passing the resulting response prediction to a next coarser scale as an overall response of that building block, and iterating the process until a final scale is reached.
3. The method of claim 1, wherein the building blocks are defined by material properties and structural descriptors obtained by modeling or experimental observations and encoded in a domain decomposition of structures for identifying locations and properties of each phase within the building blocks.
4. The method of claim 3, wherein the structural descriptors comprise characteristic length and geometry.
5. The method of claim 1, wherein the boundary conditions are chosen to satisfy the Hill-Mandel condition.
6. The method of claim 1, wherein the collected data of response fields comprise a strain concentration tensor, a deformation concentration tensor, stress tensor including PK-I stress and/or Cauchy stress tensors, plastic strain tensor, thermal gradient, or the like.
7. The method of claim 1, wherein the machine learning comprises unsupervised machine learning and/or supervised machine learning.
8. The method of claim 1, wherein the machine learning is performed with a self-organizing mapping (SOM) method, a k-means clustering method, or the like.
9. The method of claim 1, wherein the clusters are generated by marking all material points that have the same response field within the representation of the material system with a unique ID and grouping material points with the same ID.
10. The method of claim 9, wherein the generated clusters are a reduced representation of the material system, which reduces the number of degrees of freedom required to represent the material system.
11. The method of claim 10, wherein the generated clusters are a reduced order MVE of the material system.
12. The method of claim 1, wherein the computed interaction tensor is for all pairs of the clusters.
13. The method of claim 1, wherein said computing the interaction tensor is performed with fast Fourier transform (FFT), numerical integration, or finite element method (FEM).
14. The method of claim 1, wherein the PDE is reformulated as a Lippmann-Schwinger (LS) equation.
15. The method of claim 14, wherein said solving the PDE with the LS equation is performed with arbitrary boundary conditions and material properties.
16. The method of claim 1, wherein the collected data of response fields, the generated clusters, and/or the computed interaction tensor are saved in one or more material system databases.
17. The method of claim 16, wherein said solving the PDE with the LS equation is performed in real-time by accessing the one or more material system databases for the generated clusters and the computed interaction tensors.
18. A method for design optimization and/or performance prediction of a material system, comprising:
- performing an offline data compression, wherein original microstructure volume elements (MVE) of building blocks of the material system are compressed into clusters, and an interaction tensor of interactions of each cluster with each of the other clusters is computed; and
- manipulating the governing partial differential equation (PDE) using Green's function to form a generalized Lippmann-Schwinger integral equation; and
- solving the integral equation using the generated clusters and the computed interactions to result in a response prediction that is usable for the design optimization and/or performance prediction of the material system.
19. The method of claim 18, further comprising passing the resulting response prediction to a next coarser scale as an overall response of that building block, and iterating the process until a final scale is reached.
20. The method of claim 18, wherein the building blocks are defined by material properties and structural descriptors obtained by modeling or experimental observations and encoded in a domain decomposition of structures for identifying locations and properties of each phase within the building blocks.
21. The method of claim 20, wherein the structural descriptors comprise characteristic length and geometry.
22. The method of claim 18, wherein the boundary conditions are chosen to satisfy the Hill-Mandel condition.
23. The method of claim 18, wherein said performing the offline data compression comprises:
- collecting data of response fields of the MVE computed from a material model of the material system over a predefined set of material properties and boundary conditions;
- applying machine learning to the collected data of response fields to generate clusters that minimize a distance between points in a nominal response space within each cluster; and
- computing the interaction tensor for all pairs of the clusters.
24. The method of claim 23, wherein the collected data of response fields comprise a strain concentration tensor, a deformation concentration tensor, stress tensor including PK-I stress and/or Cauchy stress tensors, plastic strain tensor, thermal gradient, or the like.
25. The method of claim 23, wherein the machine learning comprises unsupervised machine learning and/or supervised machine learning.
26. The method of claim 23, wherein the machine learning is performed with a self-organizing mapping (SOM) method, a k-means clustering method, or the like.
27. The method of claim 23, wherein the clusters are generated by marking all material points having the same response field within the representation of the material system with a unique ID and grouping material points with the same ID.
28. The method of claim 27, wherein the clusters constitute a reduced representation of the material system, which reduces the number of degrees of freedom required to represent the material system.
29. The method of claim 28, wherein the clusters are adapted as a reduced order MVE of the material system.
30. The method of claim 23, wherein said computing the interaction tensor is performed with fast Fourier transform (FFT), numerical integration, or finite element method (FEM).
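The FFT route of claim 30 can be illustrated on a drastically simplified problem. The sketch below assumes a periodic one-dimensional scalar "conductivity" analogue whose Green operator in Fourier space is simply 1/C0 at nonzero frequencies; the pairwise entry D[I, J] averages the Green operator applied to cluster J's indicator function over the points of cluster I. This is a toy analogue, not the tensorial operator of the actual method.

```python
import numpy as np

def interaction_matrix(ids, k, C0=1.0):
    """Toy 1-D scalar cluster-interaction matrix computed via FFT.

    ids : integer cluster ID per grid point on a periodic 1-D grid.
    k   : number of clusters.
    C0  : reference modulus of the assumed scalar problem.
    """
    n = len(ids)
    gamma = np.full(n, 1.0 / C0)   # Green operator in Fourier space
    gamma[0] = 0.0                 # zero-mean constraint on fluctuations
    D = np.zeros((k, k))
    for J in range(k):
        chiJ = (ids == J).astype(float)            # indicator of cluster J
        gJ = np.fft.ifft(gamma * np.fft.fft(chiJ)).real
        for I in range(k):
            D[I, J] = gJ[ids == I].mean()          # average over cluster I
    return D

# two interleaved clusters on a 16-point periodic grid (made up)
ids = np.array([0] * 8 + [1] * 8)
D = interaction_matrix(ids, 2)
```

Because the cluster indicators partition the grid and the zero frequency is removed, each row of `D` sums to zero, mirroring the self-equilibration of the continuous Green operator.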
31. The method of claim 23, wherein a governing partial differential equation (PDE) of the material system is reformulated as a Lippmann-Schwinger (LS) equation.
32. The method of claim 31, wherein said solving the PDE with the LS equation is performed with arbitrary boundary conditions and material properties.
33. The method of claim 23, wherein the collected data of response fields, the generated clusters, and/or the computed interaction tensor are saved in one or more material system databases.
34. The method of claim 33, wherein said solving the PDE with the LS equation is performed with online access to the one or more material system databases for the generated clusters and the computed interactions.
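The online stage of claims 31-34 reduces to a small fixed-point problem over the cluster strains. As a hedged sketch, the scalar analogue of the discretized LS system is eps[I] = eps0 + sum_J D[I,J] (C[J] - C0) eps[J]; the interaction matrix, moduli, and reference modulus below are made-up two-cluster data chosen so the iteration contracts.

```python
import numpy as np

def solve_ls_fixed_point(D, C, C0, eps0, tol=1e-12, max_iter=500):
    """Fixed-point solve of a scalar toy analogue of the discretized
    Lippmann-Schwinger system:

        eps[I] = eps0 + sum_J D[I, J] * (C[J] - C0) * eps[J]

    D    : cluster interaction matrix (from the offline stage)
    C    : per-cluster moduli, C0 : reference modulus
    eps0 : applied far-field strain
    """
    eps = np.full(len(C), eps0, dtype=float)
    for _ in range(max_iter):
        new = eps0 + D @ ((C - C0) * eps)
        if np.max(np.abs(new - eps)) < tol:
            return new
        eps = new
    return eps

# made-up two-cluster data for illustration
D = np.array([[-0.10, 0.05],
              [0.05, -0.10]])
C = np.array([1.0, 3.0])
eps = solve_ls_fixed_point(D, C, C0=2.0, eps0=1.0)
```

Because `D` is precomputed offline, only this small system (one unknown per cluster, rather than per material point) is solved online, which is what makes the reduced-order prediction fast under changing boundary conditions and material properties.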
35. A non-transitory tangible computer-readable medium storing instructions which, when executed by one or more processors, cause a system to perform a method for design optimization and/or performance prediction of a material system, wherein the method is in accordance with claim 1.
36. A computational system for design optimization and/or performance prediction of a material system, comprising:
- one or more computing devices comprising one or more processors; and
- a non-transitory tangible computer-readable medium storing instructions which, when executed by the one or more processors, cause the one or more computing devices to perform a method for design optimization and/or performance prediction of a material system, wherein the method is in accordance with claim 1.
37. A material system database usable for conducting efficient and accurate multiscale modeling of a material system, comprising:
- clusters for a plurality of material systems, each of which groups, under a unique ID, all material points having a same response field within a microstructural volume element (MVE) of a respective material system;
- interaction tensors, each of which represents interactions of all pairs of the clusters (regions with unique IDs) for the respective material system; and
- response predictions computed based on the clusters and the interaction tensors.
38. The material system database of claim 37, wherein the clusters are generated by applying machine learning to data of response fields of the MVE computed from a material model of the respective material system over a predefined set of material properties and boundary conditions.
39. The material system database of claim 38, wherein the interaction tensors are computed with fast Fourier transform (FFT), numerical integration, or finite element method (FEM).
40. The material system database of claim 39, wherein the response predictions are obtained by solving a governing partial differential equation (PDE) reformulated as a Lippmann-Schwinger (LS) equation using the clusters and the computed interactions.
41. The material system database of claim 40, wherein the response predictions comprise at least stiffness, stress responses, damage initiation, fatigue indicating parameter (FIP), and/or thermal expansion.
42. The material system database of claim 37, being configured such that some of the response predictions are assigned as a training set for training a different machine learning model that connects processes/structures directly to responses/properties of the material system without going through the clustering and interaction computing processes at all; and some or all of the remaining response predictions are assigned as a validation set for validating the efficiency and accuracy of the multiscale modeling of the material system.
43. The material system database of claim 37, being generated with predictive reduced order models.
44. The material system database of claim 43, wherein the predictive reduced order models comprise a self-consistent clustering analysis (SCA) model, a virtual clustering analysis (VCA) model, and/or an FEM clustering analysis (FCA) model.
45. The material system database of claim 37, being updatable, editable, accessible, and searchable.
46. A method of applying the material system database of claim 37 for design optimization and/or performance prediction of a material system, comprising:
- training a neural network with data of the material system database; and
- predicting, using the trained neural network, real-time responses of the material system during a loading process, wherein the real-time responses are used for the design optimization and/or performance prediction of the material system.
47. The method of claim 46, further comprising performing a topology optimization to design a structure with microstructure information.
48. The method of claim 46, wherein the neural network comprises a feed forward neural network (FFNN) and/or a convolutional neural network (CNN).
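The surrogate-training step of claims 46-48 can be sketched as fitting a small feed-forward network to input/output pairs drawn from the database. The sketch below is illustrative only: the scalar input (a normalized process or microstructure descriptor), the scalar target (a normalized homogenized response), and the underlying map are all made up, and a production code would typically use a deep-learning library rather than hand-written backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical database samples: descriptor x -> normalized response y
X = rng.uniform(0.2, 0.7, (200, 1))
Y = 0.3 + 0.4 * X + rng.normal(0.0, 0.01, (200, 1))

# one-hidden-layer feed-forward network trained by full-batch
# gradient descent on the mean-squared error
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(4000):
    H = np.tanh(X @ W1 + b1)        # hidden activations
    P = H @ W2 + b2                 # network prediction
    err = P - Y
    loss = (err ** 2).mean()
    gP = 2.0 * err / len(X)         # backpropagation
    gW2 = H.T @ gP; gb2 = gP.sum(0)
    gH = (gP @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, evaluating the network is a few matrix products, which is what enables real-time response prediction during a loading process and its use inside a topology-optimization loop.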
49. A non-transitory tangible computer-readable medium storing instructions which, when executed by one or more processors, cause a system to perform a method for design optimization and/or performance prediction of a material system, wherein the method is in accordance with claim 46.
Type: Application
Filed: Sep 16, 2019
Publication Date: Nov 18, 2021
Inventors: Wing Kam LIU (Oak Brook, IL), Jiaying GAO (Evanston, IL), Cheng YU (Evanston, IL), Orion L. KAFKA (Enosburg, VT)
Application Number: 17/273,438