AN AMERICAN NATIONAL STANDARD
Guide for Verification and Validation in Computational Solid Mechanics
ASME V&V 10

... however, they should not contain proprietary names or information. Requests that are not in this format will be rewritten in this format by the Committee prior to being answered, which may inadvertently change the intent of the original request.
ASME procedures provide for reconsideration of any interpretation when or if additional information that might affect an interpretation is available. Further, persons aggrieved by an interpretation may appeal to the cognizant ASME Committee or Subcommittee. ASME does not "approve," "certify," "rate," or "endorse" any item, construction, proprietary device, or activity.
Attending Committee Meetings. The PTC 60 Committee regularly holds meetings, which are open to the public. Persons wishing to attend any meeting should contact the Secretary of the PTC 60 Committee.
PREFACE
This document provides general guidance for implementing verification and validation (V&V) ...

... a detailed description of the full physical system, including the behavior of the system's parts both in isolation and in combination; and a list of the experiments that need to be performed. The plan may also provide details about the approach that will be taken to verify the model, as well as information related to such program factors as schedule and cost. Key considerations in developing the V&V plan ...

... A subassembly, in turn, consists of individual components. The top-level reality of interest in Fig. 2 can be viewed as any level of a real physical system. For example, it could be a complete automobile, or it could be the drive ...

... The products of these activities are highlighted in rounded boxes (e.g., the mathematical model is the product of the mathematical modeling activity). Modelers follow the left branch to develop, exercise, and evaluate the model. Experimenters follow the right branch to obtain the relevant experimental data via physical testing. Modelers and experimenters collaborate in developing the conceptual model, conducting preliminary calculations for the design of experiments, and specifying initial and boundary conditions for calculations for validation.
The process shown in Fig. 4 is repeated for each member of every tier of the hierarchy in the system decomposition exercise discussed previously, starting at the component level and progressing upward through the system level. Thus, the reality of interest is an individual subsystem each time this approach is followed. Ultimately, the reality of interest at the top of Fig. 4 would be the complete system. However, in the bottom-up approach, both preliminary conceptual model development and V&V ...

... therefore, all assumptions should be ...
... The estimated contributions can then be used to establish commensurate accuracy requirements. It is reasonable to expect that the accuracy requirement for component behavior will be more stringent than the accuracy requirements for the complete system, due to the simpler nature of problems at the component level and the compounding effect of propagating inaccuracy up through the hierarchy. For example, a 10% accuracy requirement might be established for a model that calculates the axial buckling strength of a tubular steel strut in order to achieve 20% accuracy of the collapse strength of a frame made of many such components.

2.7 Documentation of V&V ...
... The corresponding "intended use" of the model is to predict system behavior for cases that cannot, or will not, be tested.
Figure 5 illustrates the path from a conceptual model to a computational model. An example of a conceptual model is a classical Bernoulli-Euler beam with the assumptions of elastic response and plane sections. This conceptual model can be described with differential calculus to produce a mathematical model. The equations can be solved by various numerical algorithms, but typically in CSM they would be solved using the finite element method. The numerical algorithm is programmed into a software package, here called a "code." With the specification of physical and discretization parameters, the computational model is created.
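As an informal illustration of this path (an editorial sketch, not part of Fig. 5): the mathematical model corresponding to the Bernoulli-Euler conceptual model above is the familiar fourth-order differential equation for the transverse deflection w(x) of a beam with constant flexural rigidity EI under a distributed load q(x),

EI \, \frac{d^{4} w}{dx^{4}} = q(x)

together with appropriate boundary conditions (e.g., w = dw/dx = 0 at a clamped end). Discretizing this equation with beam finite elements of a chosen size, and assigning numerical values to E, I, and q, then yields a computational model in the sense described above.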
3.1 Conceptual Model
The conceptual model is defined as the idealized representation of the solid mechanics behavior of the reality of interest. This model should therefore include those mechanisms that impact the key mechanical and physical processes that will be of interest for the intended use of the model. The activity of conceptual model development involves formulating a mechanics-based representation of the reality of interest that is amenable to mathematical and computational modeling, that includes the appropriate level of detail, and that is expected to produce results with adequate accuracy for ...

... inappropriate form for representation of material behavior; assumptions about contacting surfaces being tied when in reality a gap develops between the parts; assumptions that two parts do not move relative to one another when in reality they do, resulting in development of significant friction forces; assumed rigid boundary conditions that turn out to have significant compliance; etc. It is important to look for possible violation of the assumptions of the form of the mathematical model when reconciling the measured data with the results of the computational simulation. As with parameter calibration, any revisions to the model after V&V ...

... examples include temporal and spatial discretization error, iterative error, and round-off error. Calculation verification is also referred to as numerical error estimation.
References [13] and [14] discuss the differences and emphases of code verification and calculation verification.
Mathematically rigorous verification of a code would require proof that the algorithms implemented in the code correctly approximate the underlying PDEs and the stated initial conditions and boundary conditions. In addition, it would also have to be proven that the algorithms converge to the correct solutions of these equations in all circumstances under which the code will be applied. Such proofs are currently not available for general-purpose computational physics software. Executing the elements of code verification and calculation verification that are identified as necessary in this document is critical for V&V ...

... therefore, a hierarchy of confidence should be recognized. Similar to the AIAA Guide [2], the following organization of confidence (from highest to lowest) for the testing of algorithms is advocated:
(a) exact analytical solutions (including manufactured solutions)
(b) semianalytic solutions (reduction to numerical integration of ordinary differential equations [ODEs], etc.)
(c) highly accurate numerical solutions to PDEs
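For illustration only, the sketch below shows one way a manufactured solution of the kind mentioned in item (a) can be constructed for a 1-D elastic bar: a displacement field is chosen first, and the body force that makes it an exact solution is derived symbolically. The governing equation, the chosen field, and the use of the sympy package are assumptions made for this sketch; the method itself is discussed in para. 4.1.1.2.

import sympy as sp

x, L, E, A = sp.symbols("x L E A", positive=True)

# Chosen (manufactured) displacement field for a 1-D bar of length L.
u_m = sp.sin(sp.pi * x / L)

# Governing equation d/dx(E A du/dx) + b(x) = 0: derive the body force
# b(x) that makes u_m an exact solution of the mathematical model.
b = -sp.diff(E * A * sp.diff(u_m, x), x)
print(sp.simplify(b))   # i.e., E A pi^2 sin(pi x / L) / L^2

# b(x) is then applied as the load in the code under test, and the
# computed displacements are compared with u_m on a sequence of meshes.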
The second point is that some test problems are more appropriate than others, so application-relevant test problems should be used. These test problems could be ones with which users have a great deal of experience, or they could be ones that are constructed to address specific needs that arise when planning the verification activities.
Paragraphs 4.1.1.1 through 4.1.1.4 provide additional information on the kinds of tests and techniques employed in numerical code verification.

4.1.1.1 Analytical Solutions. Two categories of analytical solutions are of interest in code verification. First, there are those that correspond to plausible, if often greatly simplified, real-world physics. Second, there are manufactured solutions, which are defined and discussed in para. 4.1.1.2.
"Physically plausible" analytical solutions are solutions to the mathematical model's PDEs, with initial conditions and boundary conditions that can realistically be imposed, such as uniform pressure on a simply supported elastic plate. These solutions are sometimes exact (requiring only arithmetic evaluations of explicit mathematical expressions), but are often semianalytic (represented by infinite series, complex integrals, or asymptotic expansions). Difficulties can arise in computing any of these semianalytic solutions, especially infinite series. The analyst must ensure that, when such a solution is used for code verification, its numerical error has been reduced to an acceptable level.
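As an illustration of this caution, the sketch below evaluates the classical Navier series for the center deflection of a simply supported rectangular plate under uniform pressure and monitors how the truncated sum changes as terms are added, so that the truncation error of the reference solution itself is driven well below the tolerance used for code verification. The pressure, geometry, and stiffness values are hypothetical and chosen only for illustration.

import math

def plate_center_deflection(q0, a, b, D, n_terms):
    # Truncated Navier double series for the center deflection of a
    # simply supported a x b plate under uniform pressure q0, with
    # flexural rigidity D; only odd m, n contribute to the sum.
    x, y = a / 2.0, b / 2.0
    w = 0.0
    for m in range(1, 2 * n_terms, 2):
        for n in range(1, 2 * n_terms, 2):
            num = math.sin(m * math.pi * x / a) * math.sin(n * math.pi * y / b)
            den = m * n * ((m / a) ** 2 + (n / b) ** 2) ** 2
            w += num / den
    return 16.0 * q0 / (math.pi ** 6 * D) * w

# Hypothetical plate data; increase the number of retained terms until
# successive values agree to far better than the pass/fail tolerance
# that the code-verification comparison will use.
q0, a, b, D = 1.0e4, 1.0, 1.0, 2.0e5
previous = None
for n_terms in (5, 10, 20, 40):
    w_center = plate_center_deflection(q0, a, b, D, n_terms)
    if previous is not None:
        print(n_terms, w_center, abs(w_center - previous) / abs(w_center))
    previous = w_center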
Typically, for problems that allow analytical solutions, whether exact or semianalytic, pass/fail criteria can be stated in terms of the following two types of comparison:
(a) the agreement between the observed order of accuracy and the formal order of accuracy of the numerical method
(b) the agreement of the converged numerical solution with the analytical solution using specified norms
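A minimal sketch of criterion (a), assuming a sequence of uniformly refined meshes and hypothetical error norms computed against the analytical solution (the numbers below are illustrative only):

import math

def observed_orders(error_norms, refinement_ratio=2.0):
    # Pairwise observed order of accuracy from error norms measured on
    # successively refined meshes (each refined by 'refinement_ratio').
    return [math.log(error_norms[i] / error_norms[i + 1]) / math.log(refinement_ratio)
            for i in range(len(error_norms) - 1)]

# Hypothetical L2-norm errors on meshes of size h, h/2, h/4, h/8:
error_norms = [4.1e-3, 1.05e-3, 2.7e-4, 6.8e-5]
print(observed_orders(error_norms))  # values should approach the formal order (2 in this example)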
When computational solutions are compared with analytic solutions, either the comparisons should be ...

... this distinction should be considered when assessing the accuracy of an algorithm. Consistency tests can also be made that involve geometry (e.g., checking that the same numerical solution is obtained in different coordinate systems, or determining whether specific symmetry features are preserved in the solution). Consistency tests should be considered complementary to the other types of algorithm tests described herein for numerical algorithm verification. If they can be devised, consistency tests are especially important because the failure of these tests indicates that there are unacceptable errors in the code.
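One simple form of such a symmetry check is sketched below, under the assumption that the problem, mesh, and boundary conditions are mirror-symmetric about the plane x = 0; the nodal data shown are synthetic placeholders standing in for the output of an actual analysis.

def symmetry_defect(nodes, ux, uy):
    # For a problem mirror-symmetric about x = 0, u_x should be
    # antisymmetric and u_y symmetric between mirror-image node pairs.
    # Returns the largest violation found.
    index = {(round(x, 8), round(y, 8)): i for i, (x, y) in enumerate(nodes)}
    worst = 0.0
    for i, (x, y) in enumerate(nodes):
        j = index.get((round(-x, 8), round(y, 8)))
        if j is None:
            continue  # node has no mirror partner in this mesh
        worst = max(worst, abs(ux[i] + ux[j]), abs(uy[i] - uy[j]))
    return worst

# Synthetic nodal coordinates and displacements for illustration:
nodes = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
ux = [-0.002, 0.000, 0.002]
uy = [0.010, 0.012, 0.010]
assert symmetry_defect(nodes, ux, uy) < 1.0e-12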
4.1.2 Software Quality Engineering (SQE). The SQE part of code verification refers to procedures used to provide evidence that the software implementation of the numerical algorithms is free of programming errors and implementation faults. Most commonly, such errors reside in the source code, but occasionally flaws in the compiler introduce them. Evidence of error-free software from SQE is a necessary element of verification. SQE determines whether the software system is reliable and produces reliable results on specified computer hardware with a specified software environment (compilers, libraries). To optimize its influence on code verification, SQE should be planned and used during the development of the software product, not as a retrospective activity for a fielded software implementation [19]. However, feedback from users to developers is highly encouraged.

4.2 Calculation Verification
Calculation verification is applied to a computational model that is intended to predict validation results. ...

... however, they provide ordered error estimates for specific field quantities of interest (i.e., the estimate improves with mesh refinement).
The second class of finite-element-based error estimators consists of residual-based methods. Like recovery methods, residual methods were originally formulated to provide error estimates in the global energy norm. Extension to error estimates in quantities of interest, such as deflections or stresses, generally requires additional solutions [24].
Single-mesh, finite-element-based error estimates, when applicable, offer a great advantage by reducing mesh-generation and computational effort. However, the estimates require that the convergence rate be assumed. Calculation of an observed convergence rate always requires the generation of multiple meshes. The single-mesh a posteriori methods are also important for finite element adaptivity, where both the spatial mesh density (known as h-adaptivity) and the order of the finite element scheme (known as p-adaptivity) can be adapted [22, 23].
Standard Richardson extrapolation assumes that
(a) the observed order of accuracy (rate of convergence) is known
(b) two numerical solutions at different mesh resolutions have been computed
(c) both solutions are in the asymptotic convergence regime
To estimate a bound on the numerical error, the method then extrapolates to a more accurate value against which to compare the original solution. Various elaborations of Richardson extrapolation use three or more meshes to calculate an observed order of accuracy [13]. The observed order of accuracy can be used to verify a theoretical order of accuracy, test whether the solution is in the asymptotic regime, and estimate a zero-mesh-size converged solution using extrapolation.
A grid convergence index (GCI) based on Richardson extrapolation has been developed and advocated to assist in estimating bounds on the mesh convergence error [13, 25]. The GCI can convert error estimates that are obtained from any mesh-refinement ratio into an equivalent mesh-doubling estimate. More generally, the GCI produces an error-bound estimate through an empirically based factor of safety applied to the Richardson error estimate [13].
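A minimal sketch of these estimates, assuming three solutions of a single scalar quantity of interest obtained on meshes with a constant refinement ratio; the deflection values and the factor of safety of 1.25 below are illustrative assumptions, not requirements of this Guide.

import math

def richardson_and_gci(f_fine, f_mid, f_coarse, r, factor_of_safety=1.25):
    # Observed order of accuracy from three solutions on meshes with
    # constant refinement ratio r (coarse -> mid -> fine).
    p = math.log(abs((f_coarse - f_mid) / (f_mid - f_fine))) / math.log(r)
    # Richardson extrapolation to an (approximately) zero-mesh-size value.
    f_extrapolated = f_fine + (f_fine - f_mid) / (r ** p - 1.0)
    # Grid convergence index on the fine mesh: a relative error-bound
    # estimate obtained by applying an empirical factor of safety to the
    # Richardson error estimate.
    gci_fine = factor_of_safety * abs((f_fine - f_mid) / f_fine) / (r ** p - 1.0)
    return p, f_extrapolated, gci_fine

# Hypothetical tip deflections from coarse, medium, and fine meshes (r = 2):
p, f_ext, gci = richardson_and_gci(f_fine=1.0470, f_mid=1.0440, f_coarse=1.0320, r=2.0)
print(p, f_ext, gci)   # observed order, extrapolated value, relative error bound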
4.2.2 Potential Limitations. The assumption of smoothness in solutions (i.e., the absence of singularities and discontinuities) underlies much of the theory of existing error estimation techniques and is quite demanding in estimating local errors in the solution domain; however, this assumption does not prevent the use of an empirical approach to error estimation based on observed convergence rates. Experience shows that an empirical approach is more dependable when more than three meshes are used with a least-squares evaluation of observed convergence rates and when functionals rather than point values are considered.
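A sketch of such a least-squares evaluation, assuming error norms of a functional are available on more than three meshes (the mesh sizes and errors below are hypothetical), fits log(error) against log(h) to obtain a single observed convergence rate:

import math

def fitted_convergence_rate(mesh_sizes, error_norms):
    # Least-squares slope of log(error) versus log(h) over all meshes,
    # used instead of pairwise observed-order estimates when more than
    # three meshes are available.
    xs = [math.log(h) for h in mesh_sizes]
    ys = [math.log(e) for e in error_norms]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return slope

# Hypothetical errors in a functional (e.g., strain energy) on five meshes:
mesh_sizes  = [0.20, 0.10, 0.05, 0.025, 0.0125]
error_norms = [3.9e-2, 1.0e-2, 2.6e-3, 6.4e-4, 1.7e-4]
print(fitted_convergence_rate(mesh_sizes, error_norms))  # close to 2 for these data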
Singularities and discontinuities commonly occur in solid mechanics; the crack tip singularity is an example. The difficulties of singularities and discontinuities are compounded in very complex conceptual models, where multiple space and time scales may be important and very strong nonlinearities may be present. Ideally, calculation verification should be able to confront these complexities. However, the "pollution" of particular regions of a calculation by the presence of singularities such as shock waves, geometrical singularities, or crack propagation is a subject of concern in error estimation [13, 23, 26], and there is a lack of rigorous theory for guidance in these situations.
Another complexity in numerical error estimation is the coupling that can occur between numerical error and the spatial and temporal scales in certain types of physical models. Refining the mesh does not ensure that the physics modeled will remain unchanged as the mesh is resolved. For example, an insufficiently refined mesh ...
... and uncertainties in the measurements should be reported.
5.1.1 Experiment Design. Generally, data from the literature are from experiments performed for other purposes and thus do not meet the requirement of a validation experiment. Experiments can have many purposes and are often focused on assessing component performance relative to safety criteria or on exploring modes of system response. Consequently, the measurement set in many experiments may differ from the measurements needed for model validation. For example, a test may show that a component fails at a load higher than an acceptable threshold and thereby establish that the component is acceptable for use. However, the test may not have measured the deformation as the force was applied, because that measurement was not needed for the purpose of the experiment. If both the component-failure measurement and the deformation measurement were necessary to validate a computational model, the test measuring only component failure could not be used for validation. Furthermore, it is essentially impossible to make blind predictions of experiments whose results are known prior to the validation effort, because the results guide, if even subconsciously, ...