SAE J2940-2011 Use of Model Verification and Validation in Product Reliability and Confidence Assessments (PDF, 31 pages)
SAE Technical Standards Board Rules provide that: "This report is published by SAE to advance the state of technical and engineering sciences. The use of this report is entirely voluntary, and its applicability and suitability for any particular use, including any patent infringement arising therefrom, is the sole responsibility of the user." SAE reviews each technical report at least every five years, at which time it may be revised, reaffirmed, stabilized, or cancelled. SAE invites your written comments and suggestions. Copyright 2011 SAE International. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of SAE.

TO PLACE A DOCUMENT ORDER: Tel: 877-606-7323 (inside USA and Canada); Tel: +1 724-776-4970 (outside USA); Fax: 724-776-0790; Email: CustomerService@sae.org. SAE WEB ADDRESS: http://www.sae.org. SAE values your input. To provide feedback on this Technical Report, please visit http://www.sae.org/technical/standards/J2940_201111

SURFACE VEHICLE STANDARD J2940 NOV2011, Issued 2011-11

Use of Model Verification and Validation in Product Reliability and Confidence Assessments

RATIONALE

SAE has numerous standards relating to the use of models [65-67, 70, 71] and to product reliability [60-69]. Other professional organizations (AIAA [1], ASME [5, 6], DoD [50], NASA [58], etc.) have recent standards for Model Verification…

…however, it will suffice in order to make our points in linking V&V to Reliability and Confidence: even our model's assessment only claims that the true reliability is either above or below our point estimate.

SAE J2940 Issued NOV2011 Page 12 of 31

One of the Panel Discussion participants, and later a reader of the draft of this narrative, offered the following text that may be helpful in distinguishing Reliability, where we can assess that we approach C=100% Confidence, from the more common case where we know we do not have C=100% Confidence, but we may not know a way, let alone a unique way, to quantify the confidence that we do have: "Since reliability is a probability, the frequentistic (number of occurrences of an event in n independent experiments divided by n as n tends to infinity) and subjective interpretations of the probability of an event (a decision maker's highest buying price of a lottery ticket that pays $1 if the event occurs and zero otherwise) may be helpful to the reader. Many reliability studies rely on the subjective interpretation to construct models of uncertainty."

In complex NDA, we will probably never escape subjectivity. However, we can hope that subjectivity becomes synonymous with expert judgment. Our goal in this portion of the narrative is to construct a simple quantitative roadmap from Model V&V to Reliability and Confidence. Since we do not know the proper value of the small sample correction factor Xc, we can only start with a simple assumption and follow up with more elaborate analyses. One simple assumption is to let

the correction factor Xc = χ⁻¹, the inverse square root of the reduced Chi-Square term [9]. This gives σT at an assessed level of Confidence C:

σT|C = Xc · uT    (9)

The Margin M is also assessed at this level of Confidence C:

M|C = μS|C − L = μS − L − CI    (10a)

FOS|C = μS|C / L = (μS − CI) / L    (10b)

In the simplest of statistical assessments, we may obtain an estimate of the CI as [21, 28, 29]:

CI = tc · uT / √N    (11)

[Figure 3: probability densities of Bolt Force for the Load L and the Strength S; the area where Strength exceeds Load is R, the tail area is Pfail = 1 − R; the Margin M = μS − L − CI is drawn from a C2 = 92% two-sided confidence estimate of the Mean Strength μS.]

In this case N = the number of experimental tests vs. simulations that are compared at the nominal values of input parameters (and therefore the nominal output) of the "Set Point" [6] for Validation. (For the time being, we will only discuss the linkage of the Model V&V at this Set Point…) Linking a validated model to a reliability assessment, we now have:

β|C = (μS − L − CI) / (Xc · uT)    (12)

R|C = Φ(β|C)    (13)

Eqn. 12 makes a linear assumption that both a detrimental value of scatter (the population standard deviation in the denominator) and a detrimental value of the mean will coincide. If these two effects are encountered independently, our formulation would look like:

β|C = (μS − L) / √((Xc · uT)² + CI²)    (14)

Eqn. 14 is not as easy to visualize in Fig. 3 as the linear subtraction used in Eqn. 12. The linear subtraction of CI is more conservative, while the RMS version in Eqn. 14 reduces, in the simplest case, to the classic Prediction Interval, PI, if we set R=C and assume small sample corrections as above:

PI = tc · uT · √(Xc² + 1/N)    (15)

It has taken us 15 equations to describe the simplest of relations between Model V&V and Reliability at Confidence, but they provide an example to see the terms we must assess, the path to assess them, and to illustrate one definitive and complete path from Model V&V to Reliability and Confidence… This method is a variant of the Least Squares Solution Verification method described in ASME V&V [6].

A SIMPLIFIED DEPICTION

In general, there is a limit to how much we can defend, via evidence, about the form of the distribution of the terms σD, σN, σP, σM. There is even less defense for our quantification of σM if we want to extrapolate out of the domain of our experimental data, or even away from
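As a purely illustrative numerical sketch of the chain from Eqn. 9 through Eqn. 13 (all input values below are hypothetical stand-ins, not values from the standard, and Φ is implemented via the error function):

```python
import math

def phi(x):
    """Cumulative standard normal function, the Phi() of Eqn. 13."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical illustrative inputs (not from SAE J2940):
mu_S = 100.0   # mean strength estimate, muS
L    = 70.0    # load
u_T  = 8.0     # total standard uncertainty of the QOI
N    = 10      # number of test-vs-simulation comparisons
t_c  = 2.26    # Student's t small-sample factor (~95% C2, 9 dof)
X_c  = 1.2     # assumed small-sample correction for the population

CI     = t_c * u_T / math.sqrt(N)           # Eqn. 11
M_C    = mu_S - L - CI                      # Eqn. 10a: Margin at Confidence
FOS_C  = (mu_S - CI) / L                    # Eqn. 10b
beta_C = (mu_S - L - CI) / (X_c * u_T)      # Eqn. 12: Reliability Index
R_C    = phi(beta_C)                        # Eqn. 13: Reliability at Confidence
print(round(M_C, 3), round(FOS_C, 3), round(R_C, 4))
```

Lowering N (or raising u_T) widens the CI, shrinks the assessed Margin, and drags R|C down, which is exactly the behavior the narrative describes.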

a given experimental point within the domain. The more sparse our data, the less we can say about the form of the distributions, even if we have decent estimates of the values for σD, σN, σP. As a result, it will be very hard for us to estimate very high reliability or very low probability of failure, because in doing so we are making an assumption of the distribution form of these uncertainty terms. We could assume a uniform distribution, with half-width 1.732σ, and assert that our model (and real experimental) results will then never fall outside that bound. Our evidence for this assertion would be only a judgment call. We could equally assume a normal distribution, with an infinite "tail", so that we would always predict a non-zero probability of failure. We can quantify estimates of reliability and confidence at these extremes of high reliability or low probability of failure, but it is rare that we can avoid subjectivity. Sometimes our choices of assumed distributions (usually leading to a different assessed M) will not affect our final design decision, and at other times we must realize that even though our validation and quantification of reliability and confidence might have a

good basis inside the domain of our experimental data, we just lack the information to make credible assessments outside the domain of our experimental data.

A.3.4 System Level: Hierarchical or Integral?

As depicted in Figure 7, both the ASME V&V… values for the CI and PI are given above in Eqn. 11 and 15 at the mean, and can even be generalized to approximate extrapolations away from the mean of the data [21, 28, 29] as:

CI(xj) = tc · uT · √(1/N + (xj − xm)² / Σi(xi − xm)²)    (18)

PI(xj) = tc · uT · √((χ⁻¹)² + 1/N + (xj − xm)² / Σi(xi − xm)²)    (19)

In Eqn. 18-19, xi denote the individual input values for each of the N experimental points, ideally equally spaced along the x-axis, and xm is the mean of those input values. If we choose an input condition xj, and conduct additional experiments at xj, we expect the %C provided by the coverage factor k (e.g. 95% for k = 1.96 and a normal distribution) of these new experimental points to fall inside the PI, and 5% to fall outside.

FIGURE 8 - PROCEDURE TO DEMONSTRATE AND ASSESS PREDICTIVE CAPABILITY, A TEST OF BOTH THE MODEL AND THE V&V PROCESS ITSELF. THE RESULT IS A "DATA-CENTRIC" FORM OF PEER REVIEW: THE NEW DATA SERVE AS "PEERS" IN THE REVIEW

[Figure 8 text: MODEL Assessed Reliability is not PRODUCT Reliability UNLESS we make sure that our model, near the end of the V&V process, can generate the equivalent of Confidence and Prediction Intervals. If it cannot, how can we have any idea what the model is telling us? Test these Prediction Intervals against "new" data: do the right proportions fall inside and outside? Figure labels: Model Inputs, Predicted Output, Mean Model Output, Confidence Interval CI, Prediction Interval PI.]

It is often a good start to consider a model as linearized about the validation set point of most interest, and to use a process nearly as simple as the one shown and described with Eqn. 18 and 19. These equations for extrapolating the CI and PI within or beyond the referent data used to validate the model contain even more assumptions than the CI and PI equations given just above. They are definitely simplistic, but they can be far more informative, even graphically, than just a line with no extrapolation at all. Eqns. 18 and 19 suggest the use of the Student's t correction tc for the mean, and the reduced Chi-Square correction χ⁻¹ for the population. Although strictly speaking these correction factors are only appropriate for normal distributions, we find, by comparison to Monte Carlo, that they are good approximations for uniform distribution small sample corrections as well. The extrapolation term in Eqn. 18 and 19 is outright dangerous, as noted in most statistical texts. This is because we have no evidence that the assumptions appropriate to these equations hold beyond the
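The extrapolated intervals can be sketched directly. Since the exact small-sample forms of Eqn. 18 and 19 are only partially recoverable here, the code below assumes the classic regression-interval shapes (tc on the mean term, χ⁻¹ on the population term), with purely hypothetical data; it also performs a Figure 8-style coverage check at the mean of the referent inputs:

```python
import math
import random

def ci_halfwidth(xj, xs, u_T, t_c):
    """Eqn. 18 (assumed classic form): CI half-width on the mean at input xj."""
    N = len(xs)
    xm = sum(xs) / N
    sxx = sum((x - xm) ** 2 for x in xs)
    return t_c * u_T * math.sqrt(1.0 / N + (xj - xm) ** 2 / sxx)

def pi_halfwidth(xj, xs, u_T, t_c, chi_inv=1.0):
    """Eqn. 19 (assumed classic form): PI half-width, adding the population term."""
    N = len(xs)
    xm = sum(xs) / N
    sxx = sum((x - xm) ** 2 for x in xs)
    return t_c * u_T * math.sqrt(chi_inv ** 2 + 1.0 / N + (xj - xm) ** 2 / sxx)

xs = [float(i) for i in range(100)]   # N = 100 referent inputs, equally spaced
u_T, t_c = 2.0, 1.96                  # large-sample t_c ~ coverage factor k

# Both intervals widen as xj moves away from the mean xm of the referent data:
assert ci_halfwidth(80.0, xs, u_T, t_c) > ci_halfwidth(50.0, xs, u_T, t_c)
assert pi_halfwidth(80.0, xs, u_T, t_c) > pi_halfwidth(50.0, xs, u_T, t_c)

# Coverage check in the spirit of Figure 8: draw "new" normally distributed
# data at xj = xm and count the fraction falling inside the PI (expect ~95%).
random.seed(1)
xm = sum(xs) / len(xs)
half = pi_halfwidth(xm, xs, u_T, t_c)
inside = sum(abs(random.gauss(0.0, u_T)) <= half for _ in range(20000))
print(inside / 20000.0)
```

"Success", as the narrative puts it, is the fraction landing near the chosen %C, not all points falling inside.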

referent data, or if there is a step change in the physics themselves just beyond the referent data. But with proper caveats, even this simple formula is far better than an extrapolated line with no CI or PI at all. Therefore, plotting out the CI and PI from Eqn. 18 and 19 is still an efficient and informative place to start. Then, think about how many assumptions might have been violated in doing so, and compare the resulting simple CI and PI to the CI and PI generated from a more complex V&V process. Whether we assume a normal distribution, a uniform distribution, or use Bayesian inference or Expert Elicitation to get our distributions, unless our V&V process can generate a CI and PI, e.g. a "95% Prediction Interval", or a "77% Prediction Interval" (or any percent you choose), we do not have a quantitative V&V process. We hinted above that Eqn. 14 shows why this is necessary: a PI is just a special case of Reliability at Confidence where R=C and the uncertainty in the population and the mean are assumed to be manifest independently of each other. If our V&V process does not result in the ability to generate a PI, we will not be able to use it to generate R at C either. The final reason that it is not only essential that our V&V process contain the ability to generate a PI, but that we actually do so, is that this is perhaps the best way to validate the Predictive Capability of our model.

As depicted in Figure 8, once we have generated a CI and PI, the next step is to compare "new" experimental data, or at the very least data that was not used or even considered during the V&V process, to the PI that resulted from our V&V process. "Success" does not mean that all the new data falls inside the PI. That would mean our PI is overly conservative, and we will end up wasting money on this portion of the design, and as a result weakening another part of the design due to cost tradeoffs. "Success" means that the correct proportions of the new, independent data fall inside and outside the PI. Of course, in addition to the common question of "what is new data?", there is also the essential consideration of "what is new independent data?"

A.4 NOMENCLATURE

AIAA: American Institute of Aeronautics and Astronautics
ANSI: American National Standards Institute
ASME: American Society of Mechanical Engineers
a: Left-hand edge of a uniform distribution (e.g. Fig. 2)
b: Right-hand

edge of a uniform distribution (e.g. Fig. 2)
b: Intercept in the linear regression y = mx + b
Bu: "Reliability Index" for the uniform distribution (see also β)
C: Confidence, 0% ≤ C ≤ 100%
C1: One-sided confidence
C2: Two-sided confidence
CI: Confidence Interval
D: Experimental Data value of QOI
E: E = S − D, difference between Simulation and Data
FMEA: Failure Modes and Effects Analysis
FOS: Factor Of Safety (central or nominal)
k: Coverage factor for desired confidence, e.g. k = 1.96 for 95% C2 with normal distribution
K: Number of free or fitting or tuning parameters
L: Load (in the example for Fig. 1-2)
LRFD: Load Resistance Factor Design
M: Margin (central)
m: Slope in the linear regression y = mx + b
N: Number of "tests", either experimental (uD or uM), grid (uN), or parametric (uP)
NDA: Non-Deterministic Analysis
Pfail: Probability of Failure, Pfail = 1 − R
PI: Prediction Interval
PIRT: Phenomena Importance Ranking Table
QFD: Quality Function Deployment
QOI: Output Quantity Of Interest
R: Reliability, 0% ≤ R ≤ 100%
Ru: Reliability, from Uniform Distribution assumption, 0% ≤ Ru ≤ 100%
RMS: Root-Mean-Squares
S: Strength (in the example for Fig. 1-2)
S: Simulation Output value of QOI
SA: Sensitivity Analysis
si: Sample standard deviation via statistical analysis (si = σi for very large samples)
t: Student's t
tc: Small sample correction factor for standard uncertainty of the mean from sample standard uncertainty
U: Total Uncertainty (at C=100% Confidence), see Fig. 2
ui: ui = si, Component "i" standard uncertainty of the output QOI (uncorrected for sample size)
uD: Data, or experimental test & measurement, standard uncertainty of the output QOI
uN: Numerical (Solution Verification) standard uncertainty of the output QOI
uP: Parametric standard uncertainty, as seen by the model, not reality, of the output QOI
uM: Model Form standard uncertainty of the output QOI
uT: Total standard uncertainty of the output QOI
V&V: (Model) Verification & Validation
Xc: Small sample correction factor for population standard uncertainty vs. sample standard uncertainty
xi: Values for input parameter "x" in the set, i = 1,N, of experimental tests
xj: Test value for input parameter "x"
xm: Mean value of the set of input parameters "x" in the set, i = 1,N, of experimental tests
β: Reliability Index for use with the normal distribution
χ: Reduced Chi-Square value
δ: Limiting Deflection
Φ: Cumulative Standard Normal Function, as in Φ()
μS: Mean Strength (in the example for Fig. 1-2)
σi: Component "i" standard uncertainty of the output QOI (corrected for sample size)
σD: Data, or experimental test & measurement, standard uncertainty of the output QOI
σN: Numerical (Solution Verification) standard uncertainty of the output QOI
σP: Parametric standard uncertainty, as seen by the model, not reality, of the output QOI
σM: Model Form standard uncertainty of the output QOI
σT: Total standard uncertainty of the output QOI
σS: Standard uncertainty of the output QOI Strength (in the example for Fig. 1-2)

APPENDIX B - ADDITIONAL CONSIDERATIONS ON MODEL FORM UNCERTAINTY uM

A procedure for assessing the first three uncertainty terms, uD, uN, uP, is provided in Appendix A and in (ASME, 2009) [6]. The fourth term, uM, can be assessed as discussed in Appendix A, using the following equations…
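As one hypothetical illustration only (not the equations of the standard itself): a model-form term is often pictured as the RMS of the differences E = S − D over the N validation comparisons, less what the data and numerical uncertainties already explain. All values below are invented for the sketch:

```python
import math

# Hypothetical sketch: estimate a model-form standard uncertainty u_M from
# validation differences E_i = S_i - D_i, subtracting in RMS fashion the
# uncertainty already attributed to the data (u_D) and the numerics (u_N).
S = [101.2, 98.7, 103.4, 99.9, 102.1]   # simulation outputs (illustrative)
D = [100.0, 97.1, 101.0, 101.5, 100.3]  # matching experimental data
u_D, u_N = 0.8, 0.4                     # assumed data & numerical uncertainties

E = [s - d for s, d in zip(S, D)]                       # E = S - D per point
rms_E = math.sqrt(sum(e * e for e in E) / len(E))       # RMS of the differences
u_M = math.sqrt(max(rms_E ** 2 - u_D ** 2 - u_N ** 2, 0.0))
print(round(rms_E, 4), round(u_M, 4))
```

If the RMS of E falls below what u_D and u_N already account for, the model-form term is floored at zero rather than made imaginary.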
