Hazard Response Modeling Uncertainty (A Quantitative Method)
Volume II: Evaluation of Commonly Used Hazardous Gas Dispersion Models

HEALTH AND ENVIRONMENTAL SCIENCES
API PUBLICATION NUMBER 4546
OCTOBER 1992

American Petroleum Institute
1220 L Street, Northwest
Washington, D.C. 20005

Prepared for:
American Petroleum Institute, Health and Environmental Sciences Department
and
Air Force Engineering and Services Center, Tyndall Air Force Base

PREPARED UNDER CONTRACT BY:
SIGMA RESEARCH CORPORATION
196 BAKER AVENUE
CONCORD, MASSACHUSETTS

FOREWORD

API PUBLICATIONS NECESSARILY ADDRESS PROBLEMS OF A GENERAL NATURE. WITH RESPECT TO PARTICULAR CIRCUMSTANCES, LOCAL, STATE, AND FEDERAL LAWS AND REGULATIONS SHOULD BE REVIEWED.

API IS NOT UNDERTAKING TO MEET THE DUTIES OF EMPLOYERS, MANUFACTURERS, OR SUPPLIERS TO WARN AND PROPERLY TRAIN AND EQUIP THEIR EMPLOYEES, AND OTHERS EXPOSED, CONCERNING HEALTH AND SAFETY RISKS AND PRECAUTIONS, NOR UNDERTAKING THEIR OBLIGATIONS UNDER LOCAL, STATE, OR FEDERAL LAWS.

NOTHING CONTAINED IN ANY API PUBLICATION IS TO BE CONSTRUED AS GRANTING ANY RIGHT, BY IMPLICATION OR OTHERWISE, FOR THE MANUFACTURE, SALE, OR USE OF ANY METHOD, APPARATUS, OR PRODUCT COVERED BY LETTERS PATENT. NEITHER SHOULD ANYTHING CONTAINED IN THE PUBLICATION BE CONSTRUED AS INSURING ANYONE AGAINST LIABILITY FOR INFRINGEMENT OF LETTERS PATENT.

ACKNOWLEDGMENTS

THE FOLLOWING PEOPLE ARE RECOGNIZED FOR THEIR CONTRIBUTIONS OF TIME AND EXPERTISE DURING THIS STUDY AND IN THE PREPARATION OF THIS REPORT:

API STAFF CONTACT:
Howard Feldman, Health and Environmental Sciences
MEMBERS OF THE AIR MODELING TASK FORCE
Kenneth Steinberg, Exxon Research and Engineering Company
Thomas Baker, ARCO Oil and Gas Company
Douglas Blewitt, Amoco Corporation
Richard Carney, Phillips Petroleum Company
David Fontaine, Chevron Research and Technology Company
Lee Gilmer, Texaco Research
Marvin Hem, Shell Development Company
Gilbert Jersey, Mobil Research and Development Company
George Lauer, ARCO
Robert Peace, Und

WE ARE INDEBTED TO CAPTAIN MICHAEL MOSS, UNITED STATES AIR FORCE, FOR HIS CONSIDERABLE EFFORTS DURING THE DEVELOPMENT OF THIS PUBLICATION.
This volume of the final report provides documentation of some of the results of a two-year project entitled Hazard Response Modeling Uncertainty (A Quantitative Method). Work that has been accomplished on the technical tasks related to evaluating the performance of commonly used hazardous gas dispersion models is summarized.

Eight datasets are used in the evaluation. Those field experiments that involve the release of dense-gas clouds are Burro, Coyote, Desert Tortoise, Goldfish, Maplin Sands, and Thorney Island. Those field experiments that involve the release of passive clouds are Hanford (Kr85 tracer studies) and Prairie Grass. Data from these experiments are placed in a common format as a Modelers Data Archive (MDA), and an extensive set of software was developed to prepare data files for each model evaluated.

Fourteen dispersion models are evaluated, including six publicly available computer models (AFTOX, DEGADIS, HEGADAS, INPUFF, OB/DG, and SLAB) and six proprietary computer models (AIRTOX, CHARM, FOCUS, GASTAR, PHAST, and TRACE). A simple Gaussian plume formula and a set of nomograms (Britter and McQuaid) are also evaluated for comparative purposes.

The statistical evaluation indicates that there are a few models that can successfully predict concentrations with a mean bias of 20 percent or less, a relative mean square error of 50 percent or less, and little variability of the residual errors with the input parameters. These models are identified in Section VII. It is also clear that model performance is not dependent upon model complexity. It is necessary to point out that this evaluation exercise has been by no means independent, since all of the models have been previously tested by the developers with at least one of the datasets. Furthermore, some of the results may be fortuitous, since, in a few cases, certain models have been applied to source scenarios for which they were not originally intended.
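The mean bias, the relative mean square error, and the geometric measures MG and VG used later in this report can all be computed from paired predicted and observed concentrations. The short sketch below illustrates one such set of calculations; the function name, the exact normalizations, and the factor-of-two measure are illustrative assumptions rather than definitions taken from this report.

    import numpy as np

    def evaluation_statistics(c_obs, c_pred):
        # Paired performance measures for concentration predictions (illustrative
        # definitions; values must be positive for the logarithmic measures).
        c_obs = np.asarray(c_obs, dtype=float)
        c_pred = np.asarray(c_pred, dtype=float)
        fb = (c_obs.mean() - c_pred.mean()) / (0.5 * (c_obs.mean() + c_pred.mean()))  # fractional mean bias
        nmse = np.mean((c_obs - c_pred) ** 2) / (c_obs.mean() * c_pred.mean())        # relative (normalized) mean square error
        mg = np.exp(np.mean(np.log(c_obs)) - np.mean(np.log(c_pred)))                 # geometric mean bias
        vg = np.exp(np.mean((np.log(c_obs) - np.log(c_pred)) ** 2))                   # geometric mean variance
        fac2 = np.mean((c_pred >= 0.5 * c_obs) & (c_pred <= 2.0 * c_obs))             # fraction within a factor of two
        return {"FB": fb, "NMSE": nmse, "MG": mg, "VG": vg, "FAC2": fac2}

Values of MG and VG near 1 indicate little mean bias and little scatter, respectively.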
TABLE OF CONTENTS

Section                                                        Page

EXECUTIVE SUMMARY ............................................ ES-1

I.   INTRODUCTION ............................................... 1
     A. OBJECTIVES .............................................. 1
     B. BACKGROUND .............................................. 3
        1. EPA Model Evaluation Program ......................... 3
        2. Model Sensitivity Studies ............................ 5
        3. Summary of Field Data ................................ 5
        4. A Methodology for Evaluating Heavy Gas Dispersion Models ... 6
        5. Comprehensive Model Evaluation Studies ............... 7
        6. CMA Model Evaluation Studies ......................... 8
     C. SCOPE ................................................... 8

II.  DATASETS .................................................. 11
     A. CRITERIA FOR CHOOSING DATASETS ......................... 11
     B. DESCRIPTION OF INDIVIDUAL STUDIES ...................... 14
        1. Burro and Coyote .................................... 14
        2. Desert Tortoise and Goldfish ........................ 16
        3. Hanford Kr85 ........................................ 22
        4. Maplin Sands ........................................ 25
        5. Prairie Grass ....................................... 28
        6. Thorney Island ...................................... 32
     C. CREATION OF A MODELERS DATA ARCHIVE .................... 36
     D. METHODS FOR CALCULATING REQUIRED VARIABLES ............. 39
        1. Burro ............................................... 40
        2. Coyote .............................................. 41
        3. Desert Tortoise ..................................... 41
        4. Goldfish ............................................ 42
        5. Hanford Kr85 ........................................ 43
        6. Maplin Sands ........................................ 46
        7. Prairie Grass ....................................... 47
        8. Thorney Island ...................................... 48
     E. SUMMARY OF DATASETS .................................... 49

III. MODELS .................................................... 53
     A. CRITERIA FOR CHOOSING MODELS ........................... 53
     B. DESCRIPTION OF MODELS EVALUATED ........................ 57
        1. AFTOX 3.1 (Air Force Toxic Chemical Model) ........... 58
        2. AIRTOX .............................................. 60
        3. Britter and McQuaid
        4. CHARM 6.1 (Complex and Hazardous Air Release Model) .. 65
        5. DEGADIS 2.1 (DEnse GAS DISpersion Model) ............. 67
LIST OF FIGURES

Figure                                                         Page

3.  Sensor Array for the Desert Tortoise Series Experiments ..... 20

4.  Configuration of Meteorological Towers and Kr85 Detectors for the Hanford Kr85 Trials (Reference 18) ..... 24

5.  Initial Configuration of the Maplin Sands Site ..... 27

6.  Revised Configuration of the Maplin Sands Site (After Trial 35) ..... 27

7.  Configuration of Instrumentation Used for the Phase I Trials at Thorney Island (Reference 25) ..... 34

8.  Correlation for continuous releases from Britter and McQuaid (Reference 42) ..... 64

9.  Correlation for instantaneous releases from Britter and McQuaid (Reference 42) ..... 64

10. Model performance measures, geometric mean bias MG = exp(\overline{\ln C_o} - \overline{\ln C_p}) and geometric variance VG = exp[\overline{(\ln C_o - \ln C_p)^2}], for concentration predictions and observations for the continuous dense gas group of datasets (Burro, Coyote, Desert Tortoise, Goldfish, Maplin Sands, Thorney Island). Ninety-five percent confidence intervals on MG are indicated. The solid line is the "minimum VG" curve, from Equation (33). The dashed lines represent "factor of two" agreement between mean predictions and observations ..... 127

11. Model performance measures, geometric mean bias MG and geometric variance VG for concentration predictions and observations for the instantaneous dense gas data from Thorney Island. Ninety-five percent confidence intervals on MG are indicated. The solid line is the "minimum VG" curve, from Equation (33). The dashed lines represent "factor of two" agreement between mean predictions and observations ..... 130

12. Model performance measures, geometric mean bias MG and geometric variance VG for concentration predictions and observations at distances greater than or equal to 200 m. a) Continuous dense gas group of datasets (Burro, Coyote, Desert Tortoise, Goldfish, Maplin Sands, Thorney Island). b) Instantaneous dense gas data from Thorney Island. Ninety-five percent confidence intervals on MG are indicated. The solid line is the "minimum VG" curve, from Equation (33). The dashed lines represent "factor of two" agreement between mean predictions and observations ..... 132

13. Model performance measures, geometric mean bias MG and geometric variance VG for concentration predictions and observations at distances less than 200 m. a) Continuous dense gas group of datasets (Burro, Coyote, Desert Tortoise, Goldfish, Maplin Sands, Thorney Island). b) Instantaneous dense gas data from Thorney Island. Ninety-five percent confidence intervals on MG are indicated. The solid line is the "minimum VG" curve, from Equation (33). The dashed lines represent "factor of two" agreement between mean predictions and observations ..... 135
    V_C \approx \sum_{i=1}^{n} (\partial C / \partial x_i)^2 V_{x_i}

where V_{x_i} is the uncertainty, or variance, in input variable x_i. This equation is a Taylor expansion and implicitly assumes that the individual uncertainties are much less than one. Carney (Reference 2) finds that the wind speed, u, contributes the most uncertainty to the concentration, C, predicted by the AFTOX model.
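The sketch below illustrates this first-order (Taylor series) propagation of input variance to a predicted concentration. The simple ground-level Gaussian plume formula and the numerical values are placeholders chosen only for illustration; they are not the AFTOX formulation.

    import numpy as np

    def plume_concentration(q, u, sigma_y, sigma_z):
        # Ground-level centerline Gaussian plume concentration (placeholder model,
        # not the AFTOX formulation).
        return q / (np.pi * u * sigma_y * sigma_z)

    def propagated_variance(func, x, vx, rel_step=1.0e-4):
        # First-order (Taylor series) estimate of the variance of func(*x),
        # V_C = sum_i (dC/dx_i)^2 * V_xi, using central differences for dC/dx_i.
        x = np.asarray(x, dtype=float)
        vx = np.asarray(vx, dtype=float)
        var_c = 0.0
        for i in range(x.size):
            dx = rel_step * max(abs(x[i]), 1.0)
            x_plus, x_minus = x.copy(), x.copy()
            x_plus[i] += dx
            x_minus[i] -= dx
            dcdxi = (func(*x_plus) - func(*x_minus)) / (2.0 * dx)
            var_c += dcdxi ** 2 * vx[i]   # contribution of input variable i
        return var_c

    # Hypothetical inputs: emission rate, wind speed, sigma_y, sigma_z, and their variances.
    inputs = [1.0, 3.0, 25.0, 12.0]
    input_variances = [0.01, 1.0, 25.0, 9.0]
    print(propagated_variance(plume_concentration, inputs, input_variances))

Examining the individual terms in the sum shows which input (for example, the wind speed u) contributes most to the variance of the predicted concentration.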
3. Summary of Field Data

Ermak et al. (Reference 5) have put together a comprehensive summary of 26 "benchmark" field experiments, including data from Burro (LNG), Coyote (LNG), Eagle (N2O4), Desert Tortoise (NH3), Maplin Sands (LNG and LPG), and Thorney Island (Freon). This study (funded by AFESC) presents the input data required by models and includes observed peak concentrations, average centerline concentrations, and average height and width of the cloud as a function of downwind distance. These data are sufficiently complete for anyone to run and evaluate his model.
4. A Methodology for Evaluating Heavy Gas Dispersion Models

In another recent draft report prepared for AFESC, Ermak and Merry (Reference 6) review methods for evaluating heavy gas dispersion models. They first list several specific criteria of interest to the Air Force:

- The methodology is to be based on comparison of model predictions with field-scale experimental observations.
- The methods of comparison must be quantitative and statistical in nature.
- The methods must help identify limitations of the models and levels of confidence.
- The methodology must be compatible with atmospheric dispersion models of interest to the Air Force.

These criteria are similar to those for our present study. The Ermak and Merry (Reference 6) report is a review of general evaluation methods and heavy gas model data sets, and does not contain examples of applications of any new evaluation methods with field data sets. They first review the general philosophy of model evaluation, pointing out that sometimes evaluations of model physics are just as important as quantitative statistical evaluations. Much of their philosophical discussion follows the points made in a review paper by Venkatram (Reference 7). For example, a model whose predictions agree with field data but which contains an irrational physical assumption (for example, dense gas plumes accelerate upward) is not a good model. Also, they recognize that most model predictions represent ensemble averages, whereas field experiments represent only a single realization of the countless data that make up an ensemble. They emphasize that observed concentrations are strong functions of averaging time, and that most heavy gas dispersion models do not include the effects of averaging time. Heavy gas dispersion models are distinguished from other dispersion models by three effects: reduced turbulent mixing, gravity spreading, and lingering. The main parameters of interest in evaluations of these models are the maximum concentration, the average concentration over the cloud, and the cloud width and height (all as a function of downwind distance, x). Ermak and Merry emphasize the ratio of predicted to observed variables and define several statistics, such as the mean and the variance. Methods of estimating confidence limits on these statistics are suggested, and the report closes with an example of the application of some of their suggested procedures to a concocted data set drawn from a Gaussian distribution.
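A minimal sketch of such ratio statistics is given below. Bootstrap resampling is used here as one convenient way to estimate confidence limits; it is an assumption for illustration, not the specific procedure suggested by Ermak and Merry.

    import numpy as np

    def ratio_statistics(c_pred, c_obs, n_boot=1000, seed=0):
        # Mean and variance of the predicted-to-observed concentration ratio,
        # with bootstrap 95 percent confidence limits on the mean ratio.
        ratio = np.asarray(c_pred, dtype=float) / np.asarray(c_obs, dtype=float)
        rng = np.random.default_rng(seed)
        boot_means = [rng.choice(ratio, size=ratio.size, replace=True).mean()
                      for _ in range(n_boot)]
        lower, upper = np.percentile(boot_means, [2.5, 97.5])
        return {"mean_ratio": ratio.mean(),
                "var_ratio": ratio.var(ddof=1),
                "ci95_mean_ratio": (lower, upper)}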
5. Comprehensive Model Evaluation Studies

Mercer's (Reference 8) review emphasizes estimation of variability or uncertainty in model predictions, which he finds is typically an order of magnitude when outliers are considered. He includes the following quote from Lamb (Reference 9), which is also appropriate for our discussion:

"The predictions even of a perfect model cannot be expected to agree with observations at all locations. Consequently, the common goal of model validation should be one of determining whether observed concentrations fall within the interval indicated by the model with the frequency indicated, and if not, whether the failure is attributable to sampling fluctuations or is due to the failure of the hypotheses on which the model is based. From the standpoint of regulatory needs the utility of a model is measured partly by the width of the interval in which a majority of observations can be expected to fall. If the width of the interval is very large, the model may provide no more information than one could gather simply by guessing the expected concentration. In particular, when the width of the interval of probable concentration values exceeds the allowable error bounds on the model's predictions, the model is of no value in that particular application."
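One simple way to make the idea in this quotation operational is to count how often observations fall inside the uncertainty interval a model attaches to its predictions and to compare that rate with the frequency the model claims. The sketch below uses a factor-of-two band purely as an example interval.

    import numpy as np

    def interval_coverage(c_pred, c_obs, lower_factor=0.5, upper_factor=2.0):
        # Fraction of observations that fall inside the interval the model attaches
        # to each prediction (here a factor-of-two band, chosen only as an example).
        c_pred = np.asarray(c_pred, dtype=float)
        c_obs = np.asarray(c_obs, dtype=float)
        inside = (c_obs >= lower_factor * c_pred) & (c_obs <= upper_factor * c_pred)
        return inside.mean()

    # The observed coverage can then be compared with the frequency the model
    # claims for that interval (for example, 95 percent).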
Mercer (Reference 8) then produces concentration predictions of ten different models for a dense gas source equivalent to that used in the Thorney Island experiments. This comparison shows that the 10 model predictions range over an order of magnitude at any given downwind distance.

6. CMA Model Evaluation Program

The Chemical Manufacturers Association (CMA) sponsored an evaluation of eight dense gas dispersion models and nine spill evaporation models (References 10 and 11). The authors ran some of the models themselves and requested the developers of proprietary models to run their own models using standard input data sets. Model uncertainty is typically a factor of two to five. The comparisons are clouded by the use of some data sets that had already been used to "tune" certain of the models tested.

C. SCOPE

This introductory section has provided an overview of the objectives of the entire project, which was initiated because there are no standard objective quantitative means of evaluating microcomputer-based hazard response models. There are dozens of such models, including several sponsored wholly or in part by the U.S. Air Force and the American Petroleum Institute: ADAM, AFTOX, CHARM, DEGADIS, SLAB, and OB/DG. A few data sets exist for testing these models, but, up until now, the models have not been tested or intercompared with these data on the basis of standard statistical significance tests. The U.S. EPA recently sponsored a related model evaluation project (Reference 1), which had a more limited scope and considered fewer models and datasets. In this volume, we focus on a demonstration of the system to evaluate