#### Evidential Model Validation under Epistemic Uncertainty

Wei Deng,1,2 Xi Lu,2 and Yong Deng1,2,3
1Institute of Fundamental and Frontier Science, University of Electronic Science and Technology of China, Chengdu 610054, China
2School of Computer and Information Science, Southwest University, Chongqing 400715, China
3Big Data Decision Institute, Jinan University, Guangzhou, Guangdong 510632, China
Correspondence should be addressed to Yong Deng; ydeng@jnu.edu.cn
Received 25 September 2017; Accepted 28 January 2018; Published 22 February 2018
Academic Editor: Roman Wendner
Copyright © 2018 Wei Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper proposes evidence theory based methods both to quantify epistemic uncertainty and to validate computational models. Three types of epistemic uncertainty concerning the input model data are considered: sparse point data, interval data, and probability distributions with uncertain parameters. Through the proposed methods, the given data are described by corresponding probability distributions for uncertainty propagation through the computational model and, in turn, for model validation. The proposed evidential validation method is inspired by Bayesian hypothesis testing and the Bayes factor: it compares the model predictions with the observed experimental data to assess the predictive capability of the model and to support the decision on model acceptance. Building on the idea of the Bayes factor, the frame of discernment of Dempster-Shafer evidence theory is constituted and the basic probability assignment (BPA) is determined. Because the proposed validation method is evidence based, the robustness of the result is improved, and the hypothesis best supported by the evidence about the model test is favored by the fused BPA. The validity of the proposed methods is illustrated through a numerical example.
1. Introduction
Complex real-world phenomena are increasingly modeled by sophisticated computational models with few or no full-scale experiments. Owing to the rapid development of computer technology, model-based simulations now dominate the design and analysis of complex engineering systems, reducing the cost and time of engineering development that would otherwise depend on full-scale testing of these phenomena. The quality of a model prediction is influenced by various sources of uncertainty: model assumptions, solution approximations, variability in model inputs and parameters, and data uncertainty due to sparse and imprecise information. When the model is used for system risk assessment or for certification of reliability and safety under actual use conditions, it is crucial to quantify the uncertainty and confidence in the model prediction in order to support risk-informed decision making; hence the model must be subjected to rigorous, quantitative verification and validation (V&V) before it can be applied to practical problems with confidence [1]. Model validation is an important component of quantification of margins and uncertainties (QMU) analysis, which is intimately connected with the assessment and representation of uncertainty [2]. The process of model validation measures the extent of agreement between model predictions and experimental observations [3]. Our work focuses on model validation under epistemic uncertainty in both the model inputs and the available experimental observations.
Modeling complex systems is complicated because many factors interact with each other [4–8]. A key component of model validation is the rigorous treatment of the numerous sources and types of uncertainty. Uncertainty can be roughly divided into two types: aleatory and epistemic. Aleatory uncertainty describes the inherent variation of the physical system; such variation is usually due to the random nature of the input data and can be represented mathematically by a probability distribution once enough experimental data are available. Epistemic uncertainty in nondeterministic systems arises from ignorance, lack of knowledge, or incomplete information. These definitions are adopted from the papers by Oberkampf et al. [9–11]. Epistemic uncertainty regarding a variable can be of two types: a poorly known stochastic quantity [12] or a poorly known deterministic quantity [13]. In this paper, we are concerned only with the former, where sparse and imprecise information (i.e., sparse point data and/or interval data) is available regarding a stochastic quantity; as a result, the distribution type and/or the distribution parameters are uncertain. Uncertainty enters through both the model inputs and the observed experimental evidence. Previous studies on the treatment of epistemic uncertainty use methods such as evidence theory [14, 15], fuzzy sets [16], entropy models [17, 18], and convex models of uncertainty [19], intended primarily for uncertainty quantification; it remains unclear how to carry out model validation under epistemic uncertainty.
The model validation method in this paper addresses the situation where the epistemic uncertainty arises from sparse and imprecise data concerning the model inputs, and both the input quantities and the observed experimental evidence appear in point form and interval form; the treatment is based on Dempster-Shafer (D-S) evidence theory.
Another issue to be addressed is that the final decision on model acceptance must be made with discretion. Model uncertainty can be reduced through deeper investigation or through empirical information about the system behavior, after which the model can be assessed and a decision made regarding its acceptance or rejection. This paper investigates the latter issue under the assumption that the validity of a model is judged only by its output, the investigation of its internal mechanism being unavailable. In previous studies, Sankararaman et al. [20, 21] assess the validity of the model using the Bayes factor, which is based on the idea of Bayesian hypothesis testing and is the ratio of the likelihoods of the model prediction and the observed experimental data under two competing hypotheses: accept the model and reject the model. However, they decide whether to accept the model from a single Bayes factor calculation, which can be incidental and insufficiently precise.
Bayesian information fusion techniques have evolved in many fields; nonetheless, effective performance is achieved only if adequate and appropriate prior and conditional probabilities are available, otherwise the results of Bayesian methods can be imprecise, even far from the truth. As an extension of Bayesian theory, Dempster-Shafer (D-S) evidence theory [14] uses the basic probability assignment (BPA) to quantify evidence and uncertainty [17, 22–26]. Compared with probability, the BPA models unknown information more efficiently [27]. D-S evidence theory models how the uncertainty about a given hypothesis diminishes as groups of BPAs accumulate during the reasoning process [27–30], and its effectiveness has been demonstrated in many fields. In this paper, first, a method based on D-S evidence theory is proposed to eliminate the parameter uncertainty of the probability distributions of random input variables whose available information is in the form of sparse point data and interval data. Each point datum yields a BPA over the alternative distribution parameters of the input variable; for interval data, the BPAs are obtained by sampling point data from the intervals. Through this method, each model input set comprising multiple point and/or interval data is described by a probability density function (PDF) before uncertainty propagation analysis and model validation. For model data with little distribution parameter information, we instead construct PDFs with a computational statistics technique, the bootstrapping method, which stays faithful to the available data.
Moreover, for the model validation part, we also propose an evidential method involving the probability distribution of the experimental evidence and that of the model prediction, the latter derived through uncertainty propagation. The proposed method borrows the idea of Bayesian hypothesis testing and the definition of the Bayes factor in [20, 31]; inspired by these, groups of BPAs for the decision on model acceptance are generated.
The remainder of this paper is organized as follows. Section 2 provides an overview of the basic concepts and notation of Dempster-Shafer evidence theory. Section 3 describes how the evidence based method eliminates the parameter uncertainty (epistemic uncertainty) of the probability distributions of the model inputs. The validation method, which draws on Bayesian hypothesis testing and the Bayes factor, is presented in Section 4. The proposed methods are illustrated through a steady-state heat transfer problem in Section 5. Finally, Section 6 concludes the paper.
2. Dempster-Shafer Theory of Evidence
The real world is full of uncertainty, and how to deal with uncertain information is still an open issue [32–36]. Many mathematical tools, such as AHP [37–40], fuzzy sets [41–43], D numbers [44–47], evidential reasoning [30, 48–51], and rough sets [52], are adopted with wide applications. In this section, the main concepts of Dempster-Shafer (D-S) evidence theory are reviewed. The theory, introduced by Dempster [53] and extended by Shafer [14], is concerned with belief in a proposition and in systems of propositions. Compared with the Bayesian probability model, the merits of D-S evidence theory have been recognized in various fields. First, D-S evidence theory can handle uncertainty or imprecision embedded in the evidence [54]. In contrast to the Bayesian probability model, in which probability masses can be assigned only to singleton subsets, in D-S evidence theory probability masses can be assigned to both singletons and compound sets; more evidence can thus be expressed about the hypotheses and about the distribution of belief among the singletons, and the theory can be viewed as a generalization of classical probability theory. Second, in D-S evidence theory, no prior distribution is needed before the combination of BPAs from individual information sources. Third, D-S evidence theory allows one to specify a degree of ignorance in some situations rather than forcing a precise assignment to every hypothesis. Finally, conflict management is improved with evidence theory [55–57]. The relevant notation of D-S evidence theory is introduced below.
Definition 1 (frame of discernment). Let $\Theta = \{\theta_1, \theta_2, \ldots, \theta_N\}$ be a finite set of exhaustive and mutually exclusive elements, where each $\theta_i$ is called a proposition or a hypothesis. The set $\Theta$ is called the frame of discernment. Its power set is denoted by $2^{\Theta} = \{\emptyset, \{\theta_1\}, \ldots, \{\theta_N\}, \{\theta_1, \theta_2\}, \ldots, \Theta\}$, where $\emptyset$ denotes the empty set.
Definition 2 (mass function). For a frame of discernment $\Theta$, a mass function is a mapping $m\colon 2^{\Theta} \to [0, 1]$, also called a basic probability assignment (BPA), satisfying
$$m(\emptyset) = 0, \qquad \sum_{A \subseteq \Theta} m(A) = 1, \tag{1}$$
where $A$ is any element of $2^{\Theta}$ and $m(A)$ expresses how strongly the evidence supports the proposition $A$.
Definition 3 (Dempster's rule of combination). Suppose $m_1$ and $m_2$ are two BPAs formed from the information obtained from two different information sources over the same frame of discernment $\Theta$; Dempster's rule of combination, also called the orthogonal sum and denoted by $m = m_1 \oplus m_2$, is defined as
$$m(A) = \begin{cases} \dfrac{1}{1 - K} \displaystyle\sum_{B \cap C = A} m_1(B)\, m_2(C), & A \neq \emptyset, \\[2mm] 0, & A = \emptyset, \end{cases} \tag{2}$$
with
$$K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), \tag{3}$$
where $A$, $B$, and $C$ are elements of $2^{\Theta}$ and $1/(1-K)$ is a normalization constant. $K$ represents the basic probability mass associated with conflict among the sources of evidence and is called the conflict coefficient of the two BPAs. The larger the value of $K$, the more conflicting the sources and the less informative their combination.
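Dempster's rule in (2) and (3) can be sketched in a few lines of Python. The sketch below is illustrative; representing each proposition as a frozenset of hypothesis labels is our own convention, not notation from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Orthogonal sum of two BPAs over the same frame of discernment.
    Each BPA is a dict mapping a proposition (a frozenset of hypothesis
    labels) to its mass; the masses of each BPA must sum to 1."""
    combined = {}
    conflict = 0.0  # K, the conflict coefficient in (3)
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:  # B ∩ C = A ≠ ∅ contributes to m(A)
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:      # B ∩ C = ∅ contributes to the conflict K
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence; orthogonal sum undefined")
    # normalize by 1 - K as in (2)
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

For example, combining $m_1(\{P_1\}) = 0.6$, $m_1(\{P_1, P_2\}) = 0.4$ with $m_2(\{P_1\}) = m_2(\{P_2\}) = 0.5$ gives $K = 0.3$ and fused masses $5/7$ and $2/7$ for $\{P_1\}$ and $\{P_2\}$.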
3. Probability Distributions of Model Input Data
The input of a mathematical or computational model is denoted by $X$ and the model prediction (output) by $Y$; both $X$ and $Y$ are sets of the corresponding variables. Generally, there are three types of data in $X$, as mentioned in [20]:
(a) sufficient data to construct a precise probability density function (aleatory uncertainty);
(b) sparse point data and/or interval data (epistemic uncertainty exists);
(c) a probability density function with uncertain parameters (epistemic uncertainty).
This paper presents an evidential method to represent the model input data and reduce the epistemic uncertainty in them as much as possible, both for applying uncertainty propagation techniques to obtain the model prediction and for model validation. Through our method, each input set of the model can be described by a single probability density function (PDF), as illustrated in Figure 1 [20, 58]. Since the scenario here involves epistemic uncertainty, in case (c) above a parameter set consisting of several alternative combinations of parameter statistics is provided by engineers and experts in addition to the input data. Such epistemic uncertainty is handled by the proposed evidence theory based method, while the computational statistics method addresses case (b). The proposed method is discussed in detail below for sparse point data and interval data.
Figure 1: The analysis of model input data.
3.1. Evidential Probability Distribution of Data with Epistemic Uncertainty
The methods developed for model validation mostly address cases where the experimental evidence is in the form of point data. However, in some cases interval data rather than point data constitute the experimental evidence [59–61]. For convenience, the sparse model input data in this paper are expressed as point data $x_j$ ($j = 1$ to $m$) and interval data ($j = m+1$ to $n$). As mentioned above, the alternative PDF parameter set is expressed as $P = \{P_1, P_2, \ldots, P_K\}$, where each $P_k$ is a parameter set of a known probability density function type, such as exponential, normal, lognormal, or Poisson, for example, the mean and standard deviation of a normal distribution; the PDF conditioned on $P_k$ is denoted by $f_k$. Since the PDF of the model input data is conditioned on the choice of $P_k$ [62–65], every element of the set should be evaluated against the available evidence, that is, the obtained model input data. The basic probability assignment (BPA) of each element of $P$ is derived according to the principle that the closer an alternative PDF is to the real PDF of the model inputs, the higher its mass $m(\{P_k\})$ should be. A good and useful BPA should retain most of the information provided by the data source and be suitable for the subsequent processing (combination of BPAs and decision making). To this end, the proposed method incorporates the available model input data with the alternative PDFs to extract the BPA information sufficiently.
The normal distribution is commonly encountered in practice and is used throughout statistics, the natural sciences, and the social sciences as a simple model for complex phenomena. To explore the details, we use normal candidate PDFs $f_k = N(\mu_k, \sigma_k^2)$ ($k = 1$ to $K$) as an illustration. The steps for obtaining BPAs from the available model input data are discussed below.
3.1.1. Step 1
First, given the normal PDF parameter set provided by experts and engineers, the corresponding normal PDFs can be determined and their curves drawn. Each PDF is coded as a proposition of the frame of discernment in D-S evidence theory; that is, $\Theta = \{P_1, P_2, \ldots, P_K\}$.
3.1.2. Step 2
In this step, the BPAs are derived from both the model input data and the normal PDFs. For a point datum $x_j$ ($j = 1$ to $m$), we determine the BPA with the help of the mean value theorem and regard observing $x_j$ as approximately equal to observing the interval $[x_j, x_j + \varepsilon]$, where $\varepsilon$ is an infinitesimally small positive number, so that the probability of the observation under the $k$th candidate PDF is approximately $f_k(x_j)\,\varepsilon$. In this way a BPA $m_j$ is obtained from the $j$th evidence source: calculate the ordinates at which the vertical line through $x_j$ intersects the candidate PDF curves,
$$y_{jk} = f_k(x_j), \quad k = 1, \ldots, K,$$
giving $K$ intersections with the alternative PDFs with uncertain parameters. The singleton masses are then obtained by normalizing these ordinates,
$$m_j(\{P_k\}) = \frac{y_{jk}}{\sum_{l=1}^{K} y_{jl}},$$
the BPA values of the remaining proper subsets of $\Theta$ are set to zero, and any mass not committed to a singleton is assigned to $m_j(\Theta)$, which in the evidential setting is the quantified expression of the global ignorance about the optimal model input PDF parameters. For the interval data ($j = m+1$ to $n$), since $\varepsilon$ is usually rather small, we simply discretize each interval, sampling a certain number of points from it to stand in for the interval; the BPA generation then proceeds exactly as for point data. In this way we obtain further groups of BPAs, all of which are used in the final decision about the optimal model input PDF.
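The BPA construction of Step 2 for a single point datum can be sketched as follows. Note one simplifying assumption of this sketch: the ordinates are fully normalized over the candidate PDFs, so no residual mass is left on $\Theta$; the paper's own construction reserves some mass for global ignorance.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bpa_from_point(x, params):
    """Build a BPA over candidate parameter sets from one point datum x.
    params: list of (mu, sigma) tuples, one per candidate normal PDF.
    Singleton masses are proportional to the PDF ordinates f_k(x),
    normalized here to sum to 1 (a simplification of the paper's scheme)."""
    ords = [normal_pdf(x, mu, s) for mu, s in params]
    total = sum(ords)
    return {frozenset({f"P{k + 1}"}): y / total for k, y in enumerate(ords)}
```

A datum close to one candidate's mean concentrates mass on that candidate; for example, with candidates $N(5.0, 0.2^2)$ and $N(5.5, 0.2^2)$, the datum $x = 5.0$ puts most of the mass on $\{P_1\}$.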
3.1.3. Step 3
Based on the work done in Step 2, we combine all the obtained BPAs (that is, the multiple mass functions) using Dempster's rule of combination as given in (2) and (3). According to the fused result, the proposition $P_k$ ($k = 1$ to $K$) with the largest fused mass is the evidence-supported optimal distribution parameter choice. The approximate real model input PDF is then estimated as the normal distribution with that parameter combination. Notably, if $m(\Theta)$ turns out to be the largest, we conclude either that the alternative distributions are too similar to discriminate or that the given data are too sparse to support a choice. The engineers and experts who provided the parameter set are assumed to be reliable.
3.2. Probability Distribution without Uncertain Parameters
Different from the situation in Section 3.1, consider the case where only the sparse model input data, denoted as a set, are given and the PDF must be estimated. As before, the data set includes both points ($j = 1$ to $m$) and intervals ($j = m+1$ to $n$). Because the sample size is limited, an empirical distribution is hard to construct directly. However, the basic idea of the bootstrapping method [66] is that inference about a population from sample data can be modeled by resampling the sample data with replacement and performing inference on the resampled data. This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods [67]. With prior knowledge of the distribution type, we can estimate the distribution by using the bootstrapping method to determine the parametric statistics. Given both point and interval data, the intervals are first discretized into a finite number of points; the bootstrapping method is then applied, and the statistics (the mean and standard deviation of the population) are estimated (by the central limit theorem) and used to construct the PDF of the model input.
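The discretize-then-bootstrap procedure can be sketched as below. The grid resolution, resample count, and the choice to average the statistics over resamples are assumptions of this sketch, not values prescribed by the paper.

```python
import random
import statistics

def bootstrap_mean_std(points, intervals, n_grid=100, n_boot=2000, seed=0):
    """Estimate the population mean and standard deviation from sparse
    point and interval data: discretize each interval into an evenly
    spaced grid of points, pool them with the point data, then bootstrap
    (resample with replacement) and average the statistics over resamples."""
    rng = random.Random(seed)
    pool = list(points)
    for lo, hi in intervals:
        pool += [lo + (hi - lo) * j / (n_grid - 1) for j in range(n_grid)]
    means, stds = [], []
    for _ in range(n_boot):
        sample = [rng.choice(pool) for _ in pool]  # resample with replacement
        means.append(statistics.fmean(sample))
        stds.append(statistics.stdev(sample))
    return statistics.fmean(means), statistics.fmean(stds)
```

The returned pair can then parameterize the assumed distribution type (e.g., a normal PDF) for the input variable.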
4. Model Validation
Through the above sections, each model input is represented by a single probability distribution. The next task is to propagate the input uncertainty through the mathematical model and determine the PDF of the model output. Given the statistical distributions of the input variables, various methods are available for probabilistic analysis to quantify the uncertainty in the model output, such as Monte Carlo simulation [68] and response surface methods [69, 70]. The choice of method depends on the nature of the model used to predict the output and on the requirements for accuracy and efficiency. Here, the model prediction takes a probabilistic distribution form; model validation therefore compares the observed experimental evidence with the probability distribution of the model output. This section develops the evidential model validation metric, inspired by the idea of Bayesian hypothesis testing and the Bayes factor.
4.1. Bayes Factor
Bayesian hypothesis testing estimates the probability of a hypothesis given the observed experimental evidence $D$. Bayesian methods may use Bayes factors, introduced by Jeffreys [71], to compare hypotheses. The Bayes factor $B$ is a Bayesian alternative to frequentist hypothesis testing, most often used to compare multiple models and determine which better fits the experimental evidence; it equals the posterior odds in favor of a hypothesis divided by the prior odds. For two hypotheses $H_0$ and $H_1$, the related probabilities can be updated using Bayes' theorem as [72]
$$\frac{P(H_0 \mid D)}{P(H_1 \mid D)} = \frac{P(D \mid H_0)}{P(D \mid H_1)} \cdot \frac{P(H_0)}{P(H_1)}. \tag{8}$$
The first term on the right-hand side of (8) is the Bayes factor $B$. In the context of model validation, the two hypotheses $H_0$ and $H_1$ may be defined as "the model is correct" and "the model is incorrect," respectively.
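The odds form of Bayes' theorem in (8) amounts to a one-line computation. The function below is a minimal sketch; the equal-prior default is an assumption for illustration.

```python
def posterior_odds(likelihood_h0, likelihood_h1, prior_h0=0.5):
    """Bayes' theorem in odds form: the posterior odds of H0 over H1
    equal the Bayes factor B = P(D|H0)/P(D|H1) times the prior odds.
    Returns (posterior odds, Bayes factor)."""
    bayes_factor = likelihood_h0 / likelihood_h1
    prior_odds = prior_h0 / (1.0 - prior_h0)
    return bayes_factor * prior_odds, bayes_factor
```

With equal priors the posterior odds coincide with the Bayes factor, so $B > 1$ favors "the model is correct."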
4.2. Evidential Model Acceptance Decision Making
Consider a provided model input data set; the model predicts an output set consisting of deterministic quantities, each corresponding to a specific input data case with a measured experimental data set $D$. We take as reference the Bayes factor developed in [20], whose form reflects the fact that "the epistemic uncertainty in the model inputs and the validation data have already been converted into probabilistic information" [20]. In this paper, we use the bootstrapping method to estimate the statistics needed to infer the PDF of the observed experimental evidence, $f_D$, which replaces the numerator of the Bayes factor in (9); from the redefined equation (9) we then obtain two values, the numerator and the denominator. Accordingly, for the decision on model acceptance we constitute the frame of discernment with two propositions, "the model is correct" ($H_0$) and "the model is incorrect" ($H_1$), so that $\Theta = \{H_0, H_1\}$. As for the BPA value of each proposition, the two curves in Figure 2 illustrate the construction: attention focuses on the two intersection points, $x_1$ and $x_2$, of the PDF of the model prediction and the PDF of the experimental evidence, from which the BPA values of the propositions in $\Theta$ are defined via (10)–(13). Hence one BPA is obtained for each measured data set, with $m(\Theta)$ quantifying the ignorance about the model test. After obtaining all the groups of BPAs, we combine them using Dempster's rule of combination in (2) and (3), $m = m_1 \oplus m_2 \oplus \cdots$. According to the fused result, whichever of the three propositions $\{H_0\}$, $\{H_1\}$, and $\Theta$ owns the largest BPA value determines the outcome of the model test: if it is $\{H_0\}$ or $\{H_1\}$, the result is self-evident, whereas if $m(\Theta)$ is the largest we conclude that the given data are too sparse for a decision and other methods must be employed. The evidence distance may be another alternative to the Bayes factor [73]; this is still under research.
Figure 2: The illustration figure for the BPA generation.
Dempster-Shafer evidence theory is concerned with belief in a proposition and in systems of propositions, and it converges most of the BPA mass to the dominant proposition quickly. Dempster's rule of combination is robust against incidental disagreements between a model prediction and its corresponding measured evidence (caused, e.g., by measurement error or sparse evidence). In such conditions, the Bayes factor in (9) can be unreliable and imprecise, whether computed once or averaged over several results, while the proposed evidence theory based method handles them properly. In particular, the BPAs reflect the partial lack of information available for decision making, which can be used to reject the model under consideration if the relative uncertainty is too high.
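A validation BPA of the kind described above can be sketched numerically. The mass split below is an assumption for illustration only, not the paper's equations (10)–(13): the overlap area of the prediction PDF and the evidence PDF (the integral of their pointwise minimum) supports $H_0$, its complement supports $H_1$, and a fixed fraction is reserved for $\Theta$ as ignorance.

```python
import math

def overlap_bpa(f_pred, f_obs, lo, hi, n=10000, ignorance=0.1):
    """Illustrative BPA over the frame {H0: model correct, H1: model
    incorrect}. Integrates min(f_pred, f_obs) on [lo, hi] by the
    midpoint rule; the overlap supports H0, the complement supports H1,
    and a fixed `ignorance` fraction goes to Theta = {H0, H1}."""
    dx = (hi - lo) / n
    overlap = sum(
        min(f_pred(lo + (i + 0.5) * dx), f_obs(lo + (i + 0.5) * dx))
        for i in range(n)
    ) * dx
    overlap = min(overlap, 1.0)  # guard against quadrature error
    return {
        frozenset({"H0"}): (1 - ignorance) * overlap,
        frozenset({"H1"}): (1 - ignorance) * (1 - overlap),
        frozenset({"H0", "H1"}): ignorance,
    }
```

One such BPA per measured data set can then be fused with Dempster's rule of combination, as in Section 4.2.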
5. Numerical Example
In this section, a numerical example illustrating the proposed model validation methods is presented. For the purpose of illustration, various types of epistemic uncertainty, that is, sparse point data, interval data, and distribution parameter uncertainty, are arranged to appear simultaneously in the model input data. The experimental observations also include both point data and interval data. The units of the following variables are omitted to simplify the illustration.
5.1. Problem Description and the Data
The steady-state heat transfer in a thin wire of length $L$, with thermal conductivity $k$ and convective heat coefficient $h$, is of interest; the temperature prediction at the midpoint of the wire is desired. The problem is assumed to be essentially one-dimensional, and the solution is assumed to be obtainable from the boundary value problem of [20, 21], referred to below as model (14). Rebba et al. [21] assumed that the temperatures at both ends of the wire are zero, with the heat source taken as in [74]; these idealized conditions are also used in this paper. The length of the wire is deterministic. It is desired to predict the midpoint temperature $T(L/2)$.
For the sake of illustration, $k$ and $h$ are random variables here. The PDF of the conductivity $k$ of the wire is assumed to be normal but with uncertain distribution parameters, for which experts and engineers provide several alternatives; Table 1 displays a parameter set consisting of five alternative combinations of parameter statistics. The input model data concerning $k$ consist of two intervals and three point values, 4.98, 5.21, and 5.02. The distribution of the convective heat coefficient $h$ is likewise supposed to be normal but is not yet available; instead, $h$ is described by two intervals and two point values, 0.58 and 0.52.
Table 1: The alternative parameters provided by experts and engineers.
Suppose that, for given end temperatures of the wire and the model parameters $k$ and $h$, the numerical model (14) predicts the temperature at the midpoint. A wire with these properties, having the same measured values as the model inputs, is tested three times to measure the temperature at that location, and the measured temperatures differ between experiments; the three measurements include both point and interval values. It is required to assess whether the observed experimental evidence supports the numerical model (14). The steps of the validation procedure are detailed below.
5.2. Probability Distribution of Model Input Data
As described in Section 3, the first task is to describe the provided model input data by proper PDFs. In this example, the PDF of the conductivity $k$ of the wire is described by the experts and engineers as normal with uncertain distribution parameters, that is, a normal distribution conditioned on the given set of alternative parameters. There are thus five alternative PDF curves, depicted in Figure 3, from which the input data generate groups of BPAs for eliminating the parametric uncertainty in the PDF of $k$. Figure 3(b) is a partially enlarged view of the intersections of the provided data with the curves, from which the BPA values are determined for information fusion and decision making. The generated BPAs are displayed in Table 2 ($m_1$ represents the BPA generated from the leftmost point on the $x$-axis, and the rest follow in the same fashion). The fusion result is shown in Table 3, with the curves coded 1 to 5 as in Table 1; from it we conclude the ranking of the five alternative PDFs and select the evidence-supported parameter combination for the approximate real PDF of the thermal conductivity $k$, namely, the red curve in Figure 3(a).
Table 2: The BPA generated from evidence.
Table 3: The fused result of PDF choice for .
Figure 3: Illustration for evidential representation of epistemic model input data.
Since there is little information about the distribution of the convective heat coefficient $h$ of the wire, only two intervals and two point values, 0.58 and 0.52, which is the situation discussed in Section 3.2, discretization and the bootstrapping method are applied. We discretize the data into 1000 points and estimate the parameters of the normal distribution of $h$; the resulting PDF curve is shown in Figure 4.
Figure 4: The bootstrapping estimated PDF for .
5.3. Probability Distribution of Model Prediction
Now that the random variables $k$ and $h$ are both described in PDF form, an uncertainty propagation technique is applied to estimate the distribution of the midpoint temperature, which is also a random variable. Following Section 4, the uncertainty propagation technique used here is Monte Carlo simulation, with the input PDFs propagated through the computational model (14). The predicted PDF of the midpoint temperature is obtained and shown as the red curve in Figure 6, a lognormal distribution with mean 17.12 and variance $10.04^2$; this is the PDF to be evaluated in the evidential model validation below.
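Monte Carlo propagation of the input PDFs can be sketched as follows. The `model` callable here stands in for the heat-transfer model (14); any function of $(k, h)$ works, and the sample count and summary statistics are choices of this sketch.

```python
import random
import statistics

def propagate(model, k_dist, h_dist, n=100000, seed=1):
    """Monte Carlo uncertainty propagation: draw samples from the input
    PDFs (k_dist and h_dist are callables taking a random.Random and
    returning one sample), push each pair through the computational
    model, and summarize the output distribution by its mean and
    standard deviation."""
    rng = random.Random(seed)
    outputs = [model(k_dist(rng), h_dist(rng)) for _ in range(n)]
    return statistics.fmean(outputs), statistics.stdev(outputs)
```

In practice the full output sample would be kept (e.g., to fit the lognormal prediction PDF), not just its first two moments.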
5.4. Evidential Model Validation
Given the model predictions and the three groups of corresponding measured experimental evidence, the next step is to estimate the PDFs of the evidence ($f_{D_i}$, $i = 1$ to 3) through the bootstrapping method; the results are depicted in Figure 5. We then plot the three evidence PDFs together with the model prediction PDF in Figure 6. From the intersections indicated by the red dotted lines, three tuples of values are derived to generate three groups of BPAs for information fusion and the model acceptance decision (shown in Table 4), following (10)–(13). The fusion result is displayed in Table 5; it clearly shows that the experimental evidence agrees with the model, so the computational model (14) is acceptable.
Table 4: The generated BPA for model acceptance decision making.
Table 5: The fused result of model acceptance decision making.
Figure 5: Distributions of the measured experimental evidence.
Figure 6: The illustration figure for evidential model validation.
6. Conclusion
This paper proposes an evidence theory based method to quantify epistemic uncertainty in computational models, covering three types of epistemic uncertainty in the input model data: sparse point data, interval data, and probability distributions with uncertain parameters. We also develop an evidential method for model validation, inspired by Bayesian hypothesis testing and the Bayes factor, which compares the model predictions with the measured experimental data to assess the predictive capability of the model and support the decision on model acceptance. The bootstrapping technique is used to estimate the statistics needed to infer the probability density function (PDF) of the data involved in the validation process. Through the proposed methods, the given data, in both point and interval form, are treated as random variables and described by corresponding probability distributions for uncertainty propagation through the computational model and, in turn, for model validation. Building on the idea of the Bayes factor, the frame of discernment of D-S evidence theory is constituted and the basic probability assignment (BPA) is determined. Because the proposed validation method is evidence based, the robustness of the result is improved, and the hypothesis best supported by the evidence about the model test is favored by the fused BPA, thus aiding the decision on model acceptance.
A numerical example on predicting the middle temperature of a wire demonstrates that the proposed methods can effectively handle epistemic uncertainty in the context of model validation. Future work will extend the methods to more complex validation scenarios in which both epistemic and aleatory uncertainty, as well as further types of epistemic uncertainty such as qualitative model data and categorical variables, are taken into consideration.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Authors’ Contributions
Wei Deng and Xi Lu contributed equally to this study.
Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (Grant nos. 61573290 and 61503237).