A Framework for Model Validation
Computational models can potentially be used to make credible predictions in place of physical testing in many contexts, but their success and acceptance require convincing model validation. Model validation is generally understood to mean a comparison of model predictions to experimental results, yet there appears to be no standard framework for conducting this comparison. This paper presents a statistical framework for model validation that is closely analogous to calibration, with the basic goal of designing and analyzing a set of experiments to obtain information about the 'limits of error' that can be associated with model predictions. Implementation in the context of complex, high-dimensional models, however, poses a considerable challenge both for the development of appropriate statistical methods and for the interaction of statisticians with model developers and experimentalists. The proposed framework provides a vehicle for communication between modelers, experimentalists, and the analysts and decision-makers who must rely on model predictions.
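As a rough illustration of the kind of comparison the framework formalizes, the sketch below pools discrepancies between model predictions and experimental observations made at shared input settings and converts them into an approximate error band. The function name, the small example data set, and the normal-approximation band are all illustrative assumptions for this sketch, not the method developed in the paper.

```python
import numpy as np
from scipy.stats import norm


def validation_error_limits(model_predictions, experimental_obs, coverage=0.95):
    """Return approximate 'limits of error' for model predictions, based on
    discrepancies between the model and validation experiments run at the
    same input settings.

    Illustrative sketch only: it treats the prediction-minus-experiment
    discrepancies as roughly normal and forms a simple mean +/- z * spread
    band, which is an assumption, not the paper's framework.
    """
    model_predictions = np.asarray(model_predictions, dtype=float)
    experimental_obs = np.asarray(experimental_obs, dtype=float)

    # Discrepancy between the model and each validation experiment.
    discrepancy = model_predictions - experimental_obs

    # Systematic offset (bias) and scatter of the discrepancies.
    bias = discrepancy.mean()
    scatter = discrepancy.std(ddof=1)

    # Two-sided normal quantile for the requested coverage (e.g. ~1.96 for 95%).
    z = norm.ppf(0.5 + coverage / 2.0)

    # Approximate band within which the model's prediction error is expected to fall.
    return bias - z * scatter, bias + z * scatter


# Hypothetical example: five validation experiments.
preds = [10.2, 11.8, 9.9, 12.4, 10.7]
obs = [10.0, 11.5, 10.3, 12.0, 10.9]
lo, hi = validation_error_limits(preds, obs)
print(f"Estimated limits of error: [{lo:.2f}, {hi:.2f}]")
```

In practice the paper's framework addresses much richer situations (high-dimensional inputs, designed validation experiments, communication among modelers and experimentalists); the band above is only a minimal stand-in for the idea of attaching quantified error limits to model predictions.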