Model calibration
The previous section discusses the role of modelling in decision support. It stresses the importance of uncertainty analysis, and shows how the uncertainties of some predictions can be reduced through history-matching. This reduction in uncertainty can be described using Bayes equation.
So in environmental modelling we start with uncertainty and we finish with uncertainty. The uncertainty with which we start can be characterised using prior parameter and predictive probability distributions. Post-history-matching uncertainties can be characterised using posterior parameter and predictive probability distributions; the latter are often estimated by sampling them.
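As a reminder, Bayes equation links these prior and posterior distributions. Using illustrative notation (not drawn from the preceding section), let $\mathbf{k}$ denote a set of model parameters and $\mathbf{h}$ denote the measurements of system state against which the model is history-matched; then

$$P(\mathbf{k}\mid\mathbf{h}) \;\propto\; P(\mathbf{h}\mid\mathbf{k})\,P(\mathbf{k})$$

where $P(\mathbf{k})$ is the prior parameter probability distribution, $P(\mathbf{h}\mid\mathbf{k})$ is the likelihood function expressing how well model outputs computed with parameters $\mathbf{k}$ reproduce the measurements $\mathbf{h}$, and $P(\mathbf{k}\mid\mathbf{h})$ is the posterior parameter probability distribution.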
So where does the notion of "model calibration" fit into this? "Calibration" implies that a model should be endowed with a single set of parameters that allow it to replicate the past. More than this, it implies that this single set of parameters can be used to make reliable predictions of many different types of future system behaviour. This idea seems to violate the precepts of Bayes equation. Furthermore, it seems to be a sure path to decision-support modelling failure.
It follows that the "calibrated model" should never constitute the end-point of decision-support modelling. This is despite the fact that contracts worth hundreds of thousands of dollars are often signed with "the calibrated model" as the deliverable.
Nevertheless, model calibration can often serve a useful role in decision-support modelling. Here we examine what calibration is, and what role it can play.
As in other sections that comprise this set of web pages, we do not delve deeply into mathematics. The interested reader can find mathematical details in the PEST Book, as well as in a series of GMDSI-produced videos on the mathematics behind PEST and PEST++.