Why calibrate?
Bayes equation makes it plain that there is uncertainty before history-matching and there is uncertainty after history-matching. So why seek uniqueness when none exists? The cost of uniqueness is high. Calibration seeks the simplest parameter field that is compatible with a measurement dataset. (Strictly speaking, it seeks the parameter set that provides predictions of minimised error variance; statistically, "minimum error variance" often translates to "simplest".)

However, the relationship between a calibrated parameter field and reality is a complex one. SVD shows that the calibrated parameter field is the outcome of an orthogonal projection of the real parameter field onto a subspace of much smaller dimension. Meanwhile, regularisation theory shows that the calibrated parameter value assigned to any point in space is actually a spatial integral of the true but unknown parameter field over all space. Both of these concepts tell the same story: what we obtain from calibration is a smoothed, blurred version of hydraulic property reality. However, regularisation theory also tells us that this is all that we CAN know about reality.

Model predictions made using this calibrated parameter field will probably be wrong. Nevertheless, their potential for wrongness has been minimised (if regularisation has been done properly). But this is not enough on which to base an important decision. A decision-maker needs to know the full range of predictive possibilities (or at least the pessimistic end of that range) before making a decision. He/she is then fully informed of what can go wrong if a certain management trajectory is followed.

There are practical considerations as well. Highly parameterised inversion can be numerically expensive. It requires calculation of a Jacobian matrix during every iteration of the inversion process.
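The projection idea can be illustrated with a small NumPy sketch. All quantities below are synthetic stand-ins (a random matrix plays the role of the Jacobian); this is not output from PEST or from any particular model:

```python
# Sketch: truncated SVD projects a "true" parameter field onto the
# calibration solution space, yielding a smoothed version of reality.
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_par = 30, 200                      # far more parameters than observations
J = rng.standard_normal((n_obs, n_par))     # stand-in Jacobian (sensitivity) matrix

# The right singular vectors of J span the directions that the data can "see".
_, s, Vt = np.linalg.svd(J, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))            # numerical rank = solution space dimension

p_true = rng.standard_normal(n_par)         # hypothetical true parameter field
P = Vt[:k].T @ Vt[:k]                       # orthogonal projector onto solution space
p_cal = P @ p_true                          # what calibration can recover, at best

# Components of p_true in the null space are lost, so the calibrated
# field is a "shrunken", smoothed version of reality:
ratio = float(np.linalg.norm(p_cal) / np.linalg.norm(p_true))  # < 1: detail is lost
```

With 30 observations and 200 parameters, the solution space has at most 30 dimensions; the remaining 170 dimensions of parameter variability are invisible to calibration.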
Where adjoint methods are not available (as they normally are not), this matrix must be calculated using finite parameter differences. Where parameters are many, the filling of this matrix therefore requires many model runs - at least one model run for each adjustable parameter. (Note that PEST_HP provides numerical shortcuts through its randomised Jacobian functionality; this can sometimes be very effective in reducing the numerical burden of derivatives calculation, and sometimes not.)

On the other hand, ensemble-based algorithms such as those provided by PESTPP-IES are far less numerically expensive, as the number of model runs required per iteration of the history-matching process is equal to the number of parameter realisations (normally a few hundred) rather than the number of parameters. The approximate, rank-deficient Jacobian matrix that is obtained in this way is often sufficient to support attainment of a high level of model-to-measurement fit. Furthermore, the outcome of the PESTPP-IES ensemble smoother process is not just one, but many parameter sets, all of which are (in theory) samples of the posterior parameter probability distribution. Bayes equation is therefore implemented directly, and at a far smaller numerical cost than that of calibration.

So why calibrate? Why not just implement ensemble-based Bayesian analysis?
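The run-count arithmetic of finite-difference Jacobian filling can be seen in a few lines of Python. The forward model below is a hypothetical linear stand-in, not a groundwater simulator:

```python
# Sketch: filling a Jacobian by forward finite differences costs one model
# run per adjustable parameter, on top of a single base run.
import numpy as np

def model(p):
    # Hypothetical forward model: parameters -> simulated observations.
    A = np.vander(np.linspace(0.0, 1.0, 5), len(p), increasing=True)
    return A @ p

def fd_jacobian(fwd, p0, rel_step=1e-4):
    """Forward-difference Jacobian of fwd() at p0."""
    y0 = fwd(p0)                          # 1 base run
    J = np.empty((y0.size, p0.size))
    for j in range(p0.size):              # + 1 run per parameter
        dp = rel_step * max(abs(p0[j]), 1.0)
        p = p0.copy()
        p[j] += dp
        J[:, j] = (fwd(p) - y0) / dp
    return J

p0 = np.array([1.0, 2.0, 0.5])
J = fd_jacobian(model, p0)
# Total cost: 1 + 3 = 4 model runs for 3 parameters. With tens of
# thousands of parameters, the same loop implies tens of thousands of runs
# per inversion iteration - which is why ensemble methods, whose cost
# scales with realisation count instead, become attractive.
```

Because the stand-in model is linear, the finite-difference Jacobian here reproduces its coefficient matrix almost exactly; for a real nonlinear simulator the choice of increment matters much more.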
Like most questions that arise in groundwater modelling, the above one does not have a universal answer - except where model run times are high and parameters number in the tens of thousands. Under these conditions ensemble methods provide the only practical history-matching alternative - whether they are used to sample the posterior parameter probability distribution or to approximate the calibration process. However, in more forgiving numerical circumstances (such as can often be achieved through problem decomposition), undertaking model calibration as a precursor to uncertainty analysis can provide insights that cannot be attained in other ways.

Regularised inversion informs a modeller of the heterogeneity that MUST exist in order to explain a measurement dataset. In contrast, uncertainty analysis informs a modeller of the heterogeneity that MAY exist while remaining compatible with that dataset. Model calibration can therefore be seen as a form of hypothesis-testing. The hypothesis that it tests is that the conceptual basis of the numerical model is correct. If a good fit with a calibration dataset cannot be attained, or if attainment of a good fit requires the introduction of excessive heterogeneity or unrealistic parameter values, this provides grounds for contesting the conceptual model (including the prior parameter probability distribution). Anomalous parameter values may indicate the existence of subsurface conditions of which hydrogeologists were previously unaware. Alternatively, parameters may be adopting compensatory roles to accommodate model conceptual defects. In either case, revision of prior concepts is required. If this is not done, subsequent uncertainty analysis is invalidated: if the heterogeneity that MUST exist is wrong, then analysis of the heterogeneity that MAY exist is wrong as well.

The Jacobian (i.e.
sensitivity) matrix that is produced as a by-product of model calibration provides fertile ground for insights into the flow of information and into parameter and predictive uncertainty. Linear analyses that are conducted using this matrix can yield the following:

• prior and posterior parameter and predictive uncertainties;
• parameter identifiabilities;
• the worth of existing and of yet-to-be-acquired data;
• contributions of individual and grouped parameters to the uncertainties of decision-critical model predictions;
• insights into the effects of possible model defects;
• reasons for a model's inability to simultaneously fit two different components of a calibration dataset.

Where history-matching is based on a full Jacobian matrix (rather than a rank-deficient Jacobian matrix), model-to-measurement fits are probably as good as can be attained with the current model. Where these are inadequate, the conceptual basis of the model can be rightly challenged. On the other hand, where they are acceptable, the information that is thereby harvested may reveal aspects of system properties and behaviour that are important to its future management, but were hitherto unknown.

Finally, the combination of a full-rank Jacobian matrix and properly formulated regularisation should ideally yield predictions which are relatively immune from bias, especially in data-rich environments where the numerical advantages of ensemble methods are somewhat diminished. In such circumstances, post-calibration uncertainty analysis (however this is accomplished) may benefit from using the calibrated parameter field (and possibly linearly-estimated posterior parameter uncertainties) as its starting point.
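The first item in the list above can be sketched with linear (first-order second-moment, or FOSM) algebra. The matrices below are synthetic placeholders; tools such as pyEMU implement this calculation far more completely against real PEST Jacobians:

```python
# Sketch: linear-Bayes (FOSM) posterior parameter covariance computed
# from a Jacobian, a prior parameter covariance and a noise covariance.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 20, 8
J = rng.standard_normal((n_obs, n_par))   # stand-in sensitivity matrix
C_prior = np.eye(n_par)                   # prior parameter covariance
C_eps = 0.1 * np.eye(n_obs)               # measurement noise covariance

# Posterior parameter covariance (linear Bayes):
#   C_post = (J^T C_eps^-1 J + C_prior^-1)^-1
C_post = np.linalg.inv(J.T @ np.linalg.inv(C_eps) @ J + np.linalg.inv(C_prior))

# Prior and posterior variance of a prediction whose sensitivity to
# parameters is the (hypothetical) vector y:
y = rng.standard_normal(n_par)
var_prior = float(y @ C_prior @ y)
var_post = float(y @ C_post @ y)
# History-matching can only reduce (linearly estimated) predictive variance,
# since the posterior precision matrix dominates the prior precision matrix.
```

The gap between `var_prior` and `var_post` is what linear data-worth analysis exploits: recompute `C_post` with candidate observations added to (or removed from) `J` and compare the resulting predictive variances.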