Problem decomposition
This section has touched on many important concepts, including the scientific method, modelling as inquiry, Bayes' equation, decision-support modelling metrics, appropriate model structural and parameterisation complexity, predictive bias, and insights gained through pessimism. All of these must be considered when undertaking decision-support modelling. One thing is certain, however: there is no such thing as "purity" when it comes to decision-support modelling. Choices will be subjective, but they can still be rooted in logic and based on informed common sense. In the end, modelling supports the decision-making process if it can guarantee that:

•it has exposed all mechanisms of possible management failure;
•it has made best use of all available data, so that predictions are not unduly pessimistic;
•it has provided the basis for design of a monitoring network that provides early warning of unwanted management outcomes.

All of these goals are attainable if decision-support modelling is tuned to the harvesting of management-salient information from data. This, rather than "simulation perfection", is its goal. A simulator hosts parameters. Parameters host information. Information reduces uncertainty, while sometimes challenging current site concepts and prior probability distributions. Openness to information is the key to decision-support modelling, as it is the key to science.
The answer is obvious: start from the problem and work backwards. The problem is groundwater management. Decompose this problem into parts that can be subjected to scientific analysis, paying as much attention to the flow of information as to the flow of water. Then perform the analysis.

Decomposing the problem of groundwater management in this way allows identification of one or more "bad things" that may happen if a certain course of management action is adopted. Each of these constitutes a hypothesis. A hypothesis can be rejected if it is demonstrably incompatible with any of the following:

•How the environmental system behaves. (Its behaviour is encapsulated in a simulator.)
•The disposition and nature of background and anomalous hydraulic properties to which the occurrence of a bad thing may be sensitive. (These are encapsulated in the prior parameter probability distribution.)
•The observed behaviour of the system. (This is encapsulated in the history-matching dataset.)

It is apparent that simulation, if undertaken as a means of data assimilation, is well suited to testing decision-pertinent hypotheses. Groundwater modelling can therefore become a scientific instrument. However, this does not happen as a by-product of the quest for simulation perfection. It happens when simulation is part of a prediction-specific, information-harvesting strategy that involves both a simulator and PEST/PEST++. Pursuit of this strategy requires that the system properties and processes that contribute most to the possible occurrence of a specific bad thing be represented, and adjustable where possible, in ways that can express and reduce uncertainty without incurring predictive bias. This requires trade-offs. Bias can never be completely eliminated; furthermore, the potential for bias increases as quantified uncertainty is reduced. (This is a "golden rule" of data processing.) Both simple and complex models can incur their own forms of bias.
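The hypothesis-testing logic described above can be sketched as a simple rejection-sampling exercise. The following is a minimal, hypothetical illustration only: the `simulate()` function, the single observation, and all thresholds are invented stand-ins, not part of any real model or of PEST/PEST++ itself. Parameters are drawn from a prior distribution; realizations incompatible with the observed behaviour of the system are discarded; the "bad thing" hypothesis is rejected only if no surviving realization predicts it.

```python
# Hypothetical sketch of prediction-specific hypothesis testing by
# rejection sampling. simulate(), the observation, and all thresholds
# are illustrative assumptions, not a real model.
import random

random.seed(1)

def simulate(k):
    """Stand-in simulator: maps a hydraulic conductivity k to
    (simulated head, predicted drawdown at a sensitive receptor)."""
    return (10.0 / k, 5.0 / k)

OBSERVED_HEAD = 2.0      # the history-matching dataset (one datum here)
MISFIT_TOLERANCE = 0.5   # acceptance criterion for history matching
BAD_THING = 1.5          # drawdown above this is management failure

# Sample from the prior parameter probability distribution.
prior = [random.uniform(1.0, 10.0) for _ in range(2000)]

# Retain only realizations compatible with observed system behaviour.
posterior = [k for k in prior
             if abs(simulate(k)[0] - OBSERVED_HEAD) < MISFIT_TOLERANCE]

# The hypothesis survives if any retained realization predicts the bad thing.
bad_outcomes = [k for k in posterior if simulate(k)[1] > BAD_THING]
print(f"{len(posterior)} realizations pass history matching")
print("hypothesis cannot be rejected" if bad_outcomes
      else "hypothesis is rejected")
```

In this toy setting, history matching confines the conductivity to a range within which the predicted drawdown never exceeds the failure threshold, so the hypothesis is rejected. In practice this filtering role is played by formal data-assimilation tools such as PEST/PEST++ operating on a real simulator, but the flow of information is the same: prior, then data, then a verdict on the decision-pertinent hypothesis.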
Hence there is no universal recipe for decision-support modelling success. The best strategy depends on the hypothesis being tested. Where multiple "bad thing" hypotheses must be tested, multiple decision-support models may be required.