Metrics
Essential to the credibility of decision-support numerical modelling is implementation of the scientific method. Only when this is respected can decision-makers claim that their decisions are based on science.

But what is the scientific method? Philosophers differ somewhat on a concise definition. However, they all agree on one thing: conclusions must emerge from the application of logic to reproducible measurements. Logic itself can be subdivided into inductive and deductive logic. Bayesian methods are often associated with inductive logic, while hypothesis-testing is often associated with deductive logic. Other philosophers of science stress the importance of abduction. This describes the "lightbulb moments" that scientists may experience when the solution to a scientific riddle comes to them after pondering the facts for long enough (like when Sherlock Holmes suddenly realises who committed a dastardly crime).

Decision-support modelling can draw inspiration from all of these aspects of the scientific method. All of them have in common:

•the overriding importance of data;
•the extraction of information from data;
•presentation of that information to human beings in ways that allow conclusions to be drawn;
•recognition that current information may not support an unambiguous conclusion.

Most importantly, implementation of the scientific method must be a collegiate activity in which information is uncontaminated by an individual's values, and in which members of the scientific community can learn from the insights of other members regardless of their backgrounds. This does not mean that science cannot serve human values. However, separation of the two is essential while engaging in scientific deliberation and communication.
It follows that the scientific credibility of decision-support modelling requires that it be undertaken with an open mind. It also requires that decision-support modelling be able to harvest information from data, and that a modeller be prepared to be illuminated by what the data expose through abductive reasoning.

While this may sound obvious, it is often at odds with the way in which many big, expensive models are built. They are often based on conceptual models that are uncertain. They often express details that are beyond the capacity of anyone to know. Their long run times and numerical idiosyncrasies often make them difficult to history-match. Failure to replicate historical measurements of system behaviour is often written off as "prevention of over-fitting", rather than being recognised as a limited ability to harvest the information that is resident in this behaviour. This approach to decision-support modelling is based on the premise that numerical simulation must provide human beings with a "picture perfect" representation of the subsurface that they can then use as a basis for managing the subsurface. The importance of producing the picture overrides the importance of the picture being correct.

Picture perfection is never possible. However, harvesting of information from data is always possible. Decision-support modelling must facilitate that harvesting. It must also facilitate its interpretation, even if interpretation is not as straightforward as looking at a picture. Decision-support modelling must create fertile ground for abductive inference and lightbulb moments as it gives a voice to the information that is resident in data.

By now, the importance of history-matching in the decision-support modelling process should be obvious. It should also be obvious that simulation on its own cannot support decisions. Simulators must be used in conjunction with equally sophisticated packages such as PEST and PEST++ if decision-support modelling is to implement the scientific method on the one hand, and achieve its decision-support potential on the other.
We are all familiar with Karl Popper's depiction of the scientific method as one of hypothesis-testing. An hypothesis can never be accepted; it can only be rejected. An hypothesis is rejected if it is demonstrably incompatible with the behaviour of a system, and/or with the known or inferred properties of that system. Because scientific hypotheses can be rejected but not accepted, scientific knowledge can be considered to be "asymmetric".

The making of any decision comes with a ready-made hypothesis: the hypothesis that something will go wrong if a certain course of management action is adopted. Ideally, this hypothesis can be rejected before the decision is made to adopt that management strategy. If it cannot be rejected, then decision-makers must be fully aware that things may not go according to plan. Ideally, exposure of the possibility of management failure also exposes mechanisms for management failure. This can support the design of a monitoring strategy that warns of incipient failure long before failure actually occurs.

We can now propose a metric for decision-support modelling. It is not a metric for success. In harmony with the "asymmetry of knowledge" that characterises the Popperian view of science, it is a metric for failure. Decision-support modelling fails when an hypothesis of management failure is improperly rejected.

The repercussions of this are obvious. To avoid failure, decision-support modelling must purposefully look for ways in which things can go wrong. At the very least, it must ascribe uncertainties to decision-critical model predictions. Furthermore, if uncertainties cannot be "exact" (whatever "exact" means when applied to natural systems), they must be inflated rather than deflated. There must be no unpleasant surprises once a new management regime is implemented.
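To make this metric concrete, the sketch below (plain Python with NumPy; the ensemble values, failure threshold and confidence level are all hypothetical stand-ins) illustrates what "rejecting the hypothesis of management failure" can look like when the uncertainty of a decision-critical prediction is expressed as an ensemble of model outputs, such as might be produced by ensemble-based history-matching.

```python
import numpy as np

# Hypothetical example: an ensemble of model-simulated values for a
# decision-critical prediction (e.g. drawdown at a sensitive receptor),
# obtained from a set of history-matched parameter realisations.
rng = np.random.default_rng(0)
prediction_ensemble = rng.normal(loc=2.1, scale=0.4, size=500)  # metres

failure_threshold = 3.0  # management failure if drawdown exceeds this

# Upper bound of a deliberately conservative 99% credible interval.
upper_bound = np.percentile(prediction_ensemble, 99.5)

# Popperian logic: the hypothesis of management failure may be rejected
# only if even the inflated upper bound stays on the safe side.
if upper_bound < failure_threshold:
    print(f"Failure hypothesis rejected: 99% upper bound "
          f"{upper_bound:.2f} m < threshold {failure_threshold} m")
else:
    prob = np.mean(prediction_ensemble >= failure_threshold)
    print(f"Failure hypothesis NOT rejected: "
          f"P(failure) ~ {prob:.1%}; monitoring is warranted")
```

Note that the interval is deliberately conservative: in keeping with the discussion above, an uncertainty that cannot be "exact" is inflated rather than deflated.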
There is one sure way to prevent decision-support modelling failure, where "failure" is defined as above. This is to inflate the uncertainties of decision-critical model predictions, and/or to employ unnecessarily broad "engineering safety margins".

Uncertainty intervals are unnecessarily broad when information that is resident in data remains unharvested. (What is "information", after all? It is that which reduces uncertainty.) This information may reside in direct measurements of system properties. It may reside in concepts pertaining to sources of water, and to geological constraints on water movement, that underpin a numerical model. Just as importantly, information may reside in measurements of system behaviour - its present behaviour and its past behaviour. Modelling harvests this information by replicating this behaviour. The information thereby constrains at least some model parameters. In doing so, it constrains predictions that are sensitive to these parameters.

Decision-support modelling must therefore recognise uncertainty. At the same time, it must reduce uncertainty, to the extent that available data allow, or to the extent that is necessary for a decision-critical management hypothesis to be rejected. So we can now introduce a second decision-support modelling metric. Decision-support modelling is useless when predictive uncertainty intervals are unnecessarily broad.
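The statement that information is "that which reduces uncertainty" can be given a simple operational form. The sketch below (hypothetical numbers throughout) compares the width of a predictive uncertainty interval computed from a prior ensemble with that computed from an ensemble conditioned by history-matching; the narrowing of the interval is a direct measure of the information harvested from measurements of system behaviour.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictive ensembles: one sampled from the prior parameter
# distribution, one from parameters conditioned by history-matching.
prior_pred = rng.normal(loc=2.0, scale=1.2, size=1000)
posterior_pred = rng.normal(loc=2.2, scale=0.4, size=1000)

def interval(ens, level=0.95):
    """Central credible interval of an ensemble at the given level."""
    lo, hi = np.percentile(ens, [100 * (1 - level) / 2,
                                 100 * (1 + level) / 2])
    return lo, hi

for name, ens in [("prior", prior_pred), ("posterior", posterior_pred)]:
    lo, hi = interval(ens)
    print(f"{name:9s} 95% interval: [{lo:.2f}, {hi:.2f}]  width {hi - lo:.2f}")
```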
The task of decision-support modelling is to:

•Establish the uncertainties of predictions of management interest;
•Reduce these uncertainties where possible through harvesting of information;
•Allow decision-makers to fully explore the predictive consequences of information insufficiency;
•Suggest data acquisition strategies through which the uncertainties of decision-critical model predictions can be reduced, either now or in the future when management conditions are altered (a minimal sketch of this appears at the end of this section).

What does this have to do with integrity of simulation? A little, but not a lot. As will be discussed, a model's parameters provide receptacles for the information that it harvests. These receptacles require a certain degree of simulation integrity if they are to have integrity themselves. But there is a big difference. Integrity of simulation is an impossible goal. Integrity of information storage and conveyance is definitely NOT an impossible goal. It is the latter that must be pursued through decision-support modelling.
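As a minimal illustration of the last of the tasks listed above, the sketch below applies the standard linear (FOSM) Schur-complement formula for posterior parameter covariance to rank a candidate new observation by the reduction it offers in the variance of a decision-critical prediction. All matrices are small, made-up stand-ins; in practice, utilities in the PEST and PEST++ ecosystems offer this kind of data-worth analysis, so hand-rolled code like this serves only to expose the idea.

```python
import numpy as np

def posterior_pred_var(J, C, R, y):
    """Linear (FOSM) posterior variance of a prediction with sensitivity
    vector y, given observation sensitivity matrix J, prior parameter
    covariance C and observation noise covariance R (Schur complement):
    C' = C - C J^T (J C J^T + R)^-1 J C."""
    JC = J @ C
    C_post = C - JC.T @ np.linalg.solve(J @ C @ J.T + R, JC)
    return float(y @ C_post @ y)

# Hypothetical 3-parameter model with two existing observations
# and one candidate new observation.
C = np.diag([1.0, 0.5, 2.0])              # prior parameter covariance
J_existing = np.array([[1.0, 0.2, 0.0],
                       [0.0, 1.0, 0.3]])  # existing obs sensitivities
j_candidate = np.array([[0.1, 0.0, 1.0]]) # candidate obs sensitivities
y = np.array([0.5, 0.1, 1.0])             # prediction sensitivities

var_now = posterior_pred_var(J_existing, C, np.eye(2) * 0.1, y)
J_new = np.vstack([J_existing, j_candidate])
var_new = posterior_pred_var(J_new, C, np.eye(3) * 0.1, y)

print(f"predictive variance, existing data : {var_now:.3f}")
print(f"predictive variance, + candidate   : {var_new:.3f}")
print(f"data worth (variance reduction)    : {var_now - var_new:.3f}")
```

Repeating the calculation for each member of a set of candidate observations ranks them by the predictive uncertainty reduction that each would deliver, thereby suggesting where data acquisition effort is best spent.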