Everyone knows you can’t prove a negative, but can you prove an uncertainty? First, I’d better explain why you would want to try to do something so seemingly perverse. What I am trying to show is that the precautionary principle requires science to do something it is not normally called upon to do: demonstrate our ignorance and uncertainty rather than our knowledge.
The precautionary principle is, roughly, the proposition that we should not wait for full scientific certainty before intervening to prevent a harm. One example is the rule from the Third North Sea Conference that the use of toxic chemicals (such as antifouling paint) at sea should be assumed to harm the biosphere. The person wishing to use toxic paint bears the burden of proving that their actions won’t be harmful.
However, no-one would argue that an action should be regulated merely on the basis of an unsupported accusation. So there must be another, earlier burden of proof which triggers the shifting of the full burden from the person wishing to prevent the action to the person wishing to act. But by the definition of the precautionary principle, meeting this earlier burden cannot require full scientific certainty. Instead, it requires us to show sufficient uncertainty that an action is harmless (and sufficiently bad potential consequences) that precaution should be applied. In other words, we have to prove, or at least show, an uncertainty.
It’s not entirely settled that the precautionary principle is a good thing. For example, the American Council on Science and Health argue that myopically focusing on potential harm, without taking benefits into account, can have adverse consequences. They give the example of the discontinuation of water chlorination in Peru, which led to a cholera epidemic. Nevertheless, even those holding relatively radical positions against precaution, such as the ‘proactionary principle’, don’t rule out that ‘precautionary measures’ may sometimes need to be taken.
So this leaves us needing to make decisions – if only provisionally – based on incomplete or unreliable information. But this is not something which science normally does, nor what the public expects and trusts it to do. So far I have been using the word ‘uncertainty’ in its everyday sense. Wynne divides uncertainty into four classes: risk, a known probability of harm based on full scientific knowledge; uncertainty, where you know all the moving parts but not the probabilities; ignorance, where there is something that you don’t know that you don’t know; and indeterminacy, where there is something which you cannot know. Of these, you would most likely make a precautionary case on uncertainty or indeterminacy (in the case of risk the case is not precautionary, and in the case of ignorance you don’t know you need to make it).
How would you show that it is uncertain that something is harmless? An example might be that of TBT antifouling paint in Santillo et al. They are pessimistic that good evidence can usually be found, especially for low levels of a pollutant: TBT was unusual in that a specific marker of its effects was found. This suggests that chemicals shown to be toxic at high concentrations should be assumed to be toxic at low ones too.
One problem with making decisions without complete scientific knowledge is that the natural biases which scientific procedures have helped us control are likely to recur. Kahneman and Tversky describe various biases which people exhibit when dealing with uncertain information. For instance, people use a ‘representativeness heuristic’ when judging how likely something is. In their example, we might think an acquaintance is likely to be a librarian because he resembles librarians we have known. But this heuristic fails when the class we are guessing is small in absolute numbers. Unfortunately experts, even those with knowledge of statistics, are prone to these kinds of errors. So, going back to the case of the paint, we expect that a chemical toxic at high levels will be toxic at low levels, because all chemicals toxic at low levels are toxic at high levels. But perhaps very few chemicals are toxic at low levels, in which case the inference is much weaker than it feels. I’m not saying that this is a conclusive argument in this case, only that these kinds of biases apply to the judgements experts need to make.
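The base-rate point above can be made concrete with a small Bayesian calculation. This is only a sketch: the probabilities below are invented purely for illustration, not drawn from any toxicology data.

```python
# Illustrative Bayes' rule calculation for the base-rate argument.
# All numbers are made up for the sake of the example.

def posterior(prior, likelihood, false_positive_rate):
    """P(A|B) via Bayes' rule.

    Here A = 'toxic at low concentrations', B = 'toxic at high
    concentrations'; `prior` is P(A), `likelihood` is P(B|A), and
    `false_positive_rate` is P(B|not A).
    """
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Suppose every chemical toxic at low levels is also toxic at high
# levels (likelihood = 1), but only 2% of chemicals are toxic at low
# levels (a small base rate), while 30% of the rest are toxic at high
# levels anyway.
p = posterior(prior=0.02, likelihood=1.0, false_positive_rate=0.30)
print(round(p, 3))  # roughly 0.064
```

Even with a perfect likelihood, the small base rate keeps the posterior low, which is exactly the structure of the librarian example.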
A more positive picture is given by a study by Krynski and Tenenbaum. They note that statistical methods apply when we have lots of data about a simple system, but humans are adapted to make quick judgements about a complex world. They find that some apparent biases can be justified by appealing to causal models, but that people have a ‘bias towards deterministic mechanisms’. If scientists are going to address these kinds of issues, some mechanism for checking for these kinds of bias seems to be required.
Even if we can find the most rational way for experts to make these judgements, there is the problem of whether the public will find them legitimate. If people expect scientists to produce clear knowledge, will they accept a case built on its lack?