Common concerns and objections to quantitative expression of uncertainty

Many concerns about, and objections to, the quantitative expression of uncertainty were raised during the public consultation and trial period for this document, as well as in the literature.

Many, though not all, relate to the role of expert judgement in quantifying uncertainty.

The EFSA Scientific Committee considered these concerns carefully and concluded that all of them can be addressed, either by improved explanation of the principles involved or through the use of appropriate methods for obtaining and using quantitative expressions. The main concerns and the responses to them are summarised below.

Concern: Quantifying uncertainty requires complex calculations and too much time and resources.
Response: Most of the options in the Guidance do not require complex computations, and the methods are scalable to any time and resource limitation, including urgent situations.

Concern: Some uncertainties cannot be quantified because too little is known about them.
Response: Uncertainty can be quantified by expert judgement for any well-defined question or quantity, provided there is at least some relevant evidence.

Concern: Quantifying uncertainty by expert judgement will displace the use of data.
Response: The Guidance recommends the use of relevant data wherever they are available.

Concern: Expert judgement is subjective.
Response: All judgement is subjective, and judgement is a necessary part of all scientific assessment. Even when good data are available, expert judgement is involved in evaluating and analysing them, and in using them in risk assessment.

Concern: Subjective judgements are arbitrary.
Response: All judgements in EFSA assessments will be based on evidence and reasoning, which will be documented transparently.

Concern: Expert judgements are subject to psychological biases.
Response: EFSA’s guidance on uncertainty analysis and on expert knowledge elicitation uses methods designed to counter those biases.

Concern: Quantitative expressions convey more precision than the experts intend.
Response: EFSA’s methods produce judgements that reflect the experts’ uncertainty; if the experts feel their judgements are over-precise, they should adjust them accordingly.

Concern: The quantified uncertainty seems exaggerated.
Response: Identify your reasons for thinking the uncertainty is exaggerated, and revise your judgements to take them into account.

Concern: There are too many uncertainties to quantify them all.
Response: Whenever experts draw conclusions, they are necessarily making judgements about all the uncertainties they are aware of. The Guidance provides methods for assessing uncertainties collectively, which increase the rigour and transparency of those judgements.

Concern: I am uncertain about my own judgement.
Response: Take the uncertainty of your judgement into account as part of the judgement, e.g. by giving a range, or by making the range wider.

Concern: Elicited quantiles will be treated as more precise than intended, and the choice of distribution fitted to them is arbitrary.
Response: The quantiles will not be treated as precise, but as a step in deriving a distribution for the expert to review and adjust. If there is concern about the choice of distribution, its impact on the analysis can be assessed by sensitivity analysis, as illustrated below. Alternatively, approximate probabilities could be used.
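For instance, suppose an expert has provided 5th, 50th and 95th percentiles for an uncertain quantity. The following Python sketch (a minimal illustration with hypothetical numbers, not a method prescribed by the Guidance) fits two candidate distributions to the same three elicited quantiles and compares a tail probability between them, which is one simple form of sensitivity analysis for the choice of distribution:

```python
# A minimal sketch with hypothetical numbers, not a method prescribed by
# the Guidance: fit candidate distributions to the same elicited quantiles
# and check how sensitive a tail probability is to the choice of distribution.
import numpy as np
from scipy import optimize, stats

probs = np.array([0.05, 0.50, 0.95])   # elicited probability levels
values = np.array([2.0, 10.0, 40.0])   # expert's quantiles (hypothetical)

def fit(dist, x0):
    """Find parameters whose quantiles best match the elicited ones."""
    def loss(p):
        q = dist.ppf(probs, *p)
        if not np.all(np.isfinite(q)):
            return 1e12                # penalise invalid parameter values
        return float(np.sum((q - values) ** 2))
    return optimize.minimize(loss, x0, method="Nelder-Mead").x

candidates = [
    ("lognormal", stats.lognorm, fit(stats.lognorm, [1.0, 0.0, 10.0])),
    ("gamma", stats.gamma, fit(stats.gamma, [2.0, 0.0, 5.0])),
]

# Both distributions honour the elicited quantiles, but their tails differ:
for name, dist, p in candidates:
    print(f"{name}: P(X > 50) = {dist.sf(50.0, *p):.3f}")
```

If the tail probabilities differ enough to affect the conclusion, the choice of distribution matters and should be reviewed with the expert.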

Concern: Probability judgements cannot be made for some uncertainties.
Response: In principle, probability judgements can be given for all well-defined questions or quantities. However, the Guidance recognises that experts may be unable to make probability judgements for some uncertainties, and it provides options for dealing with this.

Concern: Different experts will give different judgements.
Response: This is expected and inevitable, whether the judgements are quantitative or not. An advantage of quantitative expression is that the differences are made explicit and can be discussed, leading to better conclusions. These points apply to experts working on the same assessment, and also to different assessments of the same question by different experts or institutions.
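To illustrate how quantification makes such differences explicit, the hypothetical sketch below compares two experts’ elicited distributions for the same quantity; the equal-weight linear opinion pool at the end is a standard aggregation technique in expert elicitation, named here for illustration and not a step required by the Guidance:

```python
# Hypothetical sketch: once two experts' judgements about the same quantity
# are quantified, their disagreement is explicit and can be examined.
from scipy import stats

expert_a = stats.norm(10.0, 2.0)   # hypothetical elicited distributions
expert_b = stats.norm(14.0, 3.0)

# The difference is now visible as two distinct probability intervals:
print("Expert A 95% interval:", expert_a.ppf([0.025, 0.975]).round(2))
print("Expert B 95% interval:", expert_b.ppf([0.025, 0.975]).round(2))

# An equal-weight linear opinion pool, one standard way to aggregate when
# discussion does not produce consensus:
x = 12.0
pooled = 0.5 * expert_a.cdf(x) + 0.5 * expert_b.cdf(x)
print(f"Pooled P(X <= {x}) = {pooled:.3f}")
```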

Concern: All models are wrong, so probabilities calculated from them are not valid.
Response: No model is entirely correct. Model uncertainty is better expressed by making a probability judgement about how different the model result might be from the real value.
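As a hypothetical illustration, suppose a model predicts an exposure of 12 units and the experts judge that the real value lies within a factor of 2 of the model result with 95% probability. That judgement can be propagated directly (all numbers below are assumed for illustration):

```python
# Hypothetical sketch: expressing model uncertainty as a probability
# judgement about how far the real value may lie from the model result.
import numpy as np

rng = np.random.default_rng(1)

model_result = 12.0  # hypothetical model output
# Judgement: real value within a factor of 2 of the model result with
# 95% probability -> lognormal error with sigma = ln(2) / 1.96
sigma = np.log(2) / 1.96
real_value = model_result * np.exp(rng.normal(0.0, sigma, size=100_000))

# The model's imperfection is now part of the quantified uncertainty:
print(f"P(real value > 20) = {(real_value > 20).mean():.3f}")
```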

Concern: Uncertainty is already dealt with by using conservative assumptions.
Response: Choosing a conservative assumption involves two judgements: the probability that the assumption is valid, and the acceptability of that probability. The Guidance improves the rigour and transparency of the first judgement, providing a better basis for the second (which is part of risk management).

Concern: Probability judgements cannot be made for qualitative conclusions.
Response: Probability judgements can be made for any well-defined conclusion, and all EFSA conclusions should be well defined.

Concern: Quantifying uncertainty implies making judgements about unknown unknowns.
Response: No such judgements are implied. All scientific advice is conditional on assumptions about unknown unknowns.

Concern: Some uncertainty is unquantifiable in principle.
Response: This is the Knightian view. The Guidance uses subjective probability, which Knight recognised as an option.

Concern: A probability cannot be given without knowing all the possible alternative answers.
Response: Provided an answer to a question is well defined, a probability judgement can be made for it without specifying or knowing all possible alternative answers. However, assessors should guard against a tendency to underestimate the probability of other answers when they are not differentiated.

Concern: No single probability can represent my uncertainty.
Response: Specify a range of probabilities that does.

Concern: No probability judgement can be made when there is no evidence.
Response: If there really is no evidence, no probability judgement can be made, and no scientific conclusion can be drawn. Put another way, if the experts can reach a conclusion (other than ‘inconclusive’), they should also be able to express their level of certainty about it.

Concern: Combining probability judgements elicited from experts with the results of statistical analysis lacks a rigorous basis.
Response: There is a well-established theoretical basis for using probability calculations to combine probability judgements elicited from experts (including probability judgements informed by non-Bayesian statistical analysis) with probabilities obtained from Bayesian statistical analysis of data.
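One familiar instance of that theoretical basis is conjugate Bayesian updating, in which an expert’s elicited judgement serves as the prior and is combined with data by probability calculus. The sketch below (prior and data values hypothetical) updates an elicited Beta prior for a prevalence with binomial survey data:

```python
# A minimal sketch with hypothetical numbers: combining an expert's
# elicited prior with survey data by conjugate Bayesian updating.
from scipy import stats

a_prior, b_prior = 2, 8   # elicited Beta prior for a prevalence
positives, n = 3, 50      # hypothetical survey: 3 positives in 50 samples

# Beta prior + binomial likelihood -> Beta posterior (conjugate update)
posterior = stats.beta(a_prior + positives, b_prior + n - positives)

print(f"Posterior median prevalence: {posterior.median():.3f}")
print("95% credible interval:", posterior.ppf([0.025, 0.975]).round(3))
```

The posterior reflects both the elicited judgement and the data, with the data increasingly dominating as the sample size grows.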

Concern: The uncertainty analysis may be inconsistent with the conclusion of the assessment.
Response: Reconsider both the uncertainty analysis and the conclusion, and revise one or both so that they (a) match and (b) properly represent what the science supports. A justifiable conclusion takes account of uncertainty, so there should be no inconsistency.

Concern: Decision-makers need to be told simply whether or not food is safe.
Response: ‘Safe’ implies some acceptable level of certainty, so if that level is defined, then a positive or negative conclusion can be given without qualification.

Concern: Decision-makers do not want information on uncertainty.
Response: In fact, many do, and as a matter of principle decision-makers need information on uncertainty to make rational decisions.

Concern: Communicating uncertainty will reduce public confidence in EFSA’s advice.
Response: Some evidence supports this, but other evidence suggests that communicating uncertainty can increase confidence. EFSA’s approach to communicating uncertainty is designed to achieve the latter.