What questions should you ask when you hear a claim based on data? | David Spiegelhalter and Anthony Masters

We are bombarded with statistics every day. How can you spot a bad number? The Art of Statistics (written by one of us), and the lists of questions drawn up by Tim, Dave and Tom, all come down to the same essentials. Here is the short list we use ourselves.

How trustworthy is the source of the story? Are they reliable, honest and competent, or are they trying to manipulate my emotions? Are they giving the whole picture, or just the bits that suit them? For example, a review that found insects had declined only searched for studies that showed a decline.

How reliable are the statistics? Recorded Covid cases count only people who have symptoms and decide to get tested, so cases are increasingly distinct from infections. Survey estimates seek to generalise from a sample to a target population, but carry both margins of error and possible systematic biases, while model outputs reflect assumptions and selected scenarios. The quality of the evidence should be communicated with humility.
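
As a rough illustration of what a margin of error does and does not capture, here is a minimal sketch in Python, assuming a simple random sample and entirely hypothetical poll figures:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 52% support among a sample of 1,000 people.
p_hat, n = 0.52, 1_000
print(f"Estimate: {p_hat:.0%} ± {margin_of_error(p_hat, n):.1%}")  # about ± 3 points

# The margin of error quantifies sampling variation only. It says nothing about
# systematic biases, such as non-response or an unrepresentative sample, and
# nothing about whether the question measured what it claims to measure.
```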

How trustworthy is a claim made on the basis of the data? Context is all: a 100% increased risk may make the headlines, but twice not very much is still not very much. Historical comparisons can help determine whether a number really is a big number. Observational analyses can suffer from biases and lurking confounders, so differences may be wrongly attributed to a supposed cause. The ONS found that vaccinated people who tested positive were more likely to have Omicron than Delta, but this was spun into the claim that vaccination increased the risk of testing positive.
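
To see why "twice not very much is still not very much", here is a short sketch with made-up numbers comparing relative and absolute risk:

```python
# Hypothetical figures: a "100% increased risk" headline on a small baseline.
baseline_risk = 1 / 10_000              # assumed baseline risk: 1 in 10,000
doubled_risk = baseline_risk * 2        # the "100% increase"
absolute_increase = doubled_risk - baseline_risk

print(f"Baseline risk:      {baseline_risk:.4%}")   # 0.0100%
print(f"Doubled risk:       {doubled_risk:.4%}")    # 0.0200%
print(f"Absolute increase:  {absolute_increase:.4%}, "
      f"i.e. one extra case per {round(1 / absolute_increase):,} people")
```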

Statistics should help people make decisions. When we see claims based on statistical evidence, we need to stop and think.

David Spiegelhalter is chair of the Winton Centre for Risk and Evidence Communication at Cambridge. Anthony Masters is statistical ambassador for the Royal Statistical Society.