This article was originally posted on RealClearScience.

One of the big problems in science journalism is the tendency to hype scientific research. You’re familiar with the routine: A new study comes out on, say, how coffee might lead to a slight increase in a particular disease. Then, plastered all over the front pages of websites and newspapers are headlines like, “Too Much Coffee Will Kill You!” Of course, the following week, a different study will report that coffee might protect you from another disease, and the media hysteria plays out all over again, just in the opposite direction.

This is bad. Poor science journalism misleads the public and policymakers. Is there a way to prevent such hype?

Yes, say three researchers in the latest issue of the journal Nature. They give 20 tips and concepts that readers should keep in mind when trying to properly analyze the claims made in a scientific paper:

1. Variation happens. Everything is always changing. Sometimes the reason is really interesting, and other times it’s nothing more than chance. Often, there are multiple causes for any particular effect. Thus, determining the underlying reason for variation is often quite difficult.

2. Measurements aren’t perfect. Two people using the exact same ruler will likely report slightly different measurements for the length of a table.

3. Research is often biased. Bias can either be intentional or unintentional. Usually, it’s the latter. If an experiment is designed poorly, the results can be skewed in one direction. For example, if a voter poll accidentally samples more Republicans than Democrats, then the result will not accurately reflect national opinion. Another example: Clinical trials that are not conducted using a “double blind” format can be subject to bias.

4. When it comes to sample size, bigger is better. Less is more? Please. More is more. The bigger the sample, the less room chance has to push the result around.
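
If you want to see why, here’s a quick back-of-the-envelope simulation (my own toy example in Python, not anything from the Nature paper): estimating a fair coin’s heads rate from samples of different sizes.

```python
# Toy illustration (numbers are made up): estimate a fair coin's heads
# rate from small vs. large samples.
import random

random.seed(42)

def estimate_heads_rate(n_flips):
    """Flip a fair coin n_flips times and return the observed heads rate."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

for n in (10, 100, 10_000):
    estimates = [round(estimate_heads_rate(n), 3) for _ in range(5)]
    print(f"{n:>6} flips: {estimates}")

# With 10 flips the estimates bounce all over the place; with 10,000
# flips they cluster tightly around the true value of 0.5.
```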

5. Correlation does not mean causation. The authors say that correlation does not imply causation. Well, sometimes it does. It is more accurate to say, “Correlation does not necessarily imply causation,” because the relationship might actually be a causal one. Still, always be on the lookout for alternative explanations, which often take the form of a “third variable” or “confounder.” A famous example is the correlation between coffee drinking and pancreatic cancer. In reality, some coffee drinkers also smoke, and it is the smoking, not the coffee, that causes pancreatic cancer.
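
Here’s a toy simulation of that confounder (entirely made-up numbers, just to show the mechanism): in this model coffee has no effect on cancer at all, yet coffee drinkers still end up with a higher cancer rate because smokers are more likely to drink coffee.

```python
# Toy confounder model (invented numbers): smoking drives both coffee
# drinking and cancer risk, so coffee and cancer correlate even though
# coffee does nothing in this simulation.
import random

random.seed(0)

population = []
for _ in range(100_000):
    smoker = random.random() < 0.3
    coffee = random.random() < (0.8 if smoker else 0.4)    # smokers drink more coffee
    cancer = random.random() < (0.02 if smoker else 0.005) # smoking, not coffee, raises risk
    population.append((coffee, cancer))

def cancer_rate(drinks_coffee):
    group = [got_cancer for coffee, got_cancer in population if coffee == drinks_coffee]
    return sum(group) / len(group)

print("cancer rate among coffee drinkers:    ", round(cancer_rate(True), 4))
print("cancer rate among non-coffee drinkers:", round(cancer_rate(False), 4))
# Coffee drinkers show a higher cancer rate purely because more of them smoke.
```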

6. Beware regression to the mean. An unusually extreme result is often partly the product of chance, so follow-up studies tend to find something more moderate. A drug that appears to cure 50% of patients in one trial often fails to achieve a similar success rate in subsequent studies.
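
A quick simulation makes the point (again, my own illustrative numbers, not the authors’): give everyone a “true” score plus some luck, pick the top performers from one round of testing, and re-test them.

```python
# Regression to the mean, sketched: measured score = true score + noise.
# Select the top scorers from the first test and watch them fall back on re-test.
import random

random.seed(1)

true_scores = [random.gauss(100, 10) for _ in range(1_000)]
first_test  = [t + random.gauss(0, 10) for t in true_scores]
second_test = [t + random.gauss(0, 10) for t in true_scores]

# The 50 highest scorers on the first test.
top = sorted(range(1_000), key=lambda i: first_test[i], reverse=True)[:50]

def avg(values):
    return sum(values) / len(values)

print("top group, first test: ", round(avg([first_test[i] for i in top]), 1))
print("top group, second test:", round(avg([second_test[i] for i in top]), 1))
# The second-test average drifts back toward 100, even though nothing about
# the group changed: part of their first score was just good luck.
```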

7. Beware data extrapolation. Just because you got a 2% raise last year and a 3% raise this year doesn’t mean you’ll get a 4% raise next year and a 5% raise the year after that. Sorry.

8. Mind the predictive value of tests. A positive test result does not necessarily mean you have the disease. The reason is that tests aren’t perfect, and each one has a different predictive value, which depends on the test’s sensitivity and specificity as well as on how common the disease is.
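
To see how counterintuitive this gets, here is a rough worked example (the sensitivity, specificity, and prevalence figures are invented for illustration):

```python
# Back-of-the-envelope predictive value: a test with 99% sensitivity and
# 95% specificity for a disease that affects 1 in 1,000 people.
sensitivity = 0.99   # P(test is positive | you have the disease)
specificity = 0.95   # P(test is negative | you don't have the disease)
prevalence  = 0.001  # 1 in 1,000 people actually has the disease

true_positives  = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive result is a true positive: {ppv:.1%}")
# Roughly 2%: because the disease is rare, the vast majority of positive
# results are false alarms.
```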

9. Control groups are essential. An experiment without a control group isn’t an experiment.

10. Experimental subjects should be randomized. Randomization helps eliminate self-selection and other biases. A patient should not be allowed to say, “I would like to receive the actual treatment, not the placebo, please.”

11. Look for replications. Has the result of a study been properly replicated in a different population, preferably by another group of researchers?

12. Scientists aren’t perfect. They have biases. They make mistakes. They cherry-pick. In other words, scientists are human.

13. Mind your statistics. Results should be “statistically significant,” meaning unlikely to have arisen by chance alone. The most common threshold is a p-value of 0.05: a result that extreme would turn up less than 5% of the time if nothing real were going on. A new paper, however, suggests that the threshold should be lowered to 0.005.
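
For the curious, here’s a minimal sketch of what that 0.05 actually means (my own coin-flip example, not one from the paper): we observe 16 heads in 20 flips and ask how often a fair coin would do at least that well by chance.

```python
# What a p-value is: the chance of a result at least this extreme
# if nothing real is going on (here, if the coin is fair).
import random

random.seed(7)

observed_heads = 16
n_flips = 20
n_simulations = 100_000

at_least_as_extreme = sum(
    sum(random.random() < 0.5 for _ in range(n_flips)) >= observed_heads
    for _ in range(n_simulations)
)
p_value = at_least_as_extreme / n_simulations
print(f"p-value (approx.): {p_value:.4f}")
# Around 0.006: a fair coin produces 16 or more heads out of 20 less than
# 1% of the time, so the result clears 0.05 (and even the stricter 0.005).
```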

14. There’s a difference between “no effect” and “not significant.” If a result does not meet the 0.05 threshold, that doesn’t necessarily mean “nothing happened.” Sometimes the effect is real, but the sample was too small for the statistics to detect it. The best example of this is climate change: Many people wrongly believe that there was no global warming in the 15-year period spanning 1995-2009. But the planet indeed kept warming up; the trend over that short window just wasn’t statistically significant.
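
The statistical term for this is “power,” and a small simulation shows the problem (the effect size and sample sizes here are made up): the treatment genuinely works, but a small trial usually can’t prove it.

```python
# A real but modest effect, tested with small vs. large samples.
# "Not significant" in the small trial does not mean "no effect."
import random
import statistics

random.seed(3)

def trial_is_significant(n_per_group, effect=0.2):
    """Crude two-group comparison using a normal-approximation z-test."""
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    treated = [random.gauss(effect, 1.0) for _ in range(n_per_group)]
    se = (statistics.variance(control) / n_per_group
          + statistics.variance(treated) / n_per_group) ** 0.5
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    return z > 1.96  # roughly the usual 0.05 cutoff

for n in (20, 1_000):
    hits = sum(trial_is_significant(n) for _ in range(1_000))
    print(f"{n:>5} patients per group: significant in {hits / 10:.0f}% of trials")
# With 20 patients per group the real effect is detected only around one
# time in ten; with 1,000 per group it is detected nearly every time.
```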

15. There’s a difference between “significant” and “important.” On the flip side, just because a result is statistically significant doesn’t mean it is important. If chemical X doubles your risk of disease from 1 in a million to 2 in a million, that’s not an effect worth worrying about.

16. Results may not be generalizable. A psychology study examining the sexual fantasies of 18-year-old college freshmen may not be generalizable to the entire human race.

17. People are terrible at risk perception. Many people are fearful of GMOs and nuclear power plants, yet those are both far, far safer than automobiles — which kill more than 30,000 Americans every year.

18. Don’t always assume independent events. The probability of any given event is often independent of another event — but not always. For example, the probability of you wearing a yellow shirt is independent of your probability of getting hit by a bus in broad daylight. However, the probability of you getting hit by a bus changes dramatically if you are wearing a black shirt and wandering around in the middle of the street at night.

19. Beware of cherry-picked data. An experiment with no hypothesis can find almost anything, especially in the era of Big Data. For example, it is now possible to compare thousands of DNA sequences between different people, and just by sheer luck, some differences between people (e.g., those with Alzheimer’s and those without) may be statistically significant. However, if scientists are just “fishing” without knowing what they’re looking for, the results they report can be misleading.
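
Here’s a sketch of just how easy it is to “find” something in pure noise (the data below are entirely synthetic): test 1,000 genetic markers that have nothing whatsoever to do with the disease.

```python
# Multiple comparisons: test 1,000 meaningless markers at p < 0.05 and
# roughly 50 of them will look "significant" by chance alone.
import math
import random

random.seed(11)

n_markers = 1_000
n_per_group = 50

def two_sided_p(z):
    """Two-sided p-value for a standard-normal z statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / 2 ** 0.5)))

def fake_marker_p_value():
    """Compare a meaningless marker between two identical groups."""
    cases    = [random.gauss(0, 1) for _ in range(n_per_group)]
    controls = [random.gauss(0, 1) for _ in range(n_per_group)]
    diff = sum(cases) / n_per_group - sum(controls) / n_per_group
    se = (2 / n_per_group) ** 0.5
    return two_sided_p(diff / se)

false_hits = sum(fake_marker_p_value() < 0.05 for _ in range(n_markers))
print(f"'Significant' markers out of {n_markers}: {false_hits}")
# Expect around 50 false discoveries (about 5% of 1,000), even though every
# marker is pure noise.
```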

20. Beware of extreme data. Any jaw-dropping statistic (school X has the highest percentage of child geniuses!) could have several different explanations, not just the one the authors are trying to convince you of.

The authors have put together a very good list of tips, but I would like to add one more: Extraordinary claims require extraordinary evidence!

Source: William J. Sutherland, David Spiegelhalter and Mark A. Burgman. “Twenty Tips for Interpreting Scientific Claims.” Nature 503: 335-337. 21-Nov-2013. doi:10.1038/503335a