How to read a research study

We’re in an age of information overload: it’s everywhere! While the free and open exchange of information is a wonderful thing, it also demands a skill set for evaluating trustworthiness. This is especially true when you’re using information to make decisions about your health, where the stakes are high and the internet is the Wild West.

Here’s how to understand what’s behind the headline. When the results of scientific studies are discussed in the popular press (or on a podcast, Reddit thread, or Instagram post), they’re usually out of context and often overblown. If it sounds too good to be true, it’s time to start asking questions. That means finding the original study. You can almost always get the abstract for free, and it’s a great place to start. The full article is great if you’re ready for a deep dive and comfortable with technical language, but most of the key information is in the abstract. Check PubMed! You can search by the article’s title, the author, the journal, or the PMID or DOI if you have them.
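(If you’d rather script the lookup than click around, PubMed also has a free public API, NCBI’s E-utilities. Here’s a minimal Python sketch; the search term below is just a placeholder, so swap in your own title, author, or PMID. For heavy use, NCBI asks that you register for a free API key.)

```python
# Look up a study on PubMed via NCBI's public E-utilities API.
# Minimal sketch: no API key, light error handling; the search term
# in the demo is a placeholder. Swap in the title, author, or PMID you have.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search(term, max_results=5):
    """Return a list of PMIDs matching a PubMed search term."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": term,
                "retmax": max_results, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def pubmed_summaries(pmids):
    """Fetch title/journal summaries for a list of PMIDs."""
    resp = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(pmids), "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["result"]
    return [result[pmid] for pmid in pmids]

if __name__ == "__main__":
    # Placeholder query, just to show the shape of a search.
    pmids = pubmed_search("creatine supplementation resistance training[Title]")
    if not pmids:
        print("no matches")
    else:
        for s in pubmed_summaries(pmids):
            print(s["uid"], "|", s.get("title", ""), "|",
                  s.get("fulljournalname", ""))
```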

Now that you have the abstract, it’s time to ask questions. Start with:

  • Who were the participants? Sometimes what’s reported is actually an animal study: useful for advancing the science, but definitely not ready for prime time. If it was on humans, which ones? Surprisingly often, it will be something like seven male German college students. How similar are you to the study sample? (I’ll save my soapbox about the lack of gender representation in research for another day, but keep this in mind, especially in the sports and performance literature.) Then consider how many. The larger and more diverse the sample, the more compelling the result. Are there data from 1,000+ people of different ages, races, and genders? Now I’m pretty interested.

  • Then think about what the study measured: what the outcomes were, and whether they matter. Is the outcome relevant to health, performance, well-being, or something else important, or is it only meaningful in a lab? Sometimes researchers will isolate a variable (like a hormone level) and then make a big conceptual leap to claims about the finding’s importance to health. If you can’t draw a straight line from the variable that was measured to the claim made in the conclusion (or the headline of the article where you first saw the study cited), get out your salt shaker and take a few grains for the road.

  • Then, think about whether the outcomes reported are meaningful in real life. Study outcomes are generally reported in terms of statistical significance, which is not the same as clinical significance. A 0.1-point change on a 10-point scale could be statistically significant, but if your pain went from 8.9 to 8.8, who cares? Also beware the abuse of p-values: roughly, a p-value describes how likely you’d be to see a result at least this big if the intervention actually did nothing and chance alone were at work. By convention, a p-value of <.05 is considered “statistically significant.” That doesn’t make it clinically, real-world significant (a toy simulation after this list makes this concrete). And “trended toward significance” doesn’t mean anything: a result either meets the threshold or it doesn’t.

  • I also suggest looking at how the study was funded. Industry-funded studies aren’t necessarily wrong, but take the findings for what they are and use common sense and caution. When a study is designed to get a certain result, it’s less reliable, and even “independent” scientists can be biased without meaning to be. Studies funded by the NIH or other government or university bodies are generally less likely to suffer from conflicts of interest. But don’t forget that the published literature as a whole is biased toward positive findings: studies that didn’t find an effect are much less likely to be published (this is called publication bias, or the file-drawer problem; a second sketch after this list simulates it). The upshot is that just because a published study seems to demonstrate that something works, that doesn’t mean all the evidence agrees, whether or not that evidence is easy to find.
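To make the statistical-vs-clinical distinction concrete, here’s a toy simulation in Python (the numbers are made up purely for illustration): with enough participants, even a 0.1-point difference on a 10-point pain scale sails past p < .05.

```python
# Toy demo: statistical significance is not clinical significance.
# Hypothetical numbers, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20_000  # a very large trial

# Pain scores on a 0-10 scale: control averages 8.9, treatment 8.8.
control = rng.normal(loc=8.9, scale=1.0, size=n)
treated = rng.normal(loc=8.8, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
diff = treated.mean() - control.mean()

print(f"mean difference: {diff:+.2f} points on a 10-point scale")
print(f"p-value: {p_value:.2e}")  # far below .05: "statistically significant"
# Yet a patient whose pain drops from 8.9 to 8.8 will barely notice.
# Significant in the statistical sense, trivial in the clinical sense.
```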
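And here’s the file-drawer problem in miniature, again with simulated data: imagine an intervention that truly does nothing, tested in a thousand small trials, where only the “significant” results get published. The published record ends up full of sizeable effects that don’t exist.

```python
# Toy demo of publication bias (the "file-drawer problem").
# Simulated data: the intervention truly has zero effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_studies, n_per_arm = 1000, 20

published_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(0.0, 1.0, n_per_arm)  # true effect is zero
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:  # journals tend to publish only "positive" findings
        published_effects.append(treated.mean() - control.mean())

print(f"studies run: {n_studies}, published: {len(published_effects)}")
print("true effect: 0.00")
print(f"mean |effect| among published studies: "
      f"{np.mean(np.abs(published_effects)):.2f}")
# Reading only the published studies, you'd see a shelf of sizeable
# effects and never see the file drawer full of null results.
```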

Another thing to chew on when you’re looking at research: it’s important to distinguish between absence of evidence and evidence of absence. In the former, the topic hasn’t been studied (or hasn’t been studied much); we can’t draw conclusions because we simply don’t have the data. In the latter, studies have looked and found no effect, relationship, or whatever you’re looking for. People make all kinds of errors with this distinction, and sometimes folks use it to obscure the truth, too. So read carefully, and ask yourself: absence of evidence, or evidence of absence?

There’s a lot more to be said about evaluating research studies (there are entire college courses on this subject, after all). But for most people, the questions outlined above are enough to get a good sense of how reliable and how important a study’s findings are. If you’re considering making a change to your health practices based on something you read, remember to use common sense, be skeptical, and always talk it over with a pro who knows you!
