3 Ways to Spot a Low-Quality Study
In the research field, not all studies are created equal! A reader needs to know how to spot a low-quality study, because a low-quality study provides information that should not be relied upon heavily to guide clinical decisions. The problem is that the peer-review and journal editorial processes do not reliably weed out uninformative research, so some of it makes its way into publication.
A low-quality study is one where the findings are uninterpretable, very likely to be in error, or at very high risk of bias. Let’s expand on each of these factors.
Interpretability is the most important consideration – if the findings are uninterpretable then you do not need to assess the likelihood of error or appraise the risk of bias.
A clear question is non-negotiable (1). If you cannot concisely state the question in your own words, in a way that is clear and makes sense to you, then the paper is not worth reading. You should also be able to categorize the question according to whether it is: a) descriptive – aims to illustrate a situation or concept; b) predictive – aims to forecast something about the future with information in the present; or c) causal – aims to quantify the influence of one variable on another (2).
A study may also be uninterpretable if the methods (design or analysis) do not align with the question. For example, a randomized controlled trial is not designed to answer a question of prevalence. A multivariable regression analysis that adjusts for confounders is only useful for answering a causal question. A pilot or feasibility study is not designed to estimate clinical effectiveness. A study that describes a prediction rule is not designed to identify treatment targets. Finally, a qualitative analysis does not answer questions of effectiveness.
Poor interpretability may also be due to imprecise results, demonstrated by wide confidence intervals around effect estimates, means, or proportions. If the range of plausible effects in the results is very wide, there is no way to conclude whether the effect is harmful, meaningless, or important (3).
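To make the point about interval width concrete, here is a minimal sketch using invented numbers (the pain-score differences and standard errors below are hypothetical, chosen purely for illustration): two studies report the same mean difference, but the wide interval from the small study spans both harm and meaningful benefit.

```python
import math

def ci_95(estimate, standard_error):
    """95% confidence interval using the normal approximation (±1.96 SE)."""
    margin = 1.96 * standard_error
    return (estimate - margin, estimate + margin)

# Hypothetical mean difference in pain scores on a 0-10 scale.
# Same point estimate, very different precision.
precise = ci_95(1.5, 0.3)    # large study, small standard error
imprecise = ci_95(1.5, 1.2)  # small study, large standard error

print(f"Precise study:   {precise[0]:.1f} to {precise[1]:.1f}")
print(f"Imprecise study: {imprecise[0]:.1f} to {imprecise[1]:.1f}")
```

The first interval stays clearly above zero, so the direction of the effect is interpretable; the second includes negative values, no effect, and large benefit all at once, which is exactly the situation the text describes as uninterpretable.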
Errors occur when researchers make mistakes in data analysis or presentation. These types of errors can be very difficult for a reader to pick up. The best advice is to be on the lookout for results that are glaringly different from those in previous studies on similar questions. For example, it is reasonable to interpret with caution a study that shows a large effect for an intervention that other studies in the same population typically find has small or no effects.
Bias means that the results in a study are systematically different from what happens in the population (4). Usually, it means that the study effect estimate is larger than what you would expect in real life. There are a whole host of biases that might impact study results, and different types of studies are at risk of different types of bias (5).
For studies on treatment effectiveness or other causal questions, confounding bias is important; in effectiveness trials it is effectively dealt with via randomization (6). Selection bias is also important – a study should detail how participants came into the study (recruitment methods), who was and was not eligible (inclusion/exclusion criteria), and what the demographic and clinical characteristics of the sample were. This information is necessary to determine the extent to which the results apply to your patients (7).
Two key points to keep in mind when determining the extent to which a study should influence clinical decisions:
- A reader can never know whether there is error or bias, only judge risk of error/bias, and these concepts are continuous rather than dichotomous. All studies are biased and there is always error, but what is important is the judgement call of how large the risk is, which then informs how much confidence a reader should place in the results.
- Information from research does not exist in a vacuum. A clinical decision involves integrating information from various sources in addition to research, including clinical experience, anatomical and physiological knowledge, previous training, discussions with colleagues, etc. These pieces of information are at risk of bias too. The challenge is to assess all the relevant pieces of information for risk of bias and synthesize them, placing greatest weight on the information at lowest risk of bias.
Being able to spot a low-quality study is important because a clinician needs to judge how much confidence to place in the results, and weigh them against all the other available information when making clinical decisions. Doing evidence-based practice well is hard because there can be a temptation towards blanket acceptance or dismissal of study results, but this neglects the fact that study quality is a continuum, not a dichotomy.
Want to easily stay on top of the research that SHOULD inform your clinical practice?
Learn how we can help below👇
📚 Stay on the cutting edge of physio research!
📆 Every month our team of experts break down clinically relevant research into five-minute summaries that you can immediately apply in the clinic.
🙏🏻 Try our Research Reviews for free now for 7 days!
1. Kamper SJ. Asking a question: Linking evidence with practice. J Orthop Sports Phys Ther 2018;48(7):596-597.
2. Kamper SJ. Types of Research Questions: Descriptive, Predictive, or Causal. J Orthop Sports Phys Ther 2020;50(8):468-69.
3. Kamper SJ. Confidence intervals: Linking evidence with practice. J Orthop Sports Phys Ther 2019;49(10):763-74.
4. Kamper SJ. Bias: Linking evidence with practice. J Orthop Sports Phys Ther 2018;48(8):667-68.
5. Kamper SJ. Risk of bias and study quality assessment: Linking evidence with practice. J Orthop Sports Phys Ther 2020;50(5):277-79.
6. Kamper SJ. Randomization: Linking evidence with practice. J Orthop Sports Phys Ther 2018;48(9):730-31.
7. Kamper SJ. Generalizability: Linking evidence with practice. J Orthop Sports Phys Ther 2020;50(1):45-46.