3 Ways to Spot a Low-Quality Study

5 min read. Posted in Other
Written by Steve Kamper

In the research field, not all studies are created equal! A reader needs to know how to spot a low-quality study, because a low-quality study provides information that should not be relied upon heavily to guide clinical decisions. The problem is that peer review and the journal editorial process do not reliably weed out uninformative research, so some of it makes its way into publication.

A low-quality study is one where the findings are uninterpretable, very likely to be in error, or at very high risk of bias. Let's expand on each of these factors.

 

1. Interpretability

Interpretability is the most important consideration – if the findings are uninterpretable then you do not need to assess the likelihood of error or appraise the risk of bias.

A clear question is non-negotiable (1). If you cannot concisely state the question in your own words, in a way that is clear and makes sense to you, then the paper is not worth reading. You should also be able to categorize the question according to whether it is: a) descriptive – aims to illustrate a situation or concept; b) predictive – aims to forecast something about the future with information in the present; or c) causal – aims to quantify the influence of one variable on another (2).

A study may also be uninterpretable if the methods (design or analysis) do not align with the question. For example, a randomized controlled trial is not designed to answer a question of prevalence. A multivariable regression analysis that adjusts for confounders is only useful for answering a causal question. A pilot or feasibility study is not designed to estimate clinical effectiveness. A study that describes a prediction rule is not designed to identify treatment targets. Finally, a qualitative analysis does not answer questions of effectiveness.

Poor interpretability may also be due to imprecise results, demonstrated by wide confidence intervals around effect estimates, means or proportions. If the range of plausible effects in the results is very wide then there is no way to conclude whether the effect is harmful, trivial, or important (3).
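To make precision concrete, here is a minimal Python sketch with made-up numbers (the `ci_95` helper and all figures are hypothetical, for illustration only). Two studies report the same point estimate for a between-group difference in pain, but very different sample sizes, and therefore very different confidence intervals:

```python
# Minimal sketch (hypothetical numbers): interval width as a marker of precision.
import math

def ci_95(mean_diff, sd, n_per_group):
    """Approximate 95% CI for a between-group mean difference (normal approximation)."""
    se = sd * math.sqrt(2 / n_per_group)  # standard error of the difference
    margin = 1.96 * se                    # ~95% coverage under normality
    return (mean_diff - margin, mean_diff + margin)

# Same point estimate (5 points on a 0-100 pain scale), very different precision:
print(ci_95(mean_diff=5, sd=20, n_per_group=20))   # wide: about (-7.4, 17.4)
print(ci_95(mean_diff=5, sd=20, n_per_group=500))  # narrow: about (2.5, 7.5)
```

In the first case the data are compatible with anything from a moderate harm to a large benefit, so no clinical conclusion is possible; in the second, the effect is clearly positive, though small.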

 

2. Error

Errors occur when researchers make mistakes in data analysis or presentation. These types of errors can be very difficult for a reader to pick up. The best advice is to be on the lookout for results that are glaringly different to those in previous studies on similar questions. For example, it is reasonable to interpret with caution the results of a study that shows a large effect for an intervention that typically shows small or no effects in a particular population in other studies.
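As a rough illustration of that kind of sanity check, the sketch below uses hypothetical numbers and a deliberately informal heuristic (not a formal test): it compares a new trial's effect estimate with a prior pooled estimate, and a large discrepancy relative to the combined uncertainty is a flag to read the methods closely.

```python
# Minimal sketch (hypothetical numbers): how far does a new result sit from prior evidence?
import math

def discrepancy_z(new_effect, new_se, pooled_effect, pooled_se):
    """Standardized difference between a new estimate and a prior pooled estimate."""
    return (new_effect - pooled_effect) / math.sqrt(new_se**2 + pooled_se**2)

# Prior trials: mean difference of ~3 points (SE 1). New trial claims 15 points (SE 3).
z = discrepancy_z(new_effect=15, new_se=3, pooled_effect=3, pooled_se=1)
print(round(z, 1))  # ~3.8 -> glaringly different from previous studies; interpret with caution
```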

 


3. Bias

Bias means that the results in a study are systematically different to what happens in the population (4). Usually, it means that the study effect estimate is larger than what you could expect in real life. There are a whole host of biases that might impact study results, and different types of studies are at risk of different types of bias (5).

For studies on treatment effectiveness or other causal questions, confounding bias is important; in trials of treatment effectiveness it is dealt with effectively via randomization (6). Selection bias is also important – a study should detail how participants came into the study (recruitment methods), who was and was not eligible (inclusion/exclusion criteria), and what the demographic and clinical characteristics of the sample were. This information is necessary to determine the extent to which the results apply to your patients (7).
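The sketch below is a small simulation with an entirely made-up data-generating process (not any particular study) of why randomization matters for causal questions: when patients with better prognosis preferentially end up in the treated group, a naive group comparison is confounded, whereas random assignment breaks the link between prognosis and treatment.

```python
# Minimal simulation sketch (made-up data-generating process) of confounding bias.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0                    # true benefit of treatment on the outcome
prognosis = rng.normal(0, 1, n)      # confounder: better prognosis -> better outcome

# Observational scenario: people with better prognosis are more likely to get treated
p_treat = 1 / (1 + np.exp(-2 * prognosis))
treated = rng.random(n) < p_treat
outcome = true_effect * treated + 3 * prognosis + rng.normal(0, 1, n)
print(outcome[treated].mean() - outcome[~treated].mean())  # inflated well above 2.0

# Randomized scenario: treatment assigned by coin flip, independent of prognosis
treated = rng.random(n) < 0.5
outcome = true_effect * treated + 3 * prognosis + rng.normal(0, 1, n)
print(outcome[treated].mean() - outcome[~treated].mean())  # close to the true 2.0
```

The naive comparison in the first scenario mixes the treatment effect with the prognostic difference between groups; randomization is what licenses the simple comparison as an answer to a causal question.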

 

What else?

Two key points to keep in mind when determining the extent to which a study should influence clinical decisions:

  1. A reader can never know whether there is error or bias, only judge risk of error/bias, and these concepts are continuous rather than dichotomous. All studies are biased and there is always error, but what is important is the judgement call of how large the risk is, which then informs how much confidence a reader should place in the results.
  2. Information from research does not exist in a vacuum. A clinical decision involves integrating information from various sources in addition to research, including clinical experience, anatomical and physiological knowledge, previous training, discussions with colleagues, etc. These pieces of information are at risk of bias too. The challenge is to assess all the relevant pieces of information for risk of bias and synthesize them, placing greatest weight on the information at lowest risk of bias.

 

Conclusion

Being able to spot a low-quality study is important because a clinician needs to judge how much confidence to place in the results, alongside all the other available information, when making clinical decisions. Doing evidence-based practice well is hard because there can be a temptation towards blanket acceptance or dismissal of study results, but this neglects the fact that study quality is a continuum, not a dichotomy.

 


 


References
