Clinical

On Evidence-Based Medicine

A few weeks ago, I read an editorial in the JBJS that made me think about the meaning of the word “evidence” and how it relates to clinical practice in 2016. In their article titled “Level-III and IV Evidence: Still Essential for the Field of Musculoskeletal Medicine and Surgery,” Drs. Sangeorzan and Swiontkowski argue that the “wholesale buy-in of statistical evaluation and medical care” carries inherent risks to patients, each of whom should be considered as an individual in order to best treat his or her medical condition. They also wisely point out the risks of misinterpreting the “evidence,” and in particular the risk of conflating the phrases “there is no evidence for treatment X” and “treatment X is ineffective.” In support of this premise, they point to a case series on results following direct suture repair of scar tissue at the site of a chronic Achilles tendon rupture. The authors were thorough in their minimum two-year follow-up and assessed post-treatment MRI, histology, and patient-rated outcomes. In fact, my concerns are less about the validity of the study than about the resulting commentary that the study “provides well-documented evidence that excision without transfer provides acceptable patient outcome.”

While the Level of Evidence ascribed to a clinical research manuscript is determined by guidelines set by journal editors, based on standards created by the Oxford Centre for Evidence-Based Medicine, I wonder: how do these technical definitions of evidence translate in the mind of the reader of said manuscript? Furthermore, what does it mean when an editorial refers to evidence as “well documented”? Case series such as the one referenced in the aforementioned editorial are still common in the orthopedic literature. How should one use such studies to inform decision-making within his or her practice?

To me (disclosure: this is not an evidence-based statement), case studies do not provide evidence that the observed result should be expected in clinical practice. At best, a case study provides evidence of potential therapeutic feasibility (e.g., this treatment might be promising, this treatment might be a good answer for a clinical problem) rather than evidence of documented therapeutic efficacy (e.g., this treatment has a clear positive effect on outcome, this treatment is a good answer for the average patient with a particular clinical problem). A good case series should inspire or justify further study using a superior study design.

If we are honest, the reason that Level 3 and 4 studies continue to appear so commonly in our top journals is that conducting Level 1 and Level 2 studies is challenging, laborious, expensive, and impractical within the context of a busy clinical practice. Inconvenience, however, is not a reason to accept flawed science or to extrapolate the results of a low-level study as evidence that the described treatment will work in our patients. As Drs. Sangeorzan and Swiontkowski note, the phrase “there is no evidence to suggest” is indeed dangerous and is commonly used to discredit important scientific hypotheses in need of further study. Nonetheless, adopting treatments supported only by Level 3 or 4 evidence as the standard of care poses a far greater risk to quality care for our patients. At a time when compensation for patient care is increasingly tied to quality, it may be the right moment to reexamine how we as hand surgeons define the term “evidence” and how we determine the value of that evidence as it relates to our diagnostic and therapeutic decision-making.

Article written by:

Dr. Osei is an Assistant Professor of Orthopedic Surgery at Washington University in St. Louis. He likes to spend his weekdays doing microvascular reconstruction and thinking about clinical epidemiology. He likes to spend his time away from work hanging out with his family and cultivating his lifelong obsession with Champions League soccer.

Join the discussion

  1. Noah Raizman

    Dan – a timely article with well-taken points. Our literature, rich as it is, consists mostly of Level IV evidence (and whether you consider a retrospective comparative trial a Level II, III, or IV study is currently in the eye of the beholder, an issue that comes up every year as several ASSH committees review paper submissions to our meetings). This is changing slowly and subtly with time, but the points you make about the cost, effort, and barriers associated with randomized, blinded, controlled trials bear repeating: without industry funding and its inherent spectre of bias, or without increasingly scarce government or institutional funding, conducting an RCT is nearly impossible. We all have strong reasons both to push for better evidence and not to throw out the baby with the bathwater.

    As much as I would like to use Level III and Level IV evidence only as proof of principle and to guide further research, it is all we have to guide us clinically when it comes to such everyday questions as the classic PRC versus midcarpal fusion debate for SLAC wrist; nerve transfers versus tendon transfers, and one form of tendon transfer versus another; implant selection for most hand fractures; and many others, to say nothing of treating less common conditions.

    As I prepare to sit for my CAQ exam next week and go over old self-assessment exams, I am shocked, though perhaps not surprised, by how few of the answers are guided by Level I evidence.

    I would note that the Oxford CEBM specifically states that the Levels of Evidence are to be used as a heuristic for interpretation, not as a gauge of quality, importance, or clinical value. As limited as Level IV studies may be, they are still the dim lanterns that guide us down some dark roads.
