A few weeks ago, I read an editorial in JBJS that made me think about the meaning of the word “evidence” and how it relates to clinical practice in 2016. In their article titled “Level-III and IV Evidence: Still Essential for the Field of Musculoskeletal Medicine and Surgery,” Drs. Sangeorzan and Swiontkowski argue that the “wholesale buy-in of statistical evaluation and medical care” carries inherent risks to patients, each of whom should be considered as an individual in order to best treat his or her medical condition. They also wisely point out the risks of misinterpreting the “evidence” and of conflating the phrases “there is no evidence for treatment X” and “treatment X is ineffective.” In support of this premise, they point to a case series on results following direct suture repair of scar tissue at the site of a chronic Achilles tendon rupture. The authors were thorough in their minimum two-year follow-up and assessed post-treatment MRI, histological findings, and patient-rated outcomes. My concerns are less about the validity of the study itself and more about the resulting commentary that the study “provides well-documented evidence that excision without transfer provides acceptable patient outcome.”
While the Level of Evidence ascribed to a clinical research manuscript is determined by guidelines set by journal editors and based on standards created by the Oxford Centre for Evidence-Based Medicine, I wonder how these technical definitions of evidence translate in the mind of a reader of said manuscript. Furthermore, what does it mean when an editorial refers to evidence as “well documented”? Case studies such as the one referenced in the aforementioned editorial are still common in the orthopedic literature. How should one use such studies to inform decision-making within his or her practice?
To me (disclosure: this is not an evidence-based statement), case studies do not provide evidence that the observed result should be expected in clinical practice. At best, a case study provides evidence of potential therapeutic feasibility (e.g., this treatment might be promising, this treatment might be a good answer for a clinical problem) rather than evidence of documented therapeutic efficacy (e.g., this treatment has a clear positive effect on outcome, this treatment is a good answer for the average patient with a particular clinical problem). A good case series should inspire or justify further study using a more rigorous study design.
If we are honest, the reason that Level 3 and 4 studies continue to appear so commonly in our top journals is that conducting Level 1 and Level 2 studies is challenging, laborious, expensive, and impractical within the context of a busy clinical practice. But inconvenience is not a reason to accept flawed science or to extrapolate the results of a low-level study as evidence that the described treatment will work for patients. As Drs. Sangeorzan and Swiontkowski note, the phrase “there is no evidence to suggest” is indeed dangerous and is commonly used to discredit important scientific hypotheses in need of further study. Nonetheless, adopting treatments supported only by Level 3 or 4 evidence as the standard of care poses a far greater risk to quality care for our patients. At a time when compensation for patient care is increasingly tied to quality, this may be the right moment to reexamine how we as hand surgeons define the term “evidence” and how we determine the value of that evidence as it relates to our diagnostic and therapeutic decision-making.