Overall similarity and consistency assessment scores are not sufficiently accurate for predicting discrepancy between direct and indirect comparison estimates

J Clin Epidemiol. 2013 Feb;66(2):184-91. doi: 10.1016/j.jclinepi.2012.06.022. Epub 2012 Nov 24.

Abstract

Objectives: Indirect comparison methods are increasingly used to assess the comparative effectiveness of different interventions. This study evaluated a Trial Similarity and Evidence Consistency Assessment (TSECA) framework for assessing key assumptions underlying the validity of indirect comparisons.

Study design and setting: We applied the TSECA framework to 94 Cochrane Systematic Reviews that provided data to compare two interventions by both direct and indirect comparisons. Two reviewers independently assessed and scored trial similarity and evidence consistency using this framework. A detailed case study provided further insight into the usefulness and limitations of the proposed framework.

Results: Trial similarity and evidence consistency scores obtained using the assessment framework were not associated with statistically significant inconsistency between direct and indirect estimates. The case study illustrated that the assessment framework could be used to identify potentially important differences in participants, interventions, and outcome measures between different sets of trials in the indirect comparison.

Conclusion: Although the overall trial similarity and evidence consistency scores are unlikely to be sufficiently accurate for predicting inconsistency between direct and indirect estimates, the assessment framework proposed in this study can be a useful tool for identifying between-trial differences that may threaten the validity of indirect treatment comparisons.
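For readers unfamiliar with the inconsistency referred to above, the sketch below (not part of the study) illustrates the standard Bucher adjusted indirect comparison on the log odds ratio scale and the usual z-test for the difference between direct and indirect estimates. All numbers and variable names are hypothetical and chosen only to show the arithmetic.

```python
# Minimal sketch (hypothetical data) of the Bucher adjusted indirect comparison
# and the z-test for inconsistency between direct and indirect estimates,
# working on the log odds ratio scale.
import math

# Direct evidence: pooled A vs B estimate (hypothetical)
log_or_direct, se_direct = -0.25, 0.12

# Indirect evidence via a common comparator C (hypothetical)
log_or_a_vs_c, se_a_vs_c = -0.40, 0.10   # A vs C
log_or_b_vs_c, se_b_vs_c = -0.05, 0.11   # B vs C

# Adjusted indirect estimate of A vs B and its standard error
log_or_indirect = log_or_a_vs_c - log_or_b_vs_c
se_indirect = math.sqrt(se_a_vs_c**2 + se_b_vs_c**2)

# Inconsistency: difference between direct and indirect estimates
diff = log_or_direct - log_or_indirect
se_diff = math.sqrt(se_direct**2 + se_indirect**2)
z = diff / se_diff
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided p-value

print(f"Indirect log OR: {log_or_indirect:.3f} (SE {se_indirect:.3f})")
print(f"Inconsistency (direct - indirect): {diff:.3f}, z = {z:.2f}, p = {p:.3f}")
```

A non-significant p-value here does not establish consistency; as the abstract notes, between-trial differences in participants, interventions, and outcome measures can still threaten the validity of the indirect comparison.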

Publication types

  • Comparative Study
  • Meta-Analysis
  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Bias
  • Evidence-Based Medicine / standards*
  • Guideline Adherence / standards*
  • Humans
  • Practice Guidelines as Topic / standards*
  • Predictive Value of Tests
  • Quality Assurance, Health Care
  • Randomized Controlled Trials as Topic / standards*
  • Reproducibility of Results
  • Sensitivity and Specificity