A recent publication on clinical trial results arising from industry-sponsored head-to-head studies has concluded that problems persist with industry-funded research. The analysis, led by Dr. John Ioannidis at the Departments of Medicine and Health Research and Policy, Stanford Prevention Research Center, Stanford University, showed that the literature on head-to-head randomized controlled trials (RCTs) is dominated by industry and that “industry-sponsored comparative assessments systematically yield favorable results for the sponsors.” In addition, Dr. Ioannidis argued that industry-sponsored studies tend to employ non-inferiority/equivalence designs more frequently than other trials and that, when industry sponsorship and non-inferiority/equivalence designs coexist, almost all trials report the desirable “favorable” results.
The research analysis, published in the February 2015 issue of the Journal of Clinical Epidemiology (http://dx.doi.org/10.1016/j.jclinepi.2014.12.016), aimed to map the status of head-to-head comparative randomized evidence and to assess whether the source of clinical trial funding had any impact on either the design of the trial or the reported results. The researchers assembled a 50% random sample of RCTs published in journals and indexed in PubMed during 2011. Because larger trials tend to have more influence on medical practice, the researchers excluded trials with fewer than 100 participants and additionally focused their review on studies “evaluating the efficacy and safety of drugs, biologics, or medical devices in which two or more interventions were directly compared”. From 20,088 potentially relevant reports, 6,526 RCTs were identified, of which 498 (7.6%) were deemed eligible head-to-head comparisons. After exclusion of RCTs with fewer than 100 participants, 319 head-to-head studies were included in the analysis. In summary, the review showed that industry-sponsored trials were larger, more frequently used non-inferiority/equivalence designs, and were more likely to have favorable results (superiority or non-inferiority/equivalence for the experimental treatment) than non-industry-sponsored trials. Industry funding [odds ratio (OR) 2.8; 95% confidence interval (CI): 1.6, 4.7] and non-inferiority/equivalence designs (OR 3.2; 95% CI: 1.5, 6.6) were strongly associated with favorable findings. Fifty-five of the 57 (96.5%) industry-funded non-inferiority/equivalence trials presented the desirable favorable results.
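For readers who want to follow the arithmetic behind figures like these, the sketch below verifies the percentages quoted from the paper (498 of 6,526 RCTs; 55 of 57 trials) and shows how an odds ratio with a 95% Wald confidence interval is computed from a 2×2 table. The 2×2 counts used here are purely hypothetical for illustration; the paper's underlying counts for the OR 2.8 estimate are not given in this article.

```python
import math

# Sanity-check the percentages quoted in the article.
head_to_head_share = 498 / 6526   # eligible head-to-head comparisons among identified RCTs
print(round(head_to_head_share * 100, 1))   # 7.6 (%)

favorable_share = 55 / 57         # industry-funded non-inferiority/equivalence trials with favorable results
print(round(favorable_share * 100, 1))      # 96.5 (%)

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# HYPOTHETICAL counts (not from the paper), just to show the computation:
# 40 industry-funded trials favorable, 10 not; 20 non-industry favorable, 20 not.
or_, lo, hi = odds_ratio_ci(40, 10, 20, 20)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}, {hi:.1f})")   # OR = 4.0 (95% CI 1.6, 10.1)
```

An interval whose lower bound stays above 1.0, as in both reported associations (CI lower bounds of 1.6 and 1.5), is what supports the paper's claim of a statistically significant association with favorable findings.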
While most seasoned readers of the medical literature are aware of the potential conflicts of interest in industry-sponsored studies, fixing the issue can be a slow and laborious process with multiple stakeholders to satisfy. In their discussion of the trends arising from industry-sponsored trials, Dr. Ioannidis and colleagues commented that “there is strong dominance of the industry in the influential agenda of head-to-head comparisons, confirming the unbalance between profit and nonprofit sponsored sources of data of current literature.” To remedy the publication bias seen in such analyses, the research group suggests that “consideration should be given to allowing the conduct of more large trials of comparative effectiveness and safety under the control of nonprofit entities. The design of such trials should be such as to inform important questions rather than pre-emptively ensure that results would be favorable for a tested intervention.” The thorny question, however, remains: who should pay for such studies? A “tax” on global pharma profits, or funding from government sources?