Bias Exposed--raw data versus journal articles


Since the review committees of our medical journals do not receive the raw data, the marketing departments of pharmaceutical companies and their paid researchers can and do manipulate the data, freely distorting in a positive direction the results in the articles they submit for publication.  The article below is a review of the raw data, obtained through the Freedom of Information Act; it found an average positive bias of 32%.  Raw data is not presented with the articles submitted to journals.  The other distortion is to not publish studies with negative results.  Business is about profits.

There is a fundamental conflict of interest for the journals because their principal source of revenue is the advertisers who submit the articles.  Moreover, every significant medical journal has been purchased by a small group of corporations—50 years ago journals were affiliated with major medical schools. Thus there is a strong incentive not to look too critically at what has been submitted.  For an in-depth account of how PhRMA manipulates drug information, read Professor Marcia Angell’s book The Truth About Drug Companies: How They Deceive Us and What to Do About It--jk.


Positive bias from 11% to 69%:  a study compares raw data submitted to the FDA with journal articles based on that data.


The study compares the raw data supplied to the FDA with the selected data published in journal articles. As expected, because the studies were done as part of an overall strategy to market drugs, the results were manipulated to that end.  Across the 12 drugs studied, the positive bias ranged from 11% to 69%--average 32%.  In other words, every journal article based on clinical trials of these psychiatric medications substantially overstates their positive results.  There is every reason to believe that this is the norm for all areas of research and published articles, for the issues are the same (lack of meaningful review and maximization of profits)--jk.


NEJM, Volume 358:252-260, Number 3, January 17, 2008

Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy

Erick H. Turner, M.D., Annette M. Matthews, M.D., Eftihia Linardatos, B.S., Robert A. Tell, L.C.S.W., and Robert Rosenthal, Ph.D.



Background Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials — and the outcomes within those trials — can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio.

Methods We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set.

Results Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.

Conclusions We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.
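The counts in the Results section above fit together arithmetically. A quick sketch (using only the figures quoted in the abstract) reproduces the 94%, 51%, and 31% figures:

```python
# Check the publication-bias arithmetic from the NEJM Results section.
total_studies = 74
unpublished = 22 + 1          # 22 negative/questionable + 1 positive study not published
spun_positive = 11            # negative/questionable studies published as if positive
published_negative = 3        # the 3 exceptions published as negative
published_positive = 37       # FDA-positive studies that were published

published = published_positive + spun_positive + published_negative  # 51 published
assert total_studies - published == unpublished                      # 23 unpublished

# Share of trials that *appear* positive in the journal literature...
apparent_positive_rate = (published_positive + spun_positive) / published
# ...versus the share the FDA judged positive among all 74 trials.
fda_positive_rate = (published_positive + 1) / total_studies

print(round(apparent_positive_rate * 100))        # 94
print(round(fda_positive_rate * 100))             # 51
print(round(unpublished / total_studies * 100))   # 31
```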

Full text at



Pieces from the full text

Data from FDA Reviews

We identified the phase 2 and 3 clinical-trial programs for 12 antidepressant agents approved by the FDA between 1987 and 2004 (median, August 1996), involving 12,564 adult patients. For the eight older antidepressants, we obtained hard copies of statistical and medical reviews from colleagues who had procured them through the Freedom of Information Act.  Reviews for the four newer antidepressants were available on the FDA Web site. This study was approved by the Research and Development Committee of the Portland Veterans Affairs Medical Center; because of its nature, informed consent from individual patients was not required.

From the FDA reviews of submitted clinical trials, we extracted efficacy data on all randomized, double-blind, placebo-controlled studies of drugs for the short-term treatment of depression. We included data pertaining only to dosages later approved as safe and effective; data pertaining to unapproved dosages were excluded.

Previous studies have examined the risk–benefit ratio for drugs after combining data from regulatory authorities with data published in journals.3,30,31,32 We built on this approach by comparing study-level data from the FDA with matched data from journal articles. This comparative approach allowed us to quantify the effect of selective publication on apparent drug efficacy.


Qualitative Description of Selective Reporting within Trials

The methods reported in 11 journal articles appear to depart from the pre-specified methods reflected in the FDA reviews (Table B of the Supplementary Appendix). Although for each of these studies the finding with respect to the protocol-specified primary outcome was non-significant, each publication highlighted a positive result as if it were the primary outcome. The non-significant results for the pre-specified primary outcomes were either subordinated to non-primary positive results (in two reports) or omitted (in nine). (Study-level methodological differences are detailed in the footnotes to Table B of the Supplementary Appendix.)


For each of the 12 drugs, the effect size derived from the journal articles exceeded the effect size derived from the FDA reviews (sign test, P<0.001) (Figure 3B). The magnitude of the increases in effect size between the FDA reviews and the published reports ranged from 11 to 69%, with a median increase of 32%. A 32% increase was also observed in the weighted mean effect size for all drugs combined, from 0.31 (95% CI, 0.27 to 0.35) to 0.41 (95% CI, 0.36 to 0.45).
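The 32% overall figure follows directly from the two weighted mean effect sizes quoted above:

```python
# Percent increase in the weighted mean effect size for all drugs combined,
# from the FDA data set (0.31) to the published journal reports (0.41).
fda_effect = 0.31
journal_effect = 0.41
percent_increase = (journal_effect - fda_effect) / fda_effect * 100
print(round(percent_increase))  # 32
```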


We found a bias toward the publication of positive results. Not only were positive results more likely to be published, but studies that were not positive, in our opinion, were often published in a way that conveyed a positive outcome. We analyzed these data in terms of the proportion of positive studies and in terms of the effect size associated with drug treatment. 


From CL Psy

A commentary on the results of the study was published on January 17, 2008, by Clinical Psychology and Psychiatry. Excerpts therefrom:


A whopper of a study has just appeared in the New England Journal of Medicine. It tracked each antidepressant study submitted to the FDA, comparing the results as seen by the FDA with the data published in the medical literature. The FDA uses raw data from the submitting drug companies for each study. This makes great sense, as the FDA statisticians can then compare their analyses to the analyses from the drug companies, in order to make sure that the drug companies were analyzing their data accurately.


After studies are submitted to the FDA, drug companies then have the option of submitting data from their trials for publication in medical journals. Unlike the FDA, journals do not check raw data. Thus, it is possible that drug companies could selectively report their data. An example of selective data reporting would be to assess depression using four measures. Suppose that two of the four measures yield statistically significant results in favor of the drug. In such a case, it is possible that the two measures that did not show an advantage for the drug would simply not be reported when the paper was submitted for publication. This is called "burying data," "data suppression," "selective reporting," or other less euphemistic terms. In this example, the reader of the final report in the journal would assume that the drug was highly effective because it was superior to placebo on two of two depression measures, while remaining completely unaware that on two other measures the drug had no advantage over a sugar pill. Sadly, we know from prior research that data are often suppressed in such a manner. In less severe cases, one might just switch the emphasis placed on various outcome measures: if a measure shows a positive result, allocate a lot of text to discussing that result and barely mention the negative results.  From an amoral, purely financial view, there is no reason to publish negative trial results.  The NEJM article stated: “For each drug, the effect-size value based on published literature was higher than the effect-size value based on FDA data, with increases ranging from 11 to 69%.”
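The four-measures example above can be made concrete with a toy simulation (my own illustration, not data from the study): even a drug with zero true effect looks effective if only the measures that happen to come out positive are reported.

```python
# Toy simulation (not from the study): a drug with NO true effect is tested
# on four depression measures per trial; only measures that look clearly
# positive get reported. Selective reporting inflates the apparent effect.
import random
import statistics

random.seed(1)
N_TRIALS = 2000
N_PER_ARM = 30
N_MEASURES = 4

def measure_effect():
    """Observed standardized effect on one measure for a drug with zero true effect."""
    drug = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    placebo = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    pooled_sd = statistics.pstdev(drug + placebo)
    return (statistics.mean(drug) - statistics.mean(placebo)) / pooled_sd

all_effects, reported_effects = [], []
for _ in range(N_TRIALS):
    effects = [measure_effect() for _ in range(N_MEASURES)]
    all_effects.extend(effects)
    # "Selective reporting": publish only measures that happen to look positive
    # (an arbitrary cutoff standing in for statistical significance).
    reported_effects.extend(e for e in effects if e > 0.3)

print(f"true mean effect:     {statistics.mean(all_effects):+.2f}")
print(f"reported mean effect: {statistics.mean(reported_effects):+.2f}")
```

The full set of measures averages out to roughly zero, as it must for an ineffective drug, while the "published" subset shows a solidly positive average effect.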



The drugs whose apparent effects were inflated by selective publication and/or data manipulation:

                            Bupropion (Wellbutrin)

                            Citalopram (Celexa)

                            Duloxetine (Cymbalta)

                            Escitalopram (Lexapro)

                            Fluoxetine (Prozac)

                            Mirtazapine (Remeron)

                            Nefazodone (Serzone)

                            Paroxetine (Paxil)

                            Sertraline (Zoloft)

                            Venlafaxine (Effexor)

That is every single drug approved by the FDA for depression between 1987 and 2004. Just a few of many tales of data suppression and/or spinning can be found below:

                            Data reported on only 1 of 15 participants in an Abilify study

                            Data hidden for about 10 years on a negative Zoloft for PTSD study

                            Suicide attempts vanishing from a Prozac study

                            Long delay in reporting negative results from an Effexor for youth depression study

                            Data from Abilify study spun in dizzying fashion. Proverbial lipstick on a pig.

                            A trove of questionable practices involving a key opinion leader

                            Corcept heavily spins its negative antidepressant trial results



Another article based on the NEJM study at 



Study: Antidepressants useless for most

February 26, 2008 — 7:59am ET

Here's a study guaranteed to put almost every drugmaker on the defensive. Researchers analyzed every antidepressant study they could get their hands on--including a bunch of unpublished data obtained via the U.S. Freedom of Information Act--and concluded that, for most patients, SSRI antidepressants are no better than sugar pills. Only the most severely depressed get much real benefit from the drugs, the study found.

The new paper, published today in the journal PLoS Medicine, breaks new ground, according to The Guardian, because the researchers got access for the first time to an apparently full set of trial data for four antidepressants: Prozac (fluoxetine), Paxil (paroxetine), Effexor (venlafaxine), and Serzone (nefazodone). And the data said..."the overall effect of new-generation antidepressant medication is below recommended criteria for clinical significance." Ouch.

The study could have a ripple effect, affecting prescribing guidelines and even prompting questions about how drugs are approved. "This study raises serious issues that need to be addressed surrounding drug licensing and how drug trial data is reported," one of the researchers said. In other words, all trial data needs to be made public.