Statistical Errors in Medical Studies

Wed, 03/17/2010 - 8:29am
Curious Cat Science and Engineering Blog

I have written before about statistics and the various traps people often fall into when examining data (Statistics Insights for Scientists and Engineers, Data Can’t Lie – But People Can be Fooled, Correlation is Not Causation, Simpson’s Paradox). I have also posted about systemic reasons why medical studies present misleading results (Why Most Published Research Findings Are False, How to Deal with False Research Findings, Medical Study Integrity (or Lack Thereof), Surprising New Diabetes Data). This post collects some discussion of the topic from several blogs and studies.

HIV Vaccines, p values, and Proof by David Rind

if [the] vaccine were no better than placebo we would expect to see a difference as large or larger than the one seen in this trial only 4 in 100 times. This is distinctly different from saying that there is a 96% chance that this result is correct, which is how many people wrongly interpret such a p value.
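
To make the distinction concrete, here is a minimal simulation of what a p value actually measures: how often a difference this large would appear if the vaccine did nothing. The trial numbers below are hypothetical stand-ins, not the actual study data.

```python
# A minimal sketch (hypothetical trial numbers) of what a p value measures:
# the frequency of a difference this large *assuming the vaccine does nothing*.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm = 8000          # hypothetical arm size
base_rate = 74 / 8000     # hypothetical placebo-arm infection rate
observed_diff = 74 - 51   # hypothetical observed difference in infection counts

n_sims = 100_000
# Under the null hypothesis, both arms share the same infection rate.
placebo = rng.binomial(n_per_arm, base_rate, n_sims)
vaccine = rng.binomial(n_per_arm, base_rate, n_sims)
p_one_sided = np.mean((placebo - vaccine) >= observed_diff)
print(f"P(difference >= {observed_diff} under the null) ~ {p_one_sided:.3f}")
# That frequency is the p value's meaning. It is NOT the probability
# that the vaccine works.
```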

So, the modestly positive result found in the trial must be weighed against our prior belief that such a vaccine would fail. Had the vaccine been dramatically protective, giving us much stronger evidence of efficacy, our prior doubts would be more likely to give way in the face of high quality evidence of benefit.
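
Bayes’ rule makes that weighing explicit. Here is a sketch with illustrative numbers; the prior, power, and alpha below are my assumptions, not values from the trial.

```python
# A sketch of weighing a "modestly positive" result against a skeptical prior
# via Bayes' rule. All three inputs are illustrative assumptions.
prior = 0.10      # assumed prior probability the vaccine works (skeptical)
power = 0.80      # assumed probability of a significant result if it works
alpha = 0.04      # probability of a result this extreme if it does not work

posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"P(vaccine works | significant result) ~ {posterior:.2f}")
# ~0.69 with these assumptions: far from the 96% a misread p value suggests,
# and it falls further the more skeptical the prior.
```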

While the actual analysis the investigators decided to make primary would be completely appropriate had it been specified up front, it now suffers under the concern of showing marginal significance after three bites at the statistical apple; these three bites have to adversely affect our belief in the importance of that p value. And it’s not so obvious why they would have reported this result rather than excluding those 7 patients from the per protocol analysis and making that the primary analysis; there might have been yet a fourth analysis that could have been reported had it shown that all-important p value below 0.05.
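
A quick simulation shows why repeated analyses of overlapping patient sets erode the meaning of a marginal p value. The subset sizes below are assumptions chosen to make the inflation visible, not the trial’s actual analysis populations.

```python
# A sketch of "three bites at the statistical apple": one null trial analyzed
# on three overlapping patient subsets (a stand-in for ITT / modified-ITT /
# per-protocol analyses). Subset sizes are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sims, hits = 500, 5000, 0

for _ in range(n_sims):
    # No true effect: both arms drawn from the same distribution.
    treat = rng.normal(0, 1, n)
    control = rng.normal(0, 1, n)
    # Each "analysis" excludes some patients from the front of each arm.
    pvals = [stats.ttest_ind(treat[drop:], control[drop:]).pvalue
             for drop in (0, 50, 150)]
    hits += min(pvals) < 0.05

print(f"P(at least one of 3 analyses significant | no effect) ~ {hits / n_sims:.3f}")
# The three analyses are highly correlated, so the inflation is smaller than
# for three independent tests, but the false-positive rate still drifts
# above the nominal 0.05.
```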

How to Avoid Commonly Encountered Limitations of Published Clinical Trials by Sanjay Kaul, MD, and George A. Diamond, MD

Trials often employ composite end points that, although they enable assessment of nonfatal events and improve trial efficiency and statistical precision, entail a number of shortcomings that can potentially undermine the scientific validity of the conclusions drawn from these trials. Finally, clinical trials often employ extensive subgroup analysis. However, lack of attention to proper methods can lead to chance findings that might misinform research and result in suboptimal practice.

Why Most Published Research Findings Are False by John P. A. Ioannidis

There is increasing concern that most current published research findings are false…

a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.

A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time.
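
That 85% figure can be reproduced from the post-study probability (PPV) formula in Ioannidis’ paper, using 1:1 pre-study odds, 80% power, alpha of 0.05, and a small bias term; the bias value of 0.10 is my reading of the paper’s scenario.

```python
# Ioannidis' post-study probability (PPV) formula, with parameters chosen to
# match the quoted scenario: 1:1 pre-study odds (R = 1), 80% power
# (beta = 0.2), alpha = 0.05, and a small bias u = 0.10 (an assumption).
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Probability a 'significant' finding is true, given pre-study odds R,
    type I error alpha, type II error beta, and bias u."""
    true_pos = (1 - beta) * R + u * beta * R
    false_pos = alpha + u * (1 - alpha)
    return true_pos / (true_pos + false_pos)

print(f"Well-powered RCT, 50% pre-study chance: PPV ~ {ppv(R=1, u=0.10):.2f}")    # ~0.85
print(f"Same design, 10% pre-study chance:      PPV ~ {ppv(R=1/9, u=0.10):.2f}")  # ~0.39
```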

We’re so good at medical studies that most of them are wrong by John Timmer

In the end, Young noted, by the time you reach 61 tests, there’s a 95 percent chance that you’ll get a significant result at random. And, let’s face it, researchers want to see a significant result, so there’s a strong, unintentional bias towards trying different tests until something pops out.
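
The arithmetic behind that claim is simple, assuming independent tests at the conventional 0.05 threshold:

```python
# With n independent tests at alpha = 0.05, the chance of at least one
# spurious "significant" result is 1 - 0.95**n.
for n in (1, 10, 20, 61):
    print(f"{n:3d} tests: P(at least one false positive) = {1 - 0.95**n:.2f}")
# At 61 tests this is ~0.96, i.e. a "significant" result is nearly guaranteed
# by chance alone (the 95% mark is first crossed at 59 independent tests).
```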

even the same factor can be accounted for using different mathematical means. The models also make decisions on how best to handle things like measuring exposures or health outcomes. The net result is that two models can be fed an identical dataset, and still produce a different answer.
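
Here is a toy example of that: the same synthetic dataset analyzed once with the exposure kept continuous and once with it split at the median. Neither model is wrong on its face, but they need not agree. The data and effect size are fabricated for illustration.

```python
# Two defensible models, one dataset, different answers: a weak
# exposure-outcome relationship tested with exposure kept continuous versus
# dichotomized at the median. Data are synthetic; effect size is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
exposure = rng.normal(0, 1, n)
outcome = 0.12 * exposure + rng.normal(0, 1, n)   # weak true effect

# Model 1: exposure as a continuous predictor (simple linear regression).
res = stats.linregress(exposure, outcome)

# Model 2: exposure dichotomized into "high" vs "low" halves (t-test).
high = outcome[exposure > np.median(exposure)]
low = outcome[exposure <= np.median(exposure)]
t = stats.ttest_ind(high, low)

print(f"continuous model:   p = {res.pvalue:.3f}")
print(f"dichotomized model: p = {t.pvalue:.3f}")
# Dichotomizing discards information, so the two models can easily land on
# opposite sides of p = 0.05 for the identical dataset.
```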

Odds are, it’s wrong by Tom Siegfried

Ioannidis claimed to prove that more than half of published findings are false, but his analysis came under fire for statistical shortcomings of its own. “It may be true, but he didn’t prove it,” says biostatistician Steven Goodman of the Johns Hopkins University School of Public Health. On the other hand, says Goodman, the basic message stands. “There are more false claims made in the medical literature than anybody appreciates,” he says. “There’s no question about that.”

“Determining the best treatment for a particular patient is fundamentally different from determining which treatment is best on average,” physicians David Kent and Rodney Hayward wrote in American Scientist in 2007. “Reporting a single number gives the misleading impression that the treatment effect is a property of the drug rather than of the interaction between the drug and the complex risk-benefit profile of a particular group of patients.”
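
A toy calculation, with made-up numbers, shows how a single average can paper over that interaction:

```python
# Hypothetical illustration of Kent and Hayward's point: one "average"
# treatment effect can hide opposite effects in identifiable risk groups.
high_risk_frac = 0.25
effect_high_risk = -8.0   # prevents 8 events per 100 high-risk patients
effect_low_risk = +1.0    # ...but causes 1 extra event per 100 low-risk patients

average = high_risk_frac * effect_high_risk + (1 - high_risk_frac) * effect_low_risk
print(f"average effect: {average:+.2f} events per 100 patients")
# -1.25 per 100: a single modestly favorable number, even though the
# treatment harms the 75% of patients at low risk.
```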

Related: Bigger Impact: 15 to 18 mpg or 50 to 100 mpg? - Meaningful debates need clear information - Seeing Patterns Where None Exists - Fooled by Randomness - Poor Reporting and Unfounded Implications - Illusion of Explanatory Depth - Mistakes in Experimental Design and Interpretation
