When Can We Say That Something Doesn’t Work?

People don’t want to waste their time on things that don’t work. To avoid wasting time, many will want to assess the scientific evidence. They may first look at the basic science (if the question can be studied at that level) and ask, “Does this thing have a clear molecular or biological mechanism?” or “Does it have a theoretical foundation?” Next, they may look at the human evidence (if there is any) and ask whether it has worked in clinical trials or in epidemiological data. Read More

Book Review: Fisher, Neyman, and the Creation of Classical Statistics

Erich Lehmann’s last book, published after his death, is a history of classical statistics and its creators: specifically, how his mentor Jerzy Neyman and Neyman’s adversary Ronald Fisher helped lay the foundations of the methods used today in several fields. This post is a general review and summary of the book, which I recommend to anyone interested in statistics and science. Read More
Tags: fisher, math, power

Misplaced Confidence in Observed Power

Two months ago, a study came out in JAMA that compared the effectiveness of the antidepressant escitalopram with placebo for long-term major adverse cardiac events (MACE). In the methods section of their paper, the authors explained how they calculated their sample size and what differences they were looking for between groups. First, they used some previously published data to get an idea of incidence rates: “Because previous studies in this field have shown conflicting results, there was no appropriate reference for power calculation within the designated sample size. Read More
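As a rough sketch of the kind of calculation being described, here is a standard normal-approximation sample-size formula for comparing two proportions in Python. The incidence rates, alpha, and power below are illustrative placeholders, not values taken from the JAMA study.

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided z-test
    comparing two independent proportions (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Placeholder incidence rates (NOT taken from the JAMA paper):
# 40% of the placebo group vs. 30% of the escitalopram group experience MACE.
print(round(n_per_group(0.40, 0.30)))  # roughly 356 per group
```

With these placeholder rates and 80% power, the formula gives roughly 356 participants per group; the point is only to show the shape of such a calculation, not to reproduce the study’s numbers.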

High Statistical Power Can Be Deceiving

Even though many researchers are now acquainted with what power is and why we aim for high power in studies, several misconceptions about statistical power are still floating around. For example, if a study designed for 95% power fails to find a difference between two groups, does that offer more support for the null hypothesis? Many will answer yes, reasoning that if such a large study failed to find a difference between the groups, then this provides evidence of no effect. Read More
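One way to see why that inference is shaky is a minimal Python sketch, assuming a two-sample z-test and an arbitrary design effect size of d = 0.5 (none of these numbers come from the post): a study powered at 95% for one effect size has much lower power for smaller, still-meaningful effects.

```python
from math import ceil, sqrt
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test when the true
    standardized mean difference is d and each arm has n_per_group."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality of the test statistic
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Hypothetical design: 95% power to detect a standardized difference of 0.5.
design_d = 0.5
n = ceil(2 * ((norm.ppf(0.975) + norm.ppf(0.95)) / design_d) ** 2)  # ~104 per arm

for true_d in (0.5, 0.3, 0.2, 0.1):
    print(f"true d = {true_d}: power ~ {power_two_sample(true_d, n):.2f}")
# Roughly 0.95, 0.58, 0.30, and 0.11: the design has 95% power only at the
# assumed effect size, so a nonsignificant result does not by itself rule
# out smaller, still-relevant effects.
```

The design choice here is deliberate: power is always computed against a specific assumed effect size, so “95% power” says nothing about the study’s sensitivity to effects smaller than the one assumed.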