When Can We Say That Something Doesn’t Work?

People don’t want to waste their time on things that don’t work. To avoid wasting time, many may want to assess the scientific evidence. They may first look at the basic science (if it can be studied at such a level) and ask, “Does this thing have a clear molecular/biological mechanism?” or “Does it have a theoretical foundation?” Next, they may look at the human evidence (if there is any) and ask whether it worked in a clinical trial or in epidemiological data. Read More

P-Values Are Tough And S-Values Can Help

The P-value doesn’t have many fans. There are those who don’t understand it, often treating it as a measure it’s not, whether that’s a posterior probability, the probability of getting the results by chance alone, or some other bizarre and incorrect interpretation. [1–3] Then there are those who dislike it for other reasons, such as believing the concept is too difficult to understand or seeing it as a noisy statistic that answers a question we’re not actually interested in. Read More
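The post’s title mentions S-values; for readers unfamiliar with them, the S-value (or surprisal) promoted by Greenland and colleagues is simply the negative base-2 logarithm of the P-value, re-expressing it as bits of information against the test hypothesis. A minimal sketch of that conversion (the function name and example values below are only illustrative):

```python
import math

def s_value(p: float) -> float:
    """Convert a P-value into an S-value (Shannon information, in bits).

    s = -log2(p): the number of consecutive heads from a fair coin that
    would be as surprising as the observed P-value, assuming the entire
    test model (including the test hypothesis) is correct.
    """
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# p = 0.05 corresponds to only ~4.3 bits of information against the
# test hypothesis -- about as surprising as 4 heads in a row.
print(round(s_value(0.05), 2))   # 4.32
print(round(s_value(0.005), 2))  # 7.64
```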

Analysis Issues In That New Low-Carb/LDL Study

Recently, a randomized trial investigating the impact of a low-carbohydrate diet on plasma low-density lipoprotein cholesterol (LDL-C) in young, healthy adults was published. The study was conducted in Norway between 2011 and the end of 2012. A total of 30 participants completed the study, and they were randomized to either a low-carbohydrate group (<20 grams of carbohydrate per day) or a control group. Basically, the investigators found a difference between the groups, Read More
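The excerpt ends before the analysis issues themselves are described, so purely as an illustration (not the trial’s actual analysis, and with hypothetical numbers), here is one standard way to estimate a between-group difference in LDL-C change along with its uncertainty, rather than relying on within-group tests alone:

```python
# Hypothetical change-from-baseline LDL-C values (mmol/L); NOT data from the
# Norwegian trial -- this only sketches a standard between-group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
low_carb = rng.normal(loc=0.6, scale=0.8, size=15)
control = rng.normal(loc=0.0, scale=0.8, size=15)

diff = low_carb.mean() - control.mean()
v1, v2 = low_carb.var(ddof=1), control.var(ddof=1)
n1, n2 = len(low_carb), len(control)
se = np.sqrt(v1 / n1 + v2 / n2)

# Welch-Satterthwaite degrees of freedom for unequal variances
df = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

t_stat = diff / se
p = 2 * stats.t.sf(abs(t_stat), df)
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

print(f"between-group difference = {diff:.2f} mmol/L, "
      f"95% CI {ci[0]:.2f} to {ci[1]:.2f}, p = {p:.3f}")
```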

Misplaced Confidence in Observed Power

Two months ago, a study was published in JAMA that compared the effectiveness of the antidepressant escitalopram to placebo for long-term major adverse cardiac events (MACE). The authors explained in the methods section of their paper how they calculated their sample size and what differences they were looking for between groups. First, they used some previously published data to get an idea of incidence rates: “Because previous studies in this field have shown conflicting results, there was no appropriate reference for power calculation within the designated sample size. Read More
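The title refers to observed (post hoc) power; a point commonly made in this context is that, for a two-sided z-test, observed power is a one-to-one function of the observed test statistic, and hence of the P-value, so it adds nothing beyond the P-value itself. A minimal sketch under that normal approximation (the function name is mine, not the paper’s):

```python
from scipy.stats import norm

def observed_power(p_value: float, alpha: float = 0.05) -> float:
    """'Observed' power of a two-sided z-test, obtained by plugging the
    observed z-statistic back in as if it were the true effect size."""
    z_obs = norm.isf(p_value / 2)   # |z| implied by the two-sided P-value
    z_crit = norm.isf(alpha / 2)    # critical value at level alpha
    return norm.cdf(z_obs - z_crit) + norm.cdf(-z_obs - z_crit)

# A result exactly at the significance threshold has observed power ~0.50,
# and any non-significant result necessarily has observed power below that.
print(round(observed_power(0.05), 3))  # ~0.500
print(round(observed_power(0.30), 3))  # ~0.18
```

This is why arguing that “the result was non-significant but observed power was low” is circular: low observed power is just a restatement of the non-significant result.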

Misuse of Standard Error in Clinical Trials

Reporting effect sizes with their accompanying standard errors is necessary because it lets the reader gauge both the magnitude of the treatment effect and the amount of uncertainty in that estimate. It is far better than providing no effect sizes at all and focusing only on statements of statistical significance. Although many authors provide standard errors with the intention of conveying the uncertainty in the model, there are several misconceptions about when the standard error should be reported, and it is often misused. Read More
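As a brief illustration of the theme (with hypothetical numbers): the standard deviation describes the spread of the individual observations, while the standard error describes the precision of an estimate such as the mean, and only the latter belongs in a confidence interval for that estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=120, scale=15, size=40)   # hypothetical measurements

sd = x.std(ddof=1)           # variability among individuals
se = sd / np.sqrt(len(x))    # uncertainty in the estimated mean
mean = x.mean()

# An approximate 95% confidence interval for the mean uses the SE, not the SD
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se

print(f"mean = {mean:.1f}, SD = {sd:.1f}  (spread of the data)")
print(f"SE = {se:.2f}, 95% CI {ci_low:.1f} to {ci_high:.1f}  (precision of the mean)")
```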