
Calculating Observed Power Is Just Transforming Noise


Around April of last year, a group of surgeons published a paper in the Annals of Surgery (apparently one of the most widely read journals in surgical science) in which they suggested that the CONSORT and STROBE guidelines be modified to recommend calculating post-hoc power, specifically observed power.

They write,


“But, as 80% power is difficult to achieve in surgical studies, we argue that the CONSORT and STROBE guidelines should be modified to include the disclosure of power—even if less than 80%—with the given sample size and effect size observed in that study.”


Some folks noticed, including Andrew Gelman, who decided to write a letter to the editor (LTE) explaining why this is a bad idea.

Gelman writes in the LTE,


“This would be a bad idea. The problem is that the (estimated) effect size observed in a study is noisy, especially so in the sorts of studies discussed by the authors. Using estimated effect size can give a terrible estimate of power, and in many cases can lead to drastic overestimates of power (thus, extreme overconfidence of the sort that is rightly deplored by Bababekov et al. in their article), with the problem becoming even worse for studies that happen to achieve statistical significance.

The problem is well known in the statistical and medical literatures; see, e.g., Lane and Dunlap (1978), Hedges (1984), Goodman and Berlin (1994), Senn (2002), and Lenth (2007). For some discussion of the systemic consequences of biased power calculations based on noisy estimates of effect size, see Button et al. (2013), and for an alternative approach to design and power analysis, see Gelman and Carlin (2014).”


“We Respectfully Disagree”


The authors, of course, replied, and most of their argument boils down to,

“We respectfully disagree that it is wrong to report post hoc power in the surgical literature. We fully understand that P value and post hoc power based on observed effect size are mathematically redundant; however, we would point out that being redundant is not the same as being incorrect… We also respectfully disagree that knowing the power after the fact is not useful in surgical science.”


Andrew Gelman noticed and happened to blog about it, and also got a chance to write a response to the response to the response to the original paper.

You would guess it's over now. Or so we thought!


Zombie Power


The same group of surgeons came back with a completely new paper in which they reviewed the surgical literature, removed all studies with significant results, and calculated the observed power of each nonsignificant study.

They summarize their findings,


“In this analysis of the current, high-impact surgical literature, we found the median power in studies concluding insignificant differences was 0.16, with only four studies reaching the standard of 0.8.”


They also draw some interesting parallels in their discussion section to show how change can be difficult but necessary,


“The continued inability of surgical studies to reach power of 0.8 should raise the following salient questions: Are we aiming for an arbitrary standard that can rarely be achieved? Are we discrediting the academic contributions of studies that do not reach the arbitrary 0.8 threshold? And, should we reconsider the power standard in surgical science so that it is more applicable to surgery?

It is not unreasonable to challenge standards that may not be applicable. Before 1971, the voting age was arbitrarily set at 21 years old in the United States. The 26th Amendment lowered the voting age to 18 years in 1971 to acknowledge the contributions of all military servicemen and women, many of whom were under age 21 years and could not vote under earlier laws. Similarly, the surgical community should reconsider the arbitrary power threshold to recognize the contribution of studies in the surgical literature that have been arbitrarily discounted as underpowered.”


The Frustration & Wrath of Statisticians


The statistical community noticed once again, and this led to an explosion on Twitter, DataMethods, and PubPeer.

I also happened to write something (found below, and also posted on DataMethods and PubPeer) and constructed some simulations to show why what the authors are promoting is misleading.



My Response


Despite statisticians warning against the use of observed power for decades (Hoenig & Heisey, 2001), the practice continues, usually among research groups who are unaware of its problems. Bababekov et al. (the authors of this paper) have not only defended their use of observed power in previous papers, even when criticized by statisticians who have examined the problem, but unfortunately also encourage others in surgical science to do the same.

The arguments put forth in this paper have no mathematical or empirical basis; that is, Bababekov et al. have not given any mathematical proofs or simulated data to make their case. Their argument merely consists of,


“Although it has been argued that P value and post hoc power based on observed effect size are mathematically redundant; we assert that being redundant is not the same as being incorrect.”

“We believe this redundancy is necessary in the unique case of surgical science, as post hoc power improves the communication of results by supplementing information provided by the P value.(12) Moreover, post hoc power provides a single metric to help readers better appreciate the limitation of a study.”


The problem is that this practice is not only redundant; it is incorrect and highly misleading. This can easily be shown with simulated data (see the images below). Here, I simulate \(45000\) studies from a normal distribution, where I have set a true between-group difference of d = \(0.5\).

Group A has a mean of \(25\) and standard deviation of \(10\), and group B has a mean of \(20\) and also a standard deviation of \(10\). Thus, \((25-20)/10\) gives us a standardized effect size of \(0.5\). This is the true effect size, something we would rarely know in the real world (the true effect size is unknown, we can only attempt to estimate it).

In this simulation, the studies fall into nine scenarios with different sample sizes, meaning they have varying true power (something we know only because this is a simulation). True power ranges from \(10\)% to \(90\)%, and each of the nine scenarios has samples simulated from a normal distribution.
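As a quick sketch, the pwr package used in the full script at the end of this post can solve for the per-group sample size needed to reach each of these true power levels for d = \(0.5\); the values it returns should land close to the nine sample sizes used in the simulations.


library("pwr")

# Per-group sample sizes needed to reach each true power level for d = 0.5
# with a two-sided, two-sample t-test at alpha = 0.05
sapply(seq(0.1, 0.9, by = 0.1), function(pow) {
  ceiling(pwr.t.test(d = 0.5, power = pow, sig.level = 0.05,
                     type = "two.sample",
                     alternative = "two.sided")$n)
})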



The graph above has some random variation (jitter) added between points so they can be made out, but in reality, this (below) is what the graph looks like without any random variation at all. (These are not lines, but actual plotted points!)



In animated form:


Notice that even though true power is known for each of these studies, observed power varies widely within each scenario, especially for studies with low(er) power. For example, when true power is set at \(10\)%, many studies yield results showing an “observed power” of \(50\)%. Therefore, if we were to follow the recommendations put forth by Bababekov et al., in several scenarios we would mistakenly put more confidence in results that truly have low power and yield noisy estimates. In other words, observed power is not reliable for “improving the communication of results”.

This is because Bababekov et al. do not seem to understand long-run frequencies. Statistical power is a frequentist concept: if someone designs a study so that the test correctly detects a violation of the test hypothesis (such as the null hypothesis) \(80\)% of the time, this frequency will hold in the long run, provided the test hypothesis really is violated by the assumed amount and the other background assumptions hold.

This does not mean that the study itself has an \(80\)% probability of correctly detecting a violation. For example, when we say that a fair coin has a \(50\)% probability of getting either heads or tails, it does not mean we expect the coin to always give us results that fairly balance the distribution of heads and tails. We will not always expect alternating results of heads and tails (H T H T H T). In fact, it is very plausible that if we flip the coin 10 times, we may get all heads or all tails. However, in the long term, we expect the proportion of heads and tails to be fairly equal.
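A minimal sketch of this short-run versus long-run behavior in R:


# Short run: 10 flips of a fair coin can easily be far from 50% heads
# Long run: a million flips settle very close to 50%
set.seed(1)
mean(rbinom(10, 1, 0.5))
mean(rbinom(1e6, 1, 0.5))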

Furthermore, because the P-value is a random variable and measures random error in the estimate (Greenland, 2019), taking it (observed P) from a single study and transforming it into a distorted version of power (observed power) is beyond fallacious. The authors not only confuse observed power and true power, but they also confuse the random variable P with its realization, observed P. Observed P-values are useful because they indicate statistical compatibility between the observed data and the test hypothesis.
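To see how mechanical this transformation is, here is a minimal sketch for the simplest case of a two-sided z-test (an assumption made only to keep the formula short; the logic carries over to the t-tests used in practice). The helper obs_power_from_p below is hypothetical and exists only for illustration: it recovers the test statistic implied by the observed P-value and plugs it back in as if it were the true effect.


# Hypothetical helper: the "observed power" implied by a two-sided z-test P-value
obs_power_from_p <- function(p, alpha = 0.05) {
  z_obs  <- qnorm(1 - p / 2)      # |z| statistic recovered from the observed P-value
  z_crit <- qnorm(1 - alpha / 2)  # critical value for the chosen alpha
  pnorm(z_obs - z_crit) + pnorm(-z_obs - z_crit)
}

obs_power_from_p(c(0.05, 0.20, 0.50, 0.90))
# approximately 0.50, 0.25, 0.10, 0.05 -- a fixed, decreasing function of the P-value alone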

If P-values are being used for decision rules with a set alpha, and the researchers suspect that a type-II error was very likely, they can use the computed confidence / compatibility / consonance intervals to see which effect sizes lie within the interval and how wide the interval estimates are (which reflects random error/precision).
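As a hypothetical illustration (the numbers mirror the simulation above), a small two-group study will usually produce a wide interval, and it is that width, not a recycled “observed power”, that conveys how imprecise the estimate is:


set.seed(123)
x <- rnorm(10, 25, 10)  # small group A
y <- rnorm(10, 20, 10)  # small group B

res <- t.test(x, y)
res$p.value         # the observed P-value
res$conf.int        # effect sizes most compatible with the data
diff(res$conf.int)  # interval width, reflecting random error / precision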

However, Bababekov et al. do not see this as a solution, as they argue here,


“An alternative solution to mitigate risk of misinterpreting P > 0.05 is for surgical investigators to report the confidence intervals in their results along with the P value.”

“However, the general audience would likely pay more attention to the overlap of confidence intervals in a comparative study with two or more comparison groups with multiple confidence intervals, and still misinterpret the findings as demonstrating equivalency.”


The problem here is that investigators confusing overlapping intervals with equivalence is a cognitive problem (Greenland, 2017), a failure to understand what frequentist intervals are, rather than a defect in the method itself.

A computed frequentist interval, such as a \(95\)% confidence / consonance interval, contains all effect sizes that yield P-values greater than \(0.05\) (Cox, 2006). The computed point estimate, often the maximum likelihood estimate (MLE), has the largest P-value, and effect sizes close to it (with slightly smaller P-values) indicate greater statistical compatibility with the data than effect sizes farther from the MLE (with much smaller P-values).
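This duality between intervals and P-values is easy to verify. In the sketch below (assuming R's default Welch t-test, whose interval is formed by inverting that same test), testing the hypothesis that the mean difference equals either interval endpoint returns a P-value of about \(0.05\), while testing the point estimate itself returns a P-value of \(1\).


set.seed(42)
x <- rnorm(30, 25, 10)
y <- rnorm(30, 20, 10)

ci <- t.test(x, y)$conf.int
t.test(x, y, mu = ci[1])$p.value              # ~0.05 at the lower endpoint
t.test(x, y, mu = ci[2])$p.value              # ~0.05 at the upper endpoint
t.test(x, y, mu = mean(x) - mean(y))$p.value  # 1 at the point estimate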

Thus, it is largely irrelevant whether two or more confidence/consonance intervals overlap. The useful questions are:


  • how wide are these intervals?
  • how much do they overlap?
  • what effect sizes do they span?

Better than computing a single interval estimate for a particular alpha level is to compute the entire function, where every interval estimate is plotted with its corresponding alpha level (the \(10\)%, \(15\)%, \(25\)%, \(50\)%, \(90\)%, \(95\)%, and \(99\)% confidence / consonance intervals, and so on), since the \(95\)% interval is largely arbitrary and a relic of NHST with a \(5\)% alpha.
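A minimal sketch of such a function, built by simply asking for the Welch interval at many confidence levels and stacking the results in one plot (nothing here is specific to any package beyond ggplot2):


library("ggplot2")

set.seed(42)
x <- rnorm(30, 25, 10)
y <- rnorm(30, 20, 10)

# Compute the interval for the mean difference at many confidence levels
levels <- seq(0.01, 0.99, by = 0.01)
cis <- t(sapply(levels, function(cl) t.test(x, y, conf.level = cl)$conf.int))
confun <- data.frame(level = levels, lower = cis[, 1], upper = cis[, 2])

# Plot every interval estimate, not just the 95% one
ggplot(confun, aes(x = lower, xend = upper, y = level, yend = level)) +
  geom_segment(alpha = 0.4) +
  labs(x = "Mean Difference", y = "Confidence Level",
       title = "Consonance Function: Every Interval Estimate at Once")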

Bababekov et al. instead see observed power as a solution to the “problems” they highlight with interval estimates,


“Power provides a single metric to help readers better appreciate the limitation of a study. As such, we believe reporting power is the most reasonable solution to avoid misinterpreting P > 0.05.”


This argument further highlights the problems with trying to come to conclusions from single studies. The authors wish to answer


“Was my study large enough and reliable, and if not, what single metric can I use to better understand how reliable the decision (reject/fail to reject) is?”


For this, they have found a method, one that they reify, that seems to address their uncertainty about the data-generating process, but unfortunately, it is unreliable and cannot do what they wish it could.

As others have pointed out above, the arguments put forth in this paper are fallacious, and thus the manuscript itself is defective and has the potential to seriously mislead many surgical investigators who are not well versed in statistical methods. Letters to the editor (of which there are many) and corrections are simply not enough. A retraction is necessary when a manuscript is this defective.


Script to Reproduce Simulations


Also, here is the R code to reproduce the simulations:


First, we load the necessary R packages to do so.


library("pwr")
library("ggplot2")

#' @title Simulate Retrospective Power
#' @param N Sample size per group
#' @param MeanA Mean of group A
#' @param MeanB Mean of group B
#' @param SD Standard deviation shared by both groups
#' @param pwr Label giving the true power of the scenario
#' @return A data.frame containing the observed P-value, observed power, and true power label
#' @export
#' @examples

falsepower <- function(N, MeanA, MeanB, SD, pwr) {

  # Generate random values from a normal distribution for each group
  x <- rnorm(N, MeanA, SD)
  y <- rnorm(N, MeanB, SD)

  # Run a t-test between the two groups and extract the observed P-value
  pvalues <- t.test(x, y)$p.value

  # Compute the pooled standard deviation of the two groups
  pooledsd <- sqrt(((N - 1) * sd(x)^2 + (N - 1) * sd(y)^2) /
                     ((N * 2) - 2))

  # Calculate the standardized, observed effect size
  obses <- abs(mean(x) - mean(y)) / pooledsd

  # Calculate the observed power using the observed effect size
  obspower <- pwr.t.test(n = N, d = obses, sig.level = 0.05,
                         type = "two.sample",
                         alternative = "two.sided")$power

  df <- data.frame(pvalues, obspower)
  df$pvalues <- round(df$pvalues, digits = 2)
  df$obspower <- round((df$obspower * 100), digits = 2)
  df$truepower <- pwr

  return(df)
}

# Here, we produce 9 different scenarios where we know the
#  true power and calculate the observed power in each

# Edit the values here if you wish to set
#  different means, standard deviations, or sample sizes

df10 <- do.call("rbind", replicate(5000, (falsepower(N = 5,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "10% True Power (N = 10)")),
               simplify = FALSE))
df20 <- do.call("rbind", replicate(5000, (falsepower(N = 11,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr ="20% True Power (N = 22)")),
               simplify = FALSE))
df30 <- do.call("rbind", replicate(5000, (falsepower(N = 18,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "30% True Power (N = 36)")),
               simplify = FALSE))
df40 <- do.call("rbind", replicate(5000, (falsepower(N = 25,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "40% True Power (N = 50)")),
               simplify = FALSE))
df50 <- do.call("rbind", replicate(5000, (falsepower(N = 32,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "50% True Power (N = 64)")),
               simplify = FALSE))
df60 <- do.call("rbind", replicate(5000, (falsepower(N = 41,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "60% True Power (N = 82)")),
               simplify = FALSE))
df70 <- do.call("rbind", replicate(5000, (falsepower(N = 51,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "70% True Power (N = 102)")),
               simplify = FALSE))
df80 <- do.call("rbind", replicate(5000, (falsepower(N = 64,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "80% True Power (N = 128)")),
               simplify = FALSE))
df90 <- do.call("rbind", replicate(5000, (falsepower(N = 86,
               MeanA = 25, MeanB = 20, SD = 10,
               pwr = "90% True Power (N = 172)")),
               simplify = FALSE))

# Combine the dataframes

df <- rbind(df10, df20, df30, df40, df50, df60, df70, df80, df90)

# Graph the data with facets

ggplot(data = df, mapping = aes(x = pvalues, y = obspower)) +
  geom_jitter(aes(fill = truepower), alpha = 0.2,
              shape = 21, stroke = 0.7, size = 2) +
  annotate("rect", xmin = 0.05, xmax = 1,
           ymin = 0, ymax = 50, color = "#990000",
           alpha = .005, size = 0.3) +
  annotate("pointrange", x = 0.05, y = 50, ymin = 0, ymax = 100,
           colour = "#990000", alpha = 0.8, size = .5) +
  annotate("segment", x = 0, xend = 0.05, y = 50, yend = 50,
           colour = "#990000") +
  annotate(geom = "text", x = 0.5, y = 58,
           label = "Nonsignificant Results Infrequently \nExceed 50% Observed Power",
           size = 3.5, color = "#000000") +
  annotate(geom = "text", x = 0.5, y = 26,
           label = "Nonsignificant Results", size = 5, color = "#000000") +
  annotate(geom = "text", x = 0.5, y = 75,
           label = "Nonsignificant Results", size = 5, color = "#000000") +
  facet_wrap( ~ truepower, ncol = 3) +
  labs(title = "Relationship Between P-values, Observed Power, & True Power",
       subtitle = "Data from 45,000 Simulated Studies with an Effect Size of
       d = 0.5 (No Random Variation Between Points)",
       x = "Observed P-Value",
       y = "Observed Power (%)",
       fill = "True Power") +
  theme_light() +
  theme(legend.position = "none",
        axis.title.x = element_text(size = 14, vjust = -2),
        axis.text.x = element_text(color = '#000000'),
        axis.title.y = element_text(size = 14),
        strip.background = element_rect(fill = "#dddddd" ),
        strip.text = element_text(color = 'black', size = 12)) +
  scale_x_continuous(breaks = seq(0, 1, .10), expand = c(0, 0)) +
  scale_y_continuous(breaks = seq(0, 100, 25), expand = c(0, 0)) +
  theme(text = element_text(color = "#000000")) +
  theme(plot.title = element_text(size = 16),
        plot.subtitle = element_text(size = 14))

Environment


The analyses were run in the following environment:


#> R version 4.3.2 (2023-10-31)
#> Platform: aarch64-apple-darwin20 (64-bit)
#> Running under: macOS Sonoma 14.3
#> 
#> Matrix products: default
#> BLAS:   /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib 
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.11.0
#> 
#> Random number generation:
#>  RNG:     Mersenne-Twister 
#>  Normal:  Inversion 
#>  Sample:  Rejection 
#>  
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#> 
#> time zone: America/New_York
#> tzcode source: internal
#> 
#> attached base packages:
#>  [1] splines   grid      stats4    parallel  stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#>   [1] pwr_1.3-0               plotly_4.10.3           TSstudio_0.1.7          tfautograph_0.3.2       tfdatasets_2.9.0       
#>   [6] keras_2.13.0            tensorflow_2.14.0       timetk_2.9.0            modeltime_1.2.8         workflowsets_1.0.1     
#>  [11] workflows_1.1.3         tune_1.1.2              rsample_1.2.0           recipes_1.0.9           parsnip_1.1.1          
#>  [16] modeldata_1.2.0         infer_1.0.5             dials_1.2.0             scales_1.3.0            tidymodels_1.1.1       
#>  [21] xgboost_1.7.6.1         prophet_1.0             rlang_1.1.2             astsa_2.0               bayesforecast_1.0.1    
#>  [26] smooth_4.0.0            greybox_2.0.0           glmnet_4.1-8            forecast_8.21.1         fable_0.3.3            
#>  [31] fabletools_0.3.4        tsibbledata_0.4.1       tsibble_1.1.3           rstanarm_2.26.1         texPreview_2.0.0       
#>  [36] tinytex_0.49            rmarkdown_2.25          brms_2.20.4             bootImpute_1.2.1        miceMNAR_1.0.2         
#>  [41] knitr_1.45              boot_1.3-28.1           JuliaCall_0.17.5        reshape2_1.4.4          ProfileLikelihood_1.3  
#>  [46] ImputeRobust_1.3-1      gamlss_5.4-20           gamlss.dist_6.1-1       gamlss.data_6.0-2       mvtnorm_1.2-4          
#>  [51] performance_0.10.8      summarytools_1.0.1      tidybayes_3.0.6         htmltools_0.5.7         Statamarkdown_0.9.2    
#>  [56] car_3.1-2               carData_3.0-5           qqplotr_0.0.6           ggcorrplot_0.1.4.1      mitml_0.4-5            
#>  [61] pbmcapply_1.5.1         Amelia_1.8.1            Rcpp_1.0.11             blogdown_1.18.1         doParallel_1.0.17      
#>  [66] iterators_1.0.14        foreach_1.5.2           lattice_0.22-5          bayesplot_1.10.0        wesanderson_0.3.7      
#>  [71] VIM_6.2.2               colorspace_2.1-0        here_1.0.1              progress_1.2.3          loo_2.6.0              
#>  [76] mi_1.1                  Matrix_1.6-4            broom_1.0.5             yardstick_1.2.0         svglite_2.1.3          
#>  [81] Cairo_1.6-2             cowplot_1.1.2           mgcv_1.9-1              nlme_3.1-164            xfun_0.41              
#>  [86] broom.mixed_0.2.9.4     reticulate_1.34.0       kableExtra_1.3.4        posterior_1.5.0         checkmate_2.3.1        
#>  [91] parallelly_1.36.0       miceFast_0.8.2          ggmice_0.1.0            randomForest_4.7-1.1    missForest_1.5         
#>  [96] miceadds_3.16-18        mice.mcerror_0.0.0-9000 quantreg_5.97           SparseM_1.81            MCMCpack_1.6-3         
#>  [ reached getOption("max.print") -- omitted 27 entries ]
#> 
#> loaded via a namespace (and not attached):
#>   [1] svUnit_1.0.6           xts_0.13.1             DBI_1.2.0              bslib_0.6.1            httr_1.4.7            
#>   [6] gfonts_0.2.0           pan_1.9                prettyunits_1.2.0      gsl_2.1-8              tfruns_1.5.1          
#>  [11] TTR_0.24.4             survMisc_0.5.6         pillar_1.9.0           extremevalues_2.3.3    R6_2.5.1              
#>  [16] mime_0.12              gridtext_0.1.5         benchmarkmeData_1.0.4  pspline_1.0-19         polspline_1.1.24      
#>  [21] ggpubr_0.6.0           rprojroot_2.0.4        KMsurv_0.1-5           ADGofTest_0.3          caTools_1.18.2        
#>  [26] fansi_1.0.6            e1071_1.7-14           remotes_2.4.2.1        anytime_0.3.9          survminer_0.4.9       
#>  [31] rpart_4.1.23           metadat_1.2-0          inline_0.3.19          tidyselect_1.2.0       jomo_2.7-6            
#>  [36] utf8_1.2.4             psych_2.3.12           sessioninfo_1.2.2      gridExtra_2.3          fs_1.6.3              
#>  [41] rvest_1.0.3            ipred_0.9-14           rapportools_1.1        mathjaxr_1.6-0         uuid_1.1-1            
#>  [46] doRNG_1.8.6            flextable_0.9.4        stabledist_0.7-1       furrr_0.3.1            lazyeval_0.2.2        
#>  [51] sass_0.4.8             munsell_0.5.0          sampleSelection_1.2-12 lava_1.7.3             profvis_0.3.8         
#>  [56] pracma_2.4.4           V8_4.4.1               bitops_1.0-7           svgPanZoom_0.3.4       promises_1.2.1        
#>  [61] copula_1.1-3           shape_1.4.6            zoo_1.8-12             rcartocolor_2.1.1      distr_2.9.2           
#>  [66] GJRM_0.2-6.4           multcomp_1.4-25        assertthat_0.2.1       tools_4.3.2            processx_3.8.3        
#>  [71] insight_0.19.7         shiny_1.8.0            lhs_1.1.6              generics_0.1.3         markdown_1.12         
#>  [76] shinythemes_1.2.0      evaluate_0.23          devtools_2.4.5         fracdiff_1.5-2         arrayhelpers_1.1-0    
#>  [81] ellipsis_0.3.2         data.table_1.14.10     withr_2.5.2            twosamples_2.0.1       xtable_1.8-4          
#>  [86] plyr_1.8.9             itertools_0.1-3        lme4_1.1-35.1          MatrixModels_0.5-3     systemfonts_1.0.5     
#>  [91] arm_1.13-1             details_0.3.0          httpuv_1.6.13          robustbase_0.99-1      officer_0.6.3         
#>  [96] GPfit_1.0-8            sandwich_3.1-0         vctrs_0.6.5            lifecycle_1.0.4        memuse_4.2-3          
#>  [ reached getOption("max.print") -- omitted 143 entries ]

