
P-Values Are Tough and S-Values Can Help


The \(P\)-value doesn’t have many fans. There are those who don’t understand it, often treating it as a measure it’s not, whether that’s a posterior probability, the probability of getting results due to chance alone, or some other bizarre/incorrect interpretation.1–3 Then there are those who dislike it because they think the concept is too difficult to understand or because they see it as a noisy statistic we’re not interested in.

However, the groups of people mentioned above aren’t mutually exclusive. Many who dislike and criticize the \(P\)-value also do not understand its properties and behavior. This is unfortunate, given how important and widely used \(P\)-values are. In this article, which could also have been titled \(P\)-values: More Than You Ever Wanted to Know, I take on the task of explaining:


  • what \(P\)-values are

  • the assumptions behind them

  • their properties and behavior

  • different schools of interpretation

  • misleading criticisms of \(P\)-values

  • some valid issues in interpretation

  • how these issues can be resolved

What is a P-value Anyway?


Some Definitions & Descriptions


The \(P\)-value is the probability of getting a result (specifically, a test statistic) at least as extreme as what was observed if every model assumption, in addition to the targeted test hypothesis (usually a null hypothesis), used to compute it were correct.3–5

A simple, mathematically rigorous definition of a \(P\)-value (for those interested) is given by Stark (2015).


Let \(P\) be the probability distribution of the data \(X\), which takes values in the measurable space \(\mathcal{X}\). Let \(\left\{R_{\alpha}\right\}_{\alpha \in[0,1]}\) be a collection of \(P\)-measurable subsets of \(\mathcal{X}\) such that (1) \(P\left(R_{\alpha}\right)=\alpha\) and (2) if \(\alpha^{\prime}<\alpha\) then \(R_{\alpha^{\prime}} \subset R_{\alpha}\). Then the \(P\)-value of \(H_{0}\) for data \(X=x\) is \(\inf_{\alpha \in[0,1]}\left\{\alpha: x \in R_{\alpha}\right\}\).


A descriptive but technical definition is given by Sander Greenland below. The description can seem dense, so feel free to skip over it for now and revisit it after reading the rest of the post.


A single \(P\)-value \(p\) is the quantile location of a directional measure of divergence \(t = t(y;M)\) of the data point \(y\) (usually, the vector in \(n\)-space formed by \(n\) individual observations) from a test model manifold \(M\) in the \(n\)-dimensional expectation space defined by the logical structure of the data generator (“experiment” or causal structure) that produced the data \(y\). \(M\) is the subset of the \(Y\)-space into which the conjunction of the model constraints (assumptions) forces the data expectation, or where it predicts \(y\) would be were there no ‘random’ variability. I also use \(M\) to denote the set of all the model constraints, as well as their conjunction.

With this logical set-up, the observed \(P\)-value is the quantile \(p\) for the observed value \(t\) of \(T\) = \(t(Y;M)\). This \(p\) is read off a reference distribution \(F = F(t;M)\) for \(T\) derived from \(M\). This formulation is essentially that of the “value of P” appearing in Pearson’s seminal 1900 paper on goodness-of-fit tests. Notably, his famed chi-squared statistic is the squared Euclidean distance from \(y\) to \(M\), with coordinates expressed in standard-deviation units derived from \(M\).

More broadly, the statistic \(T\) can be taken as a measure of divergence of a more general embedding or background model manifold \(A\) (which includes all ‘auxiliary’ assumptions) from a more restrictive model \(M\), with the goodness-of-fit case taking \(A\) as a saturated model covering the entire observation space, and the more common “hypothesis testing” case taking M as the conjunction of an unsaturated \(A\) with a targeted ‘test’ constraint (or set of constraints) \(H\). This \(H\) is logically independent of \(A\) and consistent with \(A\), with \(M\) = \(H\) & \(A\) in logical terms, or \(M\) = \(H\) + \(A\) in set-theoretic terms with + being union (in particular, we assume no element in \(H\) is entailed or contradicted by \(A\) and no element in \(A\) is entailed or contradicted by \(H\)).
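
To make the idea of reading \(p\) off a reference distribution concrete, here is a minimal R sketch (my own illustration with made-up counts, not an example from the text above): a Pearson goodness-of-fit test, in which the divergence statistic is located on its chi-squared reference distribution.

observed <- c(18, 55, 27)                        # hypothetical observed counts
expected <- c(0.25, 0.50, 0.25) * sum(observed)  # counts the test model predicts

t_obs <- sum((observed - expected)^2 / expected) # Pearson chi-squared divergence
p_obs <- pchisq(t_obs, df = length(observed) - 1, lower.tail = FALSE)

c(statistic = t_obs, p.value = p_obs)
# chisq.test(observed, p = c(0.25, 0.50, 0.25)) reproduces the same result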


Misleading Definitions


It is very common to see the \(P\)-value defined as


The probability of obtaining test results at least as extreme as the results actually observed, under the assumption that the null hypothesis is correct.


Indeed, this is the definition currently given on the Wikipedia page for the topic; however, it is inadequate and misleading because it hides and reifies the other assumptions used to compute the \(P\)-value and focuses exclusively on the null hypothesis.

The test hypothesis (often the null hypothesis) is only one component of the entire model that is being tested. This is reflected in the first definition I gave above, which explicitly emphasizes that every model assumption must be true. Thus, the \(P\)-value is sensitive to all these assumptions and their violation(s).


Auxiliary Assumptions


Some of the key assumptions behind the computation of a \(P\)-value are that some sort of random process was employed (random sampling, random assignment, etc.), that there are no uncontrolled sources of bias (confounding, programming errors, equipment defects, sparse-data bias)6 in the results, and that the test hypothesis (often the null hypothesis) is correct. Some of these assumptions can be seen in the figure below from Greenland and Rafi,7 which will be discussed later on. This entire set of assumptions is generally referred to as the test model, because the entire assumed model is being tested.


P-value assumptions
Conditional versus unconditional interpretations of P-values, S-values, and compatibility intervals (CIs). (A) Conditional interpretation, in which background model assumptions, such as no systematic error, are assumed to be correct; thus, the information provided by the P-value and S-value is targeted toward the test hypothesis. (B) Unconditional interpretation, in which no aspect of the statistical model is assumed to be correct; thus, the information provided by the P-value and S-value is targeted toward the entire test model.

We often start from the position that all those assumptions are correct (hence, we “condition” on them, even though they are often not correct)7 when calculating the \(P\)-value, so that any deviation of the data from what was expected under those assumptions would be purely random error. But in reality such deviations could also be the result of any assumptions being false, including but not limited to the test hypothesis.


Note: “Conditioning” here refers to taking the assumptions in the model as given, and should not be confused with conditional probability.


For example, in one high-energy physics study, neutrinos appeared to travel faster than light based on a large test statistic and a correspondingly small \(P\)-value, but the result was later traced to a defect in the experiment’s fiber-optic timing system.8 Thus, the low \(P\)-value arose not because the assumed null hypothesis was false, but because of a bias in the procedure.

So the \(P\)-value cannot be the probability of one of these assumptions, such as “the probability of getting results due to chance alone.” Such a statement is backwards because it treats the \(P\)-value as quantifying one of the very assumptions behind its own computation.

When calculating the \(P\)-value, the assumption that chance alone produced the results is taken as given (i.e., treated as 100% true), along with several other assumptions. That does not mean the assumption is actually correct, and the \(P\)-value cannot be the probability of any of those assumptions.


Probability of What?


It is also important to clarify that \(P\)-values are not probabilities of data or parameter values, which many like to say to differentiate them from probabilities of hypotheses. Rather, \(P\)-values are probabilities of “data features”, such as test statistics (e.g., a z-score or \(\chi^{2}\) statistic), or can be interpreted as the percentile at which the observed test statistic falls within the expected distribution for the test statistic, assuming all the model assumptions are true.9, 10
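
As a minimal illustration of this percentile reading (my own sketch, with an arbitrary z-score), the two-sided \(P\)-value is simply the tail area of the reference distribution beyond the observed statistic, i.e., the complement of the percentile at which the statistic falls:

z_obs <- 1.96                          # hypothetical observed z-score
p_two_sided <- 2 * pnorm(-abs(z_obs))  # tail area beyond |z| under the standard normal
percentile  <- pnorm(abs(z_obs))       # where |z| falls in the reference distribution
c(p = p_two_sided, percentile = percentile)
#> p is about 0.05; the percentile is about 0.975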


Properties (Uniformity)


A \(P\)-value is considered valid if, over repeated trials, it would be uniformly distributed when the tested hypothesis and all other assumptions used to compute it are correct (see the histogram below for what this looks like). Typically, this test hypothesis is a null hypothesis in which the tested parameter value is 0 or 1, but this property applies to any test hypothesis for any parameter value. Thus, there is the random variable \(P\), which (when valid) follows this uniform distribution, and the realization of this random variable, \(p\), which is the observed \(P\)-value. The latter is what most researchers are interpreting from studies.

Thus, if we were to simulate two variables drawn from the same distribution (so there is no true difference between them), compare them with, say, a t-test, repeat this process 10,000 times, and plot the distribution of the observed P-values, the distribution would be uniform, indicating that any P-value in the interval from 0 to 1 is as likely to be observed as any other.


#' @title Simulation of valid P-values where the test hypothesis is true
#' @param X The first variable we are simulating
#' @param Y The second variable we are simulating
#' @param n.sim The number of simulations
#' @param t The object storing the t-test results
#' @param t.sim Empty numeric vector to store the observed P-values
#' @param n.samp Sample size in each group
#' @note The null hypothesis does not have to be 0; it can be any value.

n.sim <- 10000
t.sim <- numeric(n.sim)
n.samp <- 1000

for (i in 1:n.sim) {
  # Both variables are drawn from the same distribution, so the test
  # hypothesis of no mean difference is true by construction
  X <- rnorm(n.samp, mean = 0, sd = 1)
  Y <- rnorm(n.samp, mean = 0, sd = 1)
  t <- t.test(X, Y, mu = 0, paired = FALSE, var.equal = TRUE)
  t.sim[i] <- t$p.value
}
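
As mentioned above, we can then plot the resulting distribution; a short follow-up (my own addition, not part of the original block) to visualize the uniformity:

# The observed P-values should be roughly uniform on [0, 1] when the test
# hypothesis and all other assumptions used to compute them are correct
hist(t.sim, breaks = 20,
     main = "P-values under a true test hypothesis",
     xlab = "Observed P-value")
mean(t.sim <= 0.05)  # should be close to 0.05, the nominal alpha level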


Many frequentist statisticians do not consider \(P\)-values valid or useful if they fail to meet this criterion of uniformity; hence, they do not regard variants such as the posterior predictive \(P\)-value (which concentrates around values such as 0.5 rather than being uniform) as valid.

Indeed, there have been great efforts to calibrate the \(P\)-value. These range from mathematical solutions, such as taking \((1 + [-e \, p \log(p)]^{-1})^{-1}\), which gives a lower bound on the conditional type I error,11, 12 or taking \(C_{1}(K):=\sqrt{K}-1\) of the \(P\)-value (the square-root calibrator), yielding a test martingale,13 to empirical attempts to recalibrate the \(P\)-value by collecting observed \(P\)-values from observational studies with negative controls (“test hypotheses where the exposure is not believed to cause the outcome”) and using them to estimate the empirical null distribution.14

The latter is done because observational studies are prone to several more biases than controlled, randomized experiments; the observed \(P\)-values and estimated effect sizes from the negative controls are used to estimate the systematic error in the sampling distribution, which is then used to recalibrate the \(P\)-value. Whether or not this approach is effective, however, is a different matter.15 In short, calibration is an often sought-after property of \(P\)-values.
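
For concreteness, here is a small R sketch (my own) of the first calibration mentioned above, \((1 + [-e \, p \log(p)]^{-1})^{-1}\), which applies for \(p < 1/e\):

calibrate_p <- function(p) {
  stopifnot(all(p > 0 & p < exp(-1)))   # the bound applies for p < 1/e
  1 / (1 + 1 / (-exp(1) * p * log(p)))  # lower bound on the conditional type I error
}
calibrate_p(c(0.05, 0.01, 0.005))
#> roughly 0.289, 0.111, 0.067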



The Different Interpretations


The Decision-Theoretic Approach


Many researchers interpret the \(P\)-value in a behavioral, decision-guiding way, declaring a result statistically significant or not (defined below) depending on whether the observed \(p\) from a study (the realization of the random variable \(P\)) falls below a fixed cutoff level (\(\alpha\), the maximum tolerable type I error rate).16


Statistical Significance


Thus, in this approach, users do not care how small or large the observed \(P\)-value \(p\) is, but simply whether or not it fell below the pre-specified \(\alpha\) level (often 0.05). If it falls below \(\alpha\), they behave in line with rejection of the test hypothesis; if it does not, they behave as if they accept the test hypothesis. The phrase statistical significance simply indicates that the observed \(P\)-value \(p\) fell below this pre-specified \(\alpha\) level, and nothing else. It does not indicate any meaningful significance on its own.

The pioneers of this approach, Jerzy Neyman and Egon Pearson, described this behavioral guidance in their 1933 paper, “On the Problem of the Most Efficient Tests of Statistical Hypotheses”:16


Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behavior with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.


This decision-making framework may be useful in certain scenarios,17 where some sort of randomization is possible, where experiments can be repeated, and where there is substantial control over the experimental conditions, with one of the most notable historical examples being Egon Pearson (son of Karl Pearson and Jerzy Neyman’s coauthor) using it to improve quality control in industrial settings.

Contrary to some claims,18 this approach does NOT require exact replications of the experiments, instead, it requires that a valid \(\alpha\) level is used consistently.16, 19 In this approach, the exact, observed \(P\)-value from a study is not as relevant and cannot validly be interpreted without an entire set of studies that are compared to the fixed error rate (\(\alpha\)).


Picture of the giants who founded frequentist statistics such as Egon Pearson, Ronald Fisher, and Jerzy Neyman
From left to right: Ronald A. Fisher, Jerzy Neyman, and Egon Pearson.

The Inductive Approach


Others interpret the \(P\)-value \(p\) in an inductive, inferential/evidential (Fisherian) way,20, 21 as a continuous measure of evidence against the test hypothesis and the entire model (all assumptions) used to compute it (let’s go with this for now, even though there are some problems with this interpretation; more on that below).

This interpretation as a continuous measure of evidence against the test hypothesis and the entire model used to compute it can be seen in the figure below from Greenland and Rafi.7 In one framework (left panel), we may assume certain assumptions to be true (“conditioning” on them, e.g., that random assignment was used), and in the other (right panel), we question all assumptions, hence the “unconditional” interpretation. Unlike the Neyman-Pearson approach, this inferential approach allows interpretation of \(P\)-values from single studies, and lower values are taken as stronger evidence against the tested hypothesis.


Null-Hypothesis Significance Testing


However, it is also worth pointing out that most individuals do not interpret \(P\)-values from a strictly Neyman-Pearson or Fisherian standpoint; rather, they fuse both approaches together into what we commonly know today as “null-hypothesis significance testing.” This approach is regarded by most as an incompatible hybrid, given that it often confuses error rates (\(\alpha\), \(\beta\)), which are fixed before a study, with the \(P\)-value, which is not a fixed error rate, and this fusion has often been blamed by statisticians for the replication crisis in science. Some, however, believe these approaches can be reconciled and are useful.22


P-value assumptions
Conditional versus unconditional interpretations of P-values, S-values, and compatibility intervals (CIs). (A) Conditional interpretation, in which background model assumptions, such as no systematic error, are assumed to be correct; thus, the information provided by the P-value and S-value is targeted toward the test hypothesis. (B) Unconditional interpretation, in which no aspect of the statistical model is assumed to be correct; thus, the information provided by the P-value and S-value is targeted toward the entire test model.

Back to the Fisherian approach: the interpretation of the \(P\)-value as a continuous measure of evidence against the test model that produced it shouldn’t be confused with other statistics that serve as support measures. Likelihood ratios and Bayes factors are absolute measures of evidence for a model compared to another model, whereas the \(P\)-value is a relative measure of “evidence” (more on that below) that can be tricky to interpret.23–25 Indeed, this is why the \(P\)-value is converted by some Bayesians to a lower bound of the Bayes factor by taking \(-e \, p \log(p)\).11, 12
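
A minimal sketch (my own) of that \(-e \, p \log(p)\) transformation, which bounds the Bayes factor for the test hypothesis when \(p < 1/e\):

bf_bound <- function(p) -exp(1) * p * log(p)  # valid as a bound for p < 1/e
bf_bound(c(0.05, 0.01, 0.005))
#> roughly 0.41, 0.13, 0.07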


Measure of Compatibility


The \(P\)-value is not an absolute measure of evidence for a model (such as the null/alternative model), it is a continuous measure of the compatibility of the observed data with the model used to compute it.3

If it’s high, it means the observed data are very compatible with the model used to compute it. If it’s very low, then it indicates that the data are not as compatible with the model used to calculate it, and this low compatibility may be due to random variation and/or it may be due to a violation of assumptions (such as the null model not being true, not using randomization, a programming error or equipment defect such as that seen with neutrinos, etc.).

Low compatibility of the data with the model can be taken as evidence against the test hypothesis, if we accept the rest of the model used to compute the \(P\)-value. Thus, from a Fisherian perspective, lower \(P\)-values are seen as stronger evidence against the test hypothesis, given the rest of the model.


Common, Misleading Criticisms


Estimation and Intervals


A common criticism put forth by many is that \(P\)-values are useless, given that they cannot tell you the size of the effect and because they are confounded by sample size and effect size, and that researchers should instead report compatibility (confidence) intervals. However, this criticism misses the point: both can be reported, and they serve different purposes.

A \(P\)-value for a particular parameter value gives the compatibility between the data and the test model in question, which varies from one parameter value to the next. An interval estimate, such as a 95% frequentist interval, simply gives the region of parameter values with \(P\)-values above the corresponding \(\alpha\) level; these values are more consistent with the data than the parameter values outside the interval limits. An interval estimate by itself does not explicitly tell one how consistent a particular parameter value is with the data, which the \(P\)-value does.
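
A minimal R illustration of that relationship (my own, with simulated data): the limits of a 95% interval are exactly the parameter values whose two-sided \(P\)-values equal 0.05, and values inside the interval have larger \(P\)-values.

set.seed(1)
x  <- rnorm(50, mean = 0.4, sd = 1)   # hypothetical sample
ci <- t.test(x)$conf.int              # 95% interval for the mean

t.test(x, mu = ci[1])$p.value         # testing the lower limit gives p = 0.05
t.test(x, mu = ci[2])$p.value         # testing the upper limit gives p = 0.05
t.test(x, mu = mean(x))$p.value       # testing the point estimate gives p = 1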


Overstating the Evidence


\(P\)-values are routinely criticized for overstating the amount of evidence from a study. Such statements are often made using Bayesian arguments, of which many are skeptical. However, the \(P\)-value cannot overstate evidence: it simply gives the location at which the test statistic fell in the expected distribution, given that every model assumption is true. It indicates how surprising/extreme the observed result was, given certain assumptions.

Any overstating of evidence is not an issue with the statistic itself, but rather with its users. If we treat the \(P\)-value as nothing more or less than a continuous measure of compatibility between the observed data and the model used to compute it (the observed \(p\)), given certain model assumptions, we won’t run into some of the common misinterpretations, such as “the \(P\)-value is the probability of a hypothesis”, the “probability of chance alone”, or “the probability of being incorrect”.3

Indeed, many of the “problems” commonly associated with the \(P\)-value are not due to the actual statistic itself, but rather researchers’ misinterpretations of what it is and what it means for a study.

The answer to these misconceptions may be compatibilism, with less compatibility (smaller \(P\)-values) indicating a poor fit between the data and the test model and hence more evidence against the test hypothesis.

A \(P\)-value of 0.04 means that, assuming all the assumptions of the model used to compute it are correct, random variation would produce data (a test statistic) at least as extreme as what was observed only 4% of the time.

To many, such low compatibility between the data and the model may lead them to reject the test hypothesis (the null hypothesis).


Some Valid Issues


Mismatch With Direction


If you recall from above, I wrote that the \(P\)-value is seen by many as a continuous measure of evidence against the test hypothesis and model. Technically speaking, it would be incorrect to define it this way, because as the \(P\)-value goes up (with the highest possible value being 1, or 100%), there is less evidence against the test hypothesis, since the data are more compatible with the test model. A \(P\)-value of 1 indicates perfect compatibility of the data with the test model.

As the \(P\)-value gets lower (with the lowest value being 0), there is less compatibility between the data and the model, hence more evidence against the test hypothesis used to compute \(p\).

Thus, saying that \(P\)-values are measures of evidence against the hypothesis used to compute them is a backward definition. This definition would only be correct if higher \(P\)-values implied more evidence against the test hypothesis and vice versa.


Difficulties Due to Scale


Another problem with \(P\)-values and their interpretation is scaling. Since the statistic is meant to be a continuous measure of compatibility (and of relative evidence against the test model and hypothesis), we would hope that equal differences between \(P\)-values carried equal meaning (an additive scale), as this would make them easier to interpret.

For example, the difference between 0 and 10 dollars is the same as the difference between 90 and 100 dollars, in that both are a difference of 10 dollars. And this property remains consistent across various intervals, 120 and 130, 1,000,000 and 1,000,010.

Unfortunately, this doesn’t apply to the \(P\)-value because it is on the inverse-exponential scale. The difference between a \(P\)-value of 0.01 and 0.10 is not the same as the difference between 0.90 and 0.99.


Gaussian distribution
A Gaussian probability density with the standard deviations annotated. Data points further away from the mean are more extreme and unlikely events. I also must admit that this is one of my favorite figures of a Gaussian distribution.


For example, with a normal distribution (above), a z-score of 0 results in a \(P\)-value of 1 (perfect compatibility). If we now move to a z-score of 1, the \(P\)-value is 0.31. Thus, we saw a dramatic decrease from a \(P\)-value of 1 to 0.31 with one z-score. A 0.69 decrease in the \(P\)-value.

Now let’s move from a z-score of 1 to a z-score of 2. We saw a decrease of 0.69 with the previous one-unit change in z, so the new \(P\)-value must be 0.31 - 0.69 = -0.38, right? No. The \(P\)-value for a z-score of 2 is 0.045, and the \(P\)-value for a z-score of 3 is 0.003. Even though we’ve only been moving by one z-score at a time, the changes in \(P\)-values don’t remain constant: the absolute decreases shrink, while each step cuts the remaining \(P\)-value by a larger and larger proportion.
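
The same pattern in R (a small check of the numbers above, using two-sided P-values):

z <- 0:3
p <- 2 * pnorm(-abs(z))   # two-sided P-values for each z-score
round(p, 4)
#> 1.0000 0.3173 0.0455 0.0027
round(diff(p), 4)         # the absolute decreases shrink with each step
#> -0.6827 -0.2718 -0.0428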

Thus, the difference between the \(P\)-values of 0.01 and 0.10, in terms of z-score, is substantially larger than the difference between 0.90 and 0.99. Again, this makes it difficult to interpret as a statistic across the board, especially as a continuous measure. This can further be seen in the figure from Rafi & Greenland (2020).

Resolution with Surprisals


The issues described above, such as the backward definition and the problem of scaling, can make it difficult to conceptualize the \(P\)-value as an evidence measure against the test hypothesis and test model. However, these issues can be addressed by taking the negative base-2 logarithm of the \(P\)-value, \(-\log_{2}(p)\), which yields something known as the Shannon information value or surprisal (\(s\)) value,4, 5, 26 named after Claude Shannon, the father of information theory.27

Unlike the \(P\)-value, this value is not a probability but a continuous measure of information, in bits, against the test hypothesis, derived from the observed test statistic computed under the test model.

It also provides a more intuitive way to think about \(P\)-values. Imagine that the variable \(k\) is always the nearest integer to the calculated value of \(s\). Now take, for example, a \(P\)-value of 0.05; the \(S\)-value for this would be \(s = -\log_{2}(0.05)\), which equals 4.3 bits of information embedded in the test statistic, which can be taken as evidence against the test hypothesis.

How much evidence is this? \(k\) can help us think about it. The nearest integer to 4.3 is 4. Thus, data that yield a \(P\)-value of 0.05, and hence an \(s\) value of 4.3 bits of information, are no more surprising than getting all heads on 4 fair coin tosses.

Another example: let’s say our study gives us a \(P\)-value of 0.005, which would indicate to many very low compatibility between the test model and the observed data; this yields an \(s\) value of \(-\log_{2}(0.005) = 7.6\) bits of information. \(k\), the closest integer to \(s\), would be 8. Thus, these data, which yield a \(P\)-value of 0.005, are no more surprising than getting all heads on 8 fair coin tosses.
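
These conversions are a one-liner in R (my own check of the numbers above):

p <- c(0.05, 0.005)
s <- -log2(p)      # bits of information against the test hypothesis
k <- round(s)      # nearest whole number of fair coin tosses
cbind(p, s = round(s, 2), k)
#> p = 0.05  -> s = 4.32, k = 4
#> p = 0.005 -> s = 7.64, k = 8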

A table of various \(P\)-values and their corresponding \(S\)-values, maximum-likelihood ratios, and likelihood-ratio statistics can be found below from Rafi & Greenland (2020), which includes the general cutoffs used in different scientific fields such as high-energy physics and genome-wide association studies. It also shows how the traditional cutoffs used in these fields can be problematic.

For example, an \(\alpha\) of 0.05, which corresponds only to seeing all heads on 4 fair coin tosses, is practically nothing when compared to the cutoffs used in particle physics and GWAS, which correspond to seeing all heads on roughly 22 and 27 fair coin tosses, respectively.
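
A quick check of those cutoffs in bits (my own computation; the 5-sigma threshold is taken one-sided, matching the “~2.9 in 10 million” entry in the table below):

-log2(0.05)        # about 4.3 bits  (alpha = 0.05)
-log2(pnorm(-5))   # about 21.7 bits (5 sigma, particle physics)
-log2(1e-8)        # about 26.6 bits (1 in 100 million, GWAS)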


P-value (compatibility)          S-value (bits)   Maximum Likelihood Ratio   Deviance Statistic 2ln(MLR)
0.99                                  0.01                 1.00e+00                    0.00
0.9                                   0.15                 1.01e+00                    0.02
0.5                                   1.00                 1.26e+00                    0.45
0.25                                  2.00                 1.94e+00                    1.32
0.1                                   3.32                 3.87e+00                    2.71
0.05                                  4.32                 6.83e+00                    3.84
0.025                                 5.32                 1.23e+01                    5.02
0.01                                  6.64                 2.76e+01                    6.63
0.005                                 7.64                 5.14e+01                    7.88
1e-04                                13.29                 1.94e+03                   15.10
5 sigma (~ 2.9 in 10 million)        21.70                 5.20e+05                   26.30
1 in 100 million (GWAS)              26.60                 1.40e+07                   32.80
6 sigma (~ 1 in a billion)           29.90                 1.30e+08                   37.40

Abbreviations: MLR, maximum likelihood ratio; GWAS, genome-wide association study.
Table 1: \(P\)-values and binary \(S\)-values, with corresponding maximum-likelihood ratios (MLR) and deviance (likelihood-ratio) statistics for a simple test hypothesis H under background assumptions A.

Unlike the \(P\)-value, the \(S\)-value is more intuitive as a measure of refutational evidence against the test hypothesis since its value (bits of information against the test hypothesis) increases with less compatibility, whereas the opposite is true for the \(P\)-value.


Some Examples


Let’s try using some data to see this in action. I’ll take a sample experimental dataset from R on the effects of different conditions on dried plant weight. We can plot the data and run a one-way ANOVA.


pg <- PlantGrowth  # built-in dataset: dried plant weights under a control and two treatments
(Hmisc::describe(pg))
#> pg 
#> 
#>  2  Variables      30  Observations
#> ----------------------------------------------------------------------------------------------------------------------------------
#> weight 
#>        n  missing distinct     Info     Mean      Gmd      .05      .10      .25      .50      .75      .90      .95 
#>       30        0       29        1    5.073   0.8131    3.983    4.170    4.550    5.155    5.530    6.038    6.132 
#> 
#> lowest : 3.59 3.83 4.17 4.32 4.41, highest: 5.87 6.03 6.11 6.15 6.31
#> ----------------------------------------------------------------------------------------------------------------------------------
#> group 
#>        n  missing distinct 
#>       30        0        3 
#>                             
#> Value       ctrl  trt1  trt2
#> Frequency     10    10    10
#> Proportion 0.333 0.333 0.333
#> ----------------------------------------------------------------------------------------------------------------------------------
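
The original post shows a plot of the data at this point; a minimal base-R sketch of such a plot (my own stand-in, not the post’s figure) could be:

# Dried plant weight by treatment group
boxplot(weight ~ group, data = pg,
        xlab = "Group", ylab = "Dried plant weight")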

Looks interesting. We can see some differences from the graph. Here’s what our test output gives us,


res <- anova(lm(weight ~ group, data = pg))
ztable(res)

            Df   Sum Sq   Mean Sq   F value   Pr(>F)
group        2     3.77      1.88      4.85     0.02
Residuals   27    10.49      0.39        NA       NA

(obs_p <- res[1, 5])
#> [1] 0.0159

If we had set our \(\alpha\) to the traditional 0.05 level before the experiment, we can reject the test hypothesis (the null hypothesis), but that is not as interesting from a continuous evidential perspective. How can I interpret this \(P\)-value of 0.0159 more intuitively?

Let’s convert it into an \(S\)-value.


-log2(obs_p)
#> [1] 5.97

\[-\log_2(0.0159) = 5.97\]


\[s= 5.97\]


That is 5.97 bits of information against the null hypothesis.

Remember, \(k\) is the nearest integer to the calculated value of \(s\), which in this case would be 6. So these results (the test statistic, \(F = 4.85\)) are no more surprising than getting all heads on 6 fair coin tosses. Somewhat surprising, depending on the individual interpreting the results.

How would we interpret it within the context of a given confidence interval? The \(S\)-value tells us that values within the computed 95% CI have at most 4.3 bits of information against them. That is because all parameter values within a 95% CI have \(P\)-values greater than 0.05.

So the parameter values inside the 95% interval estimate have fewer bits of information against them than values further from the center of the interval. The point estimate is the most compatible with the data (it has the least refutational information against it), while values near the limits have more information against them.

In other words, as values head in the directions outside the interval, there is more refutational information against them, as depicted by the following function from Rafi & Greenland, 2020, which is known as the surprisal function.
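
Since the surprisal-function figure isn’t reproduced here, the following is a rough base-R sketch (my own, for just the trt2 versus ctrl comparison rather than the full ANOVA) of what such a function looks like: \(P\)-values for a grid of candidate mean differences, converted to \(S\)-values.

ctrl <- PlantGrowth$weight[PlantGrowth$group == "ctrl"]
trt2 <- PlantGrowth$weight[PlantGrowth$group == "trt2"]

mu_grid <- seq(-0.5, 1.5, by = 0.01)   # candidate values for the mean difference
p_fun   <- sapply(mu_grid, function(m) t.test(trt2, ctrl, mu = m)$p.value)
s_fun   <- -log2(p_fun)                # bits of information against each candidate

plot(mu_grid, s_fun, type = "l",
     xlab = "Candidate mean difference (trt2 - ctrl)",
     ylab = "S-value (bits)")
abline(h = -log2(0.05), lty = 2)       # values below the line fall inside the 95% CI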

The \(S\)-value is not meant to replace the \(P\)-value, and it isn’t superior to the \(P\)-value. It is merely a logarithmic transformation of it that rescales it on an additive scale and tells us how much information is embedded within the test statistic and can be used as evidence against the test hypothesis. It is meant to be a device to help interpret the information one obtains from a calculated \(P\)-value.


I’ve constructed a calculator that converts observed \(P\)-values into \(S\)-values and provides an intuitive way to think about them. For a more detailed discussion of \(S\)-values, see this article, in addition to the references below.


S-value Calculator



Acknowledgments: I’m very grateful to Sander Greenland for his extensive commentary and corrections on several versions of this article. My acknowledgment does not imply endorsement of my views by these colleagues, and I remain solely responsible for the views expressed herein.


The analyses were run on:


#> R version 4.3.2 (2023-10-31)
#> Platform: aarch64-apple-darwin20 (64-bit)
#> Running under: macOS Sonoma 14.3
#> 
#> Matrix products: default
#> BLAS:   /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib 
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.11.0
#> 
#> Random number generation:
#>  RNG:     Mersenne-Twister 
#>  Normal:  Inversion 
#>  Sample:  Rejection 
#>  
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#> 
#> time zone: America/New_York
#> tzcode source: internal
#> 
#> attached base packages:
#>  [1] splines   grid      stats4    parallel  stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#>  [1] pbmcapply_1.5.1         texPreview_2.0.0        tinytex_0.49            rmarkdown_2.25          brms_2.20.4            
#>  [6] bootImpute_1.2.1        miceMNAR_1.0.2          knitr_1.45              boot_1.3-28.1           reshape2_1.4.4         
#> [11] ProfileLikelihood_1.3   ImputeRobust_1.3-1      gamlss_5.4-20           gamlss.dist_6.1-1       gamlss.data_6.0-2      
#> [16] mvtnorm_1.2-4           performance_0.10.8      summarytools_1.0.1      tidybayes_3.0.6         htmltools_0.5.7        
#> [21] Statamarkdown_0.9.2     car_3.1-2               carData_3.0-5           qqplotr_0.0.6           ggcorrplot_0.1.4.1     
#> [26] Amelia_1.8.1            Rcpp_1.0.11             blogdown_1.18.1         doParallel_1.0.17       iterators_1.0.14       
#> [31] foreach_1.5.2           lattice_0.22-5          bayesplot_1.10.0        wesanderson_0.3.7       VIM_6.2.2              
#> [36] colorspace_2.1-0        here_1.0.1              progress_1.2.3          loo_2.6.0               mi_1.1                 
#> [41] Matrix_1.6-4            broom_1.0.5             yardstick_1.2.0         svglite_2.1.3           Cairo_1.6-2            
#> [46] cowplot_1.1.2           mgcv_1.9-1              nlme_3.1-164            xfun_0.41               broom.mixed_0.2.9.4    
#> [51] reticulate_1.34.0       kableExtra_1.3.4        posterior_1.5.0         checkmate_2.3.1         parallelly_1.36.0      
#> [56] miceFast_0.8.2          ggmice_0.1.0            randomForest_4.7-1.1    missForest_1.5          miceadds_3.16-18       
#> [61] mice.mcerror_0.0.0-9000 mice_3.16.0             quantreg_5.97           SparseM_1.81            MCMCpack_1.6-3         
#> [66] MASS_7.3-60             coda_0.19-4             latex2exp_0.9.6         rstan_2.32.3            StanHeaders_2.26.28    
#> [71] cmdstanr_0.5.3          lubridate_1.9.3         forcats_1.0.0           stringr_1.5.1           dplyr_1.1.4            
#> [76] purrr_1.0.2             readr_2.1.4             tibble_3.2.1            ggplot2_3.4.4           tidyverse_2.0.0        
#> [81] ggtext_0.1.2            concurve_2.8.0          showtext_0.9-6          showtextdb_3.0          sysfonts_0.8.8         
#> [86] future.apply_1.11.1     future_1.33.1           tidyr_1.3.0             magrittr_2.0.3          rms_6.7-1              
#> [91] Hmisc_5.1-1            
#> 
#> loaded via a namespace (and not attached):
#>   [1] igraph_1.6.0           Formula_1.2-5          rematch2_2.1.2         devtools_2.4.5         tidyselect_1.2.0      
#>   [6] rvest_1.0.3            pspline_1.0-19         bridgesampling_1.1-2   urlchecker_1.0.1       rngtools_1.5.2        
#>  [11] png_0.1-8              cli_3.6.2              arrayhelpers_1.1-0     askpass_1.2.0          openssl_2.1.1         
#>  [16] textshaping_0.3.7      officer_0.6.3          curl_5.2.0             mime_0.12              evaluate_0.23         
#>  [21] V8_4.4.1               stringi_1.8.3          desc_1.4.3             backports_1.4.1        gsl_2.1-8             
#>  [26] qqconf_1.3.2           ismev_1.42             httpuv_1.6.13          details_0.3.0          ADGofTest_0.3         
#>  [31] KMsurv_0.1-5           doRNG_1.8.6            pcaPP_2.0-4            survminer_0.4.9        DT_0.31               
#>  [36] webshot_0.5.5          sessioninfo_1.2.2      DBI_1.2.0              jquerylib_0.1.4        withr_2.5.2           
#>  [41] class_7.3-22           systemfonts_1.0.5      rprojroot_2.0.4        lmtest_0.9-40          benchmarkme_1.0.8     
#>  [46] colourpicker_1.3.0     htmlwidgets_1.6.4      fs_1.6.3               trust_0.1-8            GJRM_0.2-6.4          
#>  [51] ranger_0.16.0          DEoptimR_1.1-3         zoo_1.8-12             itertools_0.1-3        svUnit_1.0.6          
#>  [56] pbivnorm_0.6.0         timechange_0.2.0       fansi_1.0.6            caTools_1.18.2         extremevalues_2.3.3   
#>  [61] data.table_1.14.10     sampleSelection_1.2-12 pan_1.9                psych_2.3.12           clipr_0.8.0           
#>  [66] ellipsis_0.3.2         yaml_2.3.8             survival_3.5-7         crayon_1.5.2           tensorA_0.36.2.1      
#>  [71] later_1.3.2            gfonts_0.2.0           codetools_0.2-19       base64enc_0.1-3        profvis_0.3.8         
#>  [76] shape_1.4.6            startupmsg_0.9.6       estimability_1.4.1     gdtools_0.3.5          foreign_0.8-86        
#>  [81] pkgconfig_2.0.3        xml2_1.3.6             mathjaxr_1.6-0         ggpubr_0.6.0           sfsmisc_1.1-16        
#>  [86] evd_2.3-6.1            viridisLite_0.4.2      xtable_1.8-4           highr_0.10.1           plyr_1.8.9            
#>  [91] httr_1.4.7             tools_4.3.2            globals_0.16.2         pkgbuild_1.4.3         htmlTable_2.4.2       
#>  [96] distrEx_2.9.0          shinyjs_2.1.0          crosstalk_1.2.1        miscTools_0.6-28       maxLik_1.5-2          
#>  [ reached getOption("max.print") -- omitted 126 entries ]

References


1. Gigerenzer G. (2018). ‘Statistical Rituals: The Replication Delusion and How We Got There’. Advances in Methods and Practices in Psychological Science. 1:198–218. doi: 10.1177/2515245918771329.
2. Goodman S. (2008). ‘A Dirty Dozen: Twelve P-Value Misconceptions’. Seminars in Hematology. 45:135–140. doi: 10.1053/j.seminhematol.2008.04.003.
3. Greenland S, Senn SJ, Rothman KJ, Carlin JB, Poole C, Goodman SN, et al. (2016). ‘Statistical tests, P values, confidence intervals, and power: A guide to misinterpretations’. European Journal of Epidemiology. 31:337–350. doi: 10.1007/s10654-016-0149-3.
4. Rafi Z, Greenland S. (2020). ‘Semantic and cognitive tools to aid statistical science: Replace confidence and significance by compatibility and surprise’. BMC Medical Research Methodology. 20:244. doi: 10.1186/s12874-020-01105-9.
5. Greenland S. (2019). ‘Valid P-values behave exactly as they should: Some misleading criticisms of P-values and their resolution with S-values’. The American Statistician. 73:106–114. doi: 10.1080/00031305.2018.1529625.
6. Greenland S, Mansournia MA, Altman DG. (2016). ‘Sparse data bias: A problem hiding in plain sight’. BMJ. 352:i1981. doi: 10.1136/bmj.i1981.
7. Greenland S, Rafi Z. (2020). ‘To Aid Scientific Inference, Emphasize Unconditional Descriptions of Statistics’. arXiv:1909.08583 [stat.ME]. https://arxiv.org/abs/1909.08583.
8. Moskowitz C. (2012). ‘Faster-than-light neutrinos aren’t’. Scientific American.
9. Perezgonzalez JD. (2015). ‘P-values as percentiles. Commentary on: “Null hypothesis significance tests. A mixup of two different theories: The basis for widespread confusion and numerous misinterpretations”’. Frontiers in Psychology. 6. doi: 10.3389/fpsyg.2015.00341.
10. Fraser DAS. (2019). ‘The P-value function and statistical inference’. The American Statistician. 73:135–147. doi: 10.1080/00031305.2018.1556735.
11. Sellke T, Bayarri MJ, Berger JO. (2001). ‘Calibration of p values for testing precise null hypotheses’. The American Statistician. 55:62–71. doi: 10.1198/000313001300339950.
12. Greenland S, Rafi Z. (2020). ‘Technical Issues in the Interpretation of S-values and Their Relation to Other Information Measures’. arXiv:2008.12991 [stat.ME]. https://arxiv.org/abs/2008.12991.
13. Shafer G, Shen A, Vereshchagin N, Vovk V. (2011). ‘Test Martingales, Bayes Factors and p-Values’. Statistical Science. 26:84–101. doi: fkcvt5.
14. Schuemie MJ, Hripcsak G, Ryan PB, Madigan D, Suchard MA. (2016). ‘Robust empirical calibration of p-values using observational data’. Statistics in Medicine. 35:3883–3888. doi: ghqmsb.
15. Gruber S, Tchetgen ET. (2016). ‘Limitations of empirical calibration of p-values using observational data’. Statistics in Medicine. 35:3869–3882. doi: ghqmtn.
16. Neyman J, Pearson ES. (1933). ‘On the Problem of the Most Efficient Tests of Statistical Hypotheses’. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character. 231:289–337. doi: 10.1098/rsta.1933.0009.
17. Whitehead J. (1993). ‘The case for frequentism in clinical trials’. Statistics in Medicine. 12:1405–1413. doi: 10.1002/sim.4780121506.
18. Rubin M. (2019). ‘What type of Type I error? Contrasting the Neyman and Fisherian approaches in the context of exact and direct replications’. Synthese. doi: 10.1007/s11229-019-02433-0.
19. Lehmann EL. (2011). ‘Fisher, Neyman, and the Creation of Classical Statistics’. Springer New York. doi: 10.1007/978-1-4419-9500-1.
20. Fisher RA. (1935). ‘The Design of Experiments’. Oxford, England: Oliver & Boyd.
21. Fisher R. (1955). ‘Statistical Methods and Scientific Induction’. Journal of the Royal Statistical Society. Series B (Methodological). 17:69–78. doi: 10.1111/j.2517-6161.1955.tb00180.x.
22. Bickel DR. (2019). ‘Null Hypothesis Significance Testing Defended and Calibrated by Bayesian Model Checking’. The American Statistician. 0:1–16. doi: 10.1080/00031305.2019.1699443.
23. Jeffreys H. (1935). ‘Some Tests of Significance, Treated by the Theory of Probability’. Mathematical Proceedings of the Cambridge Philosophical Society. 31:203–222. doi: 10.1017/S030500410001330X.
24. Jeffreys H. (1998). ‘The Theory of Probability’. OUP Oxford.
25. Royall R. (1997). ‘Statistical Evidence: A Likelihood Paradigm’. CRC Press.
26. Cole SR, Edwards JK, Greenland S. (2020). ‘Surprise!’ American Journal of Epidemiology. doi: gg63md.
27. Shannon CE. (1948). ‘A mathematical theory of communication’. The Bell System Technical Journal. 27:379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x.

