# What Makes a Sensitivity Analysis?

## The Rise of Crony Statistical Rituals

This post is split into two parts. In the first, I will discuss the nature and difficulty of conducting sensitivity analyses and the quality of many published sensitivity analyses in various scientific fields, including medicine. I will then discuss a general framework that has existed for a long time but has seen a major resurgence among regulatory agencies around the world, and that may help researchers conduct these analyses and others.

In the second part of this post, I will use data from a historical parallel-arm RCT and analyze it from start to finish. The caveat is that there are extensive missing data, so I will have to use a number of principled (non-ad-hoc) sensitivity analysis and missing data techniques to better understand how these data were collected and how informative they are. Much of the code in this second part will be in R and Stata, and references will also be given for the specific techniques and theory.

# Cargo-Cult Uncertainty

Sensitivity analyses are an important part of statistical science and many other disciplines when conducted in a principled and systematic manner. However, the published sensitivity-analysis literature is full of inconsistencies, misconceptions, and highly misleading findings. A giant obstacle to fixing these issues is that sensitivity analysis techniques are quite difficult to learn and often involve advanced statistical methods that even many statisticians struggle with.

Yet the appearance of difficulty does not dissuade certain researchers from adopting these methods for their own work (so that they may give their colleagues, and the stakeholders they report to, the impression of rigor and methodological expertise). When this is done mindlessly, researchers often skip over learning the details and theory and, as with many statistical procedures, rely on the default settings built into the statistical software.

While there are many suites/commands/functions/libraries available to conduct such analyses, many of these procedures merely masquerade as meticulous sensitivity analyses; to their users, they are often a formality to appease stakeholders, giving researchers a false sense of confidence about what they are doing even though they have little to no idea what they are actually doing. Thus, like many statistical procedures new and old, these too will inevitably be abused as they become more popular and as they are required in research reports.

Stark & Saltelli (1), along with many others such as Gigerenzer (2) and Greenland (3), have written about this in the past. Below is an excerpt in which Stark describes how much of statistics is simply people masquerading as data analysts and being rewarded as experts despite not having a single clue what they are doing.

> In our experience, many applications of statistics are cargo-cult statistics: practitioners go through the motions of fitting models, computing p-values or confidence intervals, or simulating posterior distributions. They invoke statistical terms and procedures as incantations, with scant understanding of the assumptions or relevance of the calculations, or even the meaning of the terminology. This demotes statistics from a way of thinking about evidence and avoiding self-deception to a formal “blessing” of claims. The effectiveness of cargo-cult statistics is predictably uneven. But it is effective at getting weak work published - and is even required by some journals…

Here, Stark takes on the issue of widely accessible statistical software for academics, students, and analysts.

> Statistical software does not help you know what to compute, nor how to interpret the result. It does not offer to explain the assumptions behind methods, nor does it flag delicate or dubious assumptions. It does not warn you about multiplicity or p-hacking. It does not check whether you picked the hypothesis or analysis after looking at the data, nor track the number of analyses you tried before arriving at the one you sought to publish - another form of multiplicity. The more “powerful” and “user-friendly” the software is, the more it invites cargo-cult statistics. (1)

Greenland (3) focuses on how traditional statistical education is inadequate for dealing with the messy and chaotic nature of the world and the missing data it produces. Greenland writes that traditional statistical practice leaves room for several cognitive biases, making many reports and conclusions highly misleading. He provides some hope, however, by pointing to the utility of sensitivity analyses for modeling bias and uncertainty.

> I argue that current training in statistics and analytical methods is inadequate for addressing major sources of inference distortion, and that it should be expanded to cover the biased perceptual and thinking processes (cognitive biases) that plague research reports. As commonly misused, null-hypothesis significance testing (NHST) combines several cognitive problems to create highly distorted interpretations of study results. Interval estimation has proven highly vulnerable to the same problems. Sensitivity and bias analyses address model uncertainties by varying and relaxing assumptions, but (like Bayesian analyses) they are difficult to perform with proper accounting for prior information and are easily manipulated because they depend on specification of many models and parameters.

However, he then follows up to discuss how difficult sensitivity and bias analyses can be due to the plethora of decisions that the analyst must make when specifying parameters in the analysis. These judgements will differ from researcher to researcher, even in the same group, and such analyses have not been studied as widely as conventional statistical practice.

> Even with realistic choices, the sensitivity of sensitivity and bias analyses must be evaluated (51). The plausibility of an estimated bias function is determined by intuitions, prejudices, and understanding of the applied context; those can vary dramatically across researchers, in turn leading to very different specifications and inferences even if they are anchored to the same conventional analysis. Adding to this problem, sensitivity and bias analyses are more difficult to perform correctly and more easily massaged toward preferred conclusions, because they require specification of many more equations and their parameters.
>
> And unlike NHST, abuse of sensitivity and bias analysis is as yet barely studied because the pool of such analyses remains small and highly selective. It thus seems implausible that these analyses will increase replicability of inferences, although they can reveal how assumptions affect those inferences. (Here “replicability” is used according to recommendations of the American Statistical Association (52) to denote independent checks of reported results with new data; “reproducibility” then denotes checks of reported results using the original data and computer code.)

Indeed, this is one drawback of (relatively) newly adopted methods and procedures: they have not been studied for as long as, say, the P-value, which has been studied for centuries. Even something as useful as a principled sensitivity analysis, which explores assumptions and uncertainty, is not free from this.

It seems Greenland's predictions were accurate: in a large systematic review of published sensitivity analyses led by the mathematician Andrea Saltelli and his group, highly cited papers in high-impact-factor journals rarely contained sensitivity analyses, and many of those that did were poorly done and one-dimensional, in the sense of varying only one parameter at a time, which is hardly as informative as a multi-dimensional analysis in which multiple parameters are varied at the same time.

Here, we will explore one example of how mindless sensitivity analyses are conducted within the context of clinical trials, although we will not explore multi-dimensional sensitivity analyses here. I make this distinction because the environment to conduct sensitivity/bias analyses differs substantially in trials where regulators are involved versus the environment in which many observational studies are analyzed and presented.

In medicine, sensitivity analyses (SA) are often conducted in trials and observational studies, but again, little thought is given to what they should entail. (1) None of this is surprising given that they are not usually taught in traditional settings, although they have been discussed extensively in the statistical and epidemiological literature. (4-10) Yet another barrier is that they are incredibly technical and often require the assistance of an analyst familiar with both the theory and the applications. It also does not help that the topic is vast and new statistical methods are published every day.

Indeed, the Handbook of Missing Data Methodology, (11) one of the most authoritative and comprehensive books to date on statistical methods for handling missing data in clinical trials, devotes nearly six chapters to sensitivity analyses and the different approaches to them. Unfortunately, at the rate this field is advancing, the book may already be outdated, despite having come out only six years ago.

This article will focus on what a sensitivity analysis is often assumed to be based on practices in the literature, some differing points of view regarding sensitivity analyses (with an example of a highly promoted measure for conducting them that has been very controversial), and a practical example of conducting principled (by principled, I mean non-ad-hoc techniques, as James Carpenter would say) sensitivity analyses in the context of missing data in a randomized clinical trial.

## Vagueness of ‘Sensitivity’

Before we move forward, we must consider what a sensitivity analysis actually entails. The first word of the phrase is an obvious tell: it suggests an analysis that examines how sensitive or robust a result is. But a natural question is, sensitive to what, specifically? What sort of perturbations, and how relevant are they? Can I vary anything in the analysis to test the sensitivity of the result or assumption? For example, if I obtained an odds ratio of 1.7 from the primary analysis in my experiment and conducted an extra analysis to see whether the odds ratio would change under a resampling method, is that a sensitivity analysis?

What if I did a completely different type of analysis? Suppose the primary analysis was a classical statistical test with an adjustment for multiple comparisons, and I decided to run a hierarchical Bayesian regression with a spiked prior on the null; would that be a sensitivity analysis, given that they are different frameworks and procedures (although used for similar goals)? The possibilities of what can be varied are endless when a loose definition of sensitivity analysis is assumed, and the definition differs across disciplines and their cultures. In epidemiology, "sensitivity analysis" and "bias analysis", which attempt to extensively explore assumptions and the robustness of a result, are used synonymously; in the clinical trial world, this is not the case.

## Frequent Misconceptions

Take clinical trials and observational studies in particular: it is common to see intent-to-treat (ITT) analyses presented as the primary analyses and per-protocol (PP) analyses presented as sensitivity analyses.

I will not define these terms here and would encourage readers to consult other sources.

The following is a similar example from a high-profile trial published in The Lancet in 2002. (12)

> The Multicentre Aneurysm Screening Study group randomised 67,800 men to receive an invitation to an abdominal ultrasound scan or not [6]. Of those invited to receive an abdominal scan, 20% did not accept. The primary analysis was by intention to treat, thus estimating the effect of being randomised to abdominal ultrasound. Another analysis investigated the complier average causal effect, which considers what the (average) effect of treatment was in patients who would have adhered to protocol however they were randomised [7].

To many, this may seem perfectly fine, even great. The authors used the question they were primarily interested in as the main analysis and then conducted an additional analysis to see whether the results were consistent. Unfortunately, this is highly problematic, as Morris et al. (2014) (13) describe:

> These questions are different, and observing different results should not shake our confidence in either. The CACE analysis was a secondary analysis, not a sensitivity analysis.
>
> It is common for authors to compare the results of intention-to-treat with per-protocol analysis; see for example [8, 9]. While it is hard to pin down the precise question of per-protocol analysis [10], this is clearly different to the question intention-to-treat addresses. Per-protocol analysis should not therefore be considered as a sensitivity analysis for intention-to-treat but as a secondary analysis, if at all.

## Portable Sensitivity Analyses

These sorts of misunderstandings are so common and so prevalent throughout the literature that it should come as no surprise that sensitivity analyses are rarely done, or are done incorrectly. Although not from clinical trials, one particularly controversial example has been the promotion and use of the $$E$$-value (the 'E' apparently stands for 'evidence', though I am not sure) within epidemiology, a measure of the amount of unmeasured confounding that would be necessary to reduce an observational effect estimate to the practical null.

The method has been embraced by many epidemiologists and health researchers because it simplifies a task that is often arduous and requires extensive, careful thought compared with the traditional classical analysis. Yet others have been highly critical of it for a number of reasons; although they see the value in promoting sensitivity analyses to more researchers around the world, many are concerned that this measure will eventually go down the road of P-values, but for observational research.
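For context, the point E-value for a result reported as a risk ratio has a simple closed form (VanderWeele & Ding, 2017). A minimal base-R sketch; the function name `e_value` is my own:

```r
# Point E-value for a risk ratio (VanderWeele & Ding, 2017):
# the minimum strength of association, on the risk-ratio scale, that an
# unmeasured confounder would need to have with both treatment and outcome
# to fully explain away the observed association.
e_value <- function(rr) {
  if (rr < 1) rr <- 1 / rr          # flip protective estimates first
  rr + sqrt(rr * (rr - 1))
}

e_value(3.9)   # about 7.26
```

Whether such a single summary number is an adequate substitute for a full bias analysis is precisely what the critics dispute.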

## Regulators & Sensitivity Analyses

As mentioned above, there are many possible ways to conduct a sensitivity analysis, especially if the phrase is used in a loose or vague way. Whether these are valid and principled approaches is another question. Luckily, many of us do not have to ponder the semantics day and night, because both The Handbook of Missing Data Methodology and the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) have given this topic much thought. For the latter, this is reflected in a recent addition to the E9 guidance document (Statistical Principles for Clinical Trials), which has served as a reference for clinical trial statisticians for decades. In the addendum, titled Addendum on Estimands and Sensitivity Analysis in Clinical Trials, which has now been adopted by regulatory agencies around the world including the FDA and EMA, they elaborate on the concept of estimands and the role sensitivity analyses play.

To be clear, estimands are not new; they have been discussed in the statistical literature since Tukey, (14) but the ICH working group's addendum formalized their adoption in the clinical trial world, along with the importance of sensitivity analyses. (15-17)

## Estimands & Sensitivity

> A.5.2.1. Role of Sensitivity Analysis
>
> Inferences based on a particular estimand should be robust to limitations in the data and deviations from the assumptions used in the statistical model for the main estimator. This robustness is evaluated through a sensitivity analysis. Sensitivity analysis should be planned for the main estimators of all estimands that will be important for regulatory decision making and labelling in the product information. This can be a topic for discussion and agreement between sponsor and regulator.
>
> The statistical assumptions that underpin the main estimator should be documented. One or more analyses, focused on the same estimand, should then be pre-specified to investigate these assumptions with the objective of verifying whether or not the estimate derived from the main estimator is robust to departures from its assumptions. This might be characterised as the extent of departures from assumptions that change the interpretation of the results in terms of their statistical or clinical significance (e.g. tipping point analysis).
>
> Distinct from sensitivity analysis, where investigations are conducted with the intent of exploring robustness of departures from assumptions, other analyses that are conducted in order to more fully investigate and understand the trial data can be termed “supplementary analysis” (see Glossary; A.5.3.). Where the primary estimand(s) of interest is agreed between sponsor and regulator, the main estimator is pre-specified unambiguously, and the sensitivity analysis verifies that the estimate derived is reliable for interpretation, supplementary analyses should generally be given lower priority in assessment.
>
> A.5.2.2. Choice of Sensitivity Analysis
>
> When planning and conducting a sensitivity analysis, altering multiple aspects of the main analysis simultaneously can make it challenging to identify which assumptions, if any, are responsible for any potential differences seen. It is therefore desirable to adopt a structured approach, specifying the changes in assumptions that underlie the alternative analyses, rather than simply comparing the results of different analyses based on different sets of assumptions. The need for analyses varying multiple assumptions simultaneously should then be considered on a case by case basis. A distinction between testable and untestable assumptions may be useful when assessing the interpretation and relevance of different analyses.
>
> The need for sensitivity analysis in respect of missing data is established and retains its importance in this framework. Missing data should be defined and considered in respect of a particular estimand (see A.4.). The distinction between data that are missing in respect of a specific estimand and data that are not directly relevant to a specific estimand gives rise to separate sets of assumptions to be examined in sensitivity analysis.
>
> ICH E9 (R1) Guideline

They explicitly define a sensitivity analysis as an analysis that:

- realistically varies the assumptions from the primary analysis,
- still targets the same estimand,
- examines the robustness of the results to assumption violations or departures, and
- can possibly change the results/conclusions drawn.

They contrast this with more extensive analyses that investigate these violations or departures of assumptions, and characterize those analyses as supplementary rather than sensitivity analyses. The latter category is typically what is seen in rigorous epidemiological studies that employ quantitative bias analyses (a more specialized form of sensitivity analysis).
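The tipping-point analysis mentioned in the ICH text can be illustrated with a toy delta adjustment: fill in the missing outcomes, shift the imputed values in the treated arm by an increasingly unfavorable amount `d`, and find the shift at which the conclusion flips. This is only a schematic base-R sketch on freshly simulated data (naive arm-mean imputation and a 0.05 threshold are my own simplifying assumptions), not the procedure used later in this post:

```r
# Toy delta-adjustment tipping-point analysis (schematic only):
# impute missing outcomes by arm means, then shift the imputed
# treated-arm values by an unfavorable amount d and record when
# the treatment comparison stops being "significant".
set.seed(1)
n   <- 1000
trt <- rbinom(n, 1, 0.5)
y   <- 5 + 1 * trt + rnorm(n, 0, 2)   # true benefit of 1 unit
y[sample(n, 250)] <- NA               # ~25% of outcomes missing

deltas <- seq(0, -6, by = -0.5)
pvals  <- sapply(deltas, function(d) {
  miss  <- is.na(y)
  y_imp <- y
  y_imp[miss] <- ave(y, trt, FUN = function(v) mean(v, na.rm = TRUE))[miss]
  y_imp[miss & trt == 1] <- y_imp[miss & trt == 1] + d
  t.test(y_imp ~ trt)$p.value
})

# first unfavorable shift at which the conclusion flips
tipping <- deltas[which(pvals > 0.05)[1]]
```

The reported tipping point is then judged for plausibility: if only an absurdly large unfavorable shift overturns the conclusion, the result is robust to this particular departure.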

The ICH E9 addendum's views are echoed by the National Research Council's advice on clinical trials (National Research Council 2010) (18) regarding estimands and sensitivity:

> Recommendation 15: Sensitivity analysis should be part of the primary reporting of findings from clinical trials. Examining sensitivity to the assumptions about the missing data mechanism should be a mandatory component of reporting.

Now, back to the Lancet trial. The reason an ITT analysis and a PP analysis cannot serve as primary and sensitivity analyses, respectively, is that they target entirely different estimands. (13) Because they answer completely different questions, they are simply two different analyses. The additional PP analysis, although important to certain stakeholders, can be classified as a supplementary analysis, or at least a non-sensitivity analysis.

Indeed, the following flowchart from Morris et al. (2014) (13) is particularly useful as a guide to differentiate sensitivity analyses from non-sensitivity analyses.

## Utility of Subject-Matter Experts

So what does a sensitivity analysis actually look like within the context of a clinical trial or experiment? Below, I walk through the analysis of a trial to give some examples.

# Sensitivity in Clinical Trials

I’ll use a sample clinical trial dataset, which is described here: (19)

> This dataset is typical of diastolic blood pressure data measured in small clinical trials in hypertension from the mid-to-late 1960s and for approximately a decade thereafter.
>
> During this time, hypertension was more severe, the number of effective treatments was relatively small, and the definition (DBP > 95 mmHg) of essential hypertension was not as stringent as it is now (DBP > 80 mmHg) - as seen in the 1967 report from the Veterans Administration Cooperative Study Group on Antihypertensive Agents VA Study Group (1967).
>
> In Table 3.1, diastolic blood pressure (DBP) was measured (mmHg) in the supine position at baseline (i.e., DBP) before randomization… Patients’ age and sex were recorded at baseline and represent potential covariates. The primary objective in this chapter in the analysis of this dataset is to test whether treatment A (new drug) may be effective in lowering DBP as compared to B (placebo) and to describe changes in DBP across the times at which it was measured.

So, to recap: the primary investigators of this trial were interested in whether treatment A was more effective than placebo (B) in lowering DBP.

Although, for the sake of efficiency and ease, we will not be using some of the scripts and project setup that I discussed in my statistical workflow post, I would urge and recommend others to do so, since it makes errors much easier to catch.

For the analyses in this post, see the R packages listed in the session info section at the end; these will need to be loaded for the scripts below to work.

I will now simulate some of the data from this clinical trial.

```r
# Simulate Trial Data  -------------------------------------

set.seed(1031, kind = "L'Ecuyer-CMRG",
         normal.kind = "Inversion",
         sample.kind = "Rejection")

BaselineDBP <- rnorm(500, 117, 2)
Group <- rbinom(500, 1, 0.5)
Age <- rnorm(500, 50, 7)
R1 <- rnorm(500, 0, 1)
R2 <- rnorm(500, 1, 0.5)

errors <- mvtnorm::rmvnorm(500, mean = c(0, 0),
                           sigma = matrix(c(1, 0.5, 0.5, 1),
                                          nrow = 2, byrow = TRUE))

PostDBP <- BaselineDBP + (Age * 0.33) + (Group * 9.7) + errors[, 1]
```

I have made some minor adjustments to this dataset and also generated a new variable (Z), based on the existing ones, that is strongly correlated with the outcome of interest (PostDBP).
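The exact construction of Z is not shown here. Purely as an illustration, one hypothetical way to build a covariate strongly correlated with the outcome, and to assemble the analysis data frame, might look like this (the stand-in outcome and the scaling constants below are my own inventions, not the ones used in this post):

```r
# Hypothetical sketch: this is NOT the actual construction of Z,
# just one way to generate a covariate strongly correlated with
# the outcome and bundle everything into a data frame.
set.seed(1031)
PostDBP     <- rnorm(500, 138, 5)     # stand-in for the simulated outcome
BaselineDBP <- rnorm(500, 117, 2)
Group       <- factor(rbinom(500, 1, 0.5))
Age         <- rnorm(500, 50, 7)
R1          <- rnorm(500, 0, 1)
R2          <- rnorm(500, 1, 0.5)

# a rescaled noisy copy of the (standardized) outcome
Z <- as.numeric(scale(PostDBP)) * 0.05 + rnorm(500, 0, 0.02)

BP_full <- data.frame(PostDBP, BaselineDBP, Group, Age, Z, R1, R2)
cor_z <- cor(BP_full$Z, BP_full$PostDBP)   # strong, by construction
```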

```r
# Explore Data  -------------------------------------

str(BP_full)
#> 'data.frame':    500 obs. of  7 variables:
#>  $ PostDBP    : num  142 140 129 148 135 ...
#>  $ BaselineDBP: num  119 116 115 119 119 ...
#>  $ Group      : Factor w/ 2 levels "0","1": 2 2 1 2 1 1 2 1 2 2 ...
#>  $ Age        : num  41.8 44.4 41.1 56.6 50.2 ...
#>  $ Z          : num  0.0141 -0.0629 -0.1101 0.0313 0.0495 ...
#>  $ R1         : num  0.523 1.021 -0.839 0.413 -0.175 ...
#>  $ R2         : num  0.451 0.35 0.897 1.227 1.166 ...

summary(BP_full)
#>     PostDBP       BaselineDBP    Group        Age              Z                   R1                  R2
#>  Min.   :123.8   Min.   :109.9   0:242   Min.   :27.65   Min.   :-1.845e-01   Min.   :-3.205036   Min.   :-0.6456
#>  1st Qu.:133.4   1st Qu.:115.6   1:258   1st Qu.:45.34   1st Qu.:-4.440e-02   1st Qu.:-0.732386   1st Qu.: 0.6447
#>  Median :138.5   Median :116.9           Median :49.37   Median : 9.454e-05   Median :-0.000058   Median : 0.9601
#>  Mean   :138.1   Mean   :116.9           Mean   :49.73   Mean   : 0.000e+00   Mean   :-0.042800   Mean   : 0.9620
#>  3rd Qu.:142.8   3rd Qu.:118.2           3rd Qu.:54.69   3rd Qu.: 4.562e-02   3rd Qu.: 0.552206   3rd Qu.: 1.2832
#>  Max.   :150.5   Max.   :123.1           Max.   :68.45   Max.   : 2.025e-01   Max.   : 3.377526   Max.   : 2.4805

ztable(head(BP_full, 10))
```

| PostDBP | BaselineDBP | Group | Age   | Z     | R1    | R2   |
|---------|-------------|-------|-------|-------|-------|------|
| 141.67  | 119.47      | 1     | 41.82 | 0.01  | 0.52  | 0.45 |
| 139.98  | 115.56      | 1     | 44.40 | -0.06 | 1.02  | 0.35 |
| 128.64  | 115.40      | 0     | 41.14 | -0.11 | -0.84 | 0.90 |
| 147.91  | 118.75      | 1     | 56.60 | 0.03  | 0.41  | 1.23 |
| 135.24  | 118.76      | 0     | 50.21 | 0.05  | -0.17 | 1.17 |
| 135.14  | 118.27      | 0     | 55.33 | 0.01  | -0.89 | 0.77 |
| 143.32  | 117.08      | 1     | 54.34 | 0.00  | 1.07  | 0.34 |
| 132.75  | 116.04      | 0     | 55.76 | -0.05 | -1.91 | 0.96 |
| 143.61  | 118.21      | 1     | 46.93 | 0.08  | 1.03  | 1.75 |
| 149.89  | 118.85      | 1     | 62.01 | 0.01  | -0.94 | 2.48 |

Now, we can quickly explore the other characteristics of the dataset; first we look at the rank correlations between the predictors and the response variable.

```r
# Data Attributes  -------------------------------------

study_formula <- as.formula(PostDBP ~ .)

plot(spearman2(study_formula, BP_full), cex = 1, pch = 18,
     col = alpha("darkred", 0.75))
abline(v = 0, col = alpha(zred, 0.75), lty = 3)
```

We also look at a correlation matrix between the variables of interest. We will not be looking at estimates from a fitted model yet; we are mainly examining a few characteristics of this dataset before moving on to the next step.
Now that we have familiarized ourselves with this dataset, we suddenly “notice” that there are missing data (which I actually generated below using a custom function). Suppose we didn’t know how the missing data were generated; we might suspect or start with the assumption that the data are missing at random (MAR). I do not plan on covering the different missing data mechanisms (MCAR, MAR, MNAR) as described by Rubin (1978), although familiarity with them is necessary to better understand the approaches used throughout this post. Some individuals have remarked that the following diagram in Statistical Rethinking helped them grasp these concepts, so I am pasting that here as a reference. Of course, here we do have an idea of what the missing data mechanism is, but we shall assume the role of a data analyst who has just been given a dataset with missing values in the outcome by a clinical colleague.

```r
# Generate Missing Data  -------------------------------------

set.seed(1031, kind = "L'Ecuyer-CMRG",
         normal.kind = "Inversion",
         sample.kind = "Rejection")

Ry <- ifelse(lesslikely() > 0, 1, 0)
PostDBP[Ry == 0] <- NA
BP_miss <- data.frame(PostDBP, BaselineDBP, Group, Age, Z, R1, R2)

str(BP_miss)
#> 'data.frame':    500 obs. of  7 variables:
#>  $ PostDBP    : num  142 140 129 148 135 ...
#>  $ BaselineDBP: num  119 116 115 119 119 ...
#>  $ Group      : Factor w/ 2 levels "0","1": 2 2 1 2 1 1 2 1 2 2 ...
#>  $ Age        : num  41.8 44.4 41.1 56.6 50.2 ...
#>  $ Z          : num  0.0141 -0.0629 -0.1101 0.0313 0.0495 ...
#>  $ R1         : num  0.523 1.021 -0.839 0.413 -0.175 ...
#>  $ R2         : num  0.451 0.35 0.897 1.227 1.166 ...

sum(is.na(BP_miss))
#> [1] 245
```

We will now attempt to visualize our dataset and examine what proportion of these data are missing.

```r
missing_col <- c(zblue, zred)

suppressMessages(aggr(BP_miss, col = alpha(missing_col, 0.60),
                      plot = TRUE, prop = F, numbers = T))
```

So now we must examine this dataset with missing values, look at its characteristics, and check for any systematic differences. We also examine the proportion of missingness and the influx and outflux patterns. This is important for assessing how well connected the missing data are to the observed data, as explained by van Buuren:

> The influx of a variable quantifies how well its missing data connect to the observed data on other variables. The outflux of a variable quantifies how well its observed data connect to the missing data on other variables. In general, higher influx and outflux values are preferred…
>
> Influx and outflux are summaries of the missing data pattern intended to aid in the construction of imputation models. Keeping everything else constant, variables with high influx and outflux are preferred. Realize that outflux indicates the potential (and not actual) contribution to impute other variables. A variable with high $$O_{j}$$ may turn out to be useless for imputation if it is unrelated to the incomplete variables. On the other hand, the usefulness of a highly predictive variable is severely limited by a low $$O_{j}$$.
>
> More refined measures of usefulness are conceivable, e.g., multiplying $$O_{j}$$ by the average proportion of explained variance. Also, we could specialize to one or a few key variables to impute. Alternatively, analogous measures for $$I_{j}$$ could be useful. The further development of diagnostic summaries for the missing data pattern is a promising area for further investigation.
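To make these definitions concrete, influx and outflux can be computed directly from the response-indicator matrix. In practice `mice::flux()` does this for you; the `flux_by_hand` function below is my own sketch of the arithmetic behind van Buuren's definitions:

```r
# Influx/outflux from the response indicator matrix R (1 = observed).
# Influx_j : pairs (i, k) with Y_j missing and Y_k observed,
#            divided by the total number of observed cells.
# Outflux_j: pairs (i, k) with Y_j observed and Y_k missing,
#            divided by the total number of missing cells.
flux_by_hand <- function(data) {
  R <- !is.na(data)
  influx  <- sapply(seq_len(ncol(R)),
                    function(j) sum((1 - R[, j]) * rowSums(R)) / sum(R))
  outflux <- sapply(seq_len(ncol(R)),
                    function(j) sum(R[, j] * rowSums(1 - R)) / sum(1 - R))
  data.frame(influx, outflux, row.names = colnames(data))
}

# toy example: one incomplete outcome and two complete covariates
d <- data.frame(y = c(1, NA, 3, NA, 5), x1 = 1:5, x2 = 6:10)
flux_by_hand(d)
```

With a single incomplete variable, as in our dataset, the complete covariates all get outflux 1 (each can help impute the outcome) and the outcome gets outflux 0, matching the flux table below.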

```r
# Examine Flux Patterns  -------------------------------------

round(fico(BP_miss), 1)[2:6]
#> BaselineDBP       Group         Age           Z          R1
#>         0.5         0.5         0.5         0.5         0.5

ztable(flux_temp)
```

|             | POBS | Influx | Outflux | AINB | AOUT | FICO |
|-------------|------|--------|---------|------|------|------|
| PostDBP     | 0.51 | 0.45   | 0       | 1    | 0.00 | 0.00 |
| BaselineDBP | 1.00 | 0.00   | 1       | 0    | 0.08 | 0.49 |
| Group       | 1.00 | 0.00   | 1       | 0    | 0.08 | 0.49 |
| Age         | 1.00 | 0.00   | 1       | 0    | 0.08 | 0.49 |
| Z           | 1.00 | 0.00   | 1       | 0    | 0.08 | 0.49 |
| R1          | 1.00 | 0.00   | 1       | 0    | 0.08 | 0.49 |
| R2          | 1.00 | 0.00   | 1       | 0    | 0.08 | 0.49 |

## Exploratory Analyses

Next, we start off with some preliminary analyses to eyeball differences between the observed data and the missing data.
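A quick way to eyeball such differences is to compare covariate summaries between rows whose outcome is observed and rows whose outcome is missing. A small self-contained sketch (the variable names mirror this post's dataset, but the data below are freshly simulated for illustration):

```r
# Compare covariate distributions by missingness of the outcome.
set.seed(42)
d <- data.frame(PostDBP     = ifelse(runif(500) < 0.5, NA, rnorm(500, 138, 5)),
                Age         = rnorm(500, 50, 7),
                BaselineDBP = rnorm(500, 117, 2))

miss <- is.na(d$PostDBP)

# Covariate means among rows with observed vs missing outcomes;
# large systematic gaps would argue against MCAR.
by_miss <- sapply(d[, c("Age", "BaselineDBP")],
                  function(v) tapply(v, miss, mean))
by_miss
```

Formal tests of MCAR exist, but simple group-wise summaries like this are often the most transparent first step.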


Having explored these data, we quickly fit a preliminary model to the incomplete dataset using the rlm() function from the MASS package and glance at what our estimates look like.

```r
# Exploratory Model Estimates  -------------------------------------

eda_1 <- rlm(PostDBP ~ factor(Group) + BaselineDBP +
               Age + Z + R1 + R2, data = BP_miss,
             method = "MM", psi = psi.huber)

ztable(summary(eda_1)[["coefficients"]])
```

|                | Value | Std. Error | t value |
|----------------|-------|------------|---------|
| (Intercept)    | 2.10  | 5.78       | 0.36    |
| factor(Group)1 | 9.58  | 0.13       | 71.57   |
| BaselineDBP    | 0.98  | 0.05       | 19.86   |
| Age            | 0.34  | 0.01       | 34.33   |
| Z              | -0.93 | 1.48       | -0.62   |
| R1             | -0.03 | 0.07       | -0.43   |
| R2             | -0.13 | 0.13       | -0.94   |

```r
ztable(performance(eda_1))
```

| AIC    | BIC    | RMSE | Sigma |
|--------|--------|------|-------|
| 749.22 | 777.55 | 1.02 | 1.03  |

Because the MASS::rlm() function automatically deletes missing observations and does a complete-case analysis, these estimates reflect what we would obtain if we assumed the data are missing completely at random (MCAR), which is almost always unrealistic.
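This silent row deletion is easy to verify. A small sketch using base R's lm(), which, like MASS::rlm(), falls back on the default na.action (na.omit):

```r
# Demonstrate that model fitting silently drops incomplete rows
# (default na.action = na.omit), i.e. a complete-case analysis.
set.seed(7)
d <- data.frame(y = rnorm(100), x = rnorm(100))
d$y[1:30] <- NA                    # 30 incomplete rows

fit <- lm(y ~ x, data = d)
nobs(fit)    # 70: only the complete cases were used
nrow(d)      # 100
```

Checking `nobs()` against `nrow()` after fitting is a cheap way to catch unintended complete-case analyses.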

It is almost always more realistic to assume that the data are missing at random (MAR), $$\operatorname{Pr}\left(R=0 \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \psi\right)=\operatorname{Pr}\left(R=0 \mid Y_{\mathrm{obs}}, \psi\right)$$. This can often be a reasonable assumption to start off with for the primary analysis. (20)

## The Analysis Workflow

For our main analysis, we will build an imputation model to impute our missing data. We include all available information in the imputation model, meaning all the variables in our dataset, impute the missing data using the predictive mean matching method, and then analyze each completed dataset and pool the results. Our basic workflow will look like this:

We do this with a fully adjusted model containing all relevant covariates, and with a reduced model containing just the treatment group, because we want both the adjusted and unadjusted estimates. We will then compare these models using the likelihood-ratio test (21) to see whether there are any substantial differences between the fully adjusted and reduced models.
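The imputation itself will be done with dedicated tooling, but the impute-analyze-pool logic is worth seeing in miniature. Below is a self-contained base-R sketch of Rubin's rules, with stochastic regression imputation standing in for predictive mean matching; it also ignores imputation-model parameter uncertainty, which proper multiple imputation would propagate as well:

```r
# Miniature multiple imputation with Rubin's rules (schematic):
# 1) impute the missing outcome m times with a noisy regression draw,
# 2) fit the analysis model to each completed dataset,
# 3) pool: qbar = mean of estimates, T = ubar + (1 + 1/m) * b.
set.seed(2023)
n <- 300
x <- rnorm(n)
y <- 2 + 1.5 * x + rnorm(n)
y[sample(n, 90)] <- NA
miss <- is.na(y)

m <- 20
ests <- vars <- numeric(m)
for (i in seq_len(m)) {
  imp_fit <- lm(y ~ x, subset = !miss)
  y_imp <- y
  y_imp[miss] <- predict(imp_fit, newdata = data.frame(x = x[miss])) +
    rnorm(sum(miss), 0, summary(imp_fit)$sigma)   # add residual noise
  fit <- lm(y_imp ~ x)
  ests[i] <- coef(fit)["x"]
  vars[i] <- vcov(fit)["x", "x"]
}

qbar <- mean(ests)                  # pooled point estimate
ubar <- mean(vars)                  # within-imputation variance
b    <- var(ests)                   # between-imputation variance
total_var <- ubar + (1 + 1 / m) * b # Rubin's total variance
```

The between-imputation component `b` is exactly what a single-imputation analysis throws away, which is why single imputation understates uncertainty.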

## Monte Carlo Error Analysis

However, before we can move on to this step, we must determine how many imputations to construct. To many, this may seem like a trivial, even nonsensical task, since anyone can impute a large number of datasets, and it may seem intuitive that more is better. This is why we must conduct a Monte Carlo error analysis.

Multiple imputation is a stochastic procedure. Each time we reimpute our data, we get different sets of imputations because of the randomness of the imputation step, and therefore we get different multiple-imputation estimates. However, we want to be able to reproduce MI results. Of course, we can always set the random-number seed to ensure reproducibility by obtaining the same imputed values. However, what if we use a different seed? Would we not want our results to be similar regardless of what seed we use? This leads us to a notion we call statistical reproducibility—we want results to be similar across repeated uses of the same imputation procedure; that is, we want to minimize the simulation error associated with our results.

To assess the level of simulation error, White, Royston, and Wood (2011) propose using the Monte Carlo error of the MI results, defined as the standard deviation of the results across repeated runs of the same imputation procedure on the same data. The authors suggest evaluating Monte Carlo error estimates not only for parameter estimates but also for other statistics, including p-values and confidence intervals, as well as MI statistics such as the relative variance increase (RVI) and the fraction of missing information (FMI).
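This idea can be illustrated with a deliberately crude sketch (Python, toy data; the resampling "imputation" below is a stand-in for a real imputation model, not what mice does): rerun the same stochastic MI procedure many times and take the standard deviation of the pooled estimate across runs. More imputations per run should shrink that simulation error:

```python
import numpy as np

def mi_estimate(y, miss, m, rng):
    """Toy MI: fill missing cells by resampling observed values, estimate the
    mean on each completed dataset, and average the m estimates (Rubin's rule
    for a scalar point estimate)."""
    obs = y[~miss]
    ests = []
    for _ in range(m):
        y_imp = y.copy()
        y_imp[miss] = rng.choice(obs, size=miss.sum(), replace=True)
        ests.append(y_imp.mean())
    return float(np.mean(ests))

rng = np.random.default_rng(1031)
y = rng.normal(135, 4.5, 500)
miss = rng.random(500) < 0.3

# Monte Carlo error: SD of the pooled estimate across 50 repeated runs of the
# SAME procedure on the SAME data, for small versus large m
runs_small = [mi_estimate(y, miss, m=5, rng=np.random.default_rng(s)) for s in range(50)]
runs_large = [mi_estimate(y, miss, m=100, rng=np.random.default_rng(s)) for s in range(50)]

mce_small = float(np.std(runs_small))
mce_large = float(np.std(runs_large))  # smaller: more imputations per run
```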

The issue is that, without careful thought about how many imputations a particular study may need, one runs the risk of imputing too few datasets and losing information, or of wasting resources imputing an unnecessarily large number. The latter may seem harmless, but as many authors have argued, it often yields no advantage whatsoever and only incurs costs.

The argument is that “the additional resources that would be required to create and store more than a few imputations would not be well spent” (Schafer 1997, 107), and “in most situations there is simply little advantage to producing and analyzing more than a few imputed datasets” (Schafer and Olsen 1998, 549).

Therefore, we will examine certain characteristics of our dataset with missing values and use them to determine how many datasets we should impute to achieve our goals efficiently. We will base this on a number of characteristics, such as the fraction of missing information, the proportion of missing observations, and the losses we are willing to incur.

We first start by running a “dry” or naive imputation of our data to quickly examine some of these characteristics. This imputed dataset will not be used for our primary analysis; it simply serves to guide our main imputation model. This is also why we have set the number of iterations to 0.

Once again, we will include all the variables in our dataset, and use the predictive mean matching method with approximately 5 donors.

```r
# Dry Imputation -------------------------------------

form <- list(PostDBP ~ factor(Group) + BaselineDBP + Age + Z + R1 + R2)
pred <- make.predictorMatrix(BP_miss)

parlmice(BP_miss, method = "pmm", m = 20,
         maxit = 0, cores = cores, donors = 5,
         cluster.seed = 1031, formulas = form, ridge = 0,
         predictorMatrix = pred, remove.collinear = TRUE,
         remove.constant = TRUE, allow.na = TRUE,
         printFlag = FALSE, visitSequence = "monotone") -> init_imp
```

```r
head(init_imp$loggedEvents, 10)
#> NULL
```

Now that we have imputed our dataset, we may examine it for any issues before moving on to the next step of analyzing it. Although this is only a dry run, there appear to be no issues with it. We now pool our estimates using Rubin’s rules.

```r
init_res <- mice::pool(with(init_imp,
    rlm(PostDBP ~ factor(Group) + BaselineDBP + Age + Z + R1 + R2,
        method = "MM", psi = psi.huber)))

init_res_sum <- summary(init_res)
colnames(init_res_sum) <- c("Term", "Estimate", "SE",
                            "Statistic", "df", "P-val")
ztable(init_res_sum)
```

| Term           | Estimate | SE    | Statistic | df     | P-val |
|----------------|----------|-------|-----------|--------|-------|
| (Intercept)    | 28.20    | 18.38 | 1.53      | 44.89  | 0.13  |
| factor(Group)1 | 8.23     | 0.41  | 20.09     | 58.98  | 0.00  |
| BaselineDBP    | 0.78     | 0.15  | 5.16      | 49.08  | 0.00  |
| Age            | 0.28     | 0.04  | 8.00      | 36.72  | 0.00  |
| Z              | 0.71     | 3.99  | 0.18      | 112.11 | 0.86  |
| R1             | 0.02     | 0.18  | 0.13      | 136.46 | 0.90  |
| R2             | 0.02     | 0.38  | 0.07      | 80.20  | 0.95  |

With these pooled results in hand, we can quickly examine certain characteristics such as the fraction of missing information (FMI), the between-imputation variance, and the within-imputation variance. We focus on these particular attributes because they are crucial for several tasks in any statistical analysis plan, such as ensuring adequate power/precision, minimum long-run coverage, etc.
We use the FMI from the dry imputation and a number of other criteria to determine how many imputations we need:

- Some have suggested taking the FMI and multiplying it by 100 to obtain the number of imputed datasets needed, $m \approx 100 \lambda$.
- For our goals, we wish to ensure that the Monte Carlo standard error from the imputations is less than 10% of the between-imputation standard error, $\mathrm{MCSE} < 0.10 \times \widehat{\mathrm{SE}}_{B}$.
- We wish to minimize the Monte Carlo error, obtained by dividing the fraction of missing information by the chosen number of imputed datasets, so that $\frac{FMI}{m} \approx 0.01$.
- We also wish to minimize the total variance so that its square root is no greater than the ideal variance and the corresponding confidence-interval width, $T_{m}=\left(1+\frac{\gamma_{0}}{m}\right) T_{\infty}$.
- Furthermore, missing-data experts commonly advocate imputing a number of datasets similar to the percentage of missing observations, $\gamma_{0} \times 100$.

We start by extracting the FMIs from our dry imputation and calculating all these numbers.

```r
# Calculate Fraction of Missing Information

fmi <- max(summary(init_res, type = c("all"),
                   conf.int = TRUE, conf.level = 0.95)["fmi"])

m <- fmi / 0.01                                 # FMI Imputations
mcse <- sqrt(max(init_res$pooled$b) / m)        # MCSE
se <- max((init_res$pooled$ubar) / sqrt(500))   # SE
```

What we find is that the fraction of missing information is nearly 50%, which is very high, and that in order to keep the Monte Carlo error below 0.01, we would need a minimum of at least 50 imputed datasets. We also calculate both the Monte Carlo standard error and the standard error from the dry run and find that they are practically equivalent.
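As a quick arithmetic check on these rules of thumb (a Python sketch; the FMI value of 0.5 is the approximate figure quoted from the dry imputation, and `relative_efficiency` is a hypothetical helper implementing Rubin's efficiency formula):

```python
# Rules of thumb, plugging in an FMI of roughly 0.5
fmi = 0.5

m_rule1 = round(100 * fmi)   # m ~= 100 * lambda          -> 50 datasets
m_rule2 = fmi / 0.01         # FMI / m ~= 0.01  =>  m ~= FMI / 0.01

def relative_efficiency(fmi, m):
    """Rubin's relative efficiency of m imputations versus infinitely many."""
    return 1 / (1 + fmi / m)

# Even at m = 50, efficiency is already ~99%; the question is simulation
# error and reproducibility, not efficiency alone
eff = relative_efficiency(fmi, 50)
```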
However, we must also explore other possible numbers of imputed datasets to reduce the Monte Carlo errors, so we run a quick function to do that, checking the effect of the number of imputed datasets on the width of the confidence interval, which is directly tied to the Monte Carlo error. Indeed, von Hippel (2018) proposed a relationship between the fraction of missing information and the number of imputations needed to achieve a targeted CI length or Monte Carlo error using a quadratic formula,

$$m=1+\frac{1}{2}\left(\frac{\gamma_{0}}{\operatorname{SD}\left(\sqrt{U_{\ell}}\right) / \mathrm{E}\left(\sqrt{U_{\ell}}\right)}\right)^{2}$$

where $$\operatorname{SD}\left(\sqrt{U_{\ell}}\right) / \mathrm{E}\left(\sqrt{U_{\ell}}\right)$$ is the coefficient of variation (CV) summarizing the imputation variation in the SE estimates.

We can construct a function to depict this relationship graphically and calculate how many imputations we will need for a desired error, and further display the distribution of this relationship by varying these parameters. In the function below, adapted from von Hippel and van Buuren, (20) I vary both the coefficient of variation and the $$\alpha$$-level. With this function, we can graphically examine the effects of different parameter values on the Monte Carlo errors from the imputations: as we aim for smaller Monte Carlo errors, we need many more imputations to achieve our desired target. Now that I have constructed and graphed three differing relationships between Monte Carlo errors and the number of imputations needed to achieve them, varying the $$\alpha$$-levels, we can move on to choosing a specific number of imputations to construct.
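The quadratic rule itself is easy to compute directly. The helper below is a hypothetical Python sketch of just that formula, not the `powerimp()` function used later (which also varies the $$\alpha$$-level):

```python
import math

def imputations_needed(fmi, cv):
    """von Hippel's quadratic rule: the number of imputations m such that the
    coefficient of variation of the pooled SE is about `cv`:
        m = 1 + (1/2) * (fmi / cv)**2
    rounded up to the next integer."""
    return math.ceil(1 + 0.5 * (fmi / cv) ** 2)

# With an FMI of 0.5, halving the target CV roughly quadruples m
m_loose = imputations_needed(0.5, 0.10)
m_tight = imputations_needed(0.5, 0.05)
```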
If I choose an $$\alpha$$ level of 5%, which is the maximum tolerable type-I error rate, and a coefficient of variation of 5%:

```r
# Estimate Number of Imputations -------------------------------------

powerimp(model = init_res, cv = 0.10, alpha = 0.05)
#> [1] 32
powerimp(model = init_res, cv = 0.05, alpha = 0.05)
#> [1] 124
powerimp(model = init_res, cv = 0.01, alpha = 0.05)
#> [1] 3057
```

I need to construct approximately 124 imputations to achieve this coefficient of variation, which is what I will use for all the analyses from here on; this is clearly far more reasonable than trying to impute 3000 datasets to achieve a tiny standard error. However, I want to verify my calculations by running a similar estimation command that Stata provides. The documentation says the following regarding RVI and FMI:

> Returning to the output, average RVI reports the average relative increase (averaged over all coefficients) in variance of the estimates because of the missing bmi values. A relative variance increase is an increase in the variance of the estimate because of the loss of information about the parameter due to nonresponse, relative to the variance of the estimate with no information lost. The closer this number is to zero, the less effect missing data have on the variance of the estimate. Note that the reported RVI will be zero if you use mi estimate with the complete data or with missing data that have not been imputed. In our case, average RVI is small: 0.0312.
>
> Largest FMI reports the largest of all the FMI about coefficient estimates due to nonresponse. This number can be used to get an idea of whether the specified number of imputations is sufficient for the analysis. A rule of thumb is that M ≥ 100 × FMI provides an adequate level of reproducibility of MI analysis.
> In our example, the largest FMI is 0.14 and the number of imputations, 20, exceeds the required number of imputations: 14.

Although I cannot get the required number of imputations directly from a command, I can run the `mi estimate` command and look at the overall fraction of missing information as the number of imputations increases. If my previous calculations are consistent, I should look to increase the number of imputations until I achieve an FMI of 1.24 or less (124/100):

```stata
clear
quietly import delimited "~/Dropbox/LessLikely/static/datasets/temp.csv", numericcols(2 3 5 6 7 8)
quietly mi set mlong
quietly mi register imputed baselinedbp postdbp age z group r1 r2
quietly mi impute chained (pmm, knn(5)) postdbp = baselinedbp age z r1 r2 group, burnin(5) add(120) rseed(1031) savetrace(trace1, replace) nomonotone
quietly mi estimate, saving(miest, replace): rreg postdbp i.group baselinedbp age z r1 r2

mi estimate using miest, mcerror level(95)
#> Multiple-imputation estimates               Imputations   = 120
#> Robust regression                           Number of obs = 500
#>                                             Average RVI   = 1.8029
#>                                             Largest FMI   = 0.7303
#>                                             Complete DF   = 493
#> DF adjustment:  Small sample                DF:  min      = 84.56
#>                                                  avg      = 128.88
#>                                                  max      = 203.85
#> Model F test:   Equal FMI                   F(6, 384.2)   = 1110.26
#>                                             Prob > F      = 0.0000
#>
#> ------------------------------------------------------------------------------
#>      postdbp | Coefficient  Std. err.      t    P>|t|    [95% conf. interval]
#> -------------+----------------------------------------------------------------
#>      1.group |   10.44074   .1520769    68.65   0.000    10.13944    10.74203
#>              |   .0109798   .0055144     2.51   0.000    .0160005    .0153271
#>              |
#>  baselinedbp |   .9523791   .0543838    17.51   0.000    .8446284     1.06013
#>              |   .0039314   .0024113     0.80   0.000    .0067288     .005777
#>              |
#>          age |   .3235129   .0085091    38.02   0.000    .3067357      .34029
#>              |   .0004972   .0002392     1.09   0.000    .0007452    .0006326
#>              |
#>            z |  -2.462747   1.615398    -1.52   0.130    -5.65957    .7340747
#>              |   .1129886      .0628     0.08   0.021     .185614    .1527596
#>              |
#>           r1 |  -.3478846   .0880845    -3.95   0.000   -.5230333   -.1727359
#>              |   .0068187   .0048793     0.24   0.000    .0113975    .0126994
#>              |
#>           r2 |   -.161272   .1375066    -1.17   0.243    -.432985    .1104409
#>              |   .0091064   .0055671     0.09   0.036    .0126149    .0160835
#>              |
#>        _cons |   6.669336   6.373256     1.05   0.298   -5.956795    19.29547
#>              |   .4596082   .2802484     0.08   0.037    .6777396    .7795988
#> ------------------------------------------------------------------------------
#> Note: Values displayed beneath estimates are Monte Carlo error estimates.

mi estimate, saving(mist, replace) vartable mcerror level(95) nocitable: rreg postdbp i.group baselinedbp age z r1 r2
#> Multiple-imputation estimates               Imputations = 120
#> Robust regression
#>
#> Variance information
#> ------------------------------------------------------------------------------
#>              |        Imputation variance                         Relative
#>              |    Within   Between     Total      RVI      FMI   efficiency
#> -------------+----------------------------------------------------------------
#>      1.group |    .00854   .014467   .023127  1.70812  .635644     .994731
#>              |   .000042   .001657   .001673  .195715  .027105     .000225
#>              |
#>  baselinedbp |   .001087   .001855   .002958  1.71986  .637245     .994718
#>              |   5.4e-06   .000257    .00026  .236739  .033065     .000275
#>              |
#>          age |   .000042    .00003   .000072   .70416  .416521     .996541
#>              |   2.1e-07   4.0e-06   4.1e-06  .095026   .03363     .000281
#>              |
#>            z |   1.06477   1.53197   2.60951  1.45077  .596664     .995052
#>              |   .005257    .19959   .201823  .188539  .032247     .000268
#>              |
#>           r1 |   .002133   .005579   .007759  2.63749  .730277     .993951
#>              |   .000011   .000845   .000853  .398903  .031103     .000258
#>              |
#>           r2 |   .008874   .009951   .018908  1.13072   .53497     .995562
#>              |   .000044   .001497   .001517  .169478  .039089     .000326
#>              |
#>        _cons |   15.0584   25.3488   40.6184  1.69739  .634168     .994743
#>              |    .07434   3.50192   3.54812  .232936  .033051     .000275
#> ------------------------------------------------------------------------------
#> Note: Values displayed beneath estimates are Monte Carlo error estimates.
```

## The Primary Analysis

For the primary analysis, I will use the predictive mean matching method and specify 5 donors (based on a number of simulation studies that have examined the effect of the number of donors specified), (22) and fit a robust regression that includes all the variables in the dataset. Our imputation model will be

$$Y^{\operatorname{Post}} = \beta_{0} + \beta_{1}^{\operatorname{Group}} + \beta_{2}^{\operatorname{Base}} + \beta_{3}^{\operatorname{Age}} + \beta_{4}^{\operatorname{Z}} + \beta_{5}^{\operatorname{R1}} + \beta_{6}^{\operatorname{R2}} + \epsilon$$

So we will now impute our datasets.
```r
# Impute Missing Data -------------------------------------

form <- list(PostDBP ~ factor(Group) + BaselineDBP + Age + Z + R1 + R2)
pred <- make.predictorMatrix(BP_miss)

parlmice(BP_miss, method = "pmm", predictorMatrix = pred,
         m = 120, maxit = 5, cluster.seed = 1031,
         n.core = cores, n.imp.core = (m / cores),
         formulas = form, ridge = 0, cl.type = "FORK",
         donors = 5, remove.collinear = TRUE,
         remove.constant = TRUE, allow.na = TRUE,
         printFlag = FALSE, visitSequence = "monotone") -> imp1

head(imp1$loggedEvents, 10)
#> NULL
```

Now that we have imputed our datasets, we may examine them for any issues before moving on to the next step of analyzing them.

Our imputed datasets seem to look fine so far. We now fit our models to each of these datasets and combine them using Rubin’s rules.
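For a single coefficient, Rubin's rules can be sketched as follows (a toy Python illustration with made-up numbers, not the mice implementation):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool one coefficient across m completed datasets via Rubin's rules."""
    q = np.asarray(estimates, dtype=float)   # per-imputation point estimates
    u = np.asarray(variances, dtype=float)   # per-imputation squared SEs
    m = len(q)
    qbar = q.mean()                   # pooled point estimate
    ubar = u.mean()                   # within-imputation variance
    b = q.var(ddof=1)                 # between-imputation variance
    t = ubar + (1 + 1 / m) * b        # total variance
    riv = (1 + 1 / m) * b / ubar      # relative variance increase
    lam = (1 + 1 / m) * b / t         # large-sample FMI approximation
    return {"estimate": qbar, "variance": t, "riv": riv, "fmi": lam}

# Hypothetical numbers: three imputations of one coefficient
res = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.04, 0.04])
```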

```r
# Primary Analysis -------------------------------------

analysis1 <- with(imp1, rlm(PostDBP ~ factor(Group) +
                              BaselineDBP + Age + Z + R1 + R2,
                            method = "MM", psi = psi.huber))
analysis2 <- with(imp1, rlm(PostDBP ~ factor(Group),
                            method = "MM", psi = psi.huber))

anova(analysis1, analysis2, method = "D3", use = "LR")
#>    test statistic df1     df2 dfcom p.value      riv
#>  1 ~~ 2         0   5 317.823   493       1 219.4285

result1 <- mice::pool(analysis1)

results1_sum <- summary(result1)
colnames(results1_sum) <- c("Term", "Estimate", "SE",
                            "Statistic", "df", "P-val")
ztable(results1_sum)
```
| Term           | Estimate | SE   | Statistic | df     | P-val |
|----------------|----------|------|-----------|--------|-------|
| (Intercept)    | 1.48     | 5.52 | 0.27      | 137.47 | 0.79  |
| factor(Group)1 | 9.52     | 0.12 | 76.85     | 183.00 | 0.00  |
| BaselineDBP    | 0.99     | 0.05 | 20.90     | 136.15 | 0.00  |
| Age            | 0.33     | 0.01 | 31.75     | 112.55 | 0.00  |
| Z              | -1.27    | 1.41 | -0.90     | 170.17 | 0.37  |
| R1             | -0.02    | 0.08 | -0.24     | 104.56 | 0.81  |
| R2             | -0.13    | 0.14 | -0.92     | 116.71 | 0.36  |

While we have our primary analysis estimates, we also continue to examine our imputations for any anomalies.

We’ve done our primary analysis, in which we imputed the missing data under the assumption of MAR, which is often a reasonable starting assumption. Unfortunately, we cannot verify whether this assumption is true, but we can vary it and assume instead that the data are missing not at random (MNAR), $$\operatorname{Pr}\left(R=0 \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \psi\right)$$, so that for the individuals with missing data, we can assume different numbers than for those with complete data. Our likelihood-ratio test also suggests that there is very little difference between the full model and the reduced model, so we will generally go with the full model.

## SA I: Controlled MI

Now suppose we wish to handle these missing data under the assumption of MNAR. We would then be conducting a sensitivity analysis, because we are still targeting the same estimand and only varying the assumptions; the question and the target remain the same. To handle missing data that we assume are MNAR, there are a number of different, and complex, approaches.

We start off with the $$\delta$$-adjustment, controlled multiple imputation method, (23) in which we assume that the group with missing observations differs systematically from the group with complete observations by a certain quantity. We produce a range of these quantities, add or subtract them from the imputed values, and then conduct our analysis on each adjusted, imputed dataset. Here I will assume that the average PostDBP in the group with missing observations differs from that in the group with complete observations by anywhere from −20 mmHg to 20 mmHg.
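Conceptually, the adjustment is simple; here is a toy Python sketch with simulated stand-in data (not the trial dataset): shift only the imputed cells by $$\delta$$, leave the observed cells untouched, and re-analyze across a grid of $$\delta$$ values.

```python
import numpy as np

rng = np.random.default_rng(1031)

# Toy stand-ins: an outcome vector already imputed under MAR, plus an
# indicator of which entries were originally missing
post_imputed = rng.normal(135, 4.5, 500)
was_missing = rng.random(500) < 0.3

def delta_adjust(y_imputed, missing_mask, delta):
    """Shift ONLY the imputed cells by delta; observed cells are untouched."""
    y = y_imputed.copy()
    y[missing_mask] += delta
    return y

deltas = np.arange(-20, 25, 5)  # same grid as the analysis: -20 to 20 by 5
shifted_means = [delta_adjust(post_imputed, was_missing, d).mean()
                 for d in deltas]
# The delta = 0 case reproduces the MAR analysis exactly
```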

```r
# Controlled MI -------------------------------------

ini <- mice(BP_miss, method = "pmm", maxit = 0,
            predictorMatrix = pred)

ini$method["BaselineDBP"] <- ""
ini$method["PostDBP"] <- "pmm"
ini$method["Age"] <- ""
ini$method["R1"] <- ""
ini$method["Group"] <- ""
ini$method["R2"] <- ""
ini$method["Z"] <- ""
meth <- ini$method

delta <- seq(-20, 20, 5)

imp.all <- vector("list", length(delta))
post <- ini$post

for (i in 1:length(delta)) {
  d <- delta[i]
  cmd <- paste("imp[[j]][, i] <- imp[[j]][, i] +", d)
  post["PostDBP"] <- cmd
  imp <- mice(BP_miss, method = meth, predictorMatrix = pred,
              m = 5, maxit = 1, donors = 5, cores = cores,
              ridge = 0, formulas = form,
              remove.collinear = TRUE, remove.constant = TRUE,
              allow.na = TRUE, post = post, printFlag = FALSE)
  imp.all[[i]] <- imp
}
```

Now that we have our imputed datasets, modified by the $$\delta$$-adjustments, we can fit our models to each imputed dataset and combine the estimates using Rubin’s rules.

```r
# Sensitivity Analysis 1 -------------------------------------

output <- sapply(imp.all, function(x)
  mice::pool(with(x, rlm(PostDBP ~ factor(Group) + BaselineDBP +
                           Age + Z + R1 + R2,
                         method = "MM", psi = psi.huber))),
  simplify = "array", USE.NAMES = TRUE)

a <- (as.data.frame(t(output)))$pooled

r <- array(rep(NA, length(delta) * 7),
           dim = c(length(delta), 7),
           dimnames = list(c(delta),
                           c("Intercept", "Group",
                             "BaselineDBP", "Age",
                             "Z", "R1", "R2")))
for (i in 1:length(delta)) {
  r[i, ] <- cbind(a[[i]][["estimate"]])
}

r <- as.data.frame(r)
r <- data.frame(Delta = row.names(r), r, row.names = NULL)
ztable(r)
```
| Delta | Intercept | Group | BaselineDBP | Age  | Z     | R1    | R2    |
|-------|-----------|-------|-------------|------|-------|-------|-------|
| -20   | -128.61   | 11.91 | 2.03        | 0.30 | -3.40 | 0.15  | -1.28 |
| -15   | -64.87    | 10.68 | 1.50        | 0.31 | -2.08 | 0.08  | -0.69 |
| -10   | -35.64    | 10.14 | 1.27        | 0.32 | -1.51 | 0.05  | -0.49 |
| -5    | -15.15    | 9.78  | 1.11        | 0.32 | -1.09 | 0.03  | -0.37 |
| 0     | 2.07      | 9.47  | 0.98        | 0.33 | -0.80 | -0.01 | -0.23 |
| 5     | 21.74     | 9.12  | 0.83        | 0.33 | -0.53 | -0.02 | -0.13 |
| 10    | 42.45     | 8.76  | 0.68        | 0.34 | -0.11 | -0.04 | -0.01 |
| 15    | 72.14     | 8.22  | 0.44        | 0.34 | 0.45  | -0.07 | 0.20  |
| 20    | 134.23    | 7.06  | -0.08       | 0.35 | 1.71  | -0.13 | 0.78  |

The table above gives the various $$\delta$$’s (the first column, with values ranging from −20 to 20 in increments of 5) that were added to the multiply imputed datasets, and the corresponding estimates from the models fit to those datasets. We have now conducted our first sensitivity analysis using the $$\delta$$-adjustment, controlled multiple imputation method. This form of sensitivity analysis is also typically what is used and preferred by regulators. (23)

> Regulators prefer simple methods that impute the missing outcomes under MAR, and then add an adjustment δ to the imputes, while varying δ over a plausible range and independently for each treatment group (Permutt 2016).
>
> The most interesting scenarios will be those where the difference between the δ’s correspond to the size of the treatment effect in the completers. Contours of the p-values may be plotted on a graph as a function of the δ’s to assist in a tipping-point analysis (Liublinska and Rubin 2014).

And as expected, lower $$\delta$$ values gave us results that were consistent with the primary analysis, while the upper end of the extreme $$\delta$$ values gave us estimates that were substantially off the mark.
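A tipping-point scan over the $$\delta$$ grid can be sketched as follows (Python; the Group estimates are taken from the table above, while the constant standard error of 0.4 is a hypothetical value for illustration only):

```python
def tipping_point(deltas, estimates, ses, z=1.96):
    """Scan a delta grid and return the first delta whose 95% CI for the
    treatment effect includes zero, or None if the conclusion never tips."""
    for d, est, se in zip(deltas, estimates, ses):
        if est - z * se <= 0 <= est + z * se:
            return d
    return None

# Group-effect estimates from the delta table, with a hypothetical SE of 0.4
deltas = list(range(-20, 25, 5))
effects = [11.91, 10.68, 10.14, 9.78, 9.47, 9.12, 8.76, 8.22, 7.06]
ses = [0.4] * len(effects)

tp = tipping_point(deltas, effects, ses)  # None: no tipping point on this grid
```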

Next, we quickly examine the state of a random sample of these imputations to make sure they’re reliable.


Now we move onto our next and final sensitivity analysis.

## SA II: Selection Models

To further explore the MNAR assumption, we could also use a Heckman selection model, $$P(Y,R)=P(Y)P(R|Y)$$, as described by van Buuren. (20)

The selection model multiplies the marginal distribution $$P(Y)$$ in the population with the response weights $$P(R|Y)$$. Both $$P(Y)$$ and $$P(R|Y)$$ are unknown, and must be specified by the user. The model where $$P(Y)$$ is normal and where $$P(R|Y)$$ is a probit model is known as the Heckman model. This model is widely used in economics to correct for selection bias.
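A minimal sketch of the key ingredient in Heckman's two-step estimator is the inverse Mills ratio, the correction term appended as an extra regressor in the outcome model, evaluated at the probit selection index (pure-Python illustration; this is not what the miceMNAR implementation below does internally):

```python
import math

def inverse_mills(z):
    """lambda(z) = phi(z) / Phi(z): the ratio of the standard normal density
    to its CDF, evaluated at the probit selection index z. In Heckman's
    two-step estimator, lambda(z_i) is added as a regressor to correct the
    outcome model for selection bias."""
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi / Phi

# The ratio is strictly decreasing: stronger selection (lower z) implies a
# larger correction
ratio_at_zero = inverse_mills(0.0)  # = phi(0) / 0.5 = sqrt(2 / pi)
```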

We can implement this in R using the miceMNAR package, which is an extension to the mice R package.

```r
# Selection Models -------------------------------------

JointModelEq <- generate_JointModelEq(data = BP_miss,
                                      varMNAR = "PostDBP")

JointModelEq[, "PostDBP_var_sel"] <- c(0, 1, 1, 1, 1, 1, 1)
JointModelEq[, "PostDBP_var_out"] <- c(0, 1, 1, 1, 0, 1, 1)

arg <- MNARargument(data = BP_miss, varMNAR = "PostDBP",
                    JointModelEq = JointModelEq)

imp2 <- mice(data = arg$data_mod, method = "pmm", seed = 1031,
             predictorMatrix = arg$predictorMatrix,
             JointModelEq = arg$JointModelEq, control = arg$control,
             remove.collinear = TRUE,
             remove.constant = FALSE, allow.na = FALSE,
             printFlag = FALSE, visitSequence = "monotone")
```
```r
head(imp2$loggedEvents, 10)
#> NULL

imp2$data$Group <- as.factor(imp2$data$Group)
```

Now we fit our models to these new datasets and pool them using Rubin’s rules.

```r
# Sensitivity Analysis 2 -------------------------------------

analysis3 <- with(imp2, rlm(PostDBP ~ factor(Group) + BaselineDBP +
                              Age + Z + R1 + R2,
                            method = "MM", psi = psi.huber))
result3 <- mice::pool(analysis3)
results3_sum <- summary(result3)
colnames(results3_sum) <- c("Term", "Estimate", "SE",
                            "Statistic", "df", "P-val")
ztable(results3_sum)
```

| Term           | Estimate | SE   | Statistic | df    | P-val |
|----------------|----------|------|-----------|-------|-------|
| (Intercept)    | -0.46    | 6.28 | -0.07     | 11.29 | 0.94  |
| factor(Group)1 | 9.55     | 0.14 | 66.79     | 14.03 | 0.00  |
| BaselineDBP    | 1.00     | 0.05 | 18.59     | 11.14 | 0.00  |
| Age            | 0.33     | 0.01 | 33.98     | 17.10 | 0.00  |
| Z              | -1.71    | 1.96 | -0.87     | 8.24  | 0.41  |
| R1             | -0.01    | 0.07 | -0.23     | 28.13 | 0.82  |
| R2             | -0.14    | 0.17 | -0.78     | 8.15  | 0.46  |

Now we examine our imputed values; with that, we have completed our second sensitivity analysis. The results from the primary analysis and the two sensitivity analyses seem fairly consistent. In the first sensitivity analysis, where we applied $$\delta$$-adjustments, the coefficients did change drastically as the adjustments became larger; however, such large $$\delta$$-adjustments are not realistic, and the smaller adjustments led to coefficients close to those of the other two analyses.

## Supplementary Analyses

The following analyses are what I mostly consider supplementary analyses: they may further investigate violations of assumptions, but they often go far beyond what would be necessary for a principled sensitivity analysis accompanying a primary analysis. First, I like to check that the results of my analyses are consistent across different statistical software, so I verify that the results of my primary analysis in R are similar to those from another statistical software suite such as Stata.
So in Stata, I would likely run something similar to the following in order to mimic the primary analysis in R.

```stata
import delimited "~/Dropbox/LessLikely/static/datasets/temp.csv", numericcols(2 3 5 6 7 8)
#> (encoding automatically selected: ISO-8859-2)
#> (8 vars, 500 obs)

summarize
#>     Variable |        Obs        Mean    Std. dev.       Min        Max
#> -------------+---------------------------------------------------------
#>           v1 |        500       250.5    144.4818          1        500
#>      postdbp |        209    135.0179    4.550848   125.9651     149.47
#>  baselinedbp |        500    116.9717    1.961848    110.615   123.7551
#>        group |        500        .508    .5004367          0          1
#>          age |        500     49.4683     7.08616   26.96167   70.72639
#> -------------+---------------------------------------------------------
#>            z |        500   -9.29e-11    .0626851   -.209414   .2471141
#>           r1 |        500   -.0715254    .9988079  -3.568279   3.036855
#>           r2 |        500    .9799438    .4913294  -.6787649   2.356054

mi set mlong

mi register imputed baselinedbp postdbp age z group r1 r2
#> (291 m=0 obs now marked as incomplete)

mi impute chained (pmm, knn(5)) postdbp = baselinedbp age z r1 r2 group, burnin(20) add(100) rseed(1031) savetrace(trace1, replace) nomonotone
#> note: missing-value pattern is monotone.
#>
#> Conditional models:
#>     postdbp: pmm postdbp baselinedbp age z r1 r2 group , knn(5)
#>
#> Performing chained iterations ...
#>
#> Multivariate imputation                   Imputations = 100
#> Chained equations                               added = 100
#> Imputed: m=1 through m=100                    updated = 0
#>
#> Initialization: monotone                   Iterations = 2000
#>                                               burn-in = 20
#>
#> postdbp: predictive mean matching
#>
#> ------------------------------------------------------------------
#>                    |               Observations per m
#>                    |----------------------------------------------
#>           Variable |   Complete   Incomplete   Imputed |     Total
#> -------------------+-----------------------------------+----------
#>            postdbp |        209          291       291 |       500
#> ------------------------------------------------------------------
#> (Complete + Incomplete = Total; Imputed is the minimum across m
#>  of the number of filled-in observations.)

mi estimate, mcerror: rreg postdbp i.group baselinedbp age z r1 r2
#> Multiple-imputation estimates               Imputations   = 100
#> Robust regression                           Number of obs = 500
#>                                             Average RVI   = 1.6877
#>                                             Largest FMI   = 0.7159
#>                                             Complete DF   = 493
#> DF adjustment:  Small sample                DF:  min      = 82.46
#>                                                  avg      = 127.69
#>                                                  max      = 191.63
#> Model F test:   Equal FMI                   F(6, 374.8)   = 1171.55
#>                                             Prob > F      = 0.0000
#>
#> ------------------------------------------------------------------------------
#>      postdbp | Coefficient  Std. err.      t    P>|t|    [95% conf. interval]
#> -------------+----------------------------------------------------------------
#>      1.group |      10.46   .1504111    69.54   0.000    10.16183    10.75818
#>              |   .0118239   .0053087     2.46   0.000    .0144955    .0173849
#>              |
#>  baselinedbp |   .9542017   .0509793    18.72   0.000    .8532671    1.055136
#>              |   .0038745    .002412     0.91   0.000    .0063144    .0061611
#>              |
#>          age |   .3229139   .0085025    37.98   0.000    .3061434    .3396845
#>              |   .0005449    .000226     1.02   0.000    .0007102    .0007091
#>              |
#>            z |  -2.622068   1.511717    -1.73   0.085   -5.611361    .3672254
#>              |   .1101344    .054734     0.09   0.016    .1620533    .1500265
#>              |
#>           r1 |   -.360883   .0856236    -4.21   0.000   -.5312012   -.1905648
#>              |   .0071805   .0051837     0.29   0.000     .011663    .0138291
#>              |
#>           r2 |  -.1834333   .1392627    -1.32   0.190   -.4588711    .0920045
#>              |   .0102243   .0053406     0.09   0.030    .0145569    .0151887
#>              |
#>        _cons |   6.511102    5.96798     1.09   0.277    -5.30351    18.32571
#>              |   .4518723   .2785653     0.09   0.040    .7162938    .7298489
#> ------------------------------------------------------------------------------
#> Note: Values displayed beneath estimates are Monte Carlo error estimates.

quietly mi passive: generate byte imputed = _mi_miss

replace imputed = 1 if imputed==.
#> (0 real changes made)

generate float postdbp_Delta_A1=postdbp
#> (291 missing values generated)

replace postdbp_Delta_A1=postdbp + 5 if imputed==1
#> (29,100 real changes made)

mi estimate, vartable mcerror nocitable: rreg postdbp_Delta_A1 i.group baselinedbp age z r1 r2
#> Multiple-imputation estimates               Imputations = 100
#> Robust regression
#>
#> Variance information
#> ------------------------------------------------------------------------------
#>              |        Imputation variance                         Relative
#>              |    Within   Between     Total      RVI      FMI   efficiency
#> -------------+----------------------------------------------------------------
#>      1.group |   .030148   .012176   .042445  .407904  .292089     .997088
#>              |   .000113   .001467   .001491  .049092  .025185     .000253
#>              |
#>  baselinedbp |   .003839   .001444   .005298  .380054  .277606     .997232
#>              |   .000014   .000224   .000228  .058973  .031792     .000319
#>              |
#>          age |    .00015   .000039   .000189  .260187  .207988     .997924
#>              |   5.6e-07   4.9e-06   5.0e-06  .032904  .021009     .000211
#>              |
#>            z |   3.75887   1.19498    4.9658  .321089  .244932     .997557
#>              |   .014082   .153935   .156395  .041361  .024117     .000242
#>              |
#>           r1 |    .00753   .005204   .012786  .698064  .414725      .99587
#>              |   .000028   .000851   .000857  .114422  .041155     .000412
#>              |
#>           r2 |   .031327   .010565   .041997  .340606  .256064     .997446
#>              |   .000117   .001513   .001548   .04863  .027554     .000277
#>              |
#>        _cons |   53.1592   19.7127    73.069  .374531  .274664     .997261
#>              |   .199159   3.03779   3.08237  .057689   .03133     .000315
#> ------------------------------------------------------------------------------
#> Note: Values displayed beneath estimates are Monte Carlo error estimates.
```

As you can see from above, I imported the missing data I generated in R into Stata and used it to multiply impute the datasets with the predictive mean matching method and 5 donors. I included exactly the same variables in the imputation model as in the primary and sensitivity analyses. Just like the primary analysis, I also used robust regression (`rreg` in Stata) to analyze the datasets. The numbers seem fairly consistent with those from the primary analysis and some of the results from the sensitivity analyses.
I will now conduct another supplementary analysis, in which I fit a Bayesian regression model using a t-distribution so that it is analogous to the other robust regression models I have utilized so far, and I will specify a weakly informative prior. I will fit one model using the brm_multiple() function, which fits the model to each of the imputed datasets from before, and another Bayesian model that imputes values during the model-fitting process, and then compare the results. First, though, I fit a median (quantile) regression to the imputed datasets.

```r
# Quantile Regression --------------------------

implist <- mitml::mids2mitml.list(imp1)

vcov.rq <- function(object, ...)
  summary(object, se = "nid", covariance = TRUE)$cov

fit1 <- with(implist, rq(PostDBP ~ factor(Group) + BaselineDBP +
                           Age + Z + R1 + R2, tau = 0.5))

mitml::testEstimates(fit1)
#>
#> Call:
#>
#> mitml::testEstimates(model = fit1)
#>
#> Final parameter estimates and inferences obtained from 64 imputed data sets.
#>
#>                 Estimate Std.Error   t.value        df   P(>|t|)       RIV       FMI
#> (Intercept)        1.065     6.672     0.160   259.554     0.873     0.971     0.497
#> factor(Group)1     9.422     0.148    63.538   449.373     0.000     0.599     0.377
#> BaselineDBP        0.988     0.057    17.323   256.967     0.000     0.981     0.499
#> Age                0.335     0.011    29.672   281.245     0.000     0.899     0.477
#> Z                 -1.256     1.754    -0.716   310.333     0.475     0.820     0.454
#> R1                -0.035     0.093    -0.377   194.477     0.707     1.321     0.574
#> R2                -0.120     0.166    -0.726   210.404     0.469     1.208     0.551
#>
#> Unadjusted hypothesis test as appropriate in larger samples.

confint(mitml::testEstimates(fit1))
#>                      2.5 %     97.5 %
#> (Intercept)    -12.0729803 14.2025254
#> factor(Group)1   9.1309200  9.7137920
#> BaselineDBP      0.8753388  1.0998747
#> Age              0.3129159  0.3573836
#> Z               -4.7067800  2.1956221
#> R1              -0.2195606  0.1491107
#> R2              -0.4464756  0.2060832
```

# Bayesian Regression  --------------------------

BP_miss$Group <- as.factor(BP_miss$Group)

brm_form2 <- bf(PostDBP | mi() ~ Group  + BaselineDBP +
Age + Z + R1 + R2)

prior1 <- c(prior(normal(0, 100), class = Intercept),
prior(normal(0, 100), class = b),
prior(normal(0, 100), class = sigma))

b_2 <- brm(brm_form2, data = BP_miss, prior = prior1,
silent = 2, family = student(), threads = 8,
iter = 6000, refresh = 0, backend = "cmdstanr",
warmup = 2000, chains = 2,
cores = cores, seed = 1031)
#> Running MCMC with 2 chains, at most 8 in parallel, with 8 thread(s) per chain...
#>
#> Chain 2 finished in 41.2 seconds.
#> Chain 1 finished in 50.9 seconds.
#>
#> Both chains finished successfully.
#> Mean chain execution time: 46.1 seconds.
#> Total execution time: 51.0 seconds.

(post <- summary(b_2))
#>  Family: student
#>   Links: mu = identity; sigma = identity; nu = identity
#> Formula: PostDBP | mi() ~ Group + BaselineDBP + Age + Z + R1 + R2
#>    Data: BP_miss (Number of observations: 500)
#>   Draws: 2 chains, each with iter = 6000; warmup = 2000; thin = 1;
#>          total post-warmup draws = 8000
#>
#> Population-Level Effects:
#>             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> Intercept       1.31      5.68    -9.69    12.27 1.00     4703     5621
#> Group1          9.58      0.13     9.33     9.84 1.00     6048     5994
#> BaselineDBP     0.99      0.05     0.89     1.08 1.00     4696     5683
#> Age             0.33      0.01     0.31     0.35 1.00     5906     5532
#> Z              -0.99      1.49    -3.89     1.98 1.00     5492     5620
#> R1             -0.02      0.07    -0.17     0.12 1.00     5909     5905
#> R2             -0.14      0.13    -0.40     0.12 1.00     6208     6332
#>
#> Family Specific Parameters:
#>       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
#> sigma     0.99      0.05     0.89     1.10 1.00     4290     4827
#> nu       25.87     13.86     8.21    60.24 1.00     7927     5486
#>
#> Draws were sampled using sample(hmc). For each parameter, Bulk_ESS
#> and Tail_ESS are effective sample size measures, and Rhat is the potential
#> scale reduction factor on split chains (at convergence, Rhat = 1).

We can also compare the results we get from brms/Stan with those from MCMCpack.

posterior_95 <- MCMCquantreg(PostDBP ~ factor(Group)  + BaselineDBP +
Age + Z + R1 + R2, data = BP_miss,
tau = 0.95, mcmc = 2000, chains = 4,
thin = 10, seed = 1031)
summary(posterior_95)
#>
#> Iterations = 1001:2991
#> Thinning interval = 10
#> Number of chains = 1
#> Sample size per chain = 200
#>
#> 1. Empirical mean and standard deviation for each variable,
#>    plus standard error of the mean:
#>
#>                     Mean       SD Naive SE Time-series SE
#> (Intercept)     1.092818 18.29735  1.29382        1.29382
#> factor(Group)1  9.599438  0.41814  0.02957        0.03144
#> BaselineDBP     1.016580  0.15732  0.01112        0.01112
#> Age             0.308995  0.03593  0.00254        0.00254
#> Z              -3.208569  5.15381  0.36443        0.42299
#> R1             -0.003724  0.27634  0.01954        0.02397
#> R2             -0.420729  0.40282  0.02848        0.02848
#>
#> 2. Quantiles for each variable:
#>
#>                    2.5%      25%      50%     75%   97.5%
#> (Intercept)    -32.4151 -10.5904  0.80498 13.5220 33.8495
#> factor(Group)1   8.8901   9.3059  9.57191  9.8786 10.3679
#> BaselineDBP      0.7370   0.9224  1.01574  1.1246  1.3019
#> Age              0.2431   0.2841  0.30652  0.3325  0.3840
#> Z              -13.2868  -6.7707 -2.99166  0.3965  5.9651
#> R1              -0.5828  -0.1853  0.01808  0.1733  0.5032
#> R2              -1.2588  -0.6629 -0.35911 -0.1548  0.3317

plot(b_2)[1:8]

#> [[1]]

#>
#> [[2]]

#>
#> [[3]]
#> NULL
#>
#> [[4]]
#> NULL
#>
#> [[5]]
#> NULL
#>
#> [[6]]
#> NULL
#>
#> [[7]]
#> NULL
#>
#> [[8]]
#> NULL

Next, we examine the results from the Bayesian model in which values were imputed during model fitting.

posterior_summary(b_2)[1:8]
(post <- posterior_samples(b_2))

They seem to be similar to the results we’ve been seeing so far.

Indeed, it is no surprise that the Bayesian model with a weakly dispersed prior would give similar results to the frequentist rlm() function from MASS. We now look at robust methods and resampling and the results they give.

Although this is currently under the supplementary analysis section, I would very much consider these next two analyses to be sensitivity analyses. The first method utilized here is known as the random indicator method, an experimental method that essentially combines a selection model and a pattern-mixture model and is designed to handle data that are missing not at random. As van Buuren writes (20):

The random indicator method (Jolani 2012) (24) is an experimental iterative method that redraws the missing data indicator under a selection model, and imputes the missing data under a pattern-mixture model, with the objective of estimating δ from the data under relaxed assumptions. Initial simulation results look promising. The algorithm is available as the ri method in mice.

The script below utilizes this method (random indicator), but bootstraps the missing dataset before the imputation procedure, and then, after multiply imputing the datasets, obtains estimates using maximum likelihood multiple imputation (MLMI), an alternative to and more efficient method than the more popular posterior draw multiple imputation (PDMI). More about MLMI can be found here. (25)

# Bootstrap Inference -------------------------------------

bootMice(BP_miss, method = "ri", maxit = 5,
predictorMatrix = pred, nBoot = 96, nImp = 2,
seed = 1031, formulas = form, nCores = cores,
ridge = 0, remove.collinear = TRUE,
remove.constant = TRUE, allow.na = TRUE,
printFlag = FALSE, visitSequence = "monotone") -> imps

analyseImp <- function(inputData) {
return(coef(rlm(PostDBP ~ factor(Group)  + BaselineDBP + Age + Z + R1 + R2,
data = inputData, method = "MM", psi = psi.huber)))
}
analyseImp1 <- function(inputData) {
return(coef(rq(PostDBP ~ factor(Group)  + BaselineDBP + Age + Z + R1 + R2,
data = inputData, tau = 0.5)))
}
ests <- bootImputeAnalyse(imps, analyseImp, quiet = TRUE)
bootImputeAnalyse(imps, analyseImp1, quiet = TRUE)
#> $ests
#> [1]  1.87235309  9.52722705  0.97844452  0.34025534 -0.85341001 -0.03313657 -0.12312542
#>
#> $var
#> [1] 2.447209e+01 2.100347e-02 1.824153e-03 7.306627e-05 1.949662e+00 8.342981e-03 1.529989e-02
#>
#> $ci
#>            [,1]       [,2]
#> [1,] -8.2321159 11.9768221
#> [2,]  9.2353562  9.8190979
#> [3,]  0.8914535  1.0654356
#> [4,]  0.3228059  0.3577048
#> [5,] -3.6811141  1.9742941
#> [6,] -0.2163239  0.1500508
#> [7,] -0.3747727  0.1285219
#>
#> $df
#> [1] 29.89450 45.13748 32.06210 30.31695 37.58058 53.21587 33.02585
ests <- as.data.frame(ests)
colnames(ests) <- c("Estimate", "Var",
"CI.ll", "CI.ul", "df")
rownames(ests) <- c("Intercept", "Group", "BaselineDBP",
"Age", "Z", "R1", "R2")
ztable(ests)
Estimate Var CI.ll CI.ul df
Intercept 2.30 18.69 -6.48 11.08 35.03
Group 9.56 0.02 9.28 9.84 56.28
BaselineDBP 0.98 0.00 0.90 1.05 37.80
Age 0.34 0.00 0.32 0.36 44.91
Z -0.74 1.82 -3.45 1.97 52.39
R1 -0.02 0.01 -0.18 0.15 54.77
R2 -0.14 0.02 -0.40 0.12 53.43

For our second supplementary analysis, we will utilize the GAMLSS method, an extension of the generalized linear model and the generalized additive model. For the vast majority of the imputations we have conducted, we have drawn values from an assumed normal distribution; GAMLSS allows us to be far more flexible with our distributional assumptions. However, it is also worth noting that normal imputations are generally robust against violations of the assumption of normality:

In general, normal imputations appear to be robust against violations of normality. Demirtas, Freels, and Yucel (2008) found that flatness of the density, heavy tails, non-zero peakedness, skewness and multimodality do not appear to hamper the good performance of multiple imputation for the mean structure in samples n > 400, even for high percentages (75%) of missing data in one variable. The variance parameter is more critical though, and could be off-target in smaller samples.

Regardless, we will be using the GAMLSS method and also imputing from a t-distribution, which is much more robust to outliers.

gamt <- mice(BP_miss, method = "gamlssTF",
predictorMatrix = pred,
m = 5, maxit = 1, seed = 1031,
cores = cores, formulas = form, ridge = 0,
remove.collinear = TRUE, remove.constant = TRUE,
allow.na = TRUE, printFlag = FALSE)

gt1 <- with(gamt, rlm(PostDBP ~ factor(Group)  +
BaselineDBP + Age +
Z + R1 + R2))

resultsgt <- mice::pool(gt1)
ztable(summary(resultsgt))
term estimate std.error statistic df p.value
(Intercept) 3.71 5.00 0.74 17.32 0.47
factor(Group)1 9.52 0.13 76.02 15.37 0.00
BaselineDBP 0.97 0.04 23.48 20.24 0.00
Age 0.32 0.01 35.44 14.81 0.00
Z -0.62 1.39 -0.44 15.92 0.66
R1 -0.02 0.07 -0.34 13.12 0.74
R2 -0.14 0.10 -1.35 42.09 0.19

Having done that, I shall try one more supplementary analysis that piqued my curiosity. I recently came across a few papers concluding that random forests outperformed other imputation algorithms in recovering the true parameter values.

Generally, imputation algorithms are evaluated based on a general simulation design which often resembles the following:

\begin{algorithm}
\caption{Iterative imputation with a propensity-score predictor}
\begin{algorithmic}
\STATE (a) Impute the initially missing data by taking random draws from the observed data.
\FOR{$t = 1, \ldots, T$}
\STATE (b) Estimate $\hat{\alpha}_t$ in the model in equation (2) from the current completed data, and draw a random value $\dot{\alpha}_t$ from its posterior distribution.
\STATE (c) Calculate the propensity scores $\hat{p}_t$ given the drawn value $\dot{\alpha}_t$, and enter them into the imputation model as an additional predictor.
\STATE (d) Estimate the parameters of the imputation model $f(Y_t \mid X, Y_{-t}, \hat{p}_t)$ using only $Y_{t,\mathrm{obs}}$.
\STATE (e) Draw a random value of the parameters in (d) from their posterior distributions.
\STATE (f) Impute $Y_{t,\mathrm{mis}}$ by using the drawn value in (e) and adding an appropriate amount of noise.
\ENDFOR
\STATE Return to step (b) and iterate the algorithm a small number of times, say 10 or 20.
\end{algorithmic}
\end{algorithm}

1. Sampling mechanism only. The basic simulation steps are: choose $$Q$$, take samples $$Y^{(s)}$$, fit the complete-data model, estimate $$\hat{Q}^{(s)}$$ and $$U^{(s)}$$ and calculate the outcomes aggregated over $$s$$.

1. Sampling and missing data mechanisms combined. The basic simulation steps are: choose $$Q$$, take samples $$Y^{(s)}$$, generate incomplete data $$Y_{\mathrm{obs}}^{(s, t)}$$, impute, estimate $$\hat{Q}^{(s, t)}$$ and $$T^{(s, t)}$$ and calculate outcomes aggregated over $$s$$ and $$t$$.

1. Missing data mechanism only. The basic simulation steps are: choose $$(\hat{Q}, U)$$, generate incomplete data $$Y_{o b s}^{(t)}$$, impute, estimate $$(Q, U)^{(t)}$$ and $$B^{(t)}$$ and calculate outcomes aggregated over $$t$$.

These designs generally lead to evaluating a number of metrics which include the following:

• Raw Bias: the difference between the expected value of the estimate and the truth, $$RB=E(\hat{Q})-Q$$, which may also be expressed as a percentage, $$100\times\frac{E(\hat{Q})-Q}{Q}$$

• Coverage Rate: which is simply the proportion of times the intervals actually contain the true parameter value

• Average Width: the width of the interval estimates, with longer widths indicating less information

• Root mean squared error (RMSE): $$\operatorname{RMSE}=\sqrt{\frac{1}{n_{\text{mis}}} \sum_{i=1}^{n_{\text{mis}}}\left(y_{i}^{\text{mis}}-\dot{y}_{i}\right)^{2}}$$
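As a concrete sketch of the third design (missing data mechanism only) together with the metrics above, the following base-R simulation is illustrative only: the data-generating model, the MCAR deletion, and the deliberately naive mean imputation are all assumptions made for this example, not methods used elsewhere in this post.

```r
# Repeatedly delete and impute data with a known truth, then aggregate
# raw bias, coverage, and average interval width over the replications.
set.seed(1031)
true_beta <- 2                                 # the known truth, Q
n <- 200; t_reps <- 50
results <- t(replicate(t_reps, {
  x <- rnorm(n)
  y <- 1 + true_beta * x + rnorm(n)
  y[rbinom(n, 1, 0.3) == 1] <- NA              # MCAR deletion of ~30% of y
  y_imp <- ifelse(is.na(y), mean(y, na.rm = TRUE), y)  # naive mean imputation
  fit <- lm(y_imp ~ x)
  ci <- confint(fit)["x", ]
  c(est = coef(fit)[["x"]], ll = ci[[1]], ul = ci[[2]])
}))
raw_bias  <- mean(results[, "est"]) - true_beta
coverage  <- mean(results[, "ll"] <= true_beta & true_beta <= results[, "ul"])
avg_width <- mean(results[, "ul"] - results[, "ll"])
round(c(raw_bias = raw_bias, coverage = coverage, avg_width = avg_width), 3)
```

Mean imputation attenuates the slope, so raw bias is negative and coverage falls well short of the nominal 95%, which is exactly what these metrics are designed to expose.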

The authors primarily based their conclusions on the root mean squared error (RMSE). However, this is problematic for a number of reasons: imputation is not the same as prediction, so the same loss functions are not equally useful; indeed, the RMSE is highly misleading as a measure of imputation performance.

van Buuren explains here:

It is well known that the minimum RMSE is attained by predicting the missing $$\dot{{y}}_{i}$$ by the linear model with the regression weights set to their least squares estimates. According to this reasoning the “best” method replaces each missing value by its most likely value under the model. However, this will find the same values over and over, and is single imputation.

This method ignores the inherent uncertainty of the missing values (and acts as if they were known after all), resulting in biased estimates and invalid statistical inferences. Hence, the method yielding the lowest RMSE is bad for imputation. More generally, measures based on similarity between the true and imputed values do not separate valid from invalid imputation methods.

Having set up our simulation design, we can now compare the performance of regression imputation (norm.predict) and stochastic regression imputation (norm), examining the metrics above. Regression imputation generally gives very biased results because it imputes the missing values with the model's predictions. Thus, I will show here how RMSE is a highly misleading metric for evaluating the performance of imputation algorithms.

Our simulation above will generate a dataset, delete values from the dataset and then impute the missing values for that dataset using regression imputation and stochastic regression imputation. We will use two metrics to evaluate the performance of these two algorithms, RMSE and MAE. Once again, it is worth keeping in mind that regression imputation generally performs very poorly when it comes to imputation and that stochastic regression imputation performs better.
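Since the simulation code itself is not shown here, the following base-R sketch illustrates the comparison: conditional-mean imputation stands in for norm.predict, and adding residual noise to the predictions stands in for norm (ignoring parameter uncertainty). The bivariate data-generating model and the 40% MCAR deletion rate are assumptions made purely for illustration.

```r
set.seed(1031)
n <- 500
x <- rnorm(n)
y_true <- 1 + 2 * x + rnorm(n)
miss <- rbinom(n, 1, 0.4) == 1
y_obs <- y_true
y_obs[miss] <- NA

fit <- lm(y_obs ~ x)                           # fit on the observed cases
pred <- predict(fit, newdata = data.frame(x = x))
sigma_hat <- summary(fit)$sigma

y_reg   <- ifelse(miss, pred, y_obs)           # regression imputation
y_stoch <- ifelse(miss, pred + rnorm(n, 0, sigma_hat), y_obs)  # stochastic

metrics <- function(y_imp) {
  c(rmse = sqrt(mean((y_true[miss] - y_imp[miss])^2)),
    mae  = mean(abs(y_true[miss] - y_imp[miss])),
    sd_ratio = sd(y_imp) / sd(y_true))         # how well variance is recovered
}
rbind(regression = metrics(y_reg), stochastic = metrics(y_stoch))
```

Regression imputation wins on RMSE by construction, yet its sd_ratio falls below 1 because the imputed values carry no residual variability; the stochastic version has a larger RMSE but preserves the distribution of the data far better.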

What this simulation clearly shows is that although regression imputation (norm.predict) gives highly biased results relative to the truth, as usual, it still has a much lower RMSE than stochastic regression imputation, which had a higher RMSE despite coming closer to the correct parameter values. Once again, this is not surprising, given that imputation is not the same thing as prediction, so the loss functions cannot be used interchangeably. Indeed, when it came to raw bias and mean absolute error, stochastic regression imputation outperformed regression imputation, which had a much larger mean absolute error and raw bias.

This is why the most informative metrics to focus on when evaluating an imputation algorithm are information/interval width, sufficient long-run coverage, and less misleading error measures such as mean absolute error and mean absolute percentage error.

The authors specifically used the missForest package. I tried using the package but encountered too many difficulties, so I simply specified the rf method in mice and worked from there.

# MI Using Random Forest  -------------------------------------

form <- list(PostDBP ~ factor(Group)  + BaselineDBP + Age + Z + R1 + R2)

rf <- parlmice(BP_miss, method = "rf", ntree = 20,
predictorMatrix = pred,
m = 10, maxit = 5, cluster.seed = 1031,
cores = cores, formulas = form, ridge = 0,
remove.collinear = TRUE, remove.constant = TRUE,
allow.na = TRUE, printFlag = FALSE,
visitSequence = "monotone")

rf1 <- with(rf, rlm(PostDBP ~ factor(Group)  +
BaselineDBP + Age +
Z + R1 + R2))

results5 <- mice::pool(rf1)

results5_sum <- summary(results5)

colnames(results5_sum) <- c("Term", "Estimate", "SE",
"Statistic", "df", "P-val")
ztable(results5_sum)
Term Estimate SE Statistic df P-val
(Intercept) 27.17 7.61 3.57 60.75 0.00
factor(Group)1 9.32 0.20 45.64 36.32 0.00
BaselineDBP 0.79 0.07 11.98 57.07 0.00
Age 0.29 0.01 19.24 35.73 0.00
Z 0.91 2.02 0.45 72.06 0.65
R1 0.04 0.10 0.40 63.89 0.69
R2 -0.03 0.18 -0.15 61.81 0.88

## Full Analysis Set

After we run all those analyses, we can finally check our numbers from the full dataset with no missing observations to see how our approaches to handle the missing data have performed. We first look at the fully adjusted model fit with rlm().

# Full Analysis Set  -------------------------------------

final_full_mod <- rlm(PostDBP ~ factor(Group)  + BaselineDBP +
Age + Z + R1 + R2, data = BP_full,
method = "MM", psi = psi.huber)

ztable(summary(final_full_mod)[["coefficients"]])
Value Std. Error t value
(Intercept) -0.48 3.72 -0.13
factor(Group)1 9.66 0.09 106.85
BaselineDBP 1.00 0.03 31.68
Age 0.33 0.01 50.17
Z -2.02 1.01 -2.00
R1 0.01 0.05 0.21
R2 -0.01 0.09 -0.09
ztable(check_collinearity(final_full_mod))
Term VIF SE_factor
factor(Group) 1.01 1.00
BaselineDBP 1.97 1.40
Age 1.02 1.01
Z 1.98 1.41
R1 1.01 1.01
R2 1.01 1.00

We can also graphically inspect our assumptions.

Then we look at the unadjusted model, where the difference is the group that the participants were randomized to.

# Unadjusted Model  -------------------------------------

final_reduced_mod <- rlm(PostDBP ~ factor(Group) , data = BP_full,
method = "MM", psi = psi.huber)

final_reduced_mod_sum <- summary(final_reduced_mod)[["coefficients"]]

colnames(final_reduced_mod_sum) <- c("Value",
"Std. Error",
"t-value")

ztable(final_reduced_mod_sum)
Value Std. Error t-value
(Intercept) 133.36 0.20 660.27
factor(Group)1 9.20 0.28 32.73

We can also compare the two to see if there are any substantial differences.

# Compare Models -------------------------------------

ztable(compare_performance(final_full_mod, final_reduced_mod))
Name Model AIC AIC_wt BIC BIC_wt RMSE Sigma
final_full_mod rlm 1419.76 1 1453.47 1 0.98 0.99
final_reduced_mod rlm 2556.47 0 2569.12 0 3.10 3.11

While the results have mostly been consistent, I also wanted to run a few additional analyses of my own to see whether these effect estimates were consistent. I first started by constructing the confidence distribution using the concurve package to see the range of parameter values compatible with the data, given the background model assumptions. (26-27)

# Compatibility of Values -------------------------------------

fit1 <- lm(PostDBP ~ Group + BaselineDBP +
Age + Z + R1 + R2,
data = BP_full)

curve1 <- curve_gen(fit1, var = "Group1", method = "lm",
log = FALSE, steps = 1000, table = TRUE)
Lower Limit Upper Limit Interval Width Interval Level (%) CDF P-value S-value (bits)
9.617 9.674 0.057 25.0 0.625 0.750 0.415
9.585 9.706 0.120 50.0 0.750 0.500 1.000
9.543 9.748 0.205 75.0 0.875 0.250 2.000
9.531 9.760 0.228 80.0 0.900 0.200 2.322
9.517 9.774 0.256 85.0 0.925 0.150 2.737
9.499 9.792 0.293 90.0 0.950 0.100 3.322
9.471 9.820 0.349 95.0 0.975 0.050 4.322
9.446 9.845 0.399 97.5 0.988 0.025 5.322
9.416 9.875 0.459 99.0 0.995 0.010 6.644
Table of Statistics for Various Interval Estimate Percentiles

Indeed, our confidence distribution ends up being very similar to our Bayesian posterior distribution for the treatment effect. Under certain situations, the maximum likelihood estimate ends up also being equivalent to the median unbiased estimate. In this case, it happens to be very similar to the maximum a posteriori estimate.

$$CV_{n}(\theta)=1-2\left|H_{n}(\theta)-0.5\right|=2 \min \left\{H_{n}(\theta), 1-H_{n}(\theta)\right\}$$
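The confidence curve defined above can be sketched directly under a normal approximation. The point estimate and standard error below are illustrative values chosen to resemble the treatment coefficient, not outputs of any fitted model in this post.

```r
# Confidence curve CV_n(theta) = 1 - 2|H_n(theta) - 0.5|, where
# H_n(theta) = pnorm((theta - est)/se) is the normal-approximation
# confidence distribution. `est` and `se` are assumed values.
est <- 9.65
se  <- 0.089
theta <- seq(est - 4 * se, est + 4 * se, length.out = 401)
H_n  <- pnorm((theta - est) / se)              # confidence distribution
CV_n <- 1 - 2 * abs(H_n - 0.5)                 # confidence curve
# The curve peaks (CV = 1) at the point estimate, and at the limits of
# the 95% interval it drops to the two-sided p-value of 0.05:
theta[which.max(CV_n)]                         # the point estimate, 9.65
1 - 2 * abs(pnorm(1.96) - 0.5)                 # ~0.05 at est +/- 1.96 * se
```

Plotting CV_n against theta reproduces the nesting of interval estimates tabulated by concurve: slicing the curve at height alpha gives the (1 - alpha) interval.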

We can also check to see that our confidence distribution and posterior distribution are consistent with the profile-likelihood function, which is free of nuisance parameters. The advantage of profiling is that it allows us to recover more accurate parameter estimates when there are several nuisance parameters or messy data. Below are implementations in both R and Stata, except this time we do not specify errors from the t-distribution.

$$L_{t_{n}}(\theta)=f_{n}\left(F_{n}^{-1}\left(H_{p i v}(\theta)\right)\right)\left|\frac{\partial}{\partial t} \psi\left(t_{n}, \theta\right)\right|=h_{p i v}(\theta)\left|\frac{\partial}{\partial t} \psi(t, \theta)\right| /\left.\left|\frac{\partial}{\partial \theta} \psi(t, \theta)\right|\right|_{t=t_{n}}$$

xx <- glm(PostDBP ~ factor(Group)  + BaselineDBP +
Age + Z + R1 + R2,
data = BP_full, family = gaussian())

summary(xx)
#>
#> Call:
#> glm(formula = PostDBP ~ factor(Group) + BaselineDBP + Age + Z +
#>     R1 + R2, family = gaussian(), data = BP_full)
#>
#> Deviance Residuals:
#>     Min       1Q   Median       3Q      Max
#> -3.3502  -0.6383   0.0480   0.6902   2.3115
#>
#> Coefficients:
#>                 Estimate Std. Error t value Pr(>|t|)
#> (Intercept)    -0.618203   3.659016  -0.169   0.8659
#> factor(Group)1  9.645559   0.089064 108.299   <2e-16 ***
#> BaselineDBP     1.005667   0.031198  32.235   <2e-16 ***
#> Age             0.326955   0.006451  50.686   <2e-16 ***
#> Z              -1.973912   0.995506  -1.983   0.0479 *
#> R1              0.010712   0.047107   0.227   0.8202
#> R2             -0.024930   0.087987  -0.283   0.7770
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for gaussian family taken to be 0.9827463)
#>
#>     Null deviance: 15519.32  on 499  degrees of freedom
#> Residual deviance:   484.49  on 493  degrees of freedom
#> AIC: 1419.2
#>
#> Number of Fisher Scoring iterations: 2

We now focus on profiling on the treatment effect and calculating the profile deviance function. We then multiply our deviance by -0.5 to obtain the profile log-likelihood function.

# Profile Likelihood -------------------------------------

prof <- profile(xx)
disp <- attr(prof, "summary")$dispersion
mindev <- attr(prof, "original.fit")$deviance

dev1 <- prof[[1]]$tau^2
dev2 <- dev1 * disp + mindev

tmpf <- function(x, n) {
  data.frame(par = n, tau = x$tau,
             deviance = x$tau^2 * disp + mindev,
             x$par.vals, check.names = FALSE)
}

pp <- do.call(rbind, mapply(tmpf, prof, names(prof),
SIMPLIFY = FALSE))

pp2 <- melt(pp, id.var = 1:3)
pp3 <- subset(pp2, par == variable, select = -variable)
pp4 <- pp3[-c(1:13), ]

pp4$loglik <- pp4$deviance * -0.5

clear
#> . import delimited "~/Dropbox/LessLikely/static/datasets/temp.csv", numericcols(2 3 5 6 7 8)
#> (encoding automatically selected: ISO-8859-2)
#> (8 vars, 500 obs)
#>
#> . foreach var of varlist  group baselinedbp age z r1 r2 {
#>   2. set scheme s2color
#>   3. quietly pllf glm postdbp group baselinedbp age z r1 r2, profile(`var')
#>   4. graph copy Graph `var'
#>   5. }
#>
#> . graph combine  group baselinedbp age z r1 r2
#>
#> . graph export "loglikfunction.svg", replace
#> file loglikfunction.svg saved as SVG format

Lower Limit Upper Limit CI Width Level (%) CDF P-value S-value
9.617 9.674 0.057 25.0 0.625 0.750 0.415
9.585 9.706 0.120 50.0 0.750 0.500 1.000
9.543 9.748 0.205 75.0 0.875 0.250 2.000
9.531 9.760 0.228 80.0 0.900 0.200 2.322
9.517 9.774 0.256 85.0 0.925 0.150 2.737
9.499 9.792 0.293 90.0 0.950 0.100 3.322
9.471 9.820 0.349 95.0 0.975 0.050 4.322
9.446 9.845 0.399 97.5 0.988 0.025 5.322
9.416 9.875 0.459 99.0 0.995 0.010 6.644
Table of Statistics for Various Interval Estimate Percentiles

I then compared the results of the full analysis set to the results from the primary analysis and sensitivity analyses.

## Examining the Analyses

Table 1: Evaluation of Imputation Algorithms
PMM Heckman Stata Bayes GAMLSS RI RF True
(Intercept) 1.48 -0.46 6.84 -7.21 3.71 2.30 27.17 -0.48
factor(Group)1 9.52 9.55 10.44 9.72 9.52 9.56 9.32 9.66
BaselineDBP 0.99 1.00 0.95 1.05 0.97 0.98 0.79 1.00
Age 0.33 0.33 0.32 0.35 0.32 0.34 0.29 0.33
Z -1.27 -1.71 -2.64 -1.94 -0.62 -0.74 0.91 -2.02
R1 -0.02 -0.01 -0.36 -0.16 -0.02 -0.02 0.04 0.01
R2 -0.13 -0.14 -0.17 -0.19 -0.14 -0.14 -0.03 -0.01
MPE -120.00 -199.82 427.76 -300.40 -62.86 -117.59 777.91 0.00
Huber Loss 0.25 0.01 1.06 0.90 0.66 0.44 4.24 0.00
MAE 0.43 0.09 1.33 1.04 0.85 0.62 4.46 0.00
MSD -0.34 -0.01 -0.98 0.98 -0.75 -0.54 -4.28 0.00
Metric Abbreviations:
MPE = Mean Percentage Error; MAE = Mean Absolute Error; MSD = Mean Signed Deviation
MI Abbreviations:
PMM = Predictive Mean Matching; Heckman = Heckman Selection Model; RI = Random Indicator; RF = Random Forest; Bayes = Imputation During Model Fitting

We can see that the overall treatment effect is approximately 9.7-9.9, and for most of the principled methods we used to handle the missing data, we did not obtain similar estimates. For example, neither the primary analysis nor the controlled multiple imputation methods yielded the correct treatment effect estimate in the adjusted model.

In fact, nearly all of the methods used, except for the Heckman selection model, yielded overestimates of the treatment effect. Even though such analyses can be substantially improved with better specification and more information, the reality is that data assumed to be MNAR are difficult to handle, and no method can guarantee the correct answer. Indeed, we knew the missing data model/mechanism, via our custom function:

# Missing Data Mechanism -------------------------------------

lesslikely <- function() {
set.seed(1031, kind = "L'Ecuyer-CMRG",
normal.kind = "Inversion",
sample.kind = "Rejection")
logistic <- function(x) exp(x) / (1 + exp(x))
1 - rbinom(n, 1, logistic(errors[, 2]))
}

Ry <- ifelse(lesslikely() > 0, 1, 0)
PostDBP[Ry == 0] <- NA

$$\Pr(R_1=0)=\psi_0+\frac{e^{Y_1}}{1+e^{Y_1}}\psi_1$$

for MNAR we set $$\psi_\mathrm{MNAR}=(0,1)$$

Thus, we obtain the following model:

\begin{align} \mathrm{MNAR}&:\mathrm{logit}(\Pr(R_1=0)) = Y_1 \end{align}

Even the random indicator method, which has shown much promise in simulation studies (24) and offers the advantage of not having to specify a pattern-mixture or selection model could not recover the correct treatment effect estimate either, showing the difficulty of handling missing data and the limitations of automated approaches.

I hope it is clear from the examples above that it is quite easy to run analyses with missing data, using a variety of ad-hoc or even principled approaches, to explore whether the results you've obtained are trustworthy, or at least to give that impression. Doing a careful and principled sensitivity analysis, however, is quite difficult: it requires much thought about which departures from assumptions are realistic, and whether those departures would lead to substantial changes in results, conclusions, or decisions.

# Acknowledgements

I’d like to thank Dr. Tim Morris and Dr. Andrew Althouse for helpful comments on an earlier version of this piece. My acknowledgment of their help does not imply endorsement of my views by these colleagues or vice versa, and I remain solely responsible for the views expressed herein. All errors are my own. To point out such errors, please leave a comment below or use the contact form to get in touch.

# Statistical Software

R Environment

si <- sessionInfo()
print(si, RNG = TRUE, locale = TRUE)
#> R version 4.2.0 (2022-04-22)
#> Platform: aarch64-apple-darwin20 (64-bit)
#> Running under: macOS Monterey 12.5
#>
#> Matrix products: default
#> BLAS:   /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRblas.0.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib
#>
#> Random number generation:
#>  RNG:     L'Ecuyer-CMRG
#>  Normal:  Inversion
#>  Sample:  Rejection
#>
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> attached base packages:
#>  [1] splines   grid      stats4    parallel  stats     graphics  grDevices utils     datasets  methods   base
#>
#> other attached packages:
#>  [1] texPreview_2.0.0        tinytex_0.39            rmarkdown_2.14          brms_2.17.0             bootImpute_1.2.0
#>  [6] miceMNAR_1.0.2          knitr_1.39              boot_1.3-28             reshape2_1.4.4          ProfileLikelihood_1.2
#> [11] ImputeRobust_1.3-1      gamlss_5.4-3            gamlss.dist_6.0-3       gamlss.data_6.0-2       mvtnorm_1.1-3
#> [16] performance_0.9.0       summarytools_1.0.1      tidybayes_3.0.2         htmltools_0.5.2         Statamarkdown_0.7.1
#> [21] car_3.0-13              carData_3.0-5           qqplotr_0.0.5           ggcorrplot_0.1.3        mitml_0.4-3
#> [26] pbmcapply_1.5.1         Amelia_1.8.0            Rcpp_1.0.8.3            blogdown_1.10           doParallel_1.0.17
#> [31] iterators_1.0.14        foreach_1.5.2           bayesplot_1.9.0         wesanderson_0.3.6       VIM_6.1.1
#> [36] colorspace_2.0-3        here_1.0.1              progress_1.2.2          loo_2.5.1               mi_1.1
#> [41] Matrix_1.4-1            broom_0.8.0             yardstick_1.0.0         svglite_2.1.0           Cairo_1.5-15
#> [46] cowplot_1.1.1           mgcv_1.8-40             nlme_3.1-157            xfun_0.31               broom.mixed_0.2.9.4
#> [51] reticulate_1.25         kableExtra_1.3.4        posterior_1.2.2         checkmate_2.1.0         parallelly_1.32.0
#> [56] miceFast_0.8.1          ggmice_0.0.1            randomForest_4.7-1.1    missForest_1.5          miceadds_3.13-12
#> [61] mice.mcerror_0.0.0-9000 quantreg_5.93           MCMCpack_1.6-3          MASS_7.3-57             coda_0.19-4
#> [66] latex2exp_0.9.4         rstan_2.21.5            StanHeaders_2.21.0-7    cmdstanr_0.5.2          forcats_0.5.1
#> [71] stringr_1.4.0           dplyr_1.0.9             purrr_0.3.4             readr_2.1.2             tibble_3.1.7
#> [76] tidyverse_1.3.1         ggtext_0.1.1            concurve_2.7.7          showtext_0.9-5          showtextdb_3.0
#> [81] sysfonts_0.8.8          future.apply_1.9.0      future_1.26.1           tidyr_1.2.0             magrittr_2.0.3
#> [86] mice_3.14.0             rms_6.3-0               SparseM_1.81            Hmisc_4.7-0             ggplot2_3.3.6
#> [91] Formula_1.2-4           survival_3.3-1          lattice_0.20-45
#>
#> loaded via a namespace (and not attached):
#>   [1] mitools_2.4            pander_0.6.5           haven_2.5.0            tcltk_4.2.0            vctrs_0.4.1
#>   [6] usethis_2.1.6          gmp_0.6-5              later_1.3.0            nloptr_2.0.3           DBI_1.1.2
#>  [11] distributional_0.3.0   jpeg_0.1-9             MatrixModels_0.5-0     pspline_1.0-19         htmlwidgets_1.5.4
#>  [16] miscTools_0.6-26       inline_0.3.19          laeken_0.5.2           markdown_1.1           DEoptimR_1.0-11
#>  [21] DT_0.23                promises_1.2.0.1       pkgload_1.2.4          magick_2.7.3           RcppParallel_5.1.5
#>  [26] startupmsg_0.9.6       fs_1.5.2               brio_1.1.3             mnormt_2.1.0           ranger_0.13.1
#>  [31] digest_0.6.29          png_0.1-7              polspline_1.1.20       bookdown_0.26          pkgconfig_2.0.3
#>  [36] dygraphs_1.1.1.6       estimability_1.3       minqa_1.2.4            bslib_0.3.1            zoo_1.8-10
#>  [41] tidyselect_1.1.2       pcaPP_2.0-1            gamm4_0.2-6            viridisLite_0.4.0      pkgbuild_1.3.1
#>  [46] rlang_1.0.2            jquerylib_0.1.4        Rmpfr_0.8-9            glue_1.6.2             gdtools_0.2.4
#>  [51] RColorBrewer_1.1-3     pryr_0.1.5             distrEx_2.8.0          modelr_0.1.8           emmeans_1.7.4-1
#>  [56] matrixStats_0.62.0     ggsignif_0.6.3         threejs_0.3.3          labeling_0.4.2         httpuv_1.6.5
#>  [61] class_7.3-20           TH.data_1.1-1          clipr_0.8.0            shinystan_2.6.0        webshot_0.5.3
#>  [66] jsonlite_1.8.0         mime_0.12              systemfonts_1.0.4      gridExtra_2.3          easypackages_0.1.0
#>  [71] stringi_1.7.6          insight_0.17.1         processx_3.6.0         distr_2.8.0            gsl_2.1-7.1
#>  [76] survey_4.1-1           ggdist_3.1.1           cli_3.3.0              data.table_1.14.2      survminer_0.4.9
#>  [81] officer_0.4.2          rstudioapi_0.13        sfsmisc_1.1-13         listenv_0.8.0          miniUI_0.1.1.1
#>  [86] survMisc_0.5.6         pan_1.6                dbplyr_2.2.0           sessioninfo_1.2.2      readxl_1.4.0
#>  [91] lifecycle_1.0.1        munsell_0.5.0          cellranger_1.1.0       visNetwork_2.1.0       codetools_0.2-18
#>  [96] systemfit_1.1-24       magic_1.6-0            lmtest_0.9-40          sys_3.4                htmlTable_2.4.0
#> [101] itertools_0.1-3        rstantools_2.2.0       gWidgets2_1.0-9        xtable_1.8-4           scam_1.2-12
#> [106] abind_1.4-5            farver_2.1.0           km.ci_0.5-6            rapportools_1.1        credentials_1.3.2
#> [111] askpass_1.1            svUnit_1.0.6           rcartocolor_2.0.0      shinythemes_1.2.0      cluster_2.1.3
#> [116] ellipsis_0.3.2         prettyunits_1.1.1      lubridate_1.8.0        ggridges_0.5.3         reprex_2.0.1
#> [121] details_0.3.0          flextable_0.7.1        igraph_1.3.1           ismev_1.42             shinyjs_2.1.0
#> [126] remotes_2.4.2          testthat_3.1.4         yaml_2.3.5             utf8_1.2.2             svgPanZoom_0.3.4
#> [131] jomo_2.7-3             e1071_1.7-11           ggpubr_0.4.0           foreign_0.8-82         withr_2.5.0
#> [136] VineCopula_2.4.4       rngtools_1.5.2         doRNG_1.8.2            trust_0.1-8            multcomp_1.4-19
#> [141] robustbase_0.95-0      gWidgets2tcltk_1.0-8   devtools_2.4.3         memoise_2.0.1          evaluate_0.15
#> [146] VGAM_1.1-6             tzdb_0.3.0             callr_3.7.0            ps_1.7.0               DiagrammeR_1.0.9
#> [151] metafor_3.4-0          evd_2.3-6              fansi_1.0.3            highr_0.9              xts_0.12.1
#> [156] furrr_0.3.0            cachem_1.0.6           desc_1.4.1             bcaboot_0.2-3          maxLik_1.5-2
#> [161] tensorA_0.36.2         rstatix_0.7.0          GJRM_0.2-6             rprojroot_2.0.3        tools_4.2.0
#> [166] stabledist_0.7-1       sass_0.4.1             sandwich_3.0-1         proxy_0.4-27           pbivnorm_0.6.0
#> [171] xml2_1.3.3             httr_1.4.3             assertthat_0.2.1       globals_0.15.0         metadat_1.2-0
#> [176] R6_2.5.1               nnet_7.3-17            Brobdingnag_1.2-7      gtools_3.9.2.1         sampleSelection_1.2-12
#> [181] rematch2_2.1.2         projpred_2.1.2         generics_0.1.2         base64enc_0.1-3        gridtext_0.1.4
#> [186] pillar_1.7.0           sp_1.5-0               uuid_1.1-0             plyr_1.8.7             gtable_0.3.0
#> [191] rvest_1.0.2            zip_2.2.0              psych_2.2.5            colourpicker_1.1.1     latticeExtra_0.6-29
#> [196] fastmap_1.1.0          crosstalk_1.2.0        ADGofTest_0.3          copula_1.0-1           extremevalues_2.3.3
#> [201] vcd_1.4-10             arrayhelpers_1.1-0     openssl_2.0.2          scales_1.2.0           arm_1.12-2
#> [206] backports_1.4.1        lme4_1.1-29            mcmc_0.9-7             hms_1.1.1              shiny_1.7.1
#> [211] KMsurv_0.1-5           numDeriv_2016.8-1.1    mathjaxr_1.6-0         whisker_0.4            crayon_1.5.1
#> [216] bridgesampling_1.1-2   rpart_4.1.16           compiler_4.2.0
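For readers who wish to reproduce a report like the listing above, base R's `sessionInfo()` produces exactly this format; the `sessioninfo` package (which appears in the listing) is a commonly used, tidier alternative. A minimal sketch:

```r
# Base R: prints attached packages and packages loaded via a namespace,
# in the same format as the listing above
sessionInfo()

# Alternative: a tidier, copy-friendly report from the sessioninfo package,
# assuming it is installed (it is present in the listing above)
sessioninfo::session_info()
```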

Stata Environment

about
version
#> . about
#> Stata/BE 17.0 for Mac (Apple Silicon)
#> Revision 01 Jun 2022
#>
#> Total physical memory: 16.00 GB
#>
#> Serial number: 301706317553
#>                Atlantis
#>
#> . version
#> version 17.0

# References

1. Stark PB, & Saltelli A. (2018). ‘Cargo-cult statistics and scientific crisis’. Significance. 15:40–43. doi: 10.1111/j.1740-9713.2018.01174.x.
2. Gigerenzer G. (2018). ‘Statistical Rituals: The Replication Delusion and How We Got There’. Advances in Methods and Practices in Psychological Science. 1:198–218. doi: 10.1177/2515245918771329.
3. Greenland S. (2017). ‘Invited commentary: The need for cognitive science in methodology’. American Journal of Epidemiology. 186:639–645. doi: 10.1093/aje/kwx259.
4. Greenland S. (2005). ‘Multiple-bias modelling for analysis of observational data’. Journal of the Royal Statistical Society Series A (Statistics in Society). 168:267–306. doi: 10.1111/j.1467-985X.2004.00349.x.
5. Greenland S, & Lash TL. (2008). ‘Bias analysis’. In: Rothman KJ, Greenland S, & Lash TL, editors. Modern Epidemiology. 3rd edition. Lippincott Williams & Wilkins. p. 345–380.
6. Lash TL, Fox MP, & Fink AK. (2009). ‘Applying Quantitative Bias Analysis to Epidemiologic Data’. Springer New York.
7. Lash TL, Fox MP, MacLehose RF, Maldonado G, McCandless LC, & Greenland S. (2014). ‘Good practices for quantitative bias analysis’. International Journal of Epidemiology. 43:1969–1985. doi: 10.1093/ije/dyu149.
8. Lash TL, Ahern TP, Collin LJ, Fox MP, & MacLehose RF. (2021). ‘Bias Analysis Gone Bad’. American Journal of Epidemiology. doi: 10.1093/aje/kwab072.
9. Gustafson P. (2021). ‘Invited Commentary: Toward Better Bias Analysis’. American Journal of Epidemiology. doi: 10.1093/aje/kwab068.
10. Greenland S. (2021). ‘Dealing with the Inevitable Deficiencies of Bias Analysis and All Analyses’. American Journal of Epidemiology. doi: 10.1093/aje/kwab069.
11. Molenberghs G, Fitzmaurice G, Kenward MG, Tsiatis A, & Verbeke G. (2014). ‘Handbook of Missing Data Methodology’. CRC Press.
12. Scott RAP. (2002). ‘The Multicentre Aneurysm Screening Study (MASS) into the effect of abdominal aortic aneurysm screening on mortality in men: A randomised controlled trial’. The Lancet. 360:1531–1539. doi: 10.1016/S0140-6736(02)11522-4.
13. Morris TP, Kahan BC, & White IR. (2014). ‘Choosing sensitivity analyses for randomised trials: principles’. BMC Medical Research Methodology. 14:11. doi: 10.1186/1471-2288-14-11.
14. Mosteller F, & Tukey JW. (1987). ‘Data analysis including statistics’. In: The Collected Works of John W. Tukey: Philosophy and Principles of Data Analysis 1965-1986. CRC Press.
15. Akacha M, Bretz F, Ohlssen D, Rosenkranz G, & Schmidli H. (2017). ‘Estimands and Their Role in Clinical Trials’. Statistics in Biopharmaceutical Research. 9:268–271. doi: 10.1080/19466315.2017.1302358.
16. Permutt T. (2020). ‘Do Covariates Change the Estimand?’ Statistics in Biopharmaceutical Research. 12:45–53. doi: 10.1080/19466315.2019.1647874.
17. Mitroiu M, Oude Rengerink K, Teerenstra S, Pétavy F, & Roes KCB. (2020). ‘A narrative review of estimands in drug development and regulatory evaluation: Old wine in new barrels?’ Trials. 21:671. doi: 10.1186/s13063-020-04546-1.
18. Little RJ, D’Agostino R, Cohen ML, Dickersin K, Emerson SS, Farrar JT, et al. (2012). ‘The Prevention and Treatment of Missing Data in Clinical Trials’. New England Journal of Medicine. 367:1355–1360. doi: 10.1056/NEJMsr1203730.
19. Chen D-G(Din), Peace KE, & Zhang P. (2017). ‘Clinical trial data analysis using R and SAS’. 2nd edition. Chapman & Hall/CRC. doi: 10.1201/9781315155104.
20. Buuren S van. (2018). ‘Flexible Imputation of Missing Data, Second Edition’. CRC Press.
21. Meng X-L, & Rubin DB. (1992). ‘Performing Likelihood Ratio Tests with Multiply-Imputed Data Sets’. Biometrika. 79:103–111. doi: 10.2307/2337151.
22. Schenker N, & Taylor J. (1996). ‘Partially parametric techniques for multiple imputation’. Computational Statistics & Data Analysis. 22:425–446. doi: 10.1016/0167-9473(95)00057-7.
23. Permutt T. (2016). ‘Sensitivity analysis for missing data in regulatory submissions’. Statistics in Medicine. 35:2876–2879. doi: 10.1002/sim.6753.
24. Jolani S. (2012). ‘Dual Imputation Strategies for Analyzing Incomplete Data’.
25. von Hippel PT, & Bartlett J. (2019). ‘Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws’. arXiv:1210.0870 [stat]. https://arxiv.org/abs/1210.0870
26. Rafi Z, & Greenland S. (2020). ‘Semantic and cognitive tools to aid statistical science: Replace confidence and significance by compatibility and surprise’. BMC Medical Research Methodology. 20:244. doi: 10.1186/s12874-020-01105-9.
27. Greenland S, & Rafi Z. (2020). ‘To Aid Scientific Inference, Emphasize Unconditional Descriptions of Statistics’. arXiv:1909.08583 [stat.ME]. https://arxiv.org/abs/1909.08583
