Earlier this year, my colleagues and I were discussing the relationship between saturated fat and cardiovascular disease, and one of us was writing an article on a very unusual trial that is often included in meta-analyses on the topic.
That trial is the Finnish Mental Hospital Study,1 a crossover study that compared patients on a control diet with a certain amount of saturated fat to patients on an intervention diet that replaced the saturated fat with polyunsaturated fats.
Here is a summary of the trial:
"A controlled intervention trial, with the purpose of testing the hypothesis that the incidence of coronary heart disease (CHD) could be decreased by the use of serum-cholesterol-lowering (SCL) diet, was carried out in 2 mental hospitals near Helsinki in 1959–71.
The subjects were hospitalized middle-aged men. One of the hospitals received the SCL diet, i.e. a diet low in saturated fats and cholesterol and relatively high in polyunsaturated fats, while the other served as the control with a normal hospital diet. Six years later the diets were reversed, and the trial was continued another 6 years."
The study didn’t just include men; it also included women, whose results are discussed in a separate paper by the same research group.
In total, the “two studies” (really just one study) had a sample size of 818 participants (for hard CVD events), so they often carry quite a bit of weight in meta-analyses.
I’d like to bring attention to one particular meta-analysis, published eight years ago by Mozaffarian, Micha, and Wallace (2010). It’s one of the most cited meta-analyses on this topic: Google Scholar indicates that it has been cited by over 900 academic sources, and Web of Science indicates that it has been cited by 466 papers at the time of writing this post.
Source: Web of Science
Clearly, it’s a well-known study.
The meta-analysis of interest describes its inclusion and exclusion criteria as follows:
“We searched for all RCTs that randomized adults to increased total or n-6 PUFA consumption for at least 1 year without other major concomitant interventions (e.g., blood pressure or smoking control, other multiple dietary interventions, etc.), had an appropriate control group without this dietary intervention, and reported (or had obtainable from the authors) sufficient data to calculate risk estimates with standard errors for effects on occurrence of ‘hard’ CHD events (myocardial infarction, CHD death, and/or sudden death). Studies were excluded if they were observational or otherwise nonrandomized.”
So the authors state that the included studies must be randomized trials lasting at least a year, and that non-randomized or observational studies are excluded.
Here’s a list of the studies they included. Note the design of the Finnish studies (Turpeinen, 1979 & Miettinen, 1983), which I’ll touch upon below.
What Were the Results?
Mozaffarian D, Micha R, Wallace S (2010)
“Combining all trials, the pooled risk reduction for CHD events was 19% (RR = 0.81, 95% CI 0.70–0.95, p = 0.008)”
The 2010 meta-analysis found that replacing saturated fats in the diet with polyunsaturated fats had a notable, statistically significant effect on CHD events. A 19% reduction is certainly nothing to ignore, and the confidence interval (CI) leans toward an effect. It seems promising as a dietary intervention, which could be one reason the study is cited so widely. However, the quality of the trials included in the meta-analysis was low to moderate:
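As a quick arithmetic aside, the 19% figure is simply the risk ratio re-expressed as a percent risk reduction, 100 × (1 − RR). A minimal sketch in R, using the pooled estimates quoted above:

```r
# Pooled estimate and 95% CI from the 2010 meta-analysis
rr <- 0.81
ci <- c(0.70, 0.95)

# Percent risk reduction implied by a risk ratio: 100 * (1 - RR)
reduction    <- 100 * (1 - rr)
reduction_ci <- 100 * (1 - rev(ci))  # reverse so the lower reduction comes first

reduction
#> [1] 19
reduction_ci
#> [1]  5 30
```

In other words, the same estimate is compatible with anything from a 5% to a 30% risk reduction.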
“Many of the trials had design limitations, such as single-blinding, inclusion of electrocardiographically defined clinical endpoints, or open enrollment. All trials utilized blinded endpoint assessment. Quality scores were in the modest range and relatively homogeneous: all trials had quality scores of either 2 or 3.”
And there was some suggestion of publication bias (which could also reflect small-study effects):
Mozaffarian D, Micha R, Wallace S (2010)
“Visual inspection of the resulting funnel plot indicated some potential for publication bias (Figure S1), with a borderline Begg’s test (continuity corrected p = 0.07), although such determinations are limited when the number of studies is relatively small.”
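If you want to run Begg’s rank correlation test yourself, metafor implements it as `ranktest()`. A minimal, self-contained sketch using the six-study dataset analyzed later in this post (not the original eight-study data, which I don’t have in full):

```r
library("metafor")

# The six-study dataset analyzed later in this post (events / totals per arm)
dat6 <- data.frame(
  ai  = c(132, 53, 131, 45, 61, 2),
  n1i = c(1018, 424, 4541, 199, 206, 27),
  ci  = c(144, 71, 121, 51, 81, 5),
  n2i = c(1015, 422, 4516, 194, 206, 28)
)

# Log risk ratios and sampling variances, then a random-effects fit
es  <- escalc(measure = "RR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = dat6)
res <- rma(yi, vi, data = es)

# Begg and Mazumdar's rank correlation test for funnel plot asymmetry
ranktest(res)
```

As the authors note, such tests have very little power with this few studies, so the result is at best a rough screen.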
Regardless, the effects are quite interesting and worth exploring further.
What Went Wrong?
A major problem in this meta-analysis is that the two Finnish studies included in the quantitative analysis were not randomized. The authors made it clear with their inclusion criteria that they only wanted to include trials that were randomized.
The two Finnish Mental Hospital studies were labeled as “cluster randomized,” which you can see in the table of characteristics above. When this meta-analysis was published, several individuals were critical that a “cluster-randomized trial” was being labeled as a randomized trial, especially when there were only two clusters (two hospitals). This is a valid criticism because a cluster-randomized trial with only one cluster per condition is invalid for any between-group statistical comparisons. Brown et al., 2015 explain in a comprehensive article:
A particularly pernicious and invalid design that requires recognition is the inclusion of only one cluster per condition… Such designs are unable to support any valid analysis for an intervention effect, absent strong and untestable assumptions (11, 12). In such designs, the variation that is due to the cluster is not identifiable apart from the variation due to the condition.
A one-cluster-per-condition design is analogous to assigning one person to the treatment and one person to the control in an ordinary (nonclustered) RCT, measuring each person’s outcome multiple times, treating the multiple observations per person like independent observations, and interpreting the results like a valid RCT. In such a situation, the observations on person A can be tested as to whether they are significantly different from those on person B but cannot support an inference about the effect of treatment per se.
So it is clear that a one-cluster-per-condition design is not valid to ascertain much about the intervention. However, many individuals (if not all) failed to notice that the Finnish Mental Hospital studies were not even cluster randomized! There is no indication in any of the five published papers from these two studies that there is any randomization. You can check all five papers here:
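To make Brown et al.’s point concrete, here is a small simulation of my own (purely illustrative, not based on the Finnish data): two hospitals, one per condition, each with its own baseline outcome level, and no treatment effect at all. A naive analysis that treats the patients as independent will “detect” the hospital difference as a treatment effect far more often than the nominal 5% of the time:

```r
set.seed(1)

# One cluster per condition, NO true treatment effect.
# Each hospital has its own baseline, shared by all of its patients,
# so the cluster effect is perfectly confounded with the condition.
reps <- 500
pvals <- replicate(reps, {
  hospital <- rnorm(2, mean = 0, sd = 0.5)    # hospital-level baselines
  control  <- rnorm(400, mean = hospital[1])  # 400 patients per hospital
  treated  <- rnorm(400, mean = hospital[2])  # treatment adds nothing
  t.test(control, treated)$p.value            # naive: patients as independent
})

# False-positive rate: should be ~0.05 if the design supported this analysis
mean(pvals < 0.05)
```

With these (assumed) settings the naive test rejects the null in the large majority of simulated trials despite there being no treatment effect, which is exactly the pathology Brown et al. describe.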
| Published Journal | Year | Publication Title |
|---|---|---|
| International Journal of Epidemiology | 1983 | Dietary Prevention of Coronary Heart Disease in Women: The Finnish Mental Hospital Study |
| Circulation | 1979 | Effect of Cholesterol-Lowering Diet on Mortality from Coronary Heart Disease and Other Causes |
| American Journal of Clinical Nutrition | 1968 | Dietary Prevention of Coronary Heart Disease: Long-Term Experiment: I. Observations on Male Subjects |
| International Journal of Epidemiology | 1979 | Dietary Prevention of Coronary Heart Disease: The Finnish Mental Hospital Study |
| The Lancet | 1972 | Effect of Cholesterol-Lowering Diet on Mortality from Coronary Heart-Disease and Other Causes: A Twelve-Year Clinical Trial in Men and Women |
Furthermore, cluster-randomized trials were not common when these studies were being conducted, which is another reason to be skeptical that these were cluster-randomized trials.
Yet, these two studies were mistakenly labeled as being “cluster randomized” and therefore were included in the meta-analysis. Both of these studies contributed a total weight of 16% to the analysis.
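For context on where a weight figure like 16% comes from: in a random-effects meta-analysis, each study’s weight is proportional to 1/(vᵢ + τ²), the inverse of its sampling variance plus the between-study variance. A hedged sketch follows; the first six variances come from the `escalc()` output later in this post, while the last two are hypothetical stand-ins for the Finnish studies (I am not reproducing their actual values):

```r
# Random-effects inverse-variance weights: w_i = 1 / (v_i + tau^2).
# First six sampling variances are from the escalc() output in this post;
# the last two are HYPOTHETICAL stand-ins for the Finnish studies.
vi   <- c(0.0126, 0.0282, 0.0155, 0.0317, 0.0190, 0.6272, 0.030, 0.045)
tau2 <- 0.0066
w    <- 1 / (vi + tau2)

# Each study's percentage weight in the pooled estimate
round(100 * w / sum(w), 1)
```

Note how the imprecise STARS trial (the sixth variance, 0.6272) receives almost no weight, while precise studies dominate; two moderately precise studies can easily account for 16% of the total.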
And again, the authors found a pretty notable reduction in CVD events (RR: 0.81, 95% CI: 0.70, 0.95, p = 0.008).
Correcting the Error
So what happens to the results when you correct this mistake by removing the two studies?
Let’s fire up R and find out. If you’d like to reproduce the analysis on your own, you can find all the code at the bottom of this blog post.
As you can see above, rerunning the analysis after removing the Finnish studies results in the effect size shrinking from a 19% reduction to a 13% reduction (RR: 0.87, 95% CI: 0.76, 1.00). That’s a large difference!
If we’re concerned about statistical significance, the results are no longer significant. It’s worth noting that the upper bound of the confidence interval barely contains the null value (1), while the lower bound reaches as low as 0.76, so the CI still seems to lean toward an effect.
Regardless of your statistical philosophy, this was a noteworthy, objective mistake: two studies were labeled as meeting the inclusion criteria when they did not, and correcting for this mistake leads to a substantial change in the results. Yet, this error has not been corrected in the journal. In fact, the study has been around for eight years with no corrections or retractions.
I reached out to both the authors and the editors of PLOS, but to date, there are no updates or corrections on the article itself. I therefore suspect that many people who read or cite the article are unaware that the summary effects are incorrect and that some of the studies in the analysis should not be there!
It is very important to note that correcting these errors does not lead to completely different conclusions. Although the effect is no longer statistically significant, the effect size and the coverage of the confidence interval still suggest an effect, albeit a reduced one.
Systematic reviews by other groups, including Cochrane, did not include the Finnish studies in their meta-analyses because those authors didn’t believe that a “cluster randomized trial” with so few clusters (two) met the inclusion criteria for a randomized trial (also worth remembering: there is no indication in any of the papers that the trial was even cluster randomized!). Some of these systematic reviews that exclude the Finnish studies still find a benefit to replacing saturated fats in the diet with polyunsaturated fats.
However, other meta-analyses have also found no statistically significant benefit to replacing saturated fats with polyunsaturated fats.
Clearly, there is quite a bit of disagreement on this topic. Regardless, the meta-analysis in question still made a large error and it is a problem for the following reasons (even if the overall conclusions of it were not to change after the correction):
- Two prominent studies were misclassified
- The studies did not meet the inclusion criteria but were included
- Inclusion in the analysis leads to a substantially different effect size than without inclusion
- The meta-analysis is widely cited and continues to mislead readers and future researchers
I’m certainly not suggesting that errors do not happen, especially when undertaking such large, comprehensive projects. In fact, I would probably be suspicious if there were never any errors when such large projects were conducted!
However, I believe that when such errors are pointed out, they should be corrected as quickly and transparently as possible. Hopefully, the authors and the editors address this issue soon to prevent any further confusion.
```r
likelyPUFA <- "https://raw.githubusercontent.com/zadrafi/data.lesslikely/master/static/uploads/PUFA.csv?token=AJLO7AH4WUNBI6T3OK4KHDDAGTD64"
pufaMETA <- read.csv(likelyPUFA, header = TRUE)
data.frame(pufaMETA)
#>          Study_ID PUFA_Events PUFA_Total Control_Events Control_Total
#> 1           DARTS         132       1018            144          1015
#> 2     LA Veterans          53        424             71           422
#> 3    Minnesota CS         131       4541            121          4516
#> 4         MRC Soy          45        199             51           194
#> 5 Oslo Diet Heart          61        206             81           206
#> 6           STARS           2         27              5            28
```
```r
library("metafor")

# Meta-analysis: compute log risk ratios and sampling variances
dat <- escalc(measure = "RR", ai = PUFA_Events, n1i = PUFA_Total,
              ci = Control_Events, n2i = Control_Total, data = pufaMETA)
dat[, c(1, 6:7)]
#>          Study_ID      yi     vi
#> 1           DARTS -0.0900 0.0126
#> 2     LA Veterans -0.2971 0.0282
#> 3    Minnesota CS  0.0739 0.0155
#> 4         MRC Soy -0.1506 0.0317
#> 5 Oslo Diet Heart -0.2836 0.0190
#> 6           STARS -0.8799 0.6272

res <- rma(yi, vi, data = dat)
confint(res)
#>        estimate   ci.lb   ci.ub
#> tau^2    0.0066  0.0000  0.3010
#> tau      0.0812  0.0000  0.5486
#> I^2(%)  21.4317  0.0000 92.5694
#> H^2      1.2728  1.0000 13.4578

par(mar = c(4, 4, 1, 2))
res <- rma(ai = PUFA_Events, n1i = PUFA_Total, ci = Control_Events,
           n2i = Control_Total, data = dat, measure = "RR",
           slab = paste(Study_ID, sep = ", "), method = "DL")
res_REML <- rma(ai = PUFA_Events, n1i = PUFA_Total, ci = Control_Events,
                n2i = Control_Total, data = dat, measure = "RR",
                slab = paste(Study_ID, sep = ", "), method = "REML")
```
If you would like more details of the analysis, the following function from metafor offers a lengthy explanation:
```r
webshot(reporter(res_REML, open = TRUE, footnotes = TRUE), "report.pdf", delay = 0.5)
```
Although I have used the DerSimonian-Laird estimator, as in the original analysis by the authors, I have also chosen to run a sensitivity analysis3 using the restricted maximum likelihood (REML) estimator and to plot that instead, as it often gives wider interval estimates, often with better coverage properties.
```r
# Data visualization - forest plot structure
forest(res_REML, xlim = c(-16, 6), at = log(c(0.05, 0.25, 1, 2)),
       atransf = exp,
       ilab = cbind(dat$PUFA_Events, dat$PUFA_Total,
                    dat$Control_Events, dat$Control_Total),
       ilab.xpos = c(-9.5, -8, -6, -4.5), cex = 0.85, ylim = c(-1, 8.6),
       xlab = "Risk Ratio", mlab = "", psize = 1.7, pch = 16)

# Heterogeneity statistics
text(-16, -1, pos = 4, cex = 0.85,
     bquote(paste("Random Effects (Q = ",
                  .(formatC(res_REML$QE, digits = 2, format = "f")),
                  ", df = ", .(res_REML$k - res$p),
                  ", p = ", .(formatC(res_REML$QEp, digits = 2, format = "f")),
                  "; ", I^2, " = ",
                  .(formatC(res_REML$I2, digits = 1, format = "f")), "%)")))

op <- par(cex = 0.85, font = 4)  # bold font
par(font = 2)

# Column labels
text(c(-9.5, -8, -6, -4.5), 7.5, c("Events ", " Total", "Events ", " Total"))
text(c(-8.75, -5.25), 8.5, c("PUFA Diet", "Control Diet"))
text(-16, 7.5, "Study Name", pos = 4)
text(6, 7.5, "Risk Ratio [95% CI]", pos = 2)
```
```r
res_REML
#> 
#> Random-Effects Model (k = 6; tau^2 estimator: REML)
#> 
#> tau^2 (estimated amount of total heterogeneity): 0.0066 (SE = 0.0183)
#> tau (square root of estimated tau^2 value):      0.0812
#> I^2 (total heterogeneity / total variability):   21.43%
#> H^2 (total variability / sampling variability):  1.27
#> 
#> Test for Heterogeneity:
#> Q(df = 5) = 5.9549, p-val = 0.3106
#> 
#> Model Results:
#> 
#> estimate      se     zval    pval    ci.lb   ci.ub
#>  -0.1361  0.0720  -1.8911  0.0586  -0.2771  0.0050  .
#> 
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
```r
library("concurve")
library("ggplot2")

PUFA_curve <- curve_meta(res_REML, measure = "ratio")
ggcurve(PUFA_curve[[1]], nullvalue = c(1), levels = c(0.5, 0.75, 0.95),
        title = "$P$-value Function of PUFA Meta-Analysis",
        subtitle = "Reanalysis of Mozaffarian et al. 2010",
        xaxis = "Risk Ratio") +
  labs(caption = "Restricted Maximum Likelihood Estimator used")
```
Although I believe the results still suggest an effect, if another analyst was not convinced by these effects and set an interval null spanning risk ratios from 0.9 to 1.1 as being equivalent to no effect, then these results would be in some trouble! We can see this with the same plotting function, which allows us to set the interval null regions.
```r
ggcurve(PUFA_curve[[1]], nullvalue = c(0.9, 1.1), levels = c(0.5, 0.75, 0.95),
        title = "$P$-value Function of PUFA Meta-Analysis",
        subtitle = "Reanalysis of Mozaffarian et al. 2010",
        xaxis = "Risk Ratio") +
  annotate(geom = "text", x = 1.0, y = 0.50,
           label = "Interval Null Region \n(Assumed No Meaningful Effect)",
           size = 4, color = "#000000", alpha = 0.5) +
  labs(caption = "Restricted Maximum Likelihood Estimator used")
```
The above was the \(P\)-value function for the summary effect, but we can also construct \(P\)-value functions for each of the individual studies in the meta-analysis, plot them together, and construct prediction ranges for them. This is known as the drapery plot,6 and we can construct them using the meta package in R.
```r
library("meta")

res_drapery <- metabin(PUFA_Events, PUFA_Total, Control_Events, Control_Total,
                       data = dat, studlab = dat$Study_ID,
                       method = "MH", method.tau = "REML")

drapery(res_drapery, xlim = c(0.5, 1.5),
        main = "Drapery Plot of PUFA Meta-Analysis Studies",
        xaxis = "Restricted Maximum Likelihood Estimator",
        labels = "studlab", study.results = TRUE,
        col.fixed = "#3f8f9b", col.random = "#d46c5b",
        col.predict = "lightgray", las = 0.25,
        alpha = c(0.001, 0.01, 0.05, 0.1), plot = TRUE,
        lty.alpha = 0.2, lwd.alpha = 0.1, col.alpha = "black",
        cex.alpha = 0.1, legend = TRUE)
```
We can also conduct further sensitivity analyses with the metasens R package, using the Copas selection method, which fits a joint model to investigate selection bias; an upper bound for reporting bias is also calculated within a separate function.
```r
library("metasens")

cop1 <- copas(res_drapery)
plot(cop1)
```
```r
summary(cop1)
#> Summary of Copas selection model analysis:
#> 
#>  publprob     RR           95%-CI pval.treat pval.rsb N.unpubl
#>    1.0000 0.8799 [0.7802; 0.9924]     0.0371   0.1701        0
#>    0.7737 0.8972 [0.7686; 1.0473]     0.1692   0.2037        1
#>    0.6672 0.9101 [0.7856; 1.0544]     0.2098   0.1794        1
#>    0.5768 0.9258 [0.7875; 1.0883]     0.3501   0.1869        2
#>    0.5735 0.9321 [0.7929; 1.0957]     0.3940   0.2744        2
#>    0.5090 0.9412 [0.7808; 1.1345]     0.5247   0.1924        3
#>    0.4502 0.9574 [0.7573; 1.2103]     0.7157   0.1534        4
#> 
#> Copas model (adj)    0.8799 [0.7802; 0.9924]     0.0371   0.1701        0
#> Random effects model 0.8728 [0.7579; 1.0050]     0.0586
#> 
#> Significance level for test of residual selection bias: 0.1
#> 
#> Legend:
#> publprob   - Probability of publishing study with largest standard error
#> pval.treat - P-value for hypothesis of overall treatment effect
#> pval.rsb   - P-value for hypothesis that no selection remains unexplained
#> N.unpubl   - Approximate number of unpublished studies suggested by model

print(summary(limitmeta(res_drapery)), digits = 2)
#> Result of limit meta-analysis:
#> 
#> Random effects model   RR       95%-CI     z   pval
#> Adjusted estimate    0.99 [0.71; 1.36] -0.08 0.9361
#> Unadjusted estimate  0.87 [0.76; 1.00] -1.89 0.0586
#> 
#> Quantifying heterogeneity:
#> tau^2 = 0.0066; I^2 = 16.0% [0.0%; 78.7%]; G^2 = 99.5%
#> 
#> Test of heterogeneity:
#>    Q d.f. p-value
#> 5.95    5  0.3106
#> 
#> Test of small-study effects:
#> Q-Q' d.f. p-value
#> 1.88    1  0.1701
#> 
#> Test of residual heterogeneity beyond small-study effects:
#>   Q' d.f. p-value
#> 4.07    4  0.3962
#> 
#> Details on adjustment method:
#> - expectation (beta0)

orb1 <- orbbound(res_drapery, k.suspect = 1:5)
print(orb1, digits = 2)
#> 
#> Sensitivity Analysis for Outcome Reporting Bias (ORB)
#> 
#> Number of studies combined: k=6
#> Between-study variance: tau^2 = 0.0066
#> 
#> Fixed effect model
#> 
#> k.suspect maxbias   RR       95%-CI     z p-value
#>         0    1.00 0.89 [0.79; 1.01] -1.86  0.0624
#>         1    1.04 0.93 [0.83; 1.05] -1.16  0.2471
#>         2    1.07 0.96 [0.85; 1.08] -0.73  0.4672
#>         3    1.09 0.98 [0.86; 1.10] -0.40  0.6887
#>         4    1.11 0.99 [0.88; 1.12] -0.14  0.8915
#>         5    1.13 1.01 [0.89; 1.13]  0.09  0.9318
#> 
#> Random effects model
#> 
#> k.suspect maxbias   RR       95%-CI     z p-value
#>         0    1.00 0.87 [0.76; 1.00] -1.89  0.0586
#>         1    1.04 0.91 [0.79; 1.05] -1.29  0.1984
#>         2    1.07 0.94 [0.81; 1.08] -0.92  0.3590
#>         3    1.09 0.96 [0.83; 1.10] -0.64  0.5238
#>         4    1.11 0.97 [0.84; 1.12] -0.41  0.6810
#>         5    1.13 0.98 [0.85; 1.13] -0.22  0.8252
#> 
#> Details on meta-analytical method:
#> - Mantel-Haenszel method
#> - Restricted maximum-likelihood estimator for tau^2

forest(orb1, xlim = c(0.75, 1.5))
```
Edit: Dr. Mozaffarian has replied to some of these criticisms in the comments section of his paper on PLOS Medicine (mostly because I had pressured the PLOS editors to issue a correction), and unfortunately, I have found the responses to be very poor. The response pretty much amounts to, “to the best of our knowledge when we were critically appraising the literature, this study seemed like a randomized trial, so we labeled it as such, and it may be appropriate to label it ‘quasi-randomized’ because it seemed appropriate at the time.”
Unfortunately, I don’t think that’s how good science progresses at all, and in fact, this error has the potential to cause serious confusion in the future if it goes uncorrected.
His response, in full, is the following:
Dariush Mozaffarian, MD DrPH
It has been noted by some comments that, in our investigation “Effects on Coronary Heart Disease of Increasing Polyunsaturated Fat in Place of Saturated Fat: A Systematic Review and Meta-Analysis of Randomized Controlled Trials,”1 the cross-over intervention design for the two clusters in the Finnish Mental Hospital study (1959-1971) was not randomized. One hospital started with the control diet and the other with the intervention diet for 6 years, and then these diets were reversed for another 6 years. Thus, over the 12 year intervention, each hospital served as its own control, with each hospital receiving each diet in alternating order.
Because this trial was conceived and designed in the 1950s, before current standardized designs and reporting were widely accepted for randomized cross-over trials, we considered this equivalent to a cluster-randomized cross-over trial in our meta-analysis. While the method for determining which hospital started with which diet was not described, both hospitals received both interventions, in differing order over time.
As we noted in the Discussion of our manuscript:
“Many of the identified randomized trials in our meta-analysis had important design limitations (Table 1). For example, some trials provided all or most meals, increasing compliance but perhaps limiting generalizability to effects of dietary recommendations alone; whereas other trials relied only on dietary advice, increasing generalizability to dietary recommendations but likely underestimating efficacy due to noncompliance. Several of these trials were not double-blind, raising the possibility of differential classification of endpoints by the investigators that could overestimate benefits of the intervention. One trial used a cluster-randomization cross-over design that intervened on sites rather than individuals; and two trials used open enrollment that allowed participants to both drop-in and drop-out during the trial. The methods for estimating and reporting PUFA and SFA consumption in each trial varied, which could cause errors in our estimation of the quantitative benefit per %E replacement.”
It is reasonable that one could disagree with our description of “cluster-randomization,” and describe this as a “cluster-quasi-experimental intervention” instead. Given the nature of this trial and its time period of implementation, this was our best interpretation.
Due to the design limitations of several of the trials, we performed several secondary analyses excluding studies based on different design characteristics. Combining all trials, the pooled RR for CHD events was 0.81 (95% CI=0.70-0.95, p=0.008). As we reported in the manuscript: “Excluding the Finnish mental hospital trial (2 reports) that used a cluster-randomization design, the overall pooled RR was 0.87 (95% CI=0.76-1.00, p=0.05).” None of these subgroup analyses were significantly different from the main pooled result, as demonstrated by the 95% CIs in each subgroup analysis including the value of the main pooled RR estimate of 0.81.
As we concluded in our Discussion:
“Given these limitations of each individual trial, the quantitative pooled risk estimate should be interpreted with some caution. Nevertheless, this is the best current worldwide evidence from RCTs for effects on CHD events of replacing SFA with PUFA, and, as discussed above, the pooled risk estimate from this meta-analysis (10% lower risk per 5%E greater PUFA) is well within the range of estimated benefits from randomized controlled feeding trials of changes in lipid levels (9% lower risk per 5%E greater PUFA) and prospective observational studies of clinical CHD events (13% lower risk per 5%E greater PUFA). The consistency of the findings across these different lines of evidence provides substantial confidence in both the qualitative benefits and also a fairly narrow range of quantitative uncertainty.”
Since the publication of our meta-analysis in 2010, multiple additional studies have further supported cardiometabolic benefits of PUFA consumption. Notably, these studies suggest that benefits are largely related to increased PUFA consumption, rather than decreased SFA consumption per se. For example, a meta-analysis of prospective cohort studies demonstrated that total dietary PUFA is associated with lower risk of clinical events in cohort studies whether replacing total SFA or total carbohydrate.2 A meta-analysis of 102 randomized controlled feeding trials demonstrated that dietary PUFA produces multiple beneficial effects on glycemic control, including lowering of fasting glucose, HbA1C, and insulin resistance and improving pancreatic beta cell function as measured by gold-standard insulin secretion capacity.3 Of note, glycemic benefits are seen whether PUFA replaces carbohydrate, SFA, or even MUFA.3 And, a pooling of new, harmonized, individual-level analysis including 39,740 individuals from 20 prospective cohort studies across ten nations demonstrated that objective blood or tissue biomarkers of linoleic acid (the predominant dietary PUFA) are associated with 35% lower risk of diabetes (per interquintile range, RR=0.65, 95% CI=0.60–0.72, p < 0.0001).4
In sum, the overall evidence confirms cardiometabolic benefits of PUFA consumption, including based on evidence from controlled feeding studies of blood lipids, controlled feeding studies of glucose-insulin homeostasis, prospective cohort studies of estimated dietary PUFA and clinical outcomes, prospective cohort studies of objective PUFA biomarkers and clinical outcomes, and controlled clinical trials of PUFA consumption and clinical outcomes.
- Mozaffarian D, Micha R, Wallace S. Effects on coronary heart disease of increasing polyunsaturated fat in place of saturated fat: a systematic review and meta-analysis of randomized controlled trials. PLoS Med 2010;7(3):e1000252.
- Farvid MS, Ding M, Pan A, et al. Dietary linoleic acid and risk of coronary heart disease: a systematic review and meta-analysis of prospective cohort studies. Circulation 2014;130(18):1568-78. doi: 10.1161/circulationaha.114.010236 [published Online First: 2014/08/28]
- Imamura F, Micha R, Wu JH, et al. Effects of Saturated Fat, Polyunsaturated Fat, Monounsaturated Fat, and Carbohydrate on Glucose-Insulin Homeostasis: A Systematic Review and Meta-analysis of Randomised Controlled Feeding Trials. PLoS medicine 2016;13(7):e1002087. doi: 10.1371/journal.pmed.1002087 [published Online First: 2016/07/21]
- Wu JHY, Marklund M, Imamura F, et al. Omega-6 fatty acid biomarkers and incident type 2 diabetes: pooled analysis of individual-level data for 39 740 adults from 20 prospective cohort studies. Lancet Diabetes Endocrinol 2017;5(12):965-74. doi: 10.1016/S2213-8587(17)30307-8 [published Online First: 2017/10/17]
Competing interests declared: Dr. Mozaffarian reports research funding from the National Institutes of Health and the Gates Foundation; personal fees from GOED, DSM, Nutrition Impact, Pollock Communications, Bunge, Indigo Agriculture, Amarin, Acasti Pharma, Cleveland Clinic Foundation, and America’s Test Kitchen; scientific advisory board, Elysium Health (with stock options), Omada Health, and DayTwo; and chapter royalties from UpToDate; all outside the submitted work.
I hope that readers can decide for themselves whether this response is adequate.
The analyses were run on:
```r
#> R version 4.0.4 (2021-02-15)
#> Platform: x86_64-apple-darwin17.0 (64-bit)
#> Running under: macOS Big Sur 10.16
#> 
#> Matrix products: default
#> BLAS:   /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
#> 
#> Random number generation:
#>  RNG:     Mersenne-Twister
#>  Normal:  Inversion
#>  Sample:  Rejection
#> 
#> locale:
#> en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#> 
#> attached base packages:
#> stats graphics grDevices utils datasets methods base
#> 
#> other attached packages:
#> metasens_0.6-0 meta_4.16-2 ggplot2_3.3.3 concurve_2.7.7 metafor_2.4-0
#> Matrix_1.3-2 kableExtra_1.3.4 webshot_0.5.2
#> 
#> loaded via a namespace (and not attached):
#> nlme_3.1-152 httr_1.4.2 tools_4.0.4 backports_1.2.1 bslib_0.2.4
#> utf8_1.1.4 R6_2.5.0 DBI_1.1.1 colorspace_2.0-0 withr_2.4.1
#> gridExtra_2.3 tidyselect_1.1.0 processx_3.4.5 curl_4.3 compiler_4.0.4
#> rvest_0.3.6 flextable_0.6.3 xml2_1.3.2 officer_0.3.16 bookdown_0.21
#> sass_0.3.1 scales_1.1.1 survMisc_0.5.5 callr_3.5.1 askpass_1.1
#> systemfonts_1.0.1 stringr_1.4.0 digest_0.6.27 minqa_1.2.4 foreign_0.8-81
#> rmarkdown_2.7 svglite_126.96.36.199 rio_0.5.16 base64enc_0.1-3 pkgconfig_2.0.3
#> htmltools_0.5.1.1 lme4_1.1-26 highr_0.8 readxl_1.3.1 rlang_0.4.10
#> rstudioapi_0.13 farver_2.0.3 jquerylib_0.1.3 generics_0.1.0 zoo_1.8-8
#> jsonlite_1.7.2 dplyr_1.0.4 zip_2.1.1 car_3.0-10 magrittr_2.0.1
#> credentials_1.3.0 Rcpp_1.0.6 munsell_0.5.0 fansi_0.4.2 abind_1.4-5
#> gdtools_0.2.3 lifecycle_1.0.0 stringi_1.5.3 yaml_2.2.1 CompQuadForm_1.4.3
#> carData_3.0-4 MASS_7.3-53.1 debugme_1.1.0 grid_4.0.4 parallel_4.0.4
#> forcats_0.5.1 crayon_1.4.1 survminer_0.4.8 lattice_0.20-41 haven_2.3.1
#> splines_4.0.4 hms_1.0.0 sys_3.4 knitr_1.31 ps_1.5.0
#> pillar_1.5.0 ProfileLikelihood_1.1 ggpubr_0.4.0 uuid_0.1-4 boot_1.3-27
#> ggsignif_0.6.0 glue_1.4.2 evaluate_0.14 blogdown_1.1 data.table_1.13.6
#> nloptr_188.8.131.52 vctrs_0.3.6 bcaboot_0.2-1 cellranger_1.1.0 gtable_0.3.0
#> openssl_1.4.3 purrr_0.3.4 tidyr_1.1.2 km.ci_0.5-2 assertthat_0.2.1
#> xfun_0.21 openxlsx_4.2.3 xtable_1.8-4 broom_0.7.5 rstatix_0.7.0
#> survival_3.2-7 viridisLite_0.3.0 tibble_3.0.6 pbmcapply_1.5.0 KMsurv_0.1-5
#> statmod_1.4.35 ellipsis_0.3.1
```