I am not making this up.
STAT News reports:
Children of women who ate little or no meat while pregnant are more likely to abuse alcohol, tobacco, and marijuana at age 15 than are children of mothers who did eat meat.
That’s the conclusion of a new study, “Meat Consumption During Pregnancy and Substance Misuse Among Adolescent Offspring: Stratification of TCN2 Genetic Variants,” published in Alcoholism: Clinical and Experimental Research.
[pullquote align="right" cite="" link="" color="" class="" size=""]Until a result is reproduced, it ought to be viewed as interesting, but speculative and unproven.[/pullquote]
How was the study performed? According to STAT News:
Researchers analyzed data from 5,109 women and their children in a long-running study in England called ALSPAC (the Avon Longitudinal Study of Parents and Children), which has gathered years of data on what women did while pregnant and their children’s health. The less meat the women ate while pregnant, the higher their children’s risk of drinking, smoking, or using marijuana as 15-year-olds, Dr. Joseph Hibbeln of the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health and his colleagues reported on Wednesday in Alcoholism: Clinical and Experimental Research. (They were funded by the U.S. and U.K. governments and a charity, not meat producers.)
Roughly 10 percent of the 15-year-olds smoked at least weekly, drank enough to have behavioral problems, or used marijuana “moderately.” But teens of meatless moms were 75 percent more likely to have alcohol-related problems, 85 percent more likely to smoke, and 2.7 times as likely to use marijuana compared to teens of mothers who’d eaten meat while pregnant.
Children of pregnant vegetarians are more likely to abuse drugs and alcohol.
Ironic, isn’t it, that a diet thought by its practitioners to be healthier is actually harmful to developing babies?
Ironic … and almost certainly untrue.
This research represents a cautionary tale, not about the risks of vegetarianism, but about the risks of p-hacking, a practice beloved of some scientists, particularly those in the field of breastfeeding research.
P-hacking often occurs in the analysis of large data sets. The name refers to the value “p” used to determine statistical significance. A difference between two groups is considered meaningful only if it is statistically significant, and the p value expresses the probability that a difference at least that large would have appeared by chance alone if there were no real effect. For example, a p value less than 0.001 means that there is less than a 0.1% chance that the observed finding is due to chance and a greater than 99.9% chance that it represents a real difference.
Researchers look for statistically significant differences between two groups, then announce them as “findings” without acknowledging that any large dataset examined for multiple outcomes is bound to contain statistically significant differences that are purely coincidental and don’t represent real outcomes. Indeed, using a threshold of p less than 0.001 means, by definition, that roughly 0.1% of the comparisons tested will appear statistically significant by chance alone, even when no real difference exists.
When a study examines only a few variables, a p value of 0.001 means that a statistically significant result is almost certainly a real result. However, mining a large dataset may involve thousands of variables. In mining a dataset of 10,000 variables with no true effects, for example, we would expect about 0.1% of the comparisons, roughly 10 statistically significant results, to arise purely by chance and therefore not be real.
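The arithmetic above is easy to check directly. The following is a minimal simulation sketch (not the study’s actual analysis, and the group labels are purely illustrative): it generates 10,000 variables with no real difference between two groups, runs a t-test on each, and counts how many cross the p &lt; 0.001 threshold by chance alone.

```python
# Simulate mining 10,000 variables that have NO real effect and count
# how many appear "statistically significant" at p < 0.001 anyway.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_variables = 10_000
n_per_group = 200

# Two groups drawn from the SAME distribution, so every "finding" is spurious.
group_a = rng.normal(size=(n_per_group, n_variables))
group_b = rng.normal(size=(n_per_group, n_variables))

pvalues = ttest_ind(group_a, group_b, axis=0).pvalue
false_positives = int((pvalues < 0.001).sum())

# Expected count: 10,000 * 0.001 = about 10 spurious "significant" results.
print(false_positives)
```

Run with different seeds, the count fluctuates around 10, exactly as the 0.1% arithmetic predicts, even though not a single variable has a real effect.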
How can we guard against p-hacking? The most important way is to recognize that it is always a possibility when analyzing large datasets; in other words, it is wrong to conclude that every statistically significant result in such an analysis is a real result.
In addition, a number of statistical corrections and tests can give greater insight into whether a result is real or merely a statistical artifact.
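One standard such safeguard (not named in the post, but widely used for exactly this problem) is the Bonferroni correction, which divides the significance threshold by the number of tests performed. A brief illustrative sketch with a hypothetical p value:

```python
# Bonferroni correction: when running many tests, tighten the threshold
# so that the chance of ANY false positive across all tests stays small.
n_tests = 10_000          # number of variables mined
alpha = 0.001             # naive per-test threshold used in the text
corrected = alpha / n_tests  # = 1e-7

# A hypothetical raw p value of 0.0004 clears the naive threshold
# but fails the corrected one.
p = 0.0004
print(p < alpha)      # True  -> "significant" in a naive analysis
print(p < corrected)  # False -> not significant once 10,000 tests are accounted for
```

The trade-off is that stricter thresholds also make real effects harder to detect, which is one more reason reproduction in an independent dataset remains the decisive test.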
The ultimate insurance that a result is real and not an effect of p-hacking is a basic principle of all research: reproducibility. Do other data sets produce the same results? Unless and until the finding is reproduced, there is no reason to believe that the results are real.
Therefore, we should not be rushing to counsel pregnant women that vegetarianism leads to substance abuse among offspring. It almost certainly does not. The finding is most likely spurious, just an artifact of the statistical analysis.
This cautionary tale has implications far beyond this study, most especially in breastfeeding research. Many of the purported claims about the benefits of breastfeeding are also based on mining of large datasets. Such studies by definition will produce spurious statistically significant relationships. When such a “benefit” is discovered in breastfeeding research, it should be greeted with the exact same skepticism that ought to greet this study.
A good rule of thumb is this: Until a result is reproduced, it ought to be viewed as interesting, but speculative and unproven.